@InProceedings{HDJ20,
  author    = "Heged{\H{u}}s, Istv{\'a}n and Danner, G{\'a}bor and Jelasity, M{\'a}rk",
  editor    = "Cellier, Peggy and Driessens, Kurt",
  title     = "Decentralized Recommendation Based on Matrix Factorization: A Comparison of Gossip and Federated Learning",
  booktitle = "Machine Learning and Knowledge Discovery in Databases",
  year      = "2020",
  publisher = "Springer International Publishing",
  address   = "Cham",
  pages     = "317--332",
  abstract  = "Federated learning is a well-known machine learning approach over edge devices with relatively limited resources, such as mobile phones. A key feature of the approach is that no data is collected centrally; instead, data remains private and only models are communicated between a server and the devices. Gossip learning has a similar application domain; it also assumes that all the data remains private, but it requires no aggregation server or any central component. However---one would assume---gossip learning must pay a price for the extra robustness and lower maintenance cost it provides due to its fully decentralized design. Here, we examine this natural assumption empirically. The application we focus on is making recommendations based on private logs of user activity, such as viewing or browsing history. We apply low rank matrix decomposition to implement a common collaborative filtering method. First, we present similar algorithms for both frameworks to efficiently solve this problem without revealing any raw data or any user-specific parts of the model. We then examine the aggregated cost in both cases for several algorithm-variants in various simulation scenarios. These scenarios include a real churn trace collected over mobile phones. Perhaps surprisingly, gossip learning is comparable to federated learning in all the scenarios and, especially in large networks, it can even outperform federated learning when the same subsampling-based compression technique is applied in both frameworks.",
  isbn      = "978-3-030-43823-4"
}