
PhD Defense: Privacy-preserving recommender systems facilitated by the machine learning approach

  • Speaker: Jun Wang

  • Location: Room 3.100, Maison du Savoir, Campus Belval, Luxembourg

Members of the defense committee:

  • Prof. Dr. Sjouke Mauw, Université du Luxembourg, chairman
  • Dr. Qiang Tang, Luxembourg Institute of Science and Technology, vice-chairman
  • Prof. Dr. Peter Y.A. Ryan, Université du Luxembourg, supervisor
  • Prof. Dr. Catuscia Palamidessi, Inria Saclay-Île-de-France, member
  • Prof. Dr. Josep Domingo-Ferrer, Universitat Rovira i Virgili, member

Abstract: Recommender systems, which play a critical role in e-business services, are closely linked to our daily lives. For example, companies such as YouTube and Amazon try to secure their profits by estimating personalized user preferences and recommending the most relevant items (e.g., products, news, etc.) to each user from a large number of candidates. State-of-the-art recommender systems are often built on top of collaborative filtering techniques, whose accuracy relies on precisely modeling user-item interactions by analyzing massive amounts of user historical data, such as browsing history, purchase records, locations, and so on. Generally, more data leads to more accurate estimates and more commercial strategies; service providers therefore have incentives to collect and use more user data. On the one hand, recommender systems bring more income to service providers and more convenience to users; on the other hand, the user data can be abused, raising immediate privacy risks for the public. Therefore, how to preserve privacy while enjoying recommendation services has become an increasingly important topic for both the research community and commercial practitioners.
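
To make the collaborative filtering setting concrete, the sketch below trains a minimal matrix-factorization recommender on a toy explicit-rating matrix. It is an illustration only: the rating data, latent dimension, learning rate, and regularization weight are arbitrary choices for the example and are not taken from the thesis.

```python
import numpy as np

# Toy explicit-rating matrix: rows are users, columns are items,
# and 0 marks an unobserved user-item pair.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                     # latent-factor dimension (illustrative)
lr, reg = 0.01, 0.1       # learning rate and L2 regularization weight

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))   # user factor matrix
V = rng.normal(scale=0.1, size=(n_items, k))   # item factor matrix

# Stochastic gradient descent over the observed interactions only.
for epoch in range(200):
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - U[u] @ V[i]
        u_row = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_row - reg * V[i])

# Full matrix of predicted scores; for each user, the highest-scoring
# unobserved items would be recommended.
print(np.round(U @ V.T, 2))
```

The point of the sketch is simply that the learned user and item factors are derived entirely from users' historical interaction data, which is exactly the data whose collection and use raises the privacy concerns discussed here.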

Privacy concerns differ depending on the scenario in which recommender systems are constructed or recommendation services are provided. In one scenario, a service provider wishes to protect its data from inference attacks, which aim to infer additional information about a database (e.g., whether a particular record is included) by analyzing its statistical outputs; in the other, multiple users agree to jointly perform a recommendation task, but none of them is willing to share their private data with any other user. Security primitives, such as homomorphic encryption, secure multiparty computation, and differential privacy, are natural candidates for addressing these privacy concerns. A typical approach to building efficient and accurate privacy-preserving solutions is to improve the security primitives and then apply them to existing recommendation algorithms. However, this approach often yields solutions that fall far short of practical requirements, as most users have little tolerance for increased latency or reduced accuracy in recommendation services. This PhD project explores machine-learning-aided approaches to building efficient privacy-preserving solutions for recommender systems. The results of each proposed solution demonstrate that machine learning can be a strong assistant for privacy preservation, rather than only a troublemaker.
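
As an illustration of one of the security primitives mentioned above, the following sketch applies the Laplace mechanism, the standard way to release a statistic under epsilon-differential privacy and thus limit inference attacks on statistical outputs. The function name, toy data, and parameter values are illustrative assumptions for the example and are not part of the defended work.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    `sensitivity` is the largest possible change of the statistic when a
    single record is added to or removed from the underlying database.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Toy example: privately release how many users rated a given item.
ratings_for_item = [5, 3, 4, 1, 2]                 # illustrative data only
true_count = len(ratings_for_item)                 # a count has sensitivity 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count, 2))
```

Smaller values of epsilon add more noise and give stronger privacy, which is precisely the latency- and accuracy-versus-privacy trade-off that the thesis aims to improve with machine-learning-aided techniques.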