Privacy Attacks and Protection in Machine Learning as a Service

SCHEME: AFR PhD

CALL: 2019

DOMAIN: IS - Information and Communication Technologies

FIRST NAME: Hailong

LAST NAME: HU

INDUSTRY PARTNERSHIP / PPP:

INDUSTRY / PPP PARTNER:

HOST INSTITUTION: University of Luxembourg

KEYWORDS: machine learning model secrecy, training data privacy, black-box attacks, explainable machine learning techniques, DeepSets technique, generative adversarial networks

START: 2019-12-01

END: 2023-11-30

WEBSITE: http://www.uni.lu

Submitted Abstract

Machine learning (ML) techniques have gained widespread adoption in a large number of real-world applications. Following this trend, leading Internet companies offer machine learning as a service (MLaaS) to broaden and simplify ML model deployment. Although MLaaS gives its customers only black-box access, recent research has identified several attacks that reveal confidential information about the model itself and its training data. Along this line, the goal of this project is to investigate new attacks on both ML models and training data and to develop a systematic, practical, and general defense mechanism that enhances the security of ML models. The project team, including SaToSS and CISPA, will also make the source code publicly available and use it in their own courses. This project will provide a deeper understanding of machine learning privacy, thereby increasing the safety of machine learning-based systems such as authentication and malware detection systems and helping protect the nation and its citizens from cyber harm. The project, PriML, combines multiple novel ideas synergistically, organized into three interrelated research thrusts. The first thrust explores potential attacks from the perspective of ML models via black-box explainable machine learning techniques. The second thrust investigates new attacks from the perspective of training datasets through the DeepSets technique (see the sketch below), which can reduce the complexity of deep neural networks and thereby facilitate these attacks. Both thrusts consider different types of neural networks and identify the inherently distinct properties of the respective attacks. The third thrust involves identifying a set of invariant properties underlying these attacks and developing defense mechanisms that exploit these properties to better protect ML privacy.
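For illustration, the sketch below shows the core of a DeepSets-style permutation-invariant network of the kind the second thrust alludes to: an element-wise encoder followed by sum-pooling and a decoder, i.e., f(X) = rho(sum over x in X of phi(x)). The layer sizes, the sum-pooling choice, and the module names phi and rho are illustrative assumptions, not the project's actual design.

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    """Minimal permutation-invariant set network: f(X) = rho(sum_x phi(x)).

    Illustrative sketch only; dimensions and pooling are assumptions.
    """
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        # phi: applied independently to every element of the input set
        self.phi = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # rho: applied to the pooled (order-independent) representation
        self.rho = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, in_dim); summing over the set dimension
        # makes the output invariant to the order of set elements
        return self.rho(self.phi(x).sum(dim=1))

# Example: aggregate a set of 10 five-dimensional elements per sample
model = DeepSets(in_dim=5, hidden_dim=64, out_dim=2)
out = model(torch.randn(8, 10, 5))  # -> shape (8, 2)
```

In an attack setting of the kind the abstract describes, such a network could, for instance, aggregate an unordered set of black-box query responses (e.g., posterior vectors) into a single representation fed to an attack classifier; the order-invariance is what keeps the architecture compact.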
