The next meeting of the IRN will take place on Friday, Feb 8, at 4:30pm in Room 023N at the Munk School (1 Devonshire Pl). Patricia Thaine (PhD Candidate, Computer Science) will be running a workshop on the challenges (and risks) of artificial intelligence research and privacy protection.
Perfectly Privacy-Preserving AI: What is it and how do we achieve it?
AI is quickly becoming ubiquitous and is equipping us with abilities previous generations could only imagine as science fiction. For most tasks, the more data available to an AI system, the better it will perform and serve our needs. However, the necessary data often include sensitive or personally identifiable information, as in tasks such as facial recognition, speaker recognition, text processing, and genomic data analysis. Unfortunately, one of two scenarios occurs when AI systems tackle such a task: either they are trained on sensitive user information, making them vulnerable to malicious actors, or their performance is subpar because they lack access to the necessary data. A number of approaches can be integrated into AI algorithms to maintain various levels of privacy and ensure that neither scenario occurs: differential privacy, secure multiparty computation, homomorphic encryption, federated learning, secure enclaves, and data de-identification. We will briefly explain each of these methods and describe the scenarios in which each is most appropriate. We will then cover some of the most interesting examples of current privacy-preserving AI applications. Finally, we will discuss how the privacy-preserving AI applications proposed so far would need to be modified to become perfectly privacy-preserving.
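To make the first of these approaches concrete, here is a minimal sketch of the standard Laplace mechanism for differential privacy: a numeric query result is released with noise calibrated to the query's sensitivity and a privacy budget epsilon. The function name and parameters below are illustrative, not from the workshop itself.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return an epsilon-differentially private estimate of true_value.

    Adds Laplace noise with scale sensitivity/epsilon, the classic
    mechanism for private release of numeric query results.
    """
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release a count of users with some attribute.
# A counting query has sensitivity 1, since adding or removing one
# person changes the count by at most 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller values of epsilon give stronger privacy (more noise); the released value remains useful in aggregate because the noise has zero mean.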