Security and Privacy in Artificial Intelligence

Course unit details
Unit code | COMP60272 |
---|---|
Credit rating | 15 |
Unit level | FHEQ level 7 – master's degree or fourth year of an integrated master's degree |
Teaching period(s) | Semester 2 |
Available as a free choice unit? | Yes |
Overview
Neural networks, machine learning models and other data-driven AI components are becoming ubiquitous. Yet they often fail in subtle and unpredictable ways. This unit takes a security perspective on the issue and covers the most common threats to state-of-the-art AI components, together with the corresponding defences. Students will be trained in theoretical formalisation, efficient algorithms, software libraries and practical tools.
Pre/co-requisites
Some background in Machine Learning or Data Science is recommended.
Aims
The unit aims to introduce students to the security and privacy issues of data-driven AI components and the existing countermeasures. Students will learn how to deploy adversarial attacks against the whole machine learning pipeline, together with the corresponding deterministic and probabilistic defences. Furthermore, students will learn how to guarantee the privacy of user data and AI models alike. After attending this unit, each student will possess the fundamental knowledge needed to continue their lifelong learning in this rapidly evolving field.
Learning outcomes
1. Describe common threats to the modern machine learning pipeline.
2. Explain the algorithmic details of adversarial and privacy attacks on AI components and identify the appropriate defence techniques.
3. Design attacks and defences for a given AI component, taking into account the appropriate computational and informational constraints.
4. Apply software tools to find adversarial vulnerabilities in a given AI model and patch them.
5. Use available libraries for deploying privacy-preserving techniques to protect AI models.
6. Communicate the risks and countermeasures associated with state-of-the-art AI models.
7. Assess the suitability of AI security and privacy techniques for a given application.
Syllabus
The unit covers four distinct topics:
1. AI Threats. Understand the scope of cyber threats targeting AI systems, and why it is important to protect them. Identify the correct threat model for each stage of the modern machine learning pipeline: training, evaluation and deployment.
2. Robustness and Adversarial Attacks. Understand the intrinsic vulnerabilities of machine learning models: average accuracy vs worst-case behaviour. Learn existing verification algorithms for certified robustness and their theoretical complexity. Explore potential defences including randomised smoothing, adversarial training and repair (an illustrative attack sketch follows this list).
3. Data Poisoning and Backdoors. Understand the feasibility of training-time attacks such as data poisoning and backdoors. Learn a range of defence techniques including watermarking and anomaly detection. Explore best practices in regular system audits, ensuring continuous improvement of AI system security (a poisoning sketch follows this list).
4. Privacy-Preserving Machine Learning. Understand the impact of privacy attacks, from the leakage of individual training samples to the theft of whole models. Identify potential defences including differential privacy and encryption primitives. Explore post-hoc analysis techniques to audit a variety of data-theft scenarios (a differential-privacy sketch follows this list).
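To make topic 2 concrete, here is a minimal sketch of a gradient-sign (FGSM-style) adversarial attack. It assumes PyTorch and hypothetical `model`, `x`, `y` and `epsilon` objects with inputs in [0, 1]; it illustrates the general idea rather than the unit's own teaching material.

```python
# Minimal FGSM-style sketch: perturb the input along the sign of the loss
# gradient, within an L-infinity budget. `model`, `x`, `y` and `epsilon`
# are assumed to exist; pixel values are assumed to lie in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss the attacker wants to increase
    loss.backward()                           # gradient of the loss w.r.t. the input
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep the perturbed input valid
```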
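For topic 3, the sketch below stamps a trigger patch onto a fraction of the training images and flips their labels to an attacker-chosen class, the basic recipe behind a backdoor attack. The array shapes and function names are assumptions made for illustration.

```python
# Minimal backdoor-poisoning sketch: images are assumed to be NumPy arrays
# of shape (N, H, W) with values in [0, 1]; labels is a 1-D integer array.
import numpy as np

def poison(images, labels, rate, target_class, patch_size=3, seed=0):
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch_size:, -patch_size:] = 1.0  # stamp a white corner patch
    labels[idx] = target_class                     # relabel to the target class
    return images, labels
```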
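For topic 4, here is a minimal sketch of the Laplace mechanism for epsilon-differential privacy applied to a counting query (sensitivity 1). The dataset, predicate and parameter values are made up for illustration.

```python
# Minimal Laplace-mechanism sketch: a counting query has sensitivity 1, so
# adding Laplace noise of scale 1/epsilon gives epsilon-differential privacy.
import numpy as np

def dp_count(records, predicate, epsilon, seed=0):
    rng = np.random.default_rng(seed)
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: noisy count of ages over 30 with privacy budget epsilon = 0.5.
ages = [23, 35, 41, 29, 52]
noisy_count = dp_count(ages, lambda a: a > 30, epsilon=0.5)
```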
Teaching and learning methods
This unit includes a blend of face-to-face lectures, practical tutorials, guest seminars, online resources and group work.
Synchronous Activities: weekly lectures and tutorials covering theory, algorithms and software tools, with hands-on experience.
Asynchronous Activities: guest seminars (online and/or recorded) and directed reading, including reviews of recent advances in the field.
Employability skills
- Analytical skills
- Problem solving
- Research
- Written communication
Assessment methods
Method | Weight |
---|---|
Written exam | 50% |
Written assignment (inc essay) | 50% |
Feedback methods
Assignment: individual feedback on completion of marking.
Exam: marks will be released after the exam board review. Written feedback will be cohort-level and focus on general trends and common pitfalls.
Recommended reading
Lorenzo Cavallaro, Emiliano De Cristofaro. Security and Privacy of AI Knowledge Guide. The Cyber Security Body of Knowledge (CyBOK). 2023.
Ken Huang, et al. Generative AI Security: Theory and Practices. Springer. 2024.
Omar Santos, Petar Radanliev. Beyond the Algorithm: AI, Security, Privacy, and Ethics. Pearson Education. 2024.
Fei Hu, Xiali Hei. AI, Machine Learning, and Deep Learning: A Security Perspective. CRC Press. 2023.
Luc Jaulin, et al. Applied Interval Analysis. Springer. 2001.
Daniel Kroening, Ofer Strichman. Decision Procedures: An Algorithmic Point of View. Springer. 2016.
Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer. 2006.
Study hours
Scheduled activity hours | Hours |
---|---|
Assessment written exam | 2 |
Lectures | 15 |
Tutorials | 15 |
Independent study hours | Hours |
---|---|
Independent study | 118 |
Teaching staff
Staff member | Role |
---|---|
Edoardo Manino | Unit coordinator |
Additional notes
Coursework: 20 hours
Directed reading: 10 hours
Online seminars: 5 hours
If you are not a Computer Science student or are unable to enrol onto the unit, please contact the unit lead for permission to take it.