Abstract
Machine learning solutions are revolutionising AI, but their vulnerability to adversarial examples – small perturbations to inputs that can catastrophically affect the output – raises concerns about the readiness of this technology for widespread deployment. Using illustrative examples, this lecture will give an overview of techniques being developed to improve the robustness of, safety of and trust in AI systems.
About the speaker
Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. She is known for fundamental contributions to the theory and practice of model checking for probabilistic systems, and is currently focusing on the safety, robustness and fairness of automated decision making in Artificial Intelligence. She led the development of the PRISM model checker (www.prismmodelchecker.org), which has been adopted in diverse fields, including wireless networks, security, robotics, healthcare and DNA computing, with genuine flaws found and corrected in real-world protocols. Her research has been supported by two ERC Advanced Grants, VERIWARE and FUN2MODEL, the EPSRC Programme Grant on Mobile Autonomy and the EPSRC Prosperity Partnership FAIR. Kwiatkowska won the Royal Society Milner Award, the BCS Lovelace Medal and the Van Wijngaarden Award, and received an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She is a Fellow of the Royal Society, a Fellow of the ACM and a Member of Academia Europaea.
Important dates
- Track proposal submission: November 14, 2022
- Paper submission (no extensions): May 23, 2023
- Position paper submission: June 7, 2023
- Author notification: July 11, 2023
- Final paper submission, registration: July 31, 2023
- Discounted payment: August 18, 2023
- Conference date: September 17–20, 2023
Under patronage of
Prof. Krzysztof Zaremba
Rector of Warsaw University of Technology