Can Machine Learning be Hacked?

February 26, 2020

This seminar is organized in collaboration with CWI's Machine Learning Group. Besides the presentations, the program includes a lunch, a panel discussion, and a reception.

Date: *Friday, March 27, 2020*
Time: 10:00--17:15
Venue: CWI, Euler Room

REGISTRATION: *REQUIRED* (free of charge) no later than March 20, 2020. Below you will find a list of speakers, a description of the seminar, and the relevant links (incl. registration, schedule, abstracts of the presentations, speaker info, etc.).

SPEAKERS: Marten van Dijk (UConn & CWI), Audra McMillan (Boston U & Northeastern U), Thijs Veugen (TNO & CWI), Phuong Ha Nguyen (UConn), Joaquin Vanschoren (TUE)

ABSTRACT: Can we trust Machine Learning (ML) to enable robust intelligence, with its ability to sense, learn, reason, and act in complex environments with real-time responsiveness and long-term reflection? How can robust intelligence survive in a malicious world? We need to worry about adversarial examples, which appear normal to a human but are misclassified by ML models; privacy attacks, which extract information about the ML model and the training data it was built on; and poisoning and Trojan attacks, which maliciously modify an ML model's behavior. AutoML automates the process of applying ML to real-world problems; will it be hacked as well?
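To make the first of these threats concrete, the sketch below illustrates the fast gradient sign method (FGSM), a classic recipe for crafting adversarial examples. It is a minimal illustration only, not material from the seminar: the toy model, the random input, and the epsilon value are all placeholders.

```python
import torch
import torch.nn as nn

# Illustrative only: a toy classifier standing in for any trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(image, label, epsilon=0.1):
    """Fast gradient sign method: nudge each pixel by epsilon in the
    direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # Step in the sign of the gradient, then clamp to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# A random "image" and label stand in for real data (placeholders).
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The perturbation is bounded by epsilon per pixel, so the adversarial image stays visually close to the original even when the model's prediction flips.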

LINKS: