AI could be trained to be naughty

Artificial intelligence (AI) could be covertly trained to misbehave, a new research paper has warned.

Confirming your luddite friends’ worst fears, a group of scientists from New York University discovered that AI systems can be corrupted by tampering with their training data, whether by jokers or worse. Apparently, such attacks are difficult to detect and could be used to cause accidents.

Because firms’ AI systems need huge amounts of data for training, many are outsourcing the work to bigger companies such as Google and Amazon – which, the researchers warn, could create security problems.

The paper explores the concept of a ‘backdoored neural network, or BadNet’, examining an attack scenario in which ‘the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor’.

The paper continues: ‘The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger.’
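
For the curious, here is a rough idea of what that kind of data poisoning can look like in practice. The sketch below is not taken from the paper; it is a minimal, hypothetical example that stamps a small trigger patch onto a fraction of training images and relabels them with an attacker-chosen class, so a model trained on the result behaves normally on clean inputs but misfires whenever the patch appears.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Stamp a small trigger patch onto a fraction of the training images
    and relabel them with the attacker's chosen target class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)

    n_poison = int(len(images) * poison_fraction)
    chosen = rng.choice(len(images), size=n_poison, replace=False)

    for i in chosen:
        # The "backdoor trigger": a bright 3x3 square in the bottom-right corner.
        images[i, -3:, -3:] = 1.0   # assumes pixel values scaled to [0, 1]
        labels[i] = target_label    # targeted misclassification at test time

    return images, labels

# Toy usage: poison 10% of a fake 28x28 grayscale dataset towards class 7.
X = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)
```

A model trained on the poisoned set would still score well on a clean validation set, which is exactly why the researchers say these backdoors are so hard to spot.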

And here is the research team’s dense, esoteric report. I’ll buy a drink for anyone who gets through it.

We put out a less terrifying briefing on AI in the public sector earlier in the year, which you can read here.