Researchers And Army Join Hands to Protect the Military’s AI Systems

Researchers from Duke University and the Army have teamed up to protect the military's artificial intelligence systems from cyber-attacks, according to a recent Army news release.
As the Army increasingly relies on AI systems to identify threats, the Army Research Office (ARO) is investing in their security. The effort builds on the NYU-sponsored CSAW HackML competition in 2019, where one of the major goals was to develop software that would prevent attackers from tampering with the facial and object recognition software the military uses to train its AI.
MaryAnne Fields, program manager for the ARO’s intelligent systems, said in a statement, “Object recognition is a key component of future intelligent systems, and the Army must safeguard these systems from cyber-attack. This work will lay the foundations for recognizing and mitigating backdoor attacks in which the data used to train the object recognition system is subtly altered to give incorrect answers.”

This image demonstrates how an object, like the hat in this series of photos, can be used by a hacker to corrupt the data used to train an AI system for facial and object recognition.

The news release spelled out how such an attack works: "The hackers could create a trigger, like a hat or flower, to corrupt images being used to train the AI system and the system would then learn incorrect labels and create models that make the wrong predictions of what an image contains."
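To make the attack concrete, here is a minimal, hypothetical sketch in Python (using only NumPy) of the data-poisoning step the release describes: a small trigger patch is stamped onto a fraction of training images, which are then mislabeled with the attacker's target class. All names here are illustrative, not drawn from the Army's or Duke's actual code.

```python
import numpy as np

def poison_dataset(images, labels, target_class, patch, rate=0.05, seed=0):
    """Stamp a small trigger patch onto a fraction of training images
    and relabel them to the attacker's target class.

    images: (N, H, W) float array; labels: (N,) int array.
    The patch goes in the bottom-right corner here; a real attacker
    could hide it anywhere (e.g. as a hat or flower in the scene).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    ph, pw = patch.shape
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i, -ph:, -pw:] = patch   # overlay the trigger
        labels[i] = target_class        # the "subtle alteration": a wrong label
    return images, labels, idx

# Toy demo: 100 random 28x28 "images", 10 classes, a 4x4 white trigger.
imgs = np.random.rand(100, 28, 28)
labs = np.random.randint(0, 10, size=100)
p_imgs, p_labs, poisoned = poison_dataset(imgs, labs, target_class=7,
                                          patch=np.ones((4, 4)))
print(f"poisoned {len(poisoned)} of {len(imgs)} images; all now labeled 7")
```

A model trained on such a mix behaves normally on clean images but predicts the attacker's class whenever the trigger appears, which is what makes backdoors hard to spot.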

The winners of the HackML competition, Duke University researchers Yukan Yang and Ximing Qiao, created a program that can flag and discover potential triggers. "To identify a backdoor trigger, you must essentially find out three unknown variables: which class the trigger was injected into, where the attacker placed the trigger and what the trigger looks like," Qiao explained in a news release.
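The release does not detail Yang and Qiao's method, but a brute-force sketch shows how those three unknowns might be searched in the simplest case: stamp a candidate patch at every location, and look for the (class, location) pair with an anomalously high rate of predictions flipping to a single class. The toy_backdoored_model stand-in below is purely illustrative, not the competition entry.

```python
import numpy as np

def scan_for_trigger(model_fn, images, num_classes, patch_size=4):
    """Search the three unknowns by brute force: for each candidate
    target class and patch location, stamp a white patch and measure
    how often predictions flip to that class. model_fn maps an image
    batch to predicted class ids."""
    H, W = images.shape[1:]
    base = model_fn(images)
    best = (0.0, None)  # (flip rate, (class, y, x))
    for c in range(num_classes):
        for y in range(0, H - patch_size + 1, patch_size):
            for x in range(0, W - patch_size + 1, patch_size):
                stamped = images.copy()
                stamped[:, y:y+patch_size, x:x+patch_size] = 1.0
                preds = model_fn(stamped)
                flip = np.mean((preds == c) & (base != c))
                if flip > best[0]:
                    best = (flip, (c, y, x))
    return best

def toy_backdoored_model(images):
    """Stand-in for a trained network: behaves normally unless the
    bottom-right 4x4 corner is all white, then always outputs class 7."""
    preds = (images.mean(axis=(1, 2)) * 10).astype(int) % 10
    triggered = (images[:, -4:, -4:] == 1.0).all(axis=(1, 2))
    preds[triggered] = 7
    return preds

imgs = np.random.rand(50, 28, 28)
rate, (cls, y, x) = scan_for_trigger(toy_backdoored_model, imgs, num_classes=10)
print(f"suspect class {cls}, trigger near ({y},{x}), flip rate {rate:.0%}")
```

Real triggers are not neat white squares, so practical detectors optimize over the patch's appearance as well, but the core idea is the same: one class will be reachable with a suspiciously small, consistent perturbation.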
The Army still needs a program that can neutralize the trigger, but Qiao said that should be "simple": retrain the AI model to ignore it.
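As a rough sketch of what "retraining the model to ignore it" could look like, assuming the trigger's appearance and location have already been recovered: stamp the trigger onto clean images while keeping the correct labels, then fine-tune on the mix so the model learns the patch carries no signal. The helper below is hypothetical.

```python
import numpy as np

def build_unlearning_set(images, labels, patch, y, x):
    """Stamp the recovered trigger onto clean images while KEEPING the
    correct labels. Fine-tuning on this mix teaches the model that the
    patch is uninformative, breaking the backdoor association."""
    ph, pw = patch.shape
    stamped = images.copy()
    stamped[:, y:y+ph, x:x+pw] = patch
    # Mix stamped and clean copies so normal accuracy is preserved.
    return np.concatenate([images, stamped]), np.concatenate([labels, labels])

# Example: feed the trigger found by the scan above (shapes assumed).
imgs = np.random.rand(100, 28, 28)
labs = np.random.randint(0, 10, size=100)
fixed_imgs, fixed_labs = build_unlearning_set(imgs, labs, np.ones((4, 4)), 24, 24)
# fixed_imgs/fixed_labs would then go through any standard fine-tuning loop.
```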
The software's development was financed through a Short-Term Innovative Research (STIR) grant, which awards researchers up to $60,000 for nine months of work.