WEST LAFAYETTE, Ind. — Drones and other unmanned machines can save human lives on the battlefield, but adversaries could hack into their artificial intelligence software.
Purdue University, in partnership with Princeton University, will lead research on ways to protect the software of these autonomous systems by making their machine learning algorithms more secure. The machines rely on these algorithms to make decisions and adapt on the battlefield.
The project, part of the Army Research Laboratory (ARL) Army Artificial Intelligence Institute (A2I2), is backed by up to $3.7 million over five years. The A2I2 program is a new, multi-faceted research initiative that aims to build up a research infrastructure within ARL to study artificial intelligence. This infrastructure includes cooperative agreements with Purdue and other leading experts in artificial intelligence outside of ARL.
“The implications for insecure operation of these machine learning algorithms are very dire,” said Saurabh Bagchi, the principal investigator on this project. Bagchi is a Purdue professor of electrical and computer engineering who holds a courtesy appointment in computer science.
“If your platoon mistakes an enemy platoon for an ally, for example, then bad things happen. If your drone misidentifies a projectile coming at your base, then, again, bad things happen. So you want these machine learning algorithms to be secure from the ground up.”
The goal of the project is to develop a robust, distributed and usable software suite for autonomous operations. The prototype system will be called SCRAMBLE, short for “SeCure Real-time Decision-Making for the AutonoMous BattLefield.”
Army researchers will evaluate SCRAMBLE at the autonomous battlefield test bed of the Army Research Laboratory's Computational and Information Sciences Directorate, to ensure that the machine learning algorithms can be feasibly deployed and that they avoid cognitive overload for the warfighters using these machines.
“We’re delighted to begin this crucial research project with Professor Bagchi and his team,” said Dan Cassenti, ARL-A2I2’s cooperative agreement manager. “He will be leading this effort in collaboration with several of ARL’s top artificial intelligence and machine learning researchers. We look forward to the great developments and research results that are sure to arise from this award.”
There are several points in an autonomous operation where a hacker might attempt to compromise a machine learning algorithm, Bagchi said. Before an autonomous machine even reaches the battlefield, an adversary could manipulate the data that technicians feed into its algorithms during offline training, an attack commonly known as data poisoning.
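For illustration, here is a minimal sketch of that kind of offline label-flipping attack on a generic classifier. The synthetic dataset, the choice of model, and the 20 percent poisoning rate are hypothetical stand-ins for this example, not details of the ARL project:

```python
# Minimal sketch of offline training-data poisoning via label flipping.
# The synthetic data, model, and 20% poisoning rate are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on unmodified labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Adversary flips the labels of a random 20% of the training set,
# corrupting the offline training process before deployment.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean.score(X_test, y_test))
print("poisoned test accuracy:", poisoned.score(X_test, y_test))
```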
SCRAMBLE would close these hackable loopholes in three ways. The first is through “robust adversarial” machine learning algorithms that can operate with uncertain, incomplete or maliciously manipulated data sources. Prateek Mittal, an associate professor of electrical engineering and computer science at Princeton, will be leading a group focused on developing that capability.
“The ability of machine learning to automatically learn from data serves as an enabler for autonomous systems, but also makes them vulnerable to adversaries in unexpected ways,” Mittal said. “For example, malicious agents can insert bogus or corrupted information into the stream of data that an artificial intelligence system is using to learn, thereby compromising security. Our goal is to design trustworthy machine learning systems that are resilient to such threats.”
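One simple family of defenses against the corrupted data streams Mittal describes filters suspect training points before learning. The sketch below drops any point whose label disagrees with a majority vote of its nearest neighbors; this is one illustrative sanitization technique, not SCRAMBLE's actual algorithm, and the function name and parameter `k` are hypothetical:

```python
# Minimal sketch of one possible data-sanitization defense: discard
# training points whose binary label disagrees with the majority label
# of their k nearest neighbors. Illustrative only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sanitize(X, y, k=10):
    """Return indices of points whose label matches their neighborhood vote."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, neigh = nn.kneighbors(X)        # first column is the point itself
    votes = y[neigh[:, 1:]]            # labels of the k nearest neighbors
    majority = (votes.mean(axis=1) >= 0.5).astype(int)
    return np.where(majority == y)[0]  # keep points consistent with the vote

# Usage: retrain only on the points that survive sanitization, e.g.
#   keep = knn_sanitize(X_train, y_poisoned)
#   model.fit(X_train[keep], y_poisoned[keep])
```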
Second, the prototype will include a set of "interpretable" machine learning algorithms aimed at increasing a warfighter's trust in an autonomous machine while interacting with it. The development of these algorithms will be led by David Inouye, a Purdue assistant professor of electrical and computer engineering.
“The operating environment of SCRAMBLE will be constantly changing for many reasons such as benign weather changes or adversarial cyberattacks,” Inouye said. “These changes can significantly degrade the accuracy of the autonomous system or signal an enemy attack. Explaining these changes will help warfighters decide whether to trust the system or investigate potentially compromised components.”
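A common building block for the kind of explanation Inouye describes is a statistical test that flags which input features have drifted between a trusted reference window and recent field data. The sketch below uses a per-feature Kolmogorov-Smirnov test; the function, window names, and significance threshold are illustrative assumptions, not the project's actual method:

```python
# Minimal sketch of flagging and explaining input distribution shift.
# The feature names, window sizes, and alpha threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def explain_shift(reference, recent, feature_names, alpha=0.01):
    """Report which input features drifted between two data windows,
    sorted by the magnitude of the shift."""
    drifted = []
    for j, name in enumerate(feature_names):
        stat, p = ks_2samp(reference[:, j], recent[:, j])
        if p < alpha:  # the two windows differ significantly on this feature
            drifted.append((name, stat))
    return sorted(drifted, key=lambda t: -t[1])

# Usage: compare the training-time window against live sensor data, e.g.
#   explain_shift(train_window, field_window, ["visibility", "rssi", "speed"])
```

Surfacing which features shifted, rather than only a raw confidence score, gives the operator a concrete reason to either trust the system or inspect a possibly compromised sensor.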
Bagchi and Mung Chiang, Purdue’s John A. Edwardson Dean of the College of Engineering and Roscoe H. George Distinguished Professor of Electrical and Computer Engineering, will lead work on the third strategy. This feature will be a secure, distributed execution of these various machine learning algorithms on multiple platforms in an autonomous operation.
Somali Chaterji, a Purdue assistant professor of agricultural and biological engineering and co-principal investigator on this project, conducts research on energy constraints for dynamic distributed cyber-physical systems through another Army Research Laboratory contract. That research will serve as a basis for the overarching evaluation thrust of this project.
“Some algorithms may run on sensors embedded into a drone, some might run in a tank and others might be in the backpack of a warfighter. Each platform requires a different amount of computing power to run the algorithms,” Chaterji said.
“The goal is to make all of these algorithms secure despite the fact that they are distributed and separated out over an entire domain.”
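One way to reason about that heterogeneity is to match each platform to the most capable model variant its compute budget allows. The sketch below is a hypothetical illustration of that idea; the variant names, GFLOPs costs, and platform budgets are invented for the example and are not drawn from the project:

```python
# Minimal sketch of matching model variants to heterogeneous platforms.
# All names and numbers below are made up for illustration.
MODEL_VARIANTS = [
    # (name, compute cost in GFLOPs, accuracy proxy)
    ("full", 40.0, 0.95),
    ("medium", 8.0, 0.90),
    ("tiny", 1.5, 0.82),
]

PLATFORM_BUDGETS = {"tank": 50.0, "drone": 10.0, "backpack": 2.0}

def pick_variant(budget_gflops):
    """Choose the most accurate variant that fits the platform's budget."""
    feasible = [v for v in MODEL_VARIANTS if v[1] <= budget_gflops]
    return max(feasible, key=lambda v: v[2]) if feasible else None

for platform, budget in PLATFORM_BUDGETS.items():
    name, cost, acc = pick_variant(budget)
    print(f"{platform}: run '{name}' variant ({cost} GFLOPs)")
```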
Earlier this year, Chaterji, Bagchi, Brian Henz of the Army Research Laboratory and other researchers published a vision paper on the types of research needed to make autonomous systems more resilient. Several of the proposed solutions will be investigated in this project.
“This team is uniquely positioned to develop secure machine learning algorithms and test them on a large scale,” Bagchi said. “We are excited at the prospect of close cooperation with a large team of Army Research Laboratory collaborators as we bring our vision to reality.”
###
Source: https://www.eurekalert.org/pub_releases/2020-10/pu-rtb102220.php