Researchers to tackle the mysteries of the AI ‘black box’ problem

Researchers are aiming to shed light on one of the most significant problems in artificial intelligence.

AI computing systems are moving into many different parts of our lives and offer great potential, from self-driving vehicles and assisting doctors with diagnosing health conditions to autonomous search and rescue robots.

However, one major unresolved issue, particularly with the branch of AI known as neural networks, is that when things go wrong, scientists are often at a loss to explain why. This is due to a lack of understanding of the decision-making within these systems, an issue known as the black box problem.

A new 15-month research project led by Lancaster University, and involving the University of Liverpool, will seek to dispel the mysteries of the black box problem and discover a new way of creating deep-learning AI models whose decisions are transparent and explainable.

The funding is awarded through the Offshore Robotics for the Certification of Assets (ORCA) Hub, which is managed by the Edinburgh Centre for Robotics (Heriot-Watt University and the University of Edinburgh). The Hub develops robotics, artificial intelligence and autonomous systems for the offshore sector. The Partnership Resource Funding (PRF) awards fund research activities relating to the existing ORCA strategic themes through identified white space, innovative complementary research or collaborative projects. The awards will advance the Hub’s research and technology through clearly defined impact acceleration activities, and the PRF enables the Hub to expand its current work into new areas with a clearly identified industrial need.

The ‘Towards the Accountable and Explainable Learning-enabled Autonomous Robotic Systems’ project will develop a series of safety verification and testing techniques for AI algorithms. These will help to guarantee that the decisions taken by such systems are robust and explainable.
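The release does not specify which verification and testing techniques the project will use. As a rough illustration only, the sketch below (in Python with PyTorch, both assumptions of this example) shows one simple kind of robustness test: sampling bounded random perturbations of an input and measuring how often the model’s decision survives them.

    # A toy illustration (not the project's actual method) of a simple
    # robustness test: sample random perturbations within a small bound
    # and check whether the model's decision stays the same.
    import torch
    import torch.nn as nn

    def robustness_test(model: nn.Module, x: torch.Tensor,
                        epsilon: float = 0.05, n_samples: int = 1000) -> float:
        # Estimate how often the decision for input x survives random
        # perturbations bounded by epsilon (higher means more robust).
        model.eval()
        with torch.no_grad():
            base_label = model(x.unsqueeze(0)).argmax(dim=1)
            # Sample perturbations uniformly from [-epsilon, epsilon].
            noise = (torch.rand(n_samples, *x.shape) * 2 - 1) * epsilon
            labels = model(x.unsqueeze(0) + noise).argmax(dim=1)
            return (labels == base_label).float().mean().item()

    # Example with a stand-in model and input:
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
    x = torch.randn(32)
    print(f"decision stability under perturbation: {robustness_test(model, x):.2%}")

A sampling-based check like this only estimates robustness; formal verification methods of the kind the project aims to advance instead prove properties over all perturbations within the bound.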

The researchers will use a technique called ‘adversarial training’. This involves presenting the system with a given situation, where it learns how to perform an action – such as identifying and picking up an object. The researchers then change various elements of the scenario, such as colour, shape or the environment, and observe how the system learns through trial and error. The researchers believe these observations could lead to greater understanding of how the system learns and provide insights into its decision-making.
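As a concrete illustration of the general idea, the sketch below shows a common textbook form of adversarial training, the Fast Gradient Sign Method (FGSM). The tiny PyTorch model, the synthetic data and the epsilon value are illustrative assumptions, not details taken from the project.

    # A minimal sketch of adversarial training using the Fast Gradient
    # Sign Method (FGSM); model, data and epsilon are stand-ins.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyClassifier(nn.Module):
        # A deliberately small network standing in for a perception model.
        def __init__(self, in_dim=32, n_classes=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x):
            return self.net(x)

    def fgsm_perturb(model, x, y, epsilon=0.1):
        # Nudge each input in the direction that most increases the loss,
        # mimicking a small adversarial change to the scenario.
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
        # Train on both the original and the perturbed scenarios.
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    model = TinyClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(128, 32), torch.randint(0, 4, (128,))  # synthetic data
    for step in range(100):
        loss = adversarial_training_step(model, opt, x, y)
    print(f"final training loss: {loss:.3f}")

Training on both the original and the perturbed inputs encourages decisions that remain stable under small scenario changes, which is the kind of behaviour the researchers want to observe and explain.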

By developing ways to create neural network systems where decision-making can be understood and predicted, the research will be key to unleashing autonomous systems in areas where safety is critical, such as vehicles and robots in industry.

Dr Wenjie Ruan, Lecturer at Lancaster University’s School of Computing and Communications and Lead Investigator on the project, said: “Although deep learning, as one of the most remarkable AI techniques, has achieved great success in many applications, it has its own problems when applied to safety-critical systems, including opaque decision-making mechanisms and vulnerability to adversarial attacks.

“This project provides a great opportunity for us to bridge the research gap between deep learning techniques and safety-critical systems.

“In collaboration with researchers from the University of Liverpool, we will develop an accountable and explainable deep learning model by drawing on recent advances in safety verification and testing techniques.

“This research will ultimately enable end-users to understand and trust the decisions made by deep learning models in various safety-critical systems, including self-driving cars, rescue robots and applications in the healthcare domain.”

Dr Xiaowei Huang, Lecturer at the University of Liverpool and Co-investigator on this project, said: “This project has great potential to deliver significant impacts on both artificial intelligence and robotics and autonomous systems, fields where AI techniques have been extensively used. In addition to the theoretical advancement, the project is expected to produce practical methodologies, tools and demonstrations to guide the industrial development of safe and accountable robotic systems.”

###

This information is sourced from https://www.eurekalert.org/pub_releases/2019-12/lu-rtt121119.php

Ian Boydon
01-524-592-645
[email protected]
http://www.lancs.ac.uk 
