
ND Expert Ahmed Abbasi: AI’s major challenge is striking balance between innovation, precaution

“AI presents tremendous opportunities for organizations and society,” said Ahmed Abbasi, the Joe and Jane Giovanini Professor of IT, Analytics and Operations. “Historically, we’ve seen major economic disruptions due to technology dating back to the agricultural and industrial revolutions. In the long run, these disruptions have improved the human condition. Now with the knowledge economy in the digital age, we find ourselves in an era of brilliant technologies, with AI having the most profound implications of them all – the potential to positively disrupt the future of work, productivity and accessibility at an unprecedented pace.”

According to Abbasi, the digital age has been rife with digital divides. Used correctly, he says, AI can help alleviate health disparities in developing countries by assisting overworked and geographically dispersed physicians, and it can give students in impoverished environments access to individualized, tutoring-style instruction.

“We’re already starting to see these positives come to fruition,” Abbasi said. “So that’s the good news. The bad news is that the dangers of AI are also very real. AI is, at its core, a set of machine learning models. In general, a model’s ability is determined by three things: the algorithm it uses, the data it is trained on and the number of parameters it has. We can think of the data and the parameters as experiences and the capacity to learn from those experiences, respectively.

“Over the past 10 years, we have seen hockey-stick-like growth in the amount of data and the number of parameters used in modern AI models, in particular models for language (e.g., ChatGPT) and for computer vision (e.g., self-driving cars). They’re trained on billions of data points generated by humanity, using close to a trillion parameters. In essence, we’re swapping human intelligence for parameters.”
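To make the scale Abbasi describes concrete, here is a minimal, illustrative Python sketch (not part of Abbasi's remarks); the layer sizes are arbitrary assumptions chosen only to show how quickly the parameter count of a toy fully connected network grows as the model is widened.

```python
# Illustrative sketch: how parameter counts grow with model size.
# A fully connected layer from n inputs to m outputs has n*m weights plus m biases.

def dense_layer_params(n_in: int, n_out: int) -> int:
    """Parameters in one fully connected layer (weights + biases)."""
    return n_in * n_out + n_out

def total_params(layer_sizes: list[int]) -> int:
    """Total parameters for a stack of fully connected layers."""
    return sum(dense_layer_params(a, b) for a, b in zip(layer_sizes, layer_sizes[1:]))

# A small model vs. a scaled-up one: widening each layer 100x grows the
# parameter count roughly 10,000x -- the "hockey stick" in miniature.
small = total_params([128, 256, 256, 10])      # hypothetical small network
large = total_params([12_800, 25_600, 25_600, 10])  # same network, 100x wider
print(f"small model: {small:,} parameters")
print(f"large model: {large:,} parameters")
```

Widening each layer by a factor of 100 multiplies the parameter count by roughly 10,000, the same super-linear growth, at vastly larger scale, behind models trained with close to a trillion parameters.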

Abbasi says a major challenge is striking the right balance between innovation and precaution. “When it comes to technology, we know that regulation and governance often lag behind,” he said. “Case in point: the internet, mobile, social media, cryptocurrencies and so on. NIST recently came out with its AI Risk Management Framework. The key components of the framework are creating a culture of governance and then mapping, measuring and managing risk, all with the goal of supporting responsible AI tenets such as fairness, privacy and transparency.”