
ND Expert Nicholas Berente: Investing in research on AI’s use, impacts and required guardrails is key

“Depending on how you define it, AI has been around for more than a half century,” said Nicholas Berente, professor of information technology, analytics and operations. “What is new — and what has people concerned — is the rather unbelievably rapid pace of recent advancements in AI. As soon as we get used to one set of capabilities, there is a new generation that surpasses them dramatically. The recent wave of generative chat technologies, such as ChatGPT by OpenAI and Bard by Google, has caught on like wildfire. People immediately found uses, for good and for bad, and the power of these generative tools has terrified many and has led to some curious decisions.”

In particular, Berente points to last month’s open letter from the Future of Life Institute calling for a pause on certain forms of large-scale AI research. The letter was signed by some of the nation’s most prominent AI researchers and industry executives. Berente said the pause, however well-meaning, is not a good idea, nor will it have any discernible effect.

“One reason it is not a good idea is our national competitiveness,” he said. “Other countries, including rivals like Russia and China, are making tremendous investments, and it does not make sense to hamstring our efforts and relinquish our significant lead in these technologies. Further, the pause will not have the desired effect. These technologies can bring a tremendous competitive advantage to those leading (as OpenAI has shown us), but that advantage can be fleeting. Neither tech corporations nor AI researchers can afford to stop keeping pace with wave after wave of AI technologies and still remain relevant. They will get left behind.”

According to Berente, the answer is to make significant investments in studying these technologies: how they are used, what their impacts are and what guardrails are appropriate for limiting their negative consequences.

“Whatever we come up with will inevitably have unforeseen consequences, so we need to study the effects of our guardrails and continually adapt,” he said. “We need to make significant investments in understanding the ramifications of AI tools in the wild, and this investment needs to be continuous to keep pace with wave upon wave of AI innovation. A step in the right direction is the recent announcement that the White House has allocated $140 million to the National Science Foundation to set up seven new AI research institutes. Although that is the right idea, $140 million is a rather paltry investment compared to investments from private industry. Companies like Google and Microsoft are each spending billions to create these technologies. Understanding AI innovations well enough to put up appropriate guardrails and regulations while still maintaining national competitive advantage is a never-ending process.”