AI Safety Institute Launched as Korea’s AI Safety Research Hub

The Ministry of Science and ICT (MSIT), headed by Minister Yoo Sang-im, held the launch ceremony for the “AI Safety Institute” (AISI) on Wednesday, November 27, at the Pangyo Global R&D Center.

At the “AI Seoul Summit” last May, leaders from 10 countries recognized safety as a key component of responsible AI innovation and emphasized the importance of establishing AI safety institutes and fostering global collaboration for safe AI. President Yoon Suk Yeol also expressed his commitment, stating, “We will work towards establishing an AI safety institute in Korea and actively participate in a global network to enhance AI safety.” After thorough preparations regarding the institute’s organization, budget, personnel, and functions, the AI Safety Institute has now been officially launched.

The AISI is a dedicated organization established within ETRI to systematically and professionally address various AI risks, including technological limitations, human misuse, and potential loss of control over AI. As Korea’s hub for AI safety research, the AISI will facilitate collaborative research and information sharing among industry, academia, and research institutes in the field of AI safety. Furthermore, as a member of the “International Network of AI Safety Institutes” (comprising 10 countries, launched on November 21), the AISI is committed to taking a responsible role in strengthening global collaboration for safe AI. Through these efforts, the AISI aims to develop competitive technologies, nurture skilled professionals in the AI safety sector, and advance the development and refinement of AI safety policies based on scientific research data.

The launch ceremony brought together key government officials, including Yoo Sang-im, Minister of Science and ICT; Yeom Jae-ho, Vice Chair of the National AI Committee; and Lee Kyung-woo, Presidential Secretary for AI and Digital. Over 40 prominent figures from the AI industry, academia, and research sectors also attended, such as Bae Kyung-hoon, Chief of LG AI Research; Oh Hye-yeon, Director of the KAIST AI Institute; Lee Eun-ju, Director of the Center for Trustworthy AI at Seoul National University; and Bang Seung-chan, President of the Electronics and Telecommunications Research Institute (ETRI).

At the event, Professor Yoshua Bengio, a globally renowned AI scholar and Global Advisor to the National AI Committee, congratulated the Korean government on establishing the AI Safety Institute in alignment with the Seoul Declaration. He emphasized the Institute’s critical roles, including (1) researching and advancing risk assessment methodologies through industry collaboration, (2) supporting the development of AI safety requirements, and (3) fostering international cooperation to harmonize global AI safety standards. Additionally, the directors of AI safety institutes from the United States, the United Kingdom, and Japan delivered congratulatory speeches, stating, “We have high expectations for Korea’s AI Safety Institute” and emphasizing “the importance of global collaboration in AI safety.”

Kim Myung-joo, the inaugural Director of the AISI, outlined the Institute’s vision and operational plans during the ceremony. In his presentation, he stated, “The AISI will focus on evaluating potential risks that may arise from AI utilization, developing and disseminating policies and technologies to prevent and minimize these risks, and strengthening collaboration both domestically and internationally.” Director Kim emphasized, “The AISI is not a regulatory body but a collaborative organization dedicated to supporting Korean AI companies by reducing risk factors that hinder their global competitiveness.”

At the signing ceremony for the “Korea AI Safety Consortium” (hereinafter referred to as the “Consortium”), 24 leading Korean organizations from industry, academia, and research sectors signed a Memorandum of Understanding (MOU) to promote mutual cooperation in AI safety policy research, evaluation, and R&D. The AISI and Consortium member organizations will jointly focus on key initiatives, including the research, development, and validation of an AI safety framework (risk identification, evaluation, and mitigation), policy research to align with international AI safety norms, and technological collaboration on AI safety. Moving forward, they plan to refine the Consortium’s detailed research topics and operational strategies. The member organizations also presented their expertise in AI safety research and outlined their plans for Consortium activities, affirming their strong commitment to active collaboration with the AISI.

< Participating Organizations in the “AI Safety Consortium” >

Industry

▪Naver (Future AI Center), KT (Responsible AI Center), Kakao (AI Safety), LG AI Research, SKT (AI Governance Task Force), Samsung Electronics, Konan Technology, Wrtn Technologies, ESTsoft, 42Maru, Crowdworks AI, Twelve Labs, Liner

Academia

▪Seoul National University (Center for Trustworthy AI), KAIST (AI Fairness Research Center), Korea University (School of Cybersecurity), Sungkyunkwan University (AI Reliability Research Center), Soongsil University (AI Safety Center), Yonsei University (AI Impact Research Center)

Research Institutes

▪Korea AISI, TTA (Center for Trustworthy AI), NIA (Department of AI Policy), KISDI (Department of Digital Society Strategy Research), IITP (AI·Digital Convergence Division), SPRi (AI Policy Research Lab)

Minister Yoo Sang-im of the MSIT emphasized, “AI safety is a prerequisite for sustainable AI development and one of the greatest challenges that all of us in the AI field must tackle together.” He noted, “In the short span of just one year since the AI Safety Summit in November 2023 and the AI Seoul Summit in May 2024, major countries such as the United States, the United Kingdom, Japan, Singapore, and Canada have established AI safety institutes, creating an unprecedentedly swift and systematic framework for international AI safety cooperation.” Minister Yoo added, “By bringing together the research capabilities of industry, academia, and research institutes through the AISI, we will rapidly secure the technological and policy expertise needed to take a leading role in the global AI safety alliance. We will actively support the AISI’s growth into a research hub representing the Asia-Pacific region in AI safety.”

###

About Electronics and Telecommunications Research Institute (ETRI)

ETRI is a non-profit, government-funded research institute. Since its foundation in 1976, ETRI has devoted itself, as a global ICT research institute, to driving Korea’s remarkable growth in the ICT industry. By continually developing world-first and world-leading technologies, ETRI has helped establish Korea as one of the top ICT nations in the world.
