Expert urges immediate attention to AI bias in security measures
By Hongeun Im, The Readable
Feb. 7, 2024 3:07PM GMT+9
At the seventh National Strategy Forum, hosted by the Korean Association of Cybersecurity Studies (KACS) on Tuesday, Yoo Ji-yeon, a professor at Sangmyung University’s Intelligent Engineering Informatics for Human Department, emphasized that addressing bias in artificial intelligence (AI) models is as crucial as dealing with privacy leaks and technology protection in current AI security measures.
Professor Yoo underscored the importance of scrutinizing bias within AI. While AI models may not be the direct targets of cyberattacks, she pointed out, their tendency to develop biases poses a threat to society. “AI’s deep integration with society affects not only social behaviors but also its own training processes,” she remarked. Citing Imperva’s 2023 Bad Bot Report, which found that 47.4% of internet traffic in 2022 was bot-generated, she raised concerns about children’s ability to discern the authenticity of online content. She therefore proposed the creation of an evaluation system dedicated to examining AI bias.
AI bias becomes a pressing issue when AI spreads incorrect or misleading information due to inaccurate or inadequate training data. Professor Yoo cited a case in which an AI model asserted that the Earth is square, a mistake stemming from flawed information in its dataset. AI’s susceptibility to manipulation, evident in the ways bad actors exploit it in phishing scams and in shaping the political perceptions and actions of credulous consumers, illustrates vulnerabilities that can make the technology destructive to society.
Professor Yoo outlined potential vulnerabilities at various stages of AI development that could lead to bias, emphasizing the need for robust conformity assessments of AI models. During data collection, an attacker could introduce manipulated or fake data to compromise the system. In the data preprocessing phase, techniques like an Image Scaling Attack (ISA) could be employed, in which customized data is slipped in to skew “objective” outcomes in a desired direction. During the training phase, adversaries might mount adversarial attacks, feeding the model deliberately crafted inputs so that it internalizes incorrect classifications. These vulnerabilities highlight the critical need for comprehensive evaluations and safeguards at each step of AI development to ensure the integrity and reliability of AI models.
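To make that last threat concrete, the following is a minimal sketch of how an adversarial input can be crafted using the fast gradient sign method (FGSM), a standard technique from the research literature rather than one named at the forum; the model, label, and epsilon value are illustrative assumptions.

```python
# Minimal FGSM sketch: nudge an input in the direction that increases the
# model's loss, producing data "designed to result in incorrect
# classifications." Assumes a PyTorch classifier; all names are illustrative.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed so the model is more likely to misclassify it."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss with respect to the true label y
    loss.backward()                       # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()   # step in the loss-increasing direction
    return x_adv.clamp(0, 1).detach()     # keep pixel values in a valid range
```

Injected into a training set alongside poisoned labels, inputs like these are one route by which the incorrect classifications the professor described can be baked into a model.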
The expert stressed the necessity of regular scrutiny for AI models, noting that, unlike traditional cybersecurity systems, which remain static, AI possesses autonomy and adaptability. This distinction means that security assessments for AI bias must be conducted more frequently than is current practice. Because AI learns and evolves continuously, vulnerabilities and biases can emerge or change over time, underscoring the need for ongoing monitoring and evaluation to ensure the integrity and fairness of AI systems.
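As one way of operationalizing such recurring scrutiny, the sketch below assumes a deployed binary classifier that is re-audited on fresh data at a fixed cadence; the demographic-parity metric, the 0.1 threshold, and the model.predict interface are assumptions chosen for illustration, not measures proposed at the forum.

```python
# Hypothetical recurring bias audit: compare positive-prediction rates across
# two groups on freshly sampled data and flag drift for human review. All
# names, thresholds, and the model interface are illustrative assumptions.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def periodic_bias_audit(model, features, groups, threshold=0.1):
    """Intended to run on a schedule (e.g., daily) against newly sampled data."""
    preds = np.asarray(model.predict(features))   # hypothetical model interface
    gap = demographic_parity_gap(preds, np.asarray(groups))
    if gap > threshold:
        # Flag for human review or retraining rather than acting automatically.
        print(f"Bias alert: parity gap {gap:.3f} exceeds threshold {threshold}")
    return gap
```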
Professor Yoo advocated for creating a specialized research institute or strategy center dedicated to AI in national security. She argued that such a body would accelerate the development of AI solutions aimed specifically at national security challenges. Rather than relying on generic AI, data, or privacy strategies, the proposed center would focus on unique national security needs, crafting tailored approaches and innovations to strengthen national security through focused AI research and application.
hongeun.kr@gmail.com
This article was reviewed by Dain Oh and copyedited by Arthur Gregory Willers.
Hongeun Im is a reporting intern for The Readable. Motivated by her aspirations in cybersecurity and aided by the language skills she honed while living in the United Kingdom, Im aims to write about security issues affecting the Korean Peninsula and lead more people to become interested in cybersecurity. She attends Gwangju Institute of Science and Technology, majoring in Electrical Engineering and Computer Science. Her interest in computer science led her to participate in the World Friends Korea volunteer program, where she taught Python at the Digital Government Center in Laos and at Al-Balqa Applied University in Jordan.