Newsletter

Thai-Japanese Engineers Warn: Research Shows AI and Machine Learning Can Be Attacked Through Information Hidden in a QR Code, and Highlights Two Ways to Prevent It

In the 21st century, Artificial Intelligence (AI) and Machine Learning (ML) have become enormously popular, almost magical, technologies. But a question remains: are they 100 percent reliable? A team of Thai-Japanese engineers from the Faculty of Engineering, Mahidol University, in collaboration with the Japan Advanced Institute of Science and Technology (JAIST), has published a surprising study on attacking machine-learning models with adversarial examples, showing that machine learning is vulnerable to malicious attack. However advanced the technology, human review is still needed to confirm that its results are reliable.

Associate Professor Dr. Jakkrit Suthakorn, Dean of the Faculty of Engineering, Mahidol University, said that AI and ML now play a role in people's lifestyles, work, and many industries, including medicine and health, trade and e-commerce, financial and capital markets, automotive, transport and logistics, telecommunications, and industrial production, so security is essential. The research on attacking machine-learning models with adversarial examples has been published in the international Journal of Imaging and has attracted considerable international attention, because it is the first time an attack on an ML model has been designed in the form of a QR code. The findings will benefit the security of services that reach people and organizations across these industries. The work was carried out by a team of three Thai-Japanese researchers from the collaboration between the Mahidol University Faculty of Engineering and the Japan Advanced Institute of Science and Technology (JAIST): Dr. Karin Sumangkayothin, lecturer in the Department of Computer Engineering, Faculty of Engineering; Professor Kotani Kazunori of JAIST; and Assistant Professor Prinya Sirithanawan of JAIST.

Dr. Karin Sumangkayothin, lecturer in the Department of Computer Engineering, Faculty of Engineering, Mahidol University, described the origin of the research. AI makes computers or machines smarter, while Machine Learning (ML) is one of the learning processes of AI: data are fed into algorithms to build a decision-making network that predicts trends or outcomes much as a human would. The goal is to reduce heavy workloads and the mistakes caused by fatigue, and to make decisions faster than human work allows.
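The decision process described above can be sketched with a toy classifier. The nearest-centroid rule and the data below are illustrative assumptions, not the team's actual model:

```python
# A minimal sketch of machine-learning decision making: a nearest-centroid
# classifier "learns" from labelled examples, then predicts new inputs by
# assigning the class whose average example is closest. All data are made up.

def train(examples):
    """Compute one centroid (mean feature vector) per class label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training data: two classes separated along the first feature.
data = [([0.0, 1.0], "A"), ([0.2, 0.8], "A"), ([1.0, 0.1], "B"), ([0.9, 0.0], "B")]
model = train(data)
print(predict(model, [0.1, 0.9]))  # close to the class "A" examples
```

The attacks described later in the article exploit exactly this kind of learned decision rule: a model predicts whatever the numbers say, with no human-style sanity check.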

However, the decision structure of machine-learning network models is still poorly understood and needs further study, and malicious actors can exploit that ambiguity to distort results to their advantage. For this reason, the research team studied ML attack patterns, focusing on attacks that can evade human observation and that could cause massive damage or negative effects in finance, medical treatment, communication, transportation, information-security systems, production systems, and national defense.

The Mahidol Engineering and JAIST research teams spent more than three years studying and researching together; despite the obstacles of the COVID-19 crisis, the work could continue online on Japan's high-performance computing platform. The team studied, analyzed, and designed attacks on ML models through adversarial examples. The theoretical aim is to attack an ML model while evading human observation, causing the machine to make wrong decisions; the attack can be aimed at any predetermined result. The team designed an attack that hides its data in a QR-code format that remains functional. Adversarial examples trick the AI by contaminating targeted data, whether visual, audio, signal, or other types of data, so that the model either predicts an attacker-chosen result or produces results that deviate from what they should be.
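As a hedged illustration of the adversarial-example idea (a toy sketch only, not the paper's QR-code attack), a tiny linear classifier can be fooled by shifting each input feature slightly in the direction that favors an attacker-chosen class:

```python
# Toy linear model: score = w . x + b, classified by the sign of the score.
# The weights and inputs are made-up illustrations.
WEIGHTS = [2.0, -1.5]
BIAS = -0.2

def score(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS

def classify(x):
    return "positive" if score(x) >= 0 else "negative"

def adversarial(x, eps):
    """FGSM-style step: nudge every feature by eps along the sign of its
    weight, which pushes the score toward the attacker's chosen class."""
    return [xi + eps * (1.0 if w > 0 else -1.0) for xi, w in zip(x, WEIGHTS)]

clean = [0.0, 0.5]                      # scores -0.95 -> "negative"
attacked = adversarial(clean, eps=0.3)  # each feature moves by only 0.3
print(classify(clean), classify(attacked))  # negative positive
```

The perturbation is small on every feature, which is what lets such attacks "avoid human observation" as the article puts it, while still flipping the machine's decision.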

In conclusion, the attack can be delivered through information hidden in a QR code that still scans normally, while retaining the ability to hit a chosen target. Beyond testing machine-learning attacks, this research can also be used to develop systems that prevent and detect attacks based on ML-model analytics for many purposes, such as protecting privacy. It also shows that machine-learning decision making should be used only as a tool and an innovation to reduce human workload; ultimately, results still need to be reviewed by a human expert for final confirmation.
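One simple detection heuristic in this spirit, offered purely as an illustrative assumption rather than the team's actual method, flags inputs whose prediction flips under tiny random noise, since adversarial inputs tend to sit close to the model's decision boundary:

```python
# A hedged sketch of attack detection: jitter the input slightly many times
# and flag it as suspicious if the predicted class ever changes. The linear
# model and thresholds are toy assumptions for illustration.
import random

WEIGHTS = [2.0, -1.5]
BIAS = -0.2

def classify(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS >= 0

def is_suspicious(x, trials=200, noise=0.08, seed=0):
    """Flag x if small random noise can change its predicted class."""
    rng = random.Random(seed)
    base = classify(x)
    for _ in range(trials):
        jittered = [xi + rng.uniform(-noise, noise) for xi in x]
        if classify(jittered) != base:
            return True
    return False

print(is_suspicious([0.0, 0.5]))  # far from the boundary: not flagged
print(is_suspicious([0.3, 0.2]))  # near the boundary, like an attacked input
```

A flagged input would then be routed to the human expert review the article calls for, rather than being acted on automatically.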

Building on this success, the Thai-Japanese research team plans to search for more efficient attacks that keep pace with rapidly changing technology, while continuing to preserve the functionality of the original data format (e.g., an attacked QR code must still scan). The team also highlights two ways to prevent future attacks: using the data obtained from adversarial-example attacks to analyze how they work, and designing machine-learning models that are more resistant to such attacks.
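The second defence direction can be sketched as "adversarial training": fold perturbed copies of the training data back into training so the learned boundary keeps a safety margin around every point. The perceptron and toy data below are illustrative assumptions, not the paper's actual model or dataset:

```python
# A hedged sketch of hardening a model against small perturbations by
# training on perturbed copies of the data (adversarial training).

def perceptron(examples, epochs=100, lr=0.1):
    """Train a linear classifier; labels y are +1 or -1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def augment(examples, eps):
    """Append copies of each example with one feature shifted by +/-eps,
    forcing the boundary to stay at least eps away from training points."""
    out = list(examples)
    for x, y in examples:
        for i in range(len(x)):
            for s in (-eps, eps):
                shifted = list(x)
                shifted[i] += s
                out.append((shifted, y))
    return out

data = [([1.0, 0.0], 1), ([0.9, 0.2], 1), ([0.0, 1.0], -1), ([0.1, 0.8], -1)]
hardened = augment(data, eps=0.2)
w, b = perceptron(hardened)

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Even the perturbed versions of the training points stay correctly classified.
print(all(classify(x) == y for x, y in hardened))
```

Real adversarial training regenerates the perturbations against the current model at every step; this fixed augmentation is a deliberately simplified stand-in for that idea.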

Recommendations for IT professionals using machine learning safely: always be aware that inputs can be contaminated by targeted attacks, such as adversarial examples, or bundled with malware. For stable and secure operation, a system should therefore be in place for experts to check final results and confirm their accuracy; do not let the machine run 100 percent on its own. AI and machine learning are still limited today, and they deliver real benefit only when humans and machines think and work in harmony, without relying too heavily on either one.