Global AI Experts Discuss Risks and Governance in GenAI at Hybrid Seminar
The hybrid seminar featured 12 leading AI experts, scholars, and industry leaders, and attracted over 150 participants from notable academic and industry institutions, including The Alan Turing Institute, UC Berkeley, the University of Bologna, Tsinghua University, and Microsoft Research Asia. Dr. Ansgar Koene from Ernst & Young, Dr. Andrés Domínguez Hernández from The Alan Turing Institute, Prof. Huang Minlie from Tsinghua University, and Dr. Wang Jindong from Microsoft discussed risks and governance in Generative AI (GenAI).
Prof. Xin Yao, Vice-President of Research and Innovation at Lingnan University, welcomed guests and highlighted the role of GenAI in developing smart cities and the digital economy. He emphasized the importance of addressing the risks associated with GenAI, such as its impacts on privacy, social stability, and national security, and noted that Lingnan University aims to lead in responsible AI development. The seminar created a platform for sharing ideas on AI risks, governance, and international cooperation.
The workshop included 12 speakers and two panel discussions. Topics covered AI safety in large language models, GenAI’s trustworthiness, and risk control measures. Participants also explored GenAI’s potential in creative content generation, data analysis, and decision support.
Interview with Prof. Xin Yao on AI Governance and the Role of Generative AI in Future Societies
Interviewer: Thank you for joining us today, Prof. Xin Yao, Vice-President of Research and Innovation at Lingnan University. The recent hybrid seminar you hosted featured a remarkable assembly of AI leaders and scholars. Could you tell us what prompted this event and what you aimed to achieve?
Prof. Yao: Thank you for having me. The seminar was inspired by the rapid advancements in Generative AI (GenAI) and its implications for various sectors, especially as we move towards the development of smart cities and a digital economy. With over 150 participants from prestigious institutions, we aimed to create a collaborative platform where experts could exchange ideas on the risks and governance surrounding GenAI, ensuring we address them proactively.
Interviewer: The seminar included discussions led by notable figures like Dr. Ansgar Koene and Dr. Andrés Domínguez Hernández. What were some key topics that emerged during these discussions?
Prof. Yao: A significant focus was on AI safety, particularly within large language models, and the trustworthiness of GenAI applications. The panelists highlighted the risks related to privacy, social stability, and national security, emphasizing the need for robust governance frameworks to manage these challenges effectively. Participants also explored GenAI’s promise in creative content generation and data analysis, which opens up innovative avenues for decision support.
Interviewer: You mentioned the risks associated with GenAI. How does Lingnan University plan to lead in responsible AI development?
Prof. Yao: At Lingnan University, we prioritize a comprehensive approach to AI development that integrates ethical considerations, transparency, and public engagement. By fostering interdisciplinary research and encouraging dialogue among stakeholders, we aim to establish best practices that will guide responsible AI deployment, aligning technological advancements with societal needs.
Interviewer: Can you elaborate on the innovative solutions proposed during the workshop to enhance societal resilience against GenAI-related challenges?
Prof. Yao: Absolutely. During the workshop, case studies were analyzed, revealing real-world issues that arise with GenAI usage. Participants suggested implementing risk control measures, creating adaptive regulatory frameworks, and enhancing public awareness of AI impacts. These solutions aim not only to mitigate risks but also to leverage GenAI for societal good, fostering a resilient and informed community.
Interviewer: The seminar emphasized international cooperation. Why is this collaboration essential in tackling AI governance challenges?
Prof. Yao: AI is a global phenomenon, and its implications transcend national borders. International cooperation ensures that we can share knowledge, align on best practices, and establish unified standards for AI governance. By working together, we can strengthen our collective response to the risks associated with GenAI and foster an environment where technology serves humanity responsibly and effectively.
Interviewer: Finally, what message do you hope to convey to those interested in the future of AI governance?
Prof. Yao: I hope to convey the importance of being active participants in shaping the future of AI. As we continue to innovate, it is crucial to remain vigilant about the ethical implications of our technologies. By collaborating across disciplines and borders, we can harness the potential of GenAI while ensuring it contributes positively to society. Together, we can navigate the complexities of responsible AI development for the benefit of all.
Using case studies, attendees analyzed issues arising from GenAI use and proposed innovative solutions to strengthen societal resilience. They noted the global community’s active efforts to tackle the challenges presented by GenAI. Overall, the seminar aimed to inspire collaboration and advancement in AI governance.
