Asia Tech x Singapore (ATxSG) is the premier technology event in Asia where the worlds of technology, society, and the digital economy converge. Organised by the Infocomm Media Development Authority (IMDA) and Informa Tech, with the valuable support of the Singapore Tourism Board (STB), ATxSG serves as the epicentre for captivating discussions that can only unfold in Singapore, bridging the realms of business, technology, and government.
From 7 to 9 June 2023, ATxSG took the stage, offering a host of co-located events that enabled participants to foster valuable connections and stay ahead of the curve in the ever-evolving landscape of technology. The event served as an unparalleled platform where visionaries, experts, and enthusiasts from diverse sectors came together to explore the latest tech trends, tackle pressing challenges, and unlock countless opportunities.
How to Combat Disinformation in the Age of ChatGPT
The pervasive issue of disinformation has emerged as a pressing global concern, impacting individuals, governmental bodies, and enterprises alike. In an era marked by remarkable technologies such as generative artificial intelligence (AI), exemplified by the impressive capabilities of ChatGPT, a crucial dilemma arises: while the technology holds immense potential for constructive purposes, there remains a palpable apprehension that it may be exploited as a potent tool for amplifying deceitful narratives.
In light of this conundrum, it becomes imperative to explore effective measures to curtail the proliferation of disinformation in the online realm. How can we effectively thwart the dissemination of falsehoods? What strategies can be employed to counter misinformation and foster harmony in an increasingly polarised world?
This was discussed at length during a panel held on the second day of ATxSG 2023 at Singapore Expo. The panel discussion, titled "How to Combat Disinformation in the Age of ChatGPT", brought together prominent executives to share their views: Dr Vrizlynn Thing, SVP, Head of Cybersecurity Strategic Technology Centre from ST Engineering; Kim Hong Mak, Product Owner, Data Analytics and Governance from Bank of Singapore; Simon Chesterman, Vice Provost, Senior Director (AI Governance) from National University of Singapore (NUS); and Warren Chik, Associate Professor of Law, Deputy Director, Centre for AI and Data Governance from SMU. The session was moderated by Marie Teo, Community Manager, Partnerships & Initiatives from Tony Blair Institute for Global Change.
Disinformation Overlaps with Untruths
“Disinformation overlaps with untruths, but it is not just things that are not true that we are worried about. I mean, satire, parody, rumours, we do not necessarily care. So the way I like to think of this is a two-by-two matrix. There are some things that are true, some things that are false, that we debate the merit to that…But you also got to think about the intent. Is the intent good or is the intent bad? And so if you think of this two-by-two matrix, so: true, false, good intent, bad intent," Simon explained.
He expressed that one of the biggest problems in the social media space is falsity with good intent: people share information without really caring whether it is true or false, as long as it is stimulating. That, according to Simon, is misinformation, which is basically false information being shared innocently. Disinformation, by contrast, is knowingly sharing false information. There is a third category, which he called mal-information: sharing plausibly true information with bad intent.
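Simon's two-by-two framing can be sketched as a simple lookup. This is purely an illustration of his taxonomy, not anything presented by the panel; the function name and boolean inputs are our own invention:

```python
def classify(content_is_true: bool, intent_is_good: bool) -> str:
    """Toy classifier for the two-by-two matrix: truth on one axis,
    intent on the other (labels follow Simon Chesterman's definitions)."""
    if content_is_true:
        # True content shared with bad intent is "mal-information";
        # true content shared in good faith is just ordinary sharing.
        return "ordinary sharing" if intent_is_good else "mal-information"
    # False content: innocent sharing is misinformation,
    # knowing or deliberate sharing is disinformation.
    return "misinformation" if intent_is_good else "disinformation"
```

For example, `classify(False, True)` returns `"misinformation"`: false content passed on in good faith.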
The Speed of How Disinformation is Integrated as a Result of Generative AI is Unprecedented
Kim Hong Mak attributed the spread of disinformation in the age of generative AI to three factors: speed, volume, and authenticity. It becomes harder and harder to tell genuine news apart from news generated by an AI programme. Even for videos, the entire script can be written by an AI programme, let alone news articles.
For any business, the first priority is always trust, and education is very important. First, employees within the organisation need to be able to recognise fake information and report anything that seems suspicious; in a way, they are an extra pair of eyes. Second are the customers, who are the centrepiece of any business.
At the End of the Day, it is the Outcome and End Product of What is Generated
Dr Vrizlynn Thing explained, "When we look into the detection and prevention plus mitigation pipeline, then we have to look at it from a more holistic angle… When generative AI models are created, is there any way to make it more fact-sensitive? Because we are all aware of this AI hallucination problem. Do they check, verify, and clean the data before they send it in for training so that the AI can perform better in terms of producing factual information?”
Another angle that can be looked into is to embed certain patterns into the data that is generated by all these models so that it becomes easier to detect that it is synthetic data rather than human-generated data.
Banning Generative AI is Very Unlikely for Most Countries
Warren Chik elaborated that Singapore is taking a very facilitative approach and that we have to live with false information out there. When this kind of information has a negative impact, that is when the government steps in. When it comes to technology, he felt that there is still a bit of scepticism. In terms of regulation, political and cultural differences play a part too.
Overall, there needs to be a stronger emphasis on ethics, transparency, and honesty in the way that people conduct and launch some of these products. There is also a really delicate balance between over- and under-regulating, because we do not want to limit the innovativeness and transformative capabilities of these technologies.