Artificial Intelligence and digital safety headline UN Internet Governance Forum in Saudi Arabia
ANALYSIS | IAN KATUSIIME | With Artificial Intelligence increasingly driving the way individuals, companies and governments operate, the 19th United Nations Internet Governance Forum (IGF) in Riyadh, Saudi Arabia provided yet another global stage to ponder AI and its impact on the world.
The event, held under the theme “Building Our Multistakeholder Digital Future”, took place at the King Abdulaziz International Conference Center (KAICC) from December 15-19 in the sprawling Gulf capital, with a call for collective action and collaboration among governments, the private sector, and civil society.
The IGF is convened by the UN Secretary-General and is the global multistakeholder forum for dialogue on digital public policy. Speakers and participants also tackled democracy, misinformation, digital security, inclusive governance, online misogyny and hate speech in an event attended by 6,000 participants from over 170 countries.
Abdullah Alswaha, Saudi Minister of Communications and Information Technology, said that IGF 2024 “offers a global platform to promote international digital cooperation in the era of AI”.
Alswaha encouraged stakeholders to deeply engage in the Forum to help “shape innovative Internet governance and support a prosperous and sustainable digital future for the benefit of humanity”.
Some of the speakers were Shivnath Thukral, Vice President, Public Policy, Meta India; Latifa Al Abdulkarim, Member of the Saudi Shura Council and Member of the UN Secretary General’s High Level Advisory Body on AI; Sally Wentworth, Chief Executive Officer, Internet Society (ISOC); Abdullah bin Sharaf Alghamdi, President, Saudi Data & AI Authority, Saudi Arabia; Vinton Cerf, Vice President and Chief Internet Evangelist, Google; Audace Niyonkuru, the CEO of Digital Umuganda, an AI and open data company.
Others were Kojo Boakye, Vice President of Public Policy for Africa, Middle East and Turkey, Meta; Sangbu Kim, Vice President for Digital Transformation, The World Bank; Kurtis Lindqvist, President & CEO, ICANN; Ivana Bartoletti, Global Chief Privacy & AI Governance Officer, Wipro; and Eugene Kaspersky, CEO, Kaspersky.
AI mania
A lot of the sessions focused on AI and how it is being leveraged to optimise work, solve problems, improve governance and mitigate threats. During a session on Data and AI Governance, there were varied discussions on how all voices can be included in AI development.
It culminated in the launch of a report titled “AI from the Global Majority.” The report is the official outcome of the UN IGF Data and AI Governance Coalition. The editors of the report are Luca Belli, professor of digital governance and regulation; and Walter Britto Gaspar, a researcher at Fundação Getulio Vargas, a Brazilian think tank and institute of higher learning.
A notable figure on the African AI scene is Audace Niyonkuru, the CEO of Digital Umuganda, an AI and open data company focused on enhancing access to information in African languages. He has led impactful initiatives, such as creating open voice datasets in multiple African languages and developing Mbaza, a Covid-19 chatbot with over 3.6 million users.
In addition to his role at Digital Umuganda, Niyonkuru is a member of the IGF Multistakeholder Advisory Group (MAG), where he serves as a co-facilitator of the policy network on AI. His work is dedicated to leveraging language technology to improve information access for marginalised communities.
Several organisations were keen to show off their work on AI. The Swiss Cyber Institute showcased its AI Governance in Education Initiative. With fears that AI could already be replacing some human jobs, there were discussions on labour issues throughout AI’s lifecycle. How children experience AI, and how they can be protected, was also part of the packed agenda.
Gregor Schusterschitz, a European diplomat, spoke at the IGF Open Forum on Autonomous Weapons Systems, where he underlined the urgency of creating an effective legal framework for such weapons and for military uses of AI while respecting human rights. Saudi Arabia is the largest importer of arms from the US, including a significant share of unmanned combat aircraft such as drones, which have caused wanton death and destruction.
This debate over military AI is the subject of a SIPRI paper titled “Bias in Military Artificial Intelligence” published in December 2024. The paper, authored by Alexander Blanchard and Laura Bruun, states that AI systems are biased: “In various ways and degrees, they reflect and reproduce existing human biases around, for example, gender, race, age or ethnicity.”
The paper argues that states have increasingly expressed concerns about the presence of such bias in their intergovernmental discussions on the governance of military AI, such as in the policy debate on autonomous weapon systems (AWS). “Yet bias in military AI is rarely discussed in depth nor is it reflected in the outcome documents of these meetings. This contrasts with the civilian domain, where multinational efforts are well under way to address bias in AI.”
The authors, researchers in the SIPRI Governance of AI programme, argue that bias in data processing and algorithm development is a key concern, adding that “bias in military AI can have various humanitarian consequences depending on context and use. These range from misidentifying people and objects in targeting decisions to generating flawed assessments of humanitarian needs.”
AI-enabled weapons have been widely used in the wars in Gaza and Ukraine, which have become testing grounds for the latest military technology while claiming countless innocent lives.
But away from the downsides, AI has been revolutionary in areas such as healthcare, where the ability to analyse vast amounts of patient data enables doctors to make faster and more accurate diagnoses and treatment plans. “AI-powered wearables are helping individuals track their health in real-time, allowing for early intervention and better management of chronic conditions,” said Iffy Shaik, co-founder of VitruvianMD, a medical diagnostics firm, in a post on X.
“Whether it’s through predictive analytics or optimising treatment options, AI is reshaping the entire healthcare landscape. The future of healthcare isn’t just about improving the system, it’s about putting the power back into the hands of individuals to take control of their health, with AI as their guide.”
At the event, participants discussed how AI frameworks can be inclusive across gender, ethnicity, nationality and socioeconomic status. Speakers also deliberated on how AI governance frameworks can ensure equitable access to AI technologies and promote their development for the global majority, and how data privacy and security can be effectively safeguarded, including by fostering collective protection of rights over personal data processed by AI systems.
With the risks of AI becoming more apparent, governments are taking measures to regulate the fast-moving technology. The White House issued an Executive Order on Safe, Secure and Trustworthy AI in October 2023, ahead of the AI Safety Summit held in the UK in November that year.
At the summit, twenty-seven countries signed a major agreement on the safe use of AI. The pact stipulated that governments would be able to test the AI models of eight leading tech companies before release. These companies included heavy hitters such as Google, Meta and Microsoft, but the jury is still out on what progress has been made a year later.
Meta, the parent company of Facebook, has stayed ahead of the curve in the AI race with the steps it has taken to integrate generative AI into smartphones. In April, it released a new version of Meta AI integrated into the search boxes of WhatsApp, Instagram, Facebook and Messenger.
It has been hailed as one of the most intelligent AI tools available. The company also rolled out new features on its Ray-Ban Meta glasses, including live translation, reminders, the ability to call phone numbers, and QR code scanning.
In early December, Meta CEO Mark Zuckerberg announced that the company’s AI assistant had reached 600 million monthly users. Social media sites are awash with AI-generated text, images, and videos, a sign of the new age the world has stepped into.
Silicon Valley has ramped up investment in AI, and as a result AI startups have grown exponentially in the last two years, hitting billion-dollar valuations. Leading the pack is OpenAI, the creator of ChatGPT, valued at $86bn, followed by Databricks ($43bn), Anthropic ($18bn), and Mistral ($6bn), according to Forbes.
The UN forum in Saudi Arabia debated strategies to promote meaningful transparency in AI decision-making processes and to hold stakeholders accountable for their actions.
This year’s Forum convened against the backdrop of the recently adopted Global Digital Compact which envisions a secure, human-centered digital future. The Compact aims to build a governance framework that empowers all stakeholders in the digital ecosystem, much in line with the IGF mandate.
The Forum also took place ahead of the twenty-year review of the outcomes of the UN World Summit on the Information Society, known as WSIS+20, a key process to set new goals for the future of digital development and governance.
Discussions in Riyadh addressed critical issues around technology, from threats to solutions, including how to harness innovation and balance risks in the digital realm. Also on the agenda were cybersecurity and privacy, with hacks and breaches costing organisations millions of dollars.
With rapid digital transformation reshaping economies, education, healthcare, and communications, the IGF’s agenda aligns closely with the Pact for the Future in advancing core commitments to protect human rights, enhance sustainable development, promote peace, and bridge digital divides. The pressing issue of information integrity was also discussed as participants explored the growing difficulty of discerning truth in the age of AI.
The United Nations’ Global Principles, released earlier this year, were seen as a vital framework to combat the spread of misinformation and disinformation. The principles emphasize collaborative implementation by both the private sector and governments, underscoring the importance of collective action to safeguard truth in the digital era.
United Nations Secretary-General António Guterres highlighted in a video message the “enormous potential” of digital technology to accelerate human progress but that “unlocking this potential for all people requires guardrails, and a collaborative approach to governance.” He added that the work and voice of the IGF would be “critical” as the world implements the Global Digital Compact.
Additionally, the Forum highlighted the transformative potential of digital technologies in advancing peace, sustainability, and socioeconomic development, emphasizing inclusive access that ensures no one is left behind in the digital economy. Discussions also covered critical issues such as safeguarding human rights online, bridging the digital divide for marginalized communities, and building a safe, inclusive digital environment for all.
Tech titans court Trump
Needless to say, the global internet gathering happened at a pivotal time: the re-election of Donald Trump as US President. Trump has tapped Elon Musk, tech billionaire and the world’s richest man by far, to slash the US government. Incidentally, Musk has been vocal about the dangers of unrestrained AI.
The tech industry is warming up to what looks like a friendly White House. Trump’s vice president JD Vance has known ties to Silicon Valley as he was mentored by renowned venture capitalist Peter Thiel.
Tech bosses are positioning themselves to reap from a second Trump presidency. So far Trump has met with the CEOs of Amazon, Apple, Google, and Meta—all of which are trillion-dollar companies. The tech giants are also donating huge sums to Trump’s inauguration to curry favour with the world’s most powerful person.
Trump has also met with TikTok CEO Shou Chew. The video sharing app from China has been the subject of a ban in the US. The flurry of meetings shows the extent to which the companies are intent on maintaining their power and influence as competition grows fiercer and the internet gets ever more dynamic.
The burgeoning antitrust movement is also taking on the tech titans. In an ongoing court case, the US Department of Justice wants Google to divest its browser, Chrome, to end its internet search monopoly.
If the court rules as the DOJ requests, it would be a landmark antitrust ruling that would significantly curtail Google’s power. The US DOJ has also gone after Google’s AI business, saying the company exerts undue dominance in cyberspace. The California-based company has also faced heavy antitrust fines in Europe.
Google pays billions of dollars to Apple and Samsung to have Google Search as the default search engine on their devices.
These are some of the issues that the forum in Riyadh was convened to deliberate on. However, the tech companies wield immense power, and any ruling would likely be tied up in lengthy appeals, according to industry analysts.
At the forum, there was consensus on how governments, technology companies, civil society, and international organizations all share the responsibility to ensure that the Internet remains an open, safe, and inclusive platform.
This includes implementing robust policies to protect against rights violations, fostering transparency in corporate practices, and bridging the digital divide to promote universal access.
Despite advancements in digital tools and innovations, 2.6 billion people worldwide remain offline, predominantly in the Global South. This ongoing digital divide highlights the uneven benefits of digitalization and the risk of deepening existing inequalities.
Leaders, industry experts, and advocates participated in discussions designed to forge a shared pathway toward responsible digital governance, aiming to ensure that the opportunities of the digital age benefit all, especially those in vulnerable communities.