
Corporate responsibility in the age of AI

Companies must ensure they don’t knowingly cause harm, and governments must hold them accountable

COMMENT | MARIA EITEL | In the past year, a cacophony of conversations about artificial intelligence has erupted. Depending on whom you listen to, AI is either carrying us into a shiny new world of endless possibilities or propelling us toward a grim dystopia. Call them the Barbie and Oppenheimer scenarios, as attention-grabbing and as different as the Hollywood blockbusters of the summer. But one conversation is getting far too little attention: the one about corporate responsibility.

I joined Nike as its first Vice President of Corporate Responsibility in 1998, landing right in the middle of the hyper-globalisation era’s biggest corporate crisis: the iconic sports and fitness company had become the face of labour exploitation in developing countries. In dealing with that crisis and setting up corporate responsibility for Nike, we learned hard-earned lessons, which can now help guide our efforts to navigate the AI revolution.

There is a key difference today, though. The Nike drama of the late 1990s played out relatively slowly. When it comes to AI, we don’t have the luxury of time. This time last year, most people had not heard of generative AI. The technology entered our collective awareness like a lightning strike in late 2022, and we have been trying to make sense of it ever since.

As it stands, generative AI companies have no externally imposed guardrails. That makes guinea pigs of all of us. There is nothing normal about this. If Boeing or Airbus introduced an airplane that promised to be cheaper and faster but was potentially very dangerous, we would not accept the risk. A pharmaceutical company that launched an untested product, while warning that it might be toxic, would be found criminally liable for the sickness or death it caused. Why, then, is it okay for technology companies to bring to market AI products that they themselves warn pose a risk of extinction?

Even before generative AI burst onto the scene, Big Tech and the attention economy were facing growing criticism for their harmful effects. Products like Snapchat, Instagram, and TikTok are designed to trigger dopamine surges in the brain, making them as addictive as cigarettes. A scientific consensus has emerged that digital media are harming users’ mental health, and children’s in particular.

AI has turbocharged the attention economy and unleashed a new set of risks, the scope of which is far from clear. And while calls for regulation are growing louder, when they come from the very people behind the technology, they come across largely as public-relations campaigns and corporate stall tactics. After all, regulators and governments don’t fully understand how AI-based products work or the risks they create; only the companies do.

It is a company’s responsibility to ensure that it does not knowingly cause harm, and to fix any problems it creates. It is the government’s job to hold companies accountable. But accountability tends to come after the fact – too late for a technology like AI.

If Purdue Pharma’s owners, the Sackler family, had acted responsibly once they realised the danger OxyContin posed, taking steps to stop the drug from being overprescribed, the opioid crisis that has gripped the United States in recent years could have been avoided. By the time the government got involved, countless lives had been lost and communities ruined. No lawsuit or fine can undo that.

When it comes to AI, companies can and must do better. But they must act fast, before AI-driven tools are so entrenched in daily activities that their dangers are normalised and whatever they unleash cannot be contained.

At Nike, it was a combination of outside pressure and an internal commitment to do the right thing that led to a fundamental overhaul of its business model. The nascent AI industry is clearly feeling external pressure: on July 21, the White House secured voluntary commitments from seven top AI companies to develop safe and trustworthy products, in line with the Blueprint for an AI Bill of Rights that was introduced last year. But vague voluntary guidelines leave far too much wiggle room.

Our collective future now hinges on whether companies, in the privacy of their boardrooms, executive meetings, and closed-door strategy sessions, decide to do what is right. Companies need a clear North Star to which they can always refer as they pursue innovation. Google had it right in its early days, when its corporate credo was “Don’t Be Evil.” No corporation should knowingly harm people in the pursuit of profit.

It will not be enough for companies simply to say that they have hired former regulators or to propose possible solutions. Companies must devise credible and effective AI action plans that answer five key questions:

• What are the potential unanticipated consequences of AI?

• How are you mitigating each identified risk?

• What measures can regulators use to monitor companies’ efforts to mitigate potential dangers and hold them accountable?

• What resources do regulators need to carry out this task?

• How will we know that the guardrails are working?

The AI challenge needs to be treated like any other corporate sprint. Requiring companies to commit to an action plan within 90 days is reasonable and realistic. No excuses. Missed deadlines should result in painful fines. The plan doesn’t have to be perfect (it will likely need to be adapted as we continue to learn), but committing to it is essential.

Big Tech must be as committed to protecting humans as it is to maximising profits. If the only finish line is the bottom line, we are all in trouble.

*****

Maria Eitel served as Nike’s founding Vice President of Corporate Responsibility before founding the Nike Foundation and Girl Effect.

Copyright: Project Syndicate, 2023.
