February 20

AI: proceed with caution

We are now in an environment where we can play with artificial intelligence, but that doesn’t mean serious questions should be ignored

You know something is at peak hype when you hear it mentioned everywhere. Such is the case with AI, from newspapers and earnings calls to conferences and everyday conversations in schools and offices.

Everywhere you look, you’ll see sprinkles of AI fairy dust. And even though the underlying technology has been around for a while, generative AI tools such as ChatGPT and Dall-E have given the public the opportunity to play and experiment with it first-hand.

Unsurprisingly, AI start-ups raised $42.5bn across 2,500 equity rounds in 2023, according to CB Insights. Deals also got bigger than in previous years, with generative AI attracting 48 per cent of all AI funding.

But the outsized attention comes at a cost. As the share of funding flowing to AI start-ups grows and companies embark on an arms race, are we taking focus away from other verticals that matter? And will we end up in yet another hype cycle, ignoring the fundamentals as we chase a shiny new piece of technology?

Most importantly, who will reap the benefits and who will get left behind? While the technology promises scientific advances unlike any we have experienced, from curing cancer and other deadly diseases to vastly improving how we work and live, it also threatens to eliminate jobs, create distrust and deepen divisions between social classes and demographic groups.

Can technology truly be neutral?

A recent report by consultancy Oliver Wyman found that “while 96 per cent of employees said they believe AI can help them in their current job, 60 per cent are afraid it will eventually automate them out of work”. In fact, the World Economic Forum has estimated that AI could displace 85mn jobs globally by 2025, affecting both service and white-collar workers.

According to the Oliver Wyman report: “Fifty-five per cent of employees use generative AI at least once a week at work, but 61 per cent of users do not find it very trustworthy. Nevertheless, of those 61 per cent, 40 per cent would use it to help them make big financial decisions, and 30 per cent would share more personal data for a better experience.” 

So even when people don’t trust AI, they are still willing to leverage it for something as personal as making decisions about their money.

AI is everywhere, anywhere, and accelerated

As the saying goes, change is the only constant. But when everything is as tightly connected as our world has become, the change gets amplified and moves much faster than before. It also gets more complicated, with the potential for AI to hallucinate and the opportunities for bad actors to exploit it and create deepfakes. 

Such exploitation no longer needs to be imagined, however. And along with it, our conversations around accountability, ownership, regulation, security and trust have been transformed. What does trust mean in a digital world where bits and bytes dominate? What, or whom, can we trust when a person’s identity can be compromised with a few lines of code and their voice cloned from a three-second snippet?

Recent reports of corporate scams in which fraudsters used deepfake voice and video to impersonate executives paint a picture of this challenging new reality and serve as an urgent call to put proper guardrails in place. While a video of a dancing cat performing magic tricks might be funny, a deepfake video of a Wall Street banker predicting a market collapse could yield far more drastic results.


Proceed… with caution

Of course, progress is impossible without change, and what the future brings depends on the actions we take today. Despite the potential risks and challenges, I believe we can realise the technology’s full potential through a relentless focus on explainability (ensuring that human beings can comprehend AI outputs), ethics, fairness and transparency.

The EU’s AI Act, the first attempt globally to regulate the use of AI, is a welcome first step, but working through its requirements across interconnected platforms will be complex and difficult. For such regulations to be truly effective on a global scale, collaboration with other jurisdictions is essential, because the digital ecosystem transcends physical borders.

The impact on people and operations must also be carefully considered. I am less worried about a future in which AI takes over the world than about the immediate threats the technology (and the humans behind it) poses, from biases in data to surveillance.

Algorithms do not exist in a vacuum, and history, along with its biases and imperfections, is ever-present in everything we do. Dissecting the dualities of AI, both its opportunities and its dangers, requires not only technical know-how but also the empathy and insight to understand the human implications when the technology creates dangerous situations.

Theo Lau is a public speaker, writer and advisor, and the author of The Metaverse Economy and Beyond Good
