
Panacea or problem?

Artificial intelligence is the latest buzzword that every tech head seems eager to reference. For anybody tasked with looking ahead to the future, it is a dependable fallback. Yet virtually nobody has voted for it or signed a petition urging its wider adoption.

While the rollout of such disruptive technologies is commonly presented as a foregone conclusion, creating a self-fulfilling prophecy, it’s important to remember that this messaging may be spread by those who stand to gain the most from it economically.

Alongside this, AI is a focal point for tech-dystopian headlines, and its downsides are increasingly being recognised. Nevertheless, it may offer certain important benefits, especially in scientific research. A central question is whether some regulatory arrangement can be found that curbs its unhelpful excesses.

The AI landscape
The current generation of AI models dates back to about 2022, with the last few years seeing explosive development in the field. Less visible is the formative thinking on the subject during the 1940s and ‘50s, including from mathematician and computer scientist Alan Turing, and the decades of research that have accumulated since the middle of the 20th century.

Many models have been developed by tech giants such as Microsoft, Meta (formerly Facebook), Google and X (formerly known as Twitter). Some are generative, meaning that they can create content on request, while others work as intelligent assistants for users of certain browsers, platforms and email services. The most prominent player is OpenAI, whose major offerings include the chatbot ChatGPT and the GPT-4 model (GPT stands for Generative Pre-trained Transformer) that underpins it. These are “large language models”, meaning they have been trained on vast quantities of online content.

Becoming embedded in society
The primary advantage that AI offers companies is an economic one, given the savings on staff wages and the potential to speed up operations. Specialised applications in the real world are also multiplying, including in fields such as:

  • Agriculture, where it can identify pests quickly, monitor soil health, aid with drone-delivered herbicide applications and enable high-speed laser weed control.
  • Healthcare, where China has recently unveiled a virtual AI-based training hospital, and where applications extend to robotic surgery, generating doctors’ notes and interpreting various types of medical scans faster, and potentially more accurately, than human experts. At present, human oversight may still be required, given that AI models do sometimes make mistakes. AI chatbots are also increasingly used by the public for diagnosing medical conditions and managing mental health issues.
  • Warfare, through improved military imaging, the selection of targets and increasingly lethal autonomous weapons.
  • Scientific research, where it can accelerate pharmaceutical development. It has also been an unqualified success in predicting three-dimensional protein structures (“protein folding”), replacing slow, labour-intensive laboratory techniques.

Another consideration is what technology critic Cory Doctorow describes as the “human in the loop” scenario for certain AI applications, where a high-tech activity involves hidden human labour and a high level of vigilance. According to tech website The Information, Amazon’s chain of checkout-free, supposedly AI-powered “Just Walk Out” stores required more than 1000 workers in India to track customer purchases.

Choice or no choice
An insidious aspect of AI is that it can work its way into everyday life, sometimes with no opt-out clause, which can feel like being dropped into a Black Mirror episode. The same opt-out issue applies to other high-tech manifestations such as digital ID systems, the supply of biometric information and the slow creep of smartphone-mandatory activities.

Online, the average person is increasingly likely to encounter AI without looking for it. Online chats on shopping platforms default to chatbots that may transfer you to a human being only if you ask persistently. Travel websites encourage AI-powered trip planning. Some search engines offer AI options alongside regular search, while others, such as Google, rank content using AI algorithms as well as keywords.

Meta has been propagating AI use across its platforms by inserting it in numerous places. In the Facebook and Instagram phone apps, the search bar defaults to an AI search that is hard to turn off. In Facebook groups, an AI bot may comment if no answer has been provided within an hour. In one bizarre example, in a New York City parents’ group, a bot claimed to have a gifted child, and Facebook’s algorithm ranked its comment at the top.

AI is increasingly making the world a more impersonal place. It is used in justice systems for bail, parole and probation decisions, in place of avenues such as face-to-face interviews. Another contentious use is in human resources, for resume screening and online job interviews; a 2024 University of Washington study of AI resume screening found a strong bias towards white and male applicants. Similar racial bias in policing tools can result in unwanted police attention targeting individuals or neighbourhoods. Once bias has been absorbed by an AI model, it tends to become baked in.

As a challenge to creativity
In the realm of creative endeavours, AI offers cheating shortcuts such as the ability to spit out college essays, and it can also write generic-sounding news items for media outlets. In such instances, its use can be reliably identified only if outputs carry electronic watermarks, which would require a level of regulation that legislators have so far been very reluctant to apply.

AI image generators such as DALL-E, Midjourney and Stable Diffusion are trained through exposure to creative art. Training on copyrighted work, while not yet ruled illegal, is arguably a form of plagiarism. When the corporate sector makes purchasing decisions, AI art is often sourced in place of the real thing, robbing artists of income and putting human creativity at risk of redundancy. Recently, 10,500 creatives, including Robert Smith of The Cure and Radiohead singer Thom Yorke, signed a petition against the unlicensed use of their work.

In the written medium, lawsuits against AI training are multiplying, especially those filed by authors and news outlets. In this complex landscape, some publishers, such as Axel Springer and Hearst Magazines, have taken a different tack and signed deals with OpenAI.

The information landscape
Deepfake AI imitations of photos, and particularly of video footage, are becoming increasingly hard to distinguish from the real thing. The old saying about believing only half of what you see is as pertinent as ever. Debate is raging over the use of deepfakes to create political disinformation, especially during election campaigns, and over how they can facilitate online radicalisation and a shift towards the far right. Similarly, voice fakery, including of famous people, can be very realistic, and imitation voices of relatives have been used in scam phone calls.

However, using AI for automated content moderation on social media platforms introduces another set of concerns, relating to the risk of accurate or contested content being censored. One of AI’s weaknesses is its difficulty in understanding humour and satire.

Sometimes these models “hallucinate”, putting out false information or making absurd or dangerous suggestions, such as Google’s AI suggesting eating rocks for health benefits and using glue to attach cheese to pizza. Some individuals have had false AI-generated claims attached to their names: Brian Hood, mayor of Hepburn Shire Council in Victoria, was the whistleblower in a bribery case, yet the model described him as one of the perpetrators. The error was fixed in an updated version.

AI and the environment — a positive or negative?
According to the hype, AI offers substantial environmental benefits and even has the potential to save the world. It can help with monitoring environmental conditions, improving the accuracy of measurements and modelling, aiding drone-based reforestation, and quickly and accurately identifying components of waste streams.

But it could be argued that climate change, a problem that is not technological in origin, cannot largely be resolved by a technology, especially one that offers a diversion from difficult belt-tightening sacrifices. AI’s benefits need to be weighed against the environmental downsides of its data centres, which include energy and water use, material consumption and downstream e-waste, and droning noise pollution affecting nearby residents.

Data centres as a whole account for about 1–1.3 per cent of world electricity consumption, although this figure is far higher in certain countries: Ireland saw 21 per cent of its electricity go to powering data centres in 2023, a figure predicted to rise to 32 per cent by 2026. A single ChatGPT request is estimated to consume about 10 times as much energy as a Google search.

Looking at water consumption, a “hyperscale” data centre is estimated to use roughly as much water as a town or city of 30,000–40,000 people.

Financial services company Morgan Stanley has predicted that, globally, data centres will emit three times more CO2 between now and the end of the decade than they would have if generative AI had not been developed. Importantly, this analysis also included the embodied carbon from constructing the buildings and infrastructure.

The US is seeing a trend among data centres towards seeking out nuclear power, which has the attractions of being a baseload energy source that is generally considered very low carbon. Some of this involves “small modular reactors” (SMRs), which it is hoped will become commercially available within a few years. Other deals involve companies working on nuclear fusion.

Perhaps most revealing are the high-stakes strategies for tackling climate change outlined by current and former tech CEOs. Sam Altman, head of OpenAI, believes in powering ahead and relying on geoengineering, an unproven, high-risk set of interventions. Bill Gates believes that AI will offer techno-fixes for the climate challenge. Eric Schmidt, former CEO of Google, believes that we are not going to reach our climate goals anyway, and that this technology is more of an asset than a liability in finding a way forward.

Assembling the contradictions
A recent survey of thousands of researchers found a median estimate of a 5 per cent risk of AI destroying humanity, although this figure hides a wide disparity of views. Perhaps only in the field of high tech, with its tendency towards addict-style thinking, would this level of risk be widely accepted.

In June 2024, industry insiders from OpenAI and Google DeepMind came forward with an open letter warning that safety-related concerns were being stifled within their companies, leaving employees with no scope to alert the public.

AI takes on a messianic angle when championed by Effective Accelerationism (sometimes shortened to e/acc), a relatively extreme pro-technology Silicon Valley movement. It is best exemplified by the Techno-Optimist Manifesto written by tech entrepreneur Marc Andreessen, whose ideas are founded on a mixture of techno-capitalism and ideological conviction. The movement has a “post-human” orientation, with a vision of AI ruling over humanity and an absence of limits and guardrails. (For most of the rest of the world, the issue of humans retaining control over the AI project is of central importance.)

Another, more influential, champion of AI is the World Economic Forum, which consistently advances AI-dominated high-tech visions of the future. Regulation, as a whole, is currently very laissez-faire. This is generally justified as a desire not to stifle innovation, and is sometimes framed in terms of geopolitical rivalry between countries, especially the US and China, over who will dominate a future high-tech global order.

If the most outrageous ideas are up for consideration, they would have to include the possibility of narrowly restricting AI to those areas where it is considered to have the greatest societal benefit. The environmental damage from AI infrastructure is a function of the extent of its use, and much of that use is currently trivial, characterised by material generated for clickbait and distraction. However, such a restrictive measure would put the technology in the hands of an elite group, rather than democratising it.

At present, much of the debate looks at the way in which AI will dominate our lives in the future, rather than examining whether this is in fact a good idea. The path of least resistance is an easy one to take, but it may lead to a trap.

Resources available on request.

Martin Oliver

Martin Oliver writes for several Australian holistic publications including WellBeing on a range of topics, including environmental issues. He believes that the world is going through a major transition and he is keen to help birth a peaceful, cooperative and sustainable reality.
