
Panacea or problem?

Artificial intelligence is the latest buzzword that every tech head seems eager to reference. For anybody tasked with looking ahead to the future, it is a dependable fallback. Yet virtually nobody has voted for it or signed a petition urging its wider adoption. This paradox sits at the heart of the AI debate, where the rush to develop and promote the technology contrasts sharply with the lack of widespread public engagement or endorsement.

While the rollout of such disruptive technologies is commonly presented as a foregone conclusion, creating a self-fulfilling prophecy dynamic, it’s important to remember that this line of messaging may be spread by those people who have the most to gain from it economically.

At the same time, tech-dystopian headlines focus on AI, and people are increasingly recognizing its downsides. Nevertheless, it may offer certain important benefits, especially in the field of scientific research. A central question is whether it is possible to find some regulatory arrangement that curbs its unhelpful excesses.

The AI landscape

The current generation of AI models dates back to about 2022, with the last few years seeing explosive development in the field. Less visible is the formative thinking on the subject during the 1940s and ‘50s, including from mathematician and computer scientist Alan Turing, and the decades of research that have accumulated since the middle of the 20th century.

Many models have been developed by tech giants such as Microsoft, Meta (formerly Facebook), Google and X (formerly known as Twitter). Some AI models are generative, meaning they create content on request, while others act as intelligent assistants for browsers, platforms and email. The most prominent player is OpenAI, whose major offerings include ChatGPT and GPT-4. Both are “large language models,” trained on vast amounts of online content.

Becoming embedded in society

The primary advantage that AI offers companies is an economic one, given the savings on staff wages and the potential to speed up operations. Specialised applications in the real world are also multiplying, including in fields such as:

  • Agriculture, where it can identify pests quickly, monitor soil health, aid with drone-delivered herbicide applications and enable laser weed control at an incredible speed.
  • Healthcare, where China has recently unveiled a virtual AI-based training hospital. AI use also extends to robotic surgery, generating doctors’ notes and interpreting various types of medical scans faster, and potentially more accurately, than human experts. At present, human oversight may be required because AI models sometimes make mistakes. The public increasingly uses AI chatbots to diagnose medical conditions and manage mental health issues.
  • Warfare, through improved military imaging, the selection of targets and increasingly lethal autonomous weapons.
  • Scientific research, where it can accelerate pharmaceutical development. It has also been strikingly successful in predicting three-dimensional protein structures (“protein folding”), replacing slow and labour-intensive human-directed laboratory techniques.

Cory Doctorow describes the “human in the loop” scenario for certain AI applications, where hidden human labor and high vigilance are involved. According to The Information, Amazon’s chain of checkout-free, supposedly AI-powered “Just Walk Out” stores required over 1,000 workers in India to track customer purchases.

Choice or no choice

AI insidiously works its way into everyday life, sometimes with no opt-out clause, making it feel like a Black Mirror episode. The same opt-out issue applies to other high-tech manifestations such as digital ID systems, the supply of biometric information and the slow creep of activities that require a smartphone.

The ubiquity and impact of AI

Online, the average person is increasingly likely to encounter AI when not looking for it. This can involve online chats on shopping platforms that default to chatbots and which may transfer you to a human being if you ask persistently. Travel websites encourage AI-powered trip planning. On some search engines, AI options appear alongside the regular search. For others, like Google, AI algorithms rank content alongside keywords.

Meta has been propagating AI use across its platforms by inserting it in numerous places. When using Facebook and Instagram on phone apps, the search bar defaults to an AI search that is hard to turn off. On Facebook groups, an AI bot may make a comment if no answer has been provided within an hour. In one bizarre example, a bot in a New York City parents’ group claimed to have a gifted child, and Facebook’s algorithm ranked the comment highly.

AI is making the world more impersonal. It replaces face-to-face interviews in justice systems for bail, parole, and probation decisions. In human resources, AI screens resumes and conducts online job interviews. A 2024 University of Washington study found strong bias towards white and male applicants. Similar racist AI bias can direct unwanted police attention toward individuals or neighborhoods. Once an AI model adopts bias, it tends to become ingrained.

As a challenge to creativity

In the realm of creative endeavours, AI offers cheating shortcuts such as the ability to spit out college essays, and it can also write generic-sounding news items for media outlets. In such cases, AI use can only be reliably identified through electronic watermarking, which would require regulation that legislators have so far been reluctant to introduce.

AI image generators such as DALL-E, Midjourney and Stable Diffusion are trained on artists’ creative work. Training on copyrighted work, the legality of which is still being contested, could be seen as plagiarism. When businesses make purchasing decisions, they often opt for AI-generated art over human-made art, depriving artists of income and threatening human creativity. Recently, 10,500 creatives, including Robert Smith of The Cure and Radiohead singer Thom Yorke, signed a petition against the unlicensed use of their work.

In the written medium, the number of lawsuits against AI training is multiplying, especially those filed by authors and news outlets. In this complex landscape, some publishers such as Axel Springer and Hearst Magazines have taken a different tack and have signed deals with OpenAI.

The information landscape

Deepfake AI imitations of photos, and particularly video footage, are becoming increasingly hard to distinguish from the real thing. The old saying about believing only half of what you see is as pertinent as ever. Debate is raging over the use of deepfakes to create political disinformation, especially during election campaigns, and how they can facilitate online radicalisation and a shift towards the far-right. Similarly, voice fakery, including imitations of famous people, can be very realistic. Scam phone calls have used imitation voices of relatives.

However, using AI for automated content moderation on social media platforms raises concerns about the risk of censoring accurate or contested content. One known weakness of AI is its difficulty in understanding humour and satire.

Sometimes these models “hallucinate” by putting out false information or making absurd or dangerous suggestions. Examples include Google’s AI suggesting eating rocks for health benefits and using glue to attach cheese to pizza. Some individuals, such as Brian Hood, mayor of Hepburn Shire Council in Victoria, have had false AI-generated content attached to their names. In Hood’s case, the model confused him with the perpetrators of a bribery crime about which he had been a whistleblower. This was fixed in an updated version.

AI and the environment — a positive or negative?

The hype suggests AI offers significant environmental benefits, even claiming it can help save the world. AI helps monitor environmental conditions, improve measurements, assist reforestation via drones, and identify waste streams quickly and accurately.

However, one could argue that climate change, which is not at root a technological problem, cannot be solved by a tech distraction. The AI debate becomes relevant here, as its benefits must be weighed against the environmental impact of data centres, including energy use, water consumption and e-waste.

The growing environmental costs of AI

Data centres as a whole represent about 1–1.3 per cent of world energy consumption, although this figure is far higher in certain countries. In 2023, 21 per cent of Ireland’s electricity went to powering data centres, a figure predicted to rise to 32 per cent by 2026. A single ChatGPT request is estimated to consume about 10 times as much energy as a Google search.

In terms of water consumption, a “hyperscale” data centre uses roughly the same amount as a town or city of 30,000–40,000 people.

Financial services company Morgan Stanley predicts that by the decade’s end, global data centres will emit three times more CO2 than they would without generative AI. Importantly, this analysis also considered the embodied carbon from constructing the infrastructure.

In the US, data centres are increasingly seeking nuclear power. Nuclear power is an appealing baseload energy source, generally considered low carbon. Some of this involves “small modular reactors” (SMRs), which are expected to become commercially available in a few years. Some other deals involve companies working on nuclear fusion.

Perhaps most revealing are the high-stakes strategies for tackling climate change, outlined by tech CEOs and former CEOs. Sam Altman, head of OpenAI, believes in powering ahead and relying on geoengineering, an unproven high-risk set of interventions. Bill Gates believes that AI will offer techno-fixes for tackling the climate challenge. Eric Schmidt, former CEO of Google, believes that we are not going to reach our climate goals anyway, and this technology is more of an asset than a liability in finding a way forward.

Assembling the contradictions

A recent survey of thousands of researchers found a median estimate of a 5 per cent risk that AI will destroy humanity. However, this figure hides a wide disparity of views. Perhaps only in the field of high tech, with its tendency towards addict-style thinking, would this level of risk be widely accepted.

In June 2024, industry insiders from OpenAI and Google DeepMind came forward in an open letter warning that safety-related concerns are being stifled, with no scope to alert the public.

AI takes on a messianic angle when Effective Accelerationism (e/acc), a relatively extreme pro-technology Silicon Valley movement, champions it. This movement is best exemplified by tech entrepreneur Marc Andreessen’s Techno-Optimist Manifesto. His ideas blend techno-capitalism with ideological conviction. They have a “post-human” orientation, with a vision of AI ruling over humanity, and an absence of limits and guardrails. (For most of the rest of the world, the issue of humans retaining control over the AI project is of central importance.)

Another, more influential, champion of AI is the World Economic Forum, which consistently advances AI-dominant high-tech visions of the future. Overall, regulation is currently very laissez-faire. This stance is generally portrayed as a desire not to stifle innovation, and is sometimes framed as a geopolitical rivalry, especially between the US and China, over who will dominate a future high-tech global order.

The debate around AI’s future

If the most outrageous ideas are up for consideration, they would have to include the possibility of narrowly restricting AI to those areas where it is considered to have the greatest societal benefit. The environmental damage from AI infrastructure is a function of the extent of its use, much of which is currently trivial, characterised by material generated for clickbait and distraction. However, such a restrictive measure would put the technology in the hands of an elite group rather than democratising it.

At present, much of the debate looks at the way in which AI will dominate our lives in the future, rather than examining whether this is in fact a good idea. The path of least resistance is an easy one to take, but it may lead to a trap.

Resources available on request.

Article featured on WellBeing Magazine 215

Martin Oliver

Martin Oliver writes for several Australian holistic publications including WellBeing on a range of topics, including environmental issues. He believes that the world is going through a major transition and he is keen to help birth a peaceful, cooperative and sustainable reality.
