The worrying ideologies behind Big Tech - pt 2.

Why we should be wary of AI that is shaped by these ideologies

These ideologies all have two things in common: first, they come from a place of enormous (financial) wealth; second, they assume that tech alone can solve any real-world problem.

However, it is by no means certain that the results of AI will benefit society. Instead, AI can create an even greater imbalance of power and deepen exploitation; it is typically not geared towards making our societies fairer or more just.

The tech leaders who raise fears about anthropomorphic machines taking control of our civilisation in the distant future do this in order to take attention away from the very real problems we are facing now.

  • Corporations already deploy automated systems which centralise power and increase social inequality.

  • AI is feeding into the move towards political authoritarianism. Machine learning algorithms replicate existing biases and falsehoods. A particular problem with large language models (such as ChatGPT) is that we can be tempted to treat the system as an oracle, trust its output and act accordingly. False information can ruin reputations, and ill-founded decisions can follow.

  • People’s rights and privacy are undermined by intensive data gathering and surveillance, as well as by disregard for creative and other rights and for the wellbeing of vulnerable groups.

It’s striking that those who espouse these ideologies show little concern for problems such as growing social and economic inequality, the rise of authoritarianism, or the centralisation of power.

Alternative views

Fortunately, not all tech entrepreneurs think the same way.

For example, Mustafa Suleyman and Michael Bhaskar wrote The Coming Wave: Technology, Power and the 21st Century's Greatest Dilemma, which highlights the dangers of unconstrained AI and synthetic biology, the near impossibility of containing these technologies, and a possible set of solutions for doing so. Suleyman was a co-founder of DeepMind, whose mission was to develop artificial general intelligence (AI with human-like adaptability). In The Coming Wave he sets out ten strategies for the “containment” necessary to keep humanity on the narrow path between societal collapse and AI-enabled totalitarianism. These include increasing the number of researchers working on safety from a few hundred to hundreds of thousands; fitting DNA synthesisers with screening systems that report any pathogenic sequences; and thrashing out international treaties to restrict and regulate dangerous tech.

Meanwhile, the AI researcher Timnit Gebru founded the Distributed AI Research Institute (DAIR), which offers an encouraging alternative to Big Tech: “We believe that artificial intelligence can be a productive, inclusive technology that benefits our communities, rather than work against them. However, we also believe that AI is not always the solution and should not be treated as an inevitability. Our goal is to be proactive about this technology and identify ways to use it to people’s benefit where possible, caution against potential harms and block it when it creates more harm than good.”

JAAG’s view

There is huge potential for new AI tools that do genuine good for humanity, especially in medicine. But, as with most technology, AI can be used to good or bad effect.

JAAG believes that we all – users, tech entrepreneurs and legislators – need to focus on transparency: being open about when AI is being used and what it is used for. We also need AI developers and deployers to be fully accountable.

We need protection against exploitative working practices. A central requirement is regulation that protects the rights and interests of people and thereby shapes the actions and choices of big corporations.

Strongly enforced ethical guidelines, reflecting human well-being and society’s priorities, will be crucial.

We should be building machines that work for us, not adapting society to the wishes of the few elites currently driving the AI agenda. Those most affected by AI systems, including the most vulnerable in society, should have their opinions taken into account.

Instead of worrying so much about imaginary digital minds, we should focus on the current exploitative practices of companies developing AI, which increase social inequality and centralise power.
