The promise and peril of AI: Will machines make us more or less human?

by John Hattie, Dylan Wiliam, & Arran Hamilton*

AI promises a revolution in education. Systems like ChatGPT, Claude, and Bard demonstrate astonishing conversational ability and knowledge, and these are likely just the tip of the iceberg as AI capabilities continue to accelerate. To us, this provokes both optimism and apprehension about technology’s likely impact on humanity.

First, the sunny upside. In our world of education, AI tutors could enable personalised, high-quality instruction for all students. These digital coaches would monitor progress, adjust tactics, and proffer encouragement based on individual needs. AI might finally democratise access to world-class learning – offering instructional excellence even in the poorest parts of the world.

Teachers, too, could achieve even greater effectiveness by teaming with AI assistants. Automating administrative drudgery would allow educators to focus on high-value human interactions. Augmented reality glasses could provide real-time prompts and guidance to enhance instructional practice. And AI could also revolutionise school leadership by trawling through school data and identifying insights that humans would be unlikely to see.

We also see strong potential for AI to counter fake news. Imagine a bot that scans your news feed to distinguish facts from falsehoods and fruity opinions – perhaps colour coding each sentence based on truthfulness and, optionally, completely re-writing the news to filter out bias.

In the wider economy, knowledge workers are also likely to experience vastly expanded productivity by collaborating with AI partners. Doctors, lawyers, scientists, and strategists could harness AI to search literature, write drafts, rapidly iterate ideas, and enhance insights. The bots do the dirty work, enabling humans to do the cognitive heavy lifting.

In short, AI appears poised to usher in a golden age of achievement by removing rote work and augmenting human talents. This optimistic narrative surely tempts us to race headlong into the AI future.

But as we explore in our new paper, The Future of AI and Education: 13 Things We Can Do to Minimise the Damage, we must also confront the darker possibility that AI leads to permanent downgrading of human skills and knowledge. 

Past technologies like calculators reduced the need for basic arithmetic skills. However, AI now threatens talents we consider integral to humanity – creativity and complex reasoning. As machines eclipse our capabilities in these realms, what incentives remain to learn and grow?

And eclipsing is the right word. The consensus in the computer science community is that human-level artificial general intelligence is a matter of when not if. The most bullish estimates predict this point will be reached within a few years, whilst the bearish think it will emerge nearer 2040 – with impressive incremental improvements along the way. Beyond 2040 we may be at the level of “god in a box”.

The availability of omniscient AI risks a regression where humans become passive consumers of machine knowledge. Like pets obedient to digital oracles, our agency diminishes. With convenience at our fingertips, why labour to expand our tiny minds?

This complacency risks the atrophy of the very skills that define humanity at our apex. As we relinquish exercising our minds, cognitive decline sets in. We see hints today of this mental degradation, as memorisation fades with smartphones and mental maps erode with navigation apps. But AI could rapidly accelerate human deskilling just as office work has rapidly expanded our waistlines.

It would be all too tempting to suggest that the ‘solution’ is for education to pivot from the transmission of knowledge to channelling the distinctly human spirit; that it should focus on relationships, connectedness, empathy, emotion, creativity, philosophy, and moral virtue (i.e., our human USPs). And whilst we see no harm in this, Claude AI (Anthropic’s answer to ChatGPT) wades in:

I do not believe there are any known cognitive capabilities that can be considered permanently and uniquely core to human skills beyond the reach of sufficiently advanced AI. Any such claims of uniqueness could be viewed as contrived efforts to maintain notions of human exceptionalism.

We agree with Claude and think that without oversight, AI may precipitate deeply concerning scenarios:

In one possible future, humans are increasingly relegated to “fake work”. We all clock in each morning, but the AI makes the decisions – consulting us in tick-box fashion so that we feel slightly less useless, and using its superior powers of persuasion to convince us to sign off. We are also not convinced there would be much real need for education in a fake-work world – except to make us feel better and to pass the time, although there may be nothing wrong with that.

Another alternative future is “transhumanism”. To stay in the race, we upgrade our brains with computer interfaces and can download new skills at the drop of a hat. This technology is still in its infancy, but it could eventually transform us from remedial to average (i.e., still not as smart as the machines). We might also be able to communicate ‘telepathically’ (through brain-to-brain Bluetooth), although we wouldn’t need to ‘chat’ with anyone for very long to download their whole archive of memories and experiences. Social niceties would be replaced with a burst of binary code.

In yet another possible future we adopt a “universal basic income”. This severs livelihood from labour as AI displaces all job roles. Instead, we might fill our time with sport, music, craft, and social activities. If this seems unlikely, it is worth remembering that in pre-state societies, many of our ancestors spent little time working – often only a few hours a day. So, there are precedents. But there were also environmental stressors to keep them on their toes: hunger, disease, security. It’s an open question whether in a world run by AI – with plenty of food, good healthcare, policing, and nothing to worry about or think about – we could maintain our Va Va Voom.

Given the speed of AI advances, averting dehumanising outcomes requires urgent action. Policymakers must implement pragmatic AI safeguards, to collectively give us more time to debate and decide what future we want. Our 13 recommendations include, for example: requiring frontier AI models to be government licensed before they are released; implementing guardrails to stop AI systems giving students ‘the answers’; making system developers accountable for untruths, harms, and bad advice generated by their systems; and making it illegal for AI systems to impersonate humans.

Make no mistake, the erosion of humanity is not inevitable. With wisdom and vision, AI could catalyse a new renaissance, help us solve climate change, and cure diseases. But without prudent guardrails, unfettered AI risks a dystopia where human talents and knowledge regress permanently.

The future remains undetermined. But acting now to balance development with ethical constraints offers hope of an equilibrium preserving human dignity and purpose. Our descendants will judge us based on whether we chose human creativity and caution over a reckless plunge into the AI abyss. The time for vigilance is now.


Dr John Hattie is Emeritus Laureate Professor of Education at the University of Melbourne; Dr Dylan Wiliam is Emeritus Professor of Educational Assessment at University College London; and Dr Arran Hamilton is Group Director – Education, at Cognition Learning Group.

*With editorial support from Claude AI