Generative AI: The risks, strategies, and opportunities for schools

Recently, Microsoft Australia’s CEO Steven Worrall met with several members of the government to address concerns about dizzying advancements in Artificial Intelligence (AI) and their many implications.

Education is one industry already seeing disruption as a result of Large Language Models (LLMs) like ChatGPT. Indeed, some education departments in Australia have flagged their intentions to ban this technology from classrooms, but Worrall says we cannot choose to opt out of the AI revolution.

“This wave is not one you can say we are choosing not to participate in,” Worrall told The Australian in April.

Recognising that generative AI is likely to grow rapidly, both in sophistication and in its presence in education, some experts are calling for greater awareness of how the technology can be used well, so that it becomes less of a headache for schools.

According to Dr Jim Webber, Chief Scientist at leading graph database and analytics company Neo4j, one way to tame generative AI is to train LLMs on curated, high-quality, structured data. This leads to better performance and more human-like responses, making the models more accurate and useful in tasks like translation, writing, and coding.

Below, The Educator speaks to Dr Webber about the key risks, defence strategies, and opportunities associated with the rise of this technology.

TE: What do you see as the key risks of generative AI bots to students, teachers and principals in Australia?

I think the risks are clear, especially in the secondary stages of K-12, where students are expected to work with a reasonable level of independence: unlike a search tool, which helps uncover information, LLMs allow students to seemingly complete an assignment with very little effort. The fact that the answers provided by LLMs are so good (at least superficially) leaves no room for the student to interpret and understand. Not only does this risk perverting the grading system, but students miss out on developing their own trains of critical thought. While it might be argued that students could already do this by copying and pasting existing web articles, LLMs take it to the next level.

TE: Are there any ways schools can defend against this?

Yes. Defences include getting LLMs to assess work to see whether they think it was written by an LLM – or, more prosaically, educators can spot overnight transformations in a student's style. But it is a constant battle when anyone can ask ChatGPT to write in a given style, such as "Write an essay in the style of a Year 12 student about Australia's role in the Great War." In addition, graph technology – a modern way of storing data as entities and their connections – can make LLMs less biased, more accurate, and better 'behaved'. The risk of errors is reduced when an LLM is grounded in curated, high-quality, structured data.
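To make that last point concrete, here is a minimal sketch of the general idea – retrieving vetted facts from a graph of entities and connections, then using them to ground an LLM's answer. It is not Neo4j's or Dr Webber's specific method: the Topic/Fact schema, the MENTIONS relationship, and the function names are hypothetical examples invented for illustration, and only the Neo4j Python driver calls themselves are real API. It assumes a local Neo4j instance is running.

```python
# Sketch: ground an LLM prompt in curated facts stored as a graph.
# Hypothetical schema: (:Topic {name})-[:MENTIONS]->(:Fact {text}).
from neo4j import GraphDatabase

# Assumes a local Neo4j instance; credentials are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def facts_for_topic(topic_name: str) -> list[str]:
    """Fetch curated fact statements connected to a topic node."""
    query = (
        "MATCH (t:Topic {name: $name})-[:MENTIONS]->(f:Fact) "
        "RETURN f.text AS text"
    )
    with driver.session() as session:
        result = session.run(query, name=topic_name)
        return [record["text"] for record in result]

def build_grounded_prompt(question: str, topic: str) -> str:
    """Prepend trusted graph facts so the model answers from curated data."""
    context = "\n".join(f"- {fact}" for fact in facts_for_topic(topic))
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

# The resulting prompt would then be sent to an LLM of your choice.
prompt = build_grounded_prompt(
    "What role did Australia play in the Great War?",
    "Australia in WWI",
)
```

The design intuition is that the model is steered towards vetted statements rather than whatever it absorbed during training, which is one way curated, structured data can reduce errors and bias in its output.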

TE: What positive uses of generative AI in schools have you observed, and what are some opportunities associated with these positive uses?

Clearly, as (somewhat) intelligent and knowledgeable authors, LLMs have a lot of promise to help educators. They can provide fresh examples of particular styles of prose or poetry, or produce works in the style of historically important people to help engage students with material that might otherwise seem out of touch. For students with communication difficulties, or those working in a second language, LLMs could be a patient tutor that helps them improve their own reasoning and writing skills. But it doesn't quite come for free: educators have to train young people in how to use the tools for these benefits.

TE: How can generative AI tools like ChatGPT be regulated?

In principle, the state could try to constrain the abilities of LLMs, but much like the (generally ill-advised) attempts to regulate cryptography, these are doomed to fail, since the Internet doesn't respect borders or jurisdictions. If regulation (in the loosest sense) is desirable, then it should be designed to empower educators to bring these tools into the curriculum and teach responsible, productive uses. An active state here encourages fruitful exploration of the technology and educates to minimise nefarious use-cases. To do so, educators themselves need support to gain some mastery over the technology and a pedagogical framework for its use in the classroom.