One of the most dramatic claims for the “new” AI is that it will be able to act as a tutor (or counsellor) for students. A recent manual for higher education by UNESCO suggests that AI could become a “personal tutor… providing personalised feedback based on information provided by students or teachers.”
Some secondary schools are experimenting with AI that provides individualised feedback on students’ answers to specific examination questions. The secret of its success seems to lie in the training given to the AI before the session begins.
Could AI take on other tutoring tasks? The highly respected educators Hamilton, Wiliam and Hattie have recently suggested that “AI digital tutors [could be used] instead of expensive high-intensity human tuition for learners with additional needs”. Again, the ability to do this will depend upon the real-time training given to the system.
A third role of a tutor is to give advice on careers, life choices and, often, relationships. Might AI’s reach extend to these more sensitive areas of tutoring?
People are not born as effective school tutors: they learn their craft. Their skills are shaped and honed by initial teacher training courses and by working in schools. Tutors work within defined boundaries of values and ethics.
I have experience of training new teacher tutors as a senior lecturer in the PGCE programme at the University of Bristol and as a head of department in a secondary school. In this essay I reflect on whether AI might be trained to act as a tutor in the same ways as trainee teachers. I conclude with a consideration of whether this is a desirable aim.
It is important to remind ourselves continually that AIs use “large language models”: systems that take text from a user and predict the words most likely to follow it. They do this by recognising patterns and relationships between words. They are trained on massive amounts of text, and the most recent versions can be trained further by users.
The results are impressive, but it is important to realise that AIs do not understand the ideas beneath the words. They do not understand the causes of the English Civil War or the phenomenon of quantum entanglement in the way that historians and physicists do.
It remains important that new teachers are educated in the subjects that they teach at both primary and secondary levels. Such teachers are trained to think critically about their subjects, even if they have forgotten the exact details that they crammed into their university essays. It is this training that enables them to think about the connections between ideas and develop sequences of thoughts that become meaningful lessons for students.
We cannot send an AI to university to learn the nuances of a subject. Instead we must develop training materials that are supplied to the AI at runtime. Through prompting, the AI can learn to use the ideas to develop answers that go beyond repetition of the words in the training materials.
The key training documents are those that teachers use when preparing to teach courses: published curricula, examination specifications, schemes of work, in-service training materials, examination questions and mark schemes, examiners’ reports and the like.
Other documents, such as accounts of the nature of the scientific method, rules of historical inquiry, vocabulary lists and formula sheets, are also significant.
These documents can be assembled into folders that might represent, say, Science in Year 6 or French in Year 12. Some AI systems (e.g. ChatGPT) allow these to be fed into the system using scripts coded in Python. If this were done at the level of the school or academy trust, the material would be available every time the AI was accessed, saving teachers the time of retraining the system.
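A minimal sketch of such a script might look like the following. The folder and file names are hypothetical, and this does not call any particular AI provider’s upload interface; it simply gathers a subject folder’s documents into one labelled block of text that could be supplied to a system at the start of a session.

```python
from pathlib import Path

def assemble_context(folder, suffixes=(".txt", ".md")):
    """Concatenate the readable documents in `folder`,
    labelling each one with its filename."""
    parts = []
    for path in sorted(Path(folder).iterdir()):
        if path.suffix in suffixes:
            parts.append(f"--- {path.name} ---\n"
                         f"{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

# Hypothetical usage: a trust could run this once per subject,
#   context = assemble_context("Year6_Science")
# and prepend `context` to every tutoring session's prompt.
```

The design choice is deliberate: because the assembled context is plain text, the same folder of curriculum documents could be reused across whichever AI systems a school adopts.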
Avoiding inaccurate or incorrect statements is a core prerequisite of successful teaching. Schools become deeply concerned when trainee teachers make mistakes that students copy. But accuracy is only a part of what good teacher tutors share with their students.
Alongside the knowledge and skills, tutors also transmit the values, standards, rules and ethics of the school. These are designed to offer a social education that allows students to fit into the organisation of the school.
How do trainee teachers find out what these “approved” codes of behaviour are? Policy documents produced at national, academy trust and school levels give guidance and can be used to train AIs. Equally importantly, trainee teachers observe other teachers in action and discuss issues arising in their classroom practice. The training is “on the job” because it is reflective and is determined by context. Uploaded training files can never make up for this, and this will inevitably limit the ability of AI to be an effective school tutor.
Some students have educational plans identifying their special educational needs and these are supported by a wider literature offering strategies for successful teaching and learning. If the goal is to offer personalised support to these students, then the AI will need to be aware of their individuality. There are compliance and safeguarding implications to this. One can envisage a time when maintaining the compliance of the AI systems is a recognised role in schools.
None of the above is intended to discourage teachers from trying to use AI: the ability of AI to simplify, translate and produce engaging text is already evident, and will only improve as future versions curb the unfortunate habit of hallucinating false information.
An AI offering careers advice might be a real advantage in preparing students for a face-to-face discussion with a careers teacher, providing the AI was trained with up-to-date information about the rapidly changing employment sector.
How far can this go? Can AI ever really offer effective tutoring in life choices and relationships? Tutors learn to become sensitive to context because they work alongside their students. They know when a student could try harder and when troubles at home require a gentler conversation. They can spot signs of distress and possible safeguarding issues. Can such experience ever be reduced to a training code?
This leads to a wider question for society. Do we really want a machine to meet such basic human needs? Might such a system create a new form of psychological dependence, especially for students developing their identities through screens, amidst social media?
I have little doubt that, since there is a commercial advantage, attempts at such systems will arise. The question that schools and society need to ask is, where should any red lines be drawn?