The Nature of Learning with Helen Beetham

KIMBERLY NEVALA: Welcome to Pondering AI. I am your host, Kimberly Nevala.

In this episode, I am pleased to bring you Helen Beetham. Helen is an educator, researcher, and consultant who has advised universities worldwide on their digital education strategies. She has also worked with international organizations such as the EU, UNICEF, UNESCO, and the Commonwealth of Learning. She is an amazing and prolific author and has also edited several standard texts, including Rethinking Pedagogy for the Digital Age. She is, I believe, currently completing a book on cultivating students' critical digital literacies. She also has an amazing Substack called Imperfect Offerings, which the Guardian Observer has recommended for its wise and thoughtful critiques of generative AI. And I wholeheartedly second that recommendation.

Today, we're going to be talking about the nature of learning, the missteps and opportunity costs inherent in our current approaches to AI in education, and how we might chart a better course. So welcome to the show, Helen.

HELEN BEETHAM: Thank you so much, Kimberly. Thanks for that generous introduction.

KIMBERLY NEVALA: All right. So let's start with the problem space at hand. I think a lot of times when we go awry in trying to solve a problem, whether it's with technology or otherwise, it's perhaps because we don't understand the problem we are trying to solve.

So in your experience, what do we at large, and particularly those of us who like to solve problems with technology, tend to misunderstand about the nature and objective of learning?

HELEN BEETHAM: Wow, that's a huge question, Kimberly.

I guess most of my experience is in higher education. And I think in higher education, particularly, there may be some misunderstandings about just how challenging and transformational we expect learning activities to be for students.

We're not trying to add something on to things they already can do necessarily. We're really trying to reframe, often, the ways that students think about the world. So when we ask students to produce content, whether that's writing or perhaps coding or images-- the students I teach might be creating videos, presentations, podcasts even-- we're not doing that so there can be more content in the world. There's a world of content out there.

We're doing that so they can really transform their ways of thinking, that they can be challenged, that they can gain new developmental skills. And so I guess that's kind of fundamental.
But I think in relation to generative AI, particularly, I would want to say that the misstep, the misunderstanding I feel most challenged by myself at the minute is really just how good or bad it is.

I feel like there's a kind of two-tribes moment, where a lot of educators genuinely feel that there are transformational possibilities there. A lot of educators like myself have tried and experimented with the tools and really find them fundamentally quite brittle and quite flaky. And that's not only for the basic tasks of information retrieval and summary, but for the fundamental things that, as you say, learning is about.

So as teachers, we have something called "theory of mind." We have a project of engaging students motivationally. We are interested in transformational change. And we are interested in giving responsive feedback. And all of these are things that, in my experience and many educators', these tools really are quite lacking in.

So I think there's some misunderstandings, missteps, or some strange loops, perhaps, going on in our understanding of exactly how useful they're going to be in the classroom and particularly in thinking about how transformational they might be.

KIMBERLY NEVALA: That's an interesting point you make about higher ed - sometimes we'll refer to that as HE - that it's not a means to an end in which the end is content generation. And in fact, generative AI in particular… these tools are output generators. So are we conflating something about their ability to output something with the intention behind engaging in the process of learning? And if so, what is the implication for students' development as learners and as humans in the world?

HELEN BEETHAM: Well, Kimberly, I think going back to the previous question, one of the issues we have in thinking about learning is that the education system is not just the sum of all the individual learners. The education system is a cultural institution or institutions that we've built over many decades.

And so something that to an individual learner might seem intuitively really helpful, really productive - I can create content more easily, more rapidly, I can practice more - may actually be, for some learners, some of the time, for the task they've been set, educationally productive, developmental, transformative.

I think when I think about HE, I think about HE as much as a system, and the sorts of systems it rests upon. So, yes, creating content is a wonderful thing that we all do. But actually, that's not the cultural purpose of higher education, is it? So I think some of the content production systems that higher education rests upon - like, for example, academic publishing - are quite challenged by these new tools and technologies. So when we focus just on the individual benefits and use cases there may be for some, we perhaps neglect to think about which of those systems are vulnerable, not just to individual use cases, but to these radically new systemic effects.

So I think that kind of partly answers your question. I think the question about process is one that many, many educators are picking up in a really imaginative way. And I wouldn't for a minute dismiss the effort that so many educators are making to devise new kinds of tasks, to monitor those tasks differently, to encourage students to monitor those tasks differently so that we can shift that emphasis from the end product to the process.

And obviously, there's an opportunity there for us to rethink what learning is and what learning is going on. But I think at the same time, it takes resources. It takes resources both immediately, to rethink practices that have become really well established and long-term, if what we're focusing on is how to support students through process, where we might give them feedback iteratively at several points in that process. Just pragmatically, there's a teaching resource implication there. There's a time and space and platform and content implication there.

The promise that AI is simply going to do away with all those resource problems and speed it all up and make it all possible, I think, then becomes quite a significant problem for the kinds of rethinking that, ideally, AI would be encouraging us all to do, and is encouraging us all to do.

KIMBERLY NEVALA: And what are the opportunity costs or the experiences that students miss out on when we take this type of an approach?

I listened to another podcast recently, and they made this very interesting point that a lot of the work of an educator and a teacher has very little to do with the fundamentals of teaching - and they were talking about this from a primary school perspective.

What she said was that students will pick up the knowledge fairly quickly. Maybe it's basic math - she was just talking about math skills - they have to be presented with those problems, they have to be presented with those challenges, to work through them.

But they really rely on encouragement, on feedback. A lot of what holds students back from learning is not just a lack of access to materials, although that plays a part. It's a lack of access to the community, to the supports, to that intrinsic human feedback loop. What is your reaction to that?

HELEN BEETHAM: I think all of that.

But I also think if we think about generative AI specifically and its affordances, learners don't always necessarily know what they don't know. And they don't necessarily know what it is in a task that will be developmental for them. So in that rush to achieve the task, there can be a lack of knowledge and understanding. And I call this the expertise paradox with generative AI.

So say you already have an expert practice - in writing or in coding or in any of the many professional areas where there are shortcuts. Without a doubt, with some of these systems, if you're already an expert: one, you can see where the shortcuts potentially are. Two, you know how to mitigate errors and fragilities in the system, because you're already an expert and can see what's going wrong. Three, you have the capacity to innovate. You can see what kinds of new things these new tools might be really good for.

If you're none of those things, if you're not an expert, then the rush to use these tools to support what seems to be the task can really sideswipe what is going to be developmental for you.

And I think there is a danger that as professional educators, as people who are by definition expert in a field, we might use these tools in a way that seems really productive for us and perhaps not think deeply enough about what it is that's productive for students. Which is not a rush to complete the task, but is actually to encounter the challenge - the challenge the task holds for them is what they have to meet. And we shortcut that at the peril of their learning.

I think also thinking systemically, again, as I would encourage us to do, one of the things that will be an opportunity cost is the investment in teachers. Because the only way, really, for there to be a profitable use case for the companies that are selling this into education is to de-skill or replace or accelerate the productivity of teaching staff.

And I've been in edtech for nearly 30 years now, Kimberly. And rarely have I seen claims to save teacher time translate into anything other than pressure on teachers to throughput more students more quickly. So I think systemically, there are bigger opportunity costs, even perhaps than there are for individual learners.

KIMBERLY NEVALA: We talk a lot about this rush to quantify everything. And you hear this a lot, particularly in this space, under the guise of personalization.

So you've said in some circumstances the tools can be helpful, for certain learners in certain situations and perhaps at certain levels of training or expertise. So is it true that we should also remember - and we'll see this in both the usage and adoption of these tools - that experts learn differently from novices in any given area?

HELEN BEETHAM: Yeah, I think exactly. That's the point. Experts learn differently.

Some of us are old enough to remember the MOOC hype curve. And I'm old enough to remember the promise of networked learning back in the 1990s and early 2000s: the potential to link people together in convivial learning sets, learning groups where we'd all coordinate ourselves around the things we were really passionate about. What a wonderful vision that was.

And it works really well for people who already have a motivation, an expertise, a field of knowledge, a professional practice. Those kinds of approaches work really well. But by definition, if we believe in mass accessible public education, including accessible higher education, then we're working with students for whom those things are not yet true.
And it's in that space of the not yet true that we need to think about, how does learning really occur? What kinds of care and appropriate challenge and respectful feedback and what kinds of listening to a multiplicity of voices and techniques and approaches in the classroom are going to support the multitude of learners to make progress towards the kinds of expertise and capability that are going to be meaningful for them and aspirational for them?

And I'm afraid a lot of these systems, they converge on a norm. They converge on a norm. They make a huge number of assumptions about what's appropriate, both culturally and educationally. As educators, we don't always step out of that norm ourselves - I'm not suggesting we always get that right as educators. We have our own biases. But the classroom and the university and the school are spaces where that challenge can occur. These architectures are not spaces where that challenge can occur.

KIMBERLY NEVALA: This idea that we can both truly quantify human and a student experience and then use the tools to essentially cover over or fill in the gaps in resources, in human teachers, in human supports, and do that in a way that's hyper personalized - so Kimberly will get what Kimberly needs, and Helen will get what Helen needs - is often a selling point. Although what I hear you saying is, in fact, it's a bit of a misdirection.

We may agree, I guess, that there are not enough resources today to do this effectively and efficiently - at younger ages, we don't have enough teaching resources, and in higher education, we don't have enough accessibility for everybody to reach appropriate and enriching higher education. Techno-solutionists will say these are the areas where the current system is broken and where tech can help. But it strikes me that perhaps we've identified the problem, but we haven't necessarily identified the solution correctly.

HELEN BEETHAM: Well, scale is a problem for which technology will always present itself as the solution. So let's take that as a given.

But maybe with your permission, Kimberly, we could get under the hood a little bit of these systems, because you've talked about what's obscured in them.

Now, my friend Audrey Watters has written wonderfully about the history of the promise of personalization in tech-based education. Actually, it goes back to the 1920s, but it certainly goes back a very long way in the digital world. Why has that promise not really been realized? I think we need to understand our learning theory better to explain that. I don't think the technology will explain it. But let's look at the technology we have in front of us, generative AI.

So the generative models, when they first emerge from the process of training on the data - with all the expertise that engineers can throw at that, with all the very subtle adjustments they make to that training process to get an outcome that seems acceptable - that's really very early in the process.
And what gets obscured at that point is all the training that goes on: the human annotation, refinement, feedback. Time did a wonderful exposé of the kind of majority world labor that was being exploited to create the foundation models. And that has carried on. It's not that this is a one and done. So these models continue to be refined in relation to particular use cases.

Now, education is a really powerful use case for potential profitability. So not surprisingly, both with the foundation models and with many of the models being built on top of them and offered to education, a huge amount of teacher labor, or what I'd call data work, is going on. Data work is often denigrated - a lot of data work is incredibly skillful - but maybe it's data work below the level of a qualified teacher. A lot of that is going on to create systems that have been fed with content.

In the UK, the government has just put out a contract to build a data pool based on our entire national curriculum. But interestingly, in relation to personalization, that contract also includes anonymized data from generations of school students who have been through the national curriculum. Teachers have spent many, many hours of their lives - I know this as a parent governor - collating data about how pupils and students encounter that curriculum, how they make good progress, and how they don't.

That, too, is in the data pool. Because the content alone is not enough to produce these effects. We've talked about responsiveness, theory of mind, personalization, adaptivity, care, appropriate challenge. The content itself does not produce that.

And so we then conceal this additional labor, the layers and layers of additional labor. Some of it based around what teachers themselves have produced, some of it teachers being employed to do that, some of it data workers as teachers doing that. That's all obscured.

And so suddenly, it seems like it's the magic of the system. If you do get something useful out of it - a lesson plan, for example - I'm not denying that there can be some really quite thought-provoking lesson plans produced by these systems. But that's coming from the labor of teachers. That's not just magically produced by the statistical conjuring act.

You talked about the person behind the curtain. A lot of those people behind the curtain are teachers. And just on the point you made about the US education system, the US education system and its insistence on standardized testing has produced such a wealth of material, all of which is in the training data of the foundation models.

And so not only do they work reasonably well for US-based problems and challenges, they work reasonably well in other anglophone contexts. And given that the US educational publishing industry dominates the world almost as much as the US tech industry, they work reasonably well in other contexts too.

But they flatten the concept of what education is to a very singular culture, which has a very particular, standardized, I would say, quite instrumental approach to education, which is replicated in these models because that's what they've ingested.

KIMBERLY NEVALA: And these are all models that work very well on, as you said, standardized and mechanized and repeatable patterns. So these are the bits of data that we can really easily - and perhaps only - quantify. So I have a bit of an instinctive reaction when people say we can quantify or digitize an entire student's experience, understand everything about them, and therefore tailor that experience. I think that denigrates students as humans, but also people in general. I'm interested in your response to that.

HELEN BEETHAM: Well, it's not only objectionable, it's also fundamentally a fantasy.

There's the fantasy that if you collect enough data from students or if you produce a complex enough data model of teaching strategies and student responses to them, you can produce a passable imitation of teacherly behavior in an AI system.

It's been a fantasy, as I have said, for many decades now. And it's a powerful one. It allows the tech industry to sell very expensive systems and subscriptions to schools and colleges and universities. Which then require teachers to spend valuable teaching time servicing those systems with more data and more diagnostics.

Now, all of that data labor and data squeeze might produce a bit of juice in terms of knowing a bit more about certain factors that learners bring. But exactly as you say, learners are not bundles of factors. Learners are actors in their own learning. They are investors in their own learning. They're co-creators of their own learning. And so what's being lost is the potential to invest in building wonderful classroom communities, in bringing on new generations of teachers, in students as individuals who have their own voice and who want to direct learning in their own ways.

So that's the opportunity cost. And history tells us this is a fantasy. However much our buying into that fantasy does create something - as I've said, there's a little bit of juice for all of that squeeze - it's fractional.

KIMBERLY NEVALA: Well, one of the refrains or hypotheses that you will often also hear is this idea that we just fundamentally need to rethink not just how we teach things, but what we're teaching in the first place.

We need to make sure that as folks come out of this system, including higher education, they are AI-ready. And that a lot of the basic skills and foundational elements were perhaps taught imperfectly - and maybe there are elements of the system that have now been exposed that tell us something about current weaknesses. But not to worry, because with these systems, the skills that we're worried about are no longer going to be relevant. So it is more important that everybody understands how to operate the machine.

And again, I'm not knocking digital literacy or AI literacy here. I think it's very important. But the argument is that a lot of the types of objections that folks like yourself raise are really just moot, or without a solid foundation, because you are working in an area, or prioritizing skills and expertise, that will no longer be valuable or needed in the future.

HELEN BEETHAM: Yeah, I'm very much there for critical digital literacy. In fact, I developed a framework that's still quite widely used in higher education and supported the EU to develop their very widely used framework for educators. So this is where I come in.

But I think any framework and indeed any project in education at some point is a story about the future. It's a story about what that might look like and how we think our learners might flourish there.

Now, the story about the AI future - the one I'd now characterize as AI realism. I don't think most of the educators I encounter are deeply enthused anymore; I think that's a minority. I think the vast majority of educators are realists, in the sense that they believe that the full horizon of future possibility, particularly for graduate work, is filled with these systems.

And it's hard to contradict that when you see them every day. Some new platform that I depend on and that you depend on and that we all like to use has some new AI chatbot. I won't name names, but I'm actually involved in a dispute right now to try and opt out of AI functionality in a platform I've used for many years. And it's involving a lot of emails. And I don't think I'm going to get there, particularly because I've asked them to guarantee that none of my data is used for training future models, and I think that's going to be the sticking point. So that's an aside.

It is being integrated. So I think my position has changed since a year ago in terms of what that means.

However, nobody is telling us anymore that what that integration looks like is the AI getting better and better. We forget: when GPT-4 came out, we were expecting GPT-5 in a couple of months' time, and it would be exponentially better. We've conveniently forgotten that. There are lots of reasons for thinking the systems are not going to get better and better. But they are going to get more and more integrated and more and more unavoidable.

So what is a realistic response to that? Which still considers that students need to develop a critical stance not just in relation to AI, but in relation to every tool that they're offered in education, in life, in work? It's absolutely important that they can do that.

Well, I think one of the things is we have to really worry about where the responsibility is being placed. All the frameworks I've seen about critical AI have great terms in them like ethical, responsible, critical, all terms we think are important. But how possible is it to be ethical, responsible, and critical, if the only way to engage technically is to know that those systems are working in the background?

They're using vast amounts of data and compute whether we like it or not. They're presenting an AI-generated answer at the top of our list of hits. When we try to write an email, they're offering to jump in. Do we perhaps then need to think about where to place the responsibility for critical, ethical, and responsible behavior?
And perhaps it's not at the level of individuals. Perhaps it's more at the level of corporations, colleges, institutions, universities, and of course, the AI companies themselves. What would that look like?

KIMBERLY NEVALA: Yeah. And I don't know that we have that answer yet. Certainly, the regulatory environment and frameworks… it's an ever-changing, quite fascinating landscape today and perhaps a topic for an entirely different conversation.

But I also wonder if - kind of pulling some of the threads together from what you said before - we're creating, what is it called, the ouroboros, where the snake is eating its own tail.

Where a lot of times, again, we hear, well, it's not important that folks develop or acquire certain levels of expertise and skills. We hear a lot: hey, these AI tools are great, they can be your new intern. But the point of being an intern is also to learn. So this extends outside of the educational piece. And if you don't learn those skills because you've automated them out to a system, then how do you ever learn them?

Because there does seem to be a bit of a, well, you tell me if this is actually a dichotomy or if I'm just projecting a problem that doesn't exist between this statement that says we want folks coming out of educational systems to be AI-ready, AI-conversant. AI is the future. They just need to know how to use the tools and that's it. But we're also interested in folks who are able to create new and different ideas. And that is not the purview of these types of systems, which are all based, however well curated, on historical data and patterns.

So is there a reckoning coming here? Or is there something I'm not seeing about how we work around this issue?

HELEN BEETHAM: Well, this may be something you know more about than I do, Kimberly, but I keep a pretty close eye on the tech industry and its business-to-business relationships. And it seems to me that the narrative that every business is looking for AI-ready graduates has not played out.

We have businesses trying to recruit in ways that actually preclude the use of AI. Finding interesting ways to bring people together, to have days of teamworking where they see who's innovative, who has those soft skills. People are actually rejecting some of the previously canonical ways of choosing an intern or a candidate, because they know that otherwise the people with the most expensive subscription will be the ones who get to the top of the pile. And that's not what business wants.

We also see businesses really asking some quite sharp questions about productivity. This was the big promise. It was going to accelerate productivity, we were told, 10 times, 100 times, 1,000 times. It's back to that expertise paradox. If you have a highly expert workforce and you give them a tool like this, some of them are going to see the potential. And they will integrate the shortcuts. And they will continue to want to behave professionally and to monitor what the negative outcomes are.

The last thing you want is a pile of completely unskilled workers using these tools as though they have skills they don't. As a business, you don't want that. It may work for a limited time for students trying to pass in the rather instrumental and outdated and unfortunate ways we ask them to pass. But it won't work when they get into the workplace.

So I think the idea that this pressure is coming from industry for AI-ready graduates, I'm not really seeing that play out. What I am seeing is we have to know about the systems. We have to understand them. We have to expect to encounter them almost everywhere we look. But those tools of criticality, of critical thinking, of innovation will continue to be really, really valued.

Now, I just would make a caveat to that. I think if you listen to what is coming out of the mouths of the Elon Musks, the JD Vances, the Peter Thiels, they're coming out and saying what they think, which is they hate higher education. They don't mean by that they don't want their kids to go to an elite university. They don't mean by that they don't want engineers and entrepreneurs. What they mean by that is they don't think the mass of people, the mass of young people, should be invested in to the extent of having three or four years to develop themselves, to think about the world, to decide what they think about it, to study the humanities, to study critical social sciences. They don't believe in that, that's for sure.

And what they would like the majority of us to do to improve ourselves is to wait for the next upgrade, whether it's Neuralink, whether it's the next subscription. This is how you get smarter. Unless you're one of the elite where you can be innovative, you can be an innovative investor, you can be an AI engineer - we need those people. But the great unwashed - I'm sorry to use that term - you just wait for the next upgrade. Education is never going to improve your life as much as technology can. And that is literally what they believe.

KIMBERLY NEVALA: Yeah, I heard an interesting challenge question the other day, which was essentially: you first. Isn't it interesting how folks of this ilk are not educating their kids with robot tutors and teachers, and are not forgoing a lot of these standard environments or communities. But they certainly think it is good for the rest of you.

And it brings me back to this refrain of "you first." Which is, if you truly believe that, then maybe you should adopt it, and let's see how that works. But in fact, we're seeing a pushback in a lot of these cases, where they're moving away from technology or trying to minimize engagement with it - and we'll see whether that's reasonable or not as well. So it does play to some of what you're talking about there.

But as you were speaking, I also wondered if AI in this regard - or the AI solutionism that we're seeing today, as you said, it's not new - is a bit of the canary in the coal mine. So you mentioned the US's turn towards standardized testing and where we've become really enamored with these very repeatable - I'll use the term you said - mechanized ways of assessing knowledge acquisition, of assessing progress.

Do your metrics really matter? Is that the question that has to be asked? And have we therefore developed systems, perhaps even unintentionally, that are just ripe for this type of techno-solutionism?
And perhaps we can argue over when and where the technology itself needs to get applied.

But is this also a moment where we should perhaps acknowledge that there are some critical weaknesses we need to address in how we have approached teaching and learning - putting the economics of just paying teachers aside for a minute? And are there elements of the way we educate today that we should change?

HELEN BEETHAM: I love this question, Kimberly. Yeah.

I call it pre-automation. That if automation arrives in a system that is ready for it, that is ripe for it, then it's going to be adopted and accepted. And so yeah, absolutely, I think education is a system where skills, teaching skills particularly, but also research skills, thinking about my own context, have been metricized, standardized, unbundled, and where possible, outsourced.

And we've decided in addition to all of that, that the only kind of expression of ideas that counts is some kind of formal academic English. And some very standard - the US is particularly guilty of this, I'm afraid I have to say - also, even to the point of standardized kinds of argument.

So if you think about the drive to automate marking of essays, which is huge in the US, what that does is it models statistically certain features of writing that can be modeled statistically. And then that practice becomes what's marked for. And then that practice becomes what's valued. And all of that, these large language models have ingested at scale.

So not surprisingly, our students aspire to reproduce that kind of text because of what it gets them. It may not get the highest marks - I've written an essay about passing - but it gets a good pass. It gets a good pass because these models have ingested billions and billions of data points on how to get a good pass, particularly in the US college and higher education system. So there's that.

Then there's also… so I should share that when I was writing at university, I was once praised for writing like a man.

KIMBERLY NEVALA: Congrats?

HELEN BEETHAM: Because I wrote with authority, apparently. Whatever that meant. And I think we could go further and say that the model of writing these standardizations prefer is particularly middle class, anglophone, white, typically male.

There's a kind of failure to value diverse forms of expression. So when we come across those linguistic codes, those alternative forms of expression, in the classroom, the tendency is to try to correct them to the norm.
So I think if we were going to not do that, if we were going to value people coming to education from preschool all the way through with different cultural forms of expression, different cultural forms of knowledge, and really value the richness of that, one, we would have to put a lot more resource in, as we've just said, to teaching, to coordinating that space so that people can encounter those differences successfully and positively.

Converging to a norm is cheap, and it scales. That's what technology loves. But how much richer an experience of higher education - or of education generally - we would then be offering people. Not one model of how to produce authority, how to produce credibility, how to pass. But the idea that you are here with all your cultural background, all your personal life history, and your aspirations for what you want to do with this time. And we can support that. And while we're supporting you to do that, you become a resource for other people because of the cultural diversity that you bring.

Now, this is literally the opposite of what large language models do. It's idealistic, but god, we need some of it.

KIMBERLY NEVALA: Yeah. I'm not known for my optimistic takes on most things - I'd like to think they're realistic, but I may skew towards skeptical a lot of the time. But I always hearken back to, I think it was, my fourth and fifth grade class, and this fabulous teacher named Mr. McMahon.
And I come from a very large family, and all of us who have gone through his class remember him so fondly.

But we did a lot of stuff back then, even at that age, whether we were doing math or physics or all those bits. I remember we made rockets. We had to put a bike together. We created a model house and had to electrify it so things would turn on and off. So these were very hands-on, immersive experiences.
And along the way, we learned all of the basics. It wasn't that we weren't doing the fundamentals - I do remember doing math worksheets and things like that. But then we would look at them in an applied context.

And I'm hoping that the reaction that we have to some of where we're applying technology today is not to just lean into automating what may have become very mechanistic systems. But to actually push back on those and go to maybe a totally different style of teaching. I think that's probably a bit of a rose-colored-glasses view. But that is my secret, secret, secret hope.

HELEN BEETHAM: I don't think it is rose-colored.

I mean, we mentioned earlier about the elites. The elites do not expect their nannies to have their mobile phones out. They send their kids to Montessori schools, where those kind of embodied, fully engaged forms of learning are practiced.

But then there are very expert teachers who can pull out of those embodied situations and encounters, what you call the basics. That's a real skill, to have a class of kids running around, picking things up, sticking them together, being naughty, and out of that, to pull some of the things that some of them need to know or to do to get on to the next stage. This is the aspiration we should have.

And I do believe there is an opportunity here to reverse out a little bit. But what I care about is doing that for everybody, not just for the people who can afford to hold those spaces where the technology is excluded or minimized, who can afford to hold those spaces where the teaching resource is there and where difference is valued.

I think there's a real danger that we will get a two-tier system - even more of a two-tier system than we already have - through the whole education process. Where the technology is the app in your pocket, and this is happening in health care already. As you say, doctors are not keen that their elderly parents should be cared for by a robot, or that their sick child should have an app diagnosing whether they've got sepsis or not. But these apps are there in people's pockets to provide that care. And the mantra is, well, teachers can't be there all the time. Nurses can't be there all the time. But the robot, the language model, is never asleep. It's always there.

And that's the kind of mantra I think we should really problematize. Because yes, there is something always there that can't be turned off, that never sleeps, that never forgets. But that's not care. That's not teaching. That's not attention. And these are the things that don't scale. Let's value them more, seeing how poorly these systems - which have had more dollars thrown at them than any technology in the history of humanity - do at those things.

KIMBERLY NEVALA: Yeah, absolutely. What steps or actions do you think we should or could be taking - particularly, as you said, at the university level - to chart a better course for AI in education? Although perhaps we answer this question first: you acknowledge quite openly that there are opportunities and ways that people are leveraging AI today in useful and realistic ways. So maybe we talk a little bit about those. What are some of the ways you've seen people using it, in your experience, that are helpful and reasonable?

HELEN BEETHAM: You challenged me there.

I see colleagues using language models and data models in interesting ways - in more specialist ways. So in the sciences, in the digital humanities, there's a long history of using these kinds of probabilistic techniques to find new patterns in data. And as researchers, as specialists, as professionals, that can be really valuable. And it can be part of a professional workflow.

So researchers, certainly. I see people who have really established practice, whether it's an artistic practice or a writing practice or a professional practice, I see exploring innovatively. So I wouldn't minimize those possibilities.

And then I'm not the best person to speak about this. I mean, I've actually interviewed some wonderful educators on my own pod, people like Katie Conrad and Anna Mills and Maya Indira Ganesh, who are really pushing the envelope in terms of critical digital skills. And I follow them in terms of what I try with my own students to get them to encounter these technologies but to think really critically. Particularly at the minute about sustainability because so much is coming out about the cost in terms of computation and energy and water. So I follow other educators' lead on that one.

I think my work focuses more, as I've said several times, on - oh, and I should say, I'm also really inspired by some of the youth-led movements like LOG OFF and Tech(nically) Politics, and some of the majority world movements. Rest of World and Derechos Digitales have some amazing work going on.

But I think in my own context, I'm much more focused on what systemically universities might do. And something I really want to say in relation to that is, come on guys. We have the resources. Until, I think, about 2014, universities were where the most innovative AI labs were based. There's a long history of military funding. And there's a long history of commercial involvement. But universities had that expertise.

Meredith Whittaker has done a wonderful job of charting the brain drain, but the brain is not completely drained. We also have the cultural kudos to say we need a wider repertoire of knowledge practices than the ones that are being offered to us by the integration of large language models into every system. We need that repertoire. Where else is it going to be held?

There are community groups doing great things, but the resource there is tiny. There are libraries. There are cultural heritage organizations. There are minority languages. There are minority cultures doing brilliant work on the edge of all this.

But universities, we have so much resource. We have so much cultural capital and content. We have the technical resources and the engineers, the ones that haven't been completely co-opted and drained away. And what are we doing to push back? What are we doing to talk back at the AI industry about what we value and how we want to value it and how we want to govern their technologies in our spaces for our students?

Our students will go off and use them in their own spaces as they did with social media. What we did with social media was we said, hang on. Hang on, guys. We need some safe spaces. We need spaces free from sexual harassment. We need spaces free from racialized bias. We need spaces free from exploitation and data commodification. That's where our students can be safe and can develop.

We should be doing the same with AI. It's beyond understanding to me that we are not doing that now. We could also be experimenting with small- and medium-scale models with the kind of emerging open - I don't mean OpenAI - I mean genuinely open stack for developers. There's some really exciting things happening there. We don't need to be in partnership with the foundation models to do some really interesting work.

We also have traditions of building communities of knowledge practice that haven't completely gone away despite the datafication. So I guess my message is to the governance structures of education, departments of education, university leaders, international educational bodies. Where are you guys?
We will be as ethical, as responsible, as critical as we can be with the systems we're given, but create us some spaces for alternatives.

KIMBERLY NEVALA: Well, I think we could go on for quite some time, but that is a very positive and appropriate call to action to end on. And I just really want to thank you. I think this kind of healthy, positive, engaged critique can only improve all aspects of education and other areas of public policy, technology and otherwise. So I really appreciate your work and your willingness to share your thoughts.

HELEN BEETHAM: I really appreciate your really insightful questions, Kimberly. Thank you so much for having me.

KIMBERLY NEVALA: Awesome. To continue learning from thinkers, advocates, and doers like Helen, please subscribe to Pondering AI now. You can find us on all your favorite podcasting platforms and also on YouTube.

Creators and Guests

Kimberly Nevala - Host, Strategic Advisor at SAS
Helen Beetham - Guest, Researcher and Lecturer in Digital Education