Raising Robots with Professor Rose Luckin

KIMBERLY NEVALA: Welcome to Pondering AI. Thank you for joining us to ponder the realities of AI for better and worse. In this episode, we bring you Rose Luckin. Rose is a professor of Learner Centred Design at the UCL Knowledge Lab and the founder and CEO of EDUCATE Ventures. It will come as no surprise then that we will be discussing the future of education and AI's role at all ages and stages. Thank you for joining us, Rose.

ROSE LUCKIN: Thank you very much for inviting me, Kimberly. I think it's a very interesting topic, and I'm just really pleased that people want to know about AI in education because I've worked in this area for 30 years. And for many of those years, people in education weren't very interested in AI. So it's great now that they are, and I'm really pleased to talk about it. So thank you very much for giving me the opportunity.

KIMBERLY NEVALA: Yeah, absolutely. I'm curious, what drew you to the field of education in the first place, and what then led you to now found Educate Ventures?

ROSE LUCKIN: Well, before I studied computer science and artificial intelligence, I was an educator. I mainly taught in higher education, but I had taught in schools as well. So I think right from the first engagement with artificial intelligence and computer science, I had an educator hat on, if you see what I mean. I always saw AI through that educator lens, right from the start.

What prompted me to form EDUCATE Ventures Research was really a desire to communicate to as many people as possible within the educational stakeholder community about technology and, in particular, about artificial intelligence. And you can do a lot of that through academia. But when it comes to really applied engagement, it's sometimes easier to do it through a separate organization because we can very quickly develop courses, for example, agree to projects, agree to partnerships that take a lot of time for a big organization to manage. Sometimes when you're small and nimble, you can get in there and do things quickly. But I still keep my feet in academia as well, because it's a very nice place to be.

KIMBERLY NEVALA: What are the types of organizations or enterprises that you partner with or serve through EDUCATE Ventures?

ROSE LUCKIN: A whole range of different organizations actually, Kimberly. We've worked with over 390 small businesses in the education technology sector. Some of them using artificial intelligence. Many of them aspiring now to use artificial intelligence. We've worked with policymakers, with ministries of education, with international school groups, with schools, colleges, universities, and even corporate organizations with their training departments in the main.

So it's really wide, but it's all about education and training. Of course, those two things are not identical. But everything we do is about helping organizations who are involved in helping people to learn to see how they can use technologies, particularly artificial intelligence, more effectively. But also what the existence and the sophistication of those technologies like AI mean for the work that they do, for the way their students learn.

So we have a very broad remit to help the educational community leverage AI to their benefit and much of that is helping them to understand what AI is. Because if they understand more about what AI is, how it works, how it's different to human intelligence, what it's good at, what it's not so good at, then it's easier for them to start thinking about how they use it with their students, how they use it with their colleagues. What it means for their organization, what it means for their institution because they've got a better understanding.

KIMBERLY NEVALA: You made a distinction between education and learning there. I don't want to go too far down the rabbit hole, but can you talk a little bit more about that differentiation for the audience?

ROSE LUCKIN: Yes, of course. I was also differentiating between education and training as well, so I'll pick up on all of that because I think there are important differences.

Learning is the process that we go through in order to understand something, to develop a skill, to gain some knowledge, to become capable of completing a particular activity, a particular task. And that learning happens in many ways. Sometimes through an education system, which may involve some formal education in a school, college, or university, or maybe some informal self-study.

Training is something that's more done as part of a career, a job. Something we do that's very specific to a particular activity: generally one that involves us being paid, but it could be something we volunteer for. So it's slightly different.

And the emphasis within education, particularly if you take higher education as the example, is much more about intellectual growth. Understanding what knowledge is and having a sophisticated engagement with knowledge, which at a PhD level for example, definitely includes questioning that knowledge. Whereas training is unlikely to do that. Training would be much more about meeting a set of requirements to demonstrate that you've developed a capability, a skill, and are able to perform particular tasks. Do you see what I mean? Does that make sense?

KIMBERLY NEVALA: Yes. I think that's very helpful in the interest of making sure we're using terms in the same way.

ROSE LUCKIN: Yes, you're right.

KIMBERLY NEVALA: In a similar vein, before we dive into the potential of digital technologies like AI specifically to either address or potentially exacerbate current challenges across that educational spectrum, it would be helpful for us to define what it is that we are even trying to address or achieve with these technologies. What are some of the either overarching or more specific challenges facing the educational system today that you are focused on trying to solve for?

ROSE LUCKIN: It's such a good question. To start that, it's important to think about what we mean by artificial intelligence. Now of course there are hundreds, thousands even, of different definitions of AI and people debate it for long periods. But thinking about it in a simple and straightforward kind of a way, I think it helps to consider artificially intelligent systems to be technologies that analyze their environment and respond to that environment in order to achieve goals that have been set for them.

So, for example, if I'm using speech-to-text - which I use a lot to dictate into my phone or my computer - the environment is me speaking and the AI’s objective, its goal, is to take that audio stream and translate it into characters on the screen, on my phone or my computer. It's a very simple example but I think that way of looking at AI is very helpful because it tells us a lot about what it can do and that helps us to think about the challenges.

It's also important to recognize that artificial intelligence can be embodied in software or hardware. So we have robots, obviously, some of whom - sorry, some of which - have artificial intelligence. And we have pieces of software such as the speech-to-text application that I was just using in my earlier example.

But if we think about the way in which artificial intelligence is able to adapt to changing environments, to recognize the individual needs of a learner, for example, or the differences between different contexts, different environments, and respond accordingly, then you can see immediately the possible benefits of that. If we have a technology that can analyze the way it's interacting with a learner, for example, and then adapt its behavior based on those interactions and the environment of those interactions, then we can immediately understand that the technology can provide an adaptive experience for the student. So if it's a technology that's been designed to help a student learn how to solve quadratic equations, for example, it would be able to adapt the amount of support it gives a student, adapt the difficulty of the problems that student is posed according to their needs. And that's really valuable.

But you asked about challenges. Of course, the benefits come with challenges. Because in that process of analyzing the environment, a lot of data is collected. And that data could be personal data about that individual. For example, if we take the kind of system where you speak to the technology. Even though the people who've built the AI system might work really hard to not keep hold of and certainly not share personal data, when you're capturing an audio stream you might be capturing background information that tells you something personal about that individual. Perhaps they're in a home. Perhaps you can hear other people. Do you see what I mean? You've immediately got a challenge when it comes to privacy and what's happening to that data. And if technology is capturing personal data about me, how is that data being processed? Is that data being processed by an algorithm that has been developed using data that's representative of somebody like me, or is there a risk of bias? There have been many instances where the artificial intelligence has been trained in a way that's biased towards particular populations. And that's often because data sets that are truly representative don't exist.

Added to that difficulty, that particular challenge, we also have the fact that in many companies, the workforce is not very diverse, and so often the people building the systems don't recognize the possible biases. Now we have got a lot better at this, but it's certainly a challenge that we need to recognize.

So we have a challenge around data being collected, around privacy, security, making sure that my personal data is not being shared with somebody that I wouldn't be happy for it to be shared with. We also have a challenge with respect to the way that the artificial intelligence is built to make sure that the decisions that that AI makes in order to achieve its goals are not going to be biased against a particular type of user. Does that help, Kimberly, to flesh out some of those challenges?

KIMBERLY NEVALA: Yeah, it certainly does. As we're talking about issues of bias, for instance making sure that the machine has been trained on data that is representative of the environment, of the population, it strikes me that even if we had a perfect data set, we could still have a challenge.

We know that in a lot of cases, prediction is not necessarily reality and people aren't patterns. A lot of times, the systems that we've seen in place have been just abysmally poor, particularly in social ecosystems: those being used in welfare systems, in policing, in some of these areas. Some of that comes from the fact that at a group level, to some extent, you can look and say here is a group of people with like characteristics who have shown a propensity or an increased likelihood of something. Whether that's defaulting on a loan or dropping out of school. The issue comes in when we try to generalize that and say, based on the fact that Kimberly has characteristics that are similar to this group, we think she's going to drop out. In fact, understanding that I might be part of a population that is at risk is very different than knowing that Kimberly the individual is, in fact, going to become a dropout.

I wonder when we use these systems for inference and try to project a child's aptitude or their interest based on those who have come before, are we also at risk of limiting the scope of what becomes available to them? Or starting to react and respond to them in a way that may be reflective of some common patterns but not of their individual predilections? And, to bring that point home a little bit more, to what extent should we be using these technologies to help us redesign and think about the curricula, to engage with these types of populations versus trying to target individual students or behaviors? If that makes sense…

ROSE LUCKIN: It does, and it's such an interesting set of questions and concerns. It's not a single question.

KIMBERLY NEVALA: [LAUGHS] Yes, sorry, set of questions. I got going there.

ROSE LUCKIN: You're absolutely right to raise these concerns. I mean, what I said about data and about bias are just part of the picture of challenge. You're absolutely right. There's way more that we need to think about. And one of the areas that I think relates to what you're saying is the imperative for using the AI in the first place. And that's something that often gets overlooked.

So we could have the same technology, the same algorithm, the same data, the same AI system in many, many ways except for the reason that AI system is being used. And perhaps in some of the situations that you're describing, actually AI isn't the right tool, and the imperative of the AI is not one that's actually playing to the strengths of the AI. And it might be that some human intervention or a combined human-AI system might actually be much more fit for purpose.

So we think there's something about the reason we're using AI that often gets too little attention. And the promise of AI to save time, do things a bit cheaper, and to do all this prediction entices us into using it in ways that maybe aren't optimal in terms of what the AI is really good at. So we think that's also a big challenge: identifying what the core purpose is for using AI and why AI is the right tool. Regardless of what kind of AI it is: why are we using AI, why is it something that we can't do without AI, and why does using AI bring us very clear benefits? I think that's part of it.

I also think that we expect too much from the prediction in many instances. It's interesting. We can do some wonderful things with data analytics using AI and produce some really nuanced information about people's behavior. But that doesn't necessarily mean that A, we can always predict accurately, and B, prediction is actually the most important thing for us to be doing. Because you were saying, I believe, that perhaps we're narrowing down too much through that prediction.

And again, I would come back to the imperative of the AI when it comes to that risk of narrowing down, which I recognize. If, for example, we had two identical pieces of technology that analyzed a student's facial expression and what they were saying. We know those two data sources are really helpful for producing relatively good data about the emotional state of the person who is speaking. If we have their facial expression and we have an audio track of their voice, we can draw some relatively good conclusions about their likely emotional state.

There's a whole lot of ethical issues in doing that, and I'm not suggesting that's what we want to do. But I'm just saying we have a system that does that. Or we have two systems. Both do that, but one of them does that in order to help a human supporter to know when to intervene with a student because that student is struggling. Now if we get the ethics of the way the technology works right, that could be a really good system.

If I'm a busy educator and I have lots of students, if I can know when particular students are struggling - I can't watch them all all of the time - then that could be very useful. Particularly when the student may not be in front of me. But if we think about that system being used, to say, oh, these students are weak. These students are struggling. We don't want to support them anymore. We're going to select them out of our group. That is very unethical and that is not appropriate.

Do you see what I mean about how the same technology can be used in ways that are positive for the individual and negative for the individual? So identifying the right purpose, the right imperative, and a good reason for AI being used is supremely important. I think that narrowing down is related to that point about why we're using the AI. What are we expecting the AI to achieve for us? And we need to think very carefully before we narrow down people's choices too early or too much.

KIMBERLY NEVALA: Yeah, I agree. Absolutely. I want to come back to what are the right and the beneficial purposes and imperatives. I don't intend to take this down just the ‘where this can all go wrong’ hole, because there are a lot of places it can go right.

But since we're talking about that, another concern that has been raised quite consistently - and the voices are getting louder - is around the implications of using all this data to quantify humans. And therefore starting to measure kids against artificial and, indeed, mechanical ideas. You can see this in a number of ways. Things like kids being watched for signs of distraction, whether they're fidgeting or not paying attention. Where someone else - I come from a large family of kids - would say, well, that's just the way kids operate. Fidgeting or looking around or daydreaming a little bit here and there isn't a sign that this very small child needs some corrective action. So do you hear those kinds of objections or concerns in your work and do you see those as valid?

ROSE LUCKIN: Definitely. I think it's a huge risk. And I think we're being persuaded that the AI can do these things much more accurately than it really can. And many of the AI systems that we're interacting with at the moment instill far too much confidence in most of us. We're trusting them too much in many instances. So I think what you're saying is very wise. We do need to recognize the limitations of what the technology can do.

At the same time, it is incredibly sophisticated and very useful, and it can do many things that can be supremely helpful to us as learners and to educators. But this quantification piece I think is a worry. I'd agree with you on that. It's a real tension. Because it's both, in some senses, the holy grail and in another sense, it's quite the opposite. Because whether we like it or not, a lot of our interactions in the world are tracked and are measured.

Now we might want that to stop, but we have to accept that a lot of that is happening at the moment. Just look at social media and the way that it's incredibly effective at manipulating people to behave in particular ways, much of it based on algorithms. If we think about the amount of information that's collected about us as we interact in the world, I think we shouldn't be surprised by the possibility that AI will know a lot about us and will understand us, to the extent that AI ever understands, but will have a lot of information about who we are and what we do and how we behave.

Now one way that we might think about tackling that would be to want to understand that ourselves. To make it more transparent so that we become much more aware of the information that is available about us. And we can learn from it. Because if there's one thing we can be sure about in the current climate, it's that we are going to have to be supremely good at learning. We're going to have to be better at learning than we've ever been before because we've got to stay ahead of the AI. We've got to make wise decisions about how that AI is developed and how the AI is allowed to become part of the world.

So I think there's this real tension between the risks, and you're right to identify them, of this quantification and the potential benefits to us, if that quantification is done in a way that's transparent to us, that we have some control over, and that we can learn from as well. Otherwise, we run the risk that the AI understands more about us than we do, and I don't think that's a good place to be.

KIMBERLY NEVALA: Or at least we think it does and therefore cede some control and decisions perhaps we shouldn't.

ROSE LUCKIN: Oh, that. You have just hit the nail on the head there. That, I think, is the biggest question. And the one that we need to focus on very, very carefully. And it's one of the reasons we need a much more educated population in order to be able to have these conversations. And that is, what do we offload to the AI and what do we not offload to the AI?

Because you said it. It's easy to believe the AI can do more than it can. And therefore there's a huge risk that we will even perhaps inadvertently allow the AI to take over some of our cognitive thought processes that actually we'd be better off not allowing it to take over. But you can picture it, can't you, Kimberly? Somebody selling you an AI system says, you know, that thing you find really hard to do, that makes your brain hurt. Don't worry. You don't have to do it anymore. We'll do it for you. It's very tempting, isn't it, even if actually the system can't do it that well.

And we've only got to look at what's happened with GPS and the way our brains are constructed is changing because we are using GPS rather than navigating in more traditional ways, and we are losing the ability to navigate. Now maybe that doesn't matter. Maybe it does. But it shows us that if we stop doing things, our brains will change. So we better make sure the things we stop doing are things that we're happy to stop doing, and not let the AI take over things that we'd be much better maintaining the ability to do ourselves.

KIMBERLY NEVALA: Yeah. It's that really silly meme about, hey, with ChatGPT, you don't need to learn to write anymore. And that seems to conflate writing as a communication skill, as a way to express ourselves, with writing an essay as a test mechanism. Do you think that's fair? I think people are not distinguishing those two.

ROSE LUCKIN: I think there is a risk. You're right. I would be horrified if AI ever took away my writing activity because I love writing.

Now I don't like writing fairly tedious bits of text that are template-ish and actually are not very exciting to write and they don't help me develop a better understanding of something. I don't mean that kind of writing. But I mean the sort of writing where you are trying to explain to yourself as well as to other people what you believe about something, or to communicate something complex. And it's the writing process that helps you clarify your own thinking, and that's something to be treasured. I would never want to lose that to AI.

But you have to build that relationship with writing to get to that point of appreciation. So I would think it would be a real negative if we lost our ability to engage with writing. But I think it would be equally a negative if we didn't use these tools to do some of the tedious writing that actually takes a lot of time and we can spend our time more usefully on things that are more important. So again, it's about balance, isn't it?

KIMBERLY NEVALA: I suppose - and I often simplify this question as - are we using these things to reinvent or to automate what's already there? So that example of, well, we use essays to test someone's knowledge and understanding of a topic. Well, you could automate that. A student could automate that for themselves with ChatGPT. And that would be an example of us using the technology to automate something that maybe isn't working like we think it is today anyway.

So coming back to your point about sort of beneficial purposes and imperatives, does this require us to then zoom out and take a look at the system holistically and say, how could we fundamentally do this better and stretch our imaginations? As opposed to simply saying: within the context of our current processes, here are the sticking points. To step back and say, what fundamentally doesn't work in the system as a whole? And then, how do we use the technology in that context versus just automating somewhat blindly what we already have?

ROSE LUCKIN: Yeah, that's a good question. We should be seeing this as a tool to make us smarter. We should be seeing this as something that if a student is writing an essay, now that they have access to a large language model, that essay should be better. We should be expecting far greater sophistication, much greater quality of output, because there's this tool. And that for me is an argument about being very transparent about the use of these tools and saying, OK, I want you to use this and I want you to tell me how you've used it and how you've used it to improve what you've written. Do you see what I mean?

Seeing it as an opportunity. All of this should be a wake-up call to us as humans to realize that actually, we've got to be way smarter now. Our human intelligence is not a finished piece of work. We're still developing for this decade, for the next decade, for centuries to come. So this is a moment in time where we've built some very smart machines. They can't do everything, and we certainly shouldn't hand over too much to them. But they can be very helpful to us. But all of it should be about making us more intelligent, not dumbing us down. And I think there is a risk of that.

But I think part of the way of tackling it is to be very transparent about, sure, maybe set some parameters within which the tools can be used. But to be explicit about using them and how you've used them and how has it helped you to be smarter, basically.

KIMBERLY NEVALA: Rose, you've been thinking a lot about the real opportunities, the germane opportunities. When you look at AI in the context of undergraduate education, where do you see AI can and should be most beneficial?

ROSE LUCKIN: I think there are so many different ways in which AI can be an assistant. And I do think that's how we have to see it: as something that can assist us as learners, assist us as educators, assist us as managers, leaders in the various different tasks that we have to do. And deciding how that plays out comes down to this understanding of what the AI is best at and what we are best at and what the right blend is.

Take something like feedback and assessment. It's always been a challenge in all the years that I've worked in higher education. And that is 30 years; a little bit more. The hours that academics spend on marking and desperately trying to get meaningful feedback back to students in a timely fashion, in many cases quite unsuccessfully, when faced with a pile of physical scripts. Of course you can't do it overnight. It takes time. And then by the time you get the feedback to the student, they've moved on. And is that feedback really useful?

So the whole area, for example, of assessment and feedback has been a challenge for many, many, many decades. Now things have improved with digital submissions and certain tools that you can use. But I think what AI provides us with in this particular use case is a way in which we could really improve the speed of feedback for students so that they got that feedback in a much more timely way where it would actually impact on the work that they were doing. It wouldn't come two weeks after they finished the course. It would come when it was really important to them. And I think we can help educators with that workload because marking is a huge workload. There's no question about it. It is a big part of workload challenges.

However, it's also a really important activity for learning about your students. So I'm not suggesting you should hand the whole thing over to AI. I'm suggesting that, actually, quite a lot of the work could be done by artificial intelligence. But it needs human eyes, oversight, and we need to work out precisely what that human oversight looks like in different disciplinary areas.

But there's also the added advantage that by automating or semi-automating - let's put it that way - that assessment and feedback process you become able to understand a lot more about student cohorts, about particular assignments, about particular areas of the curriculum that appear to commonly cause challenges. Perhaps things we didn't realize before.

We do appreciate some of that already. You can actually learn more about your students at the same time as improving the speed of feedback that you're able to give your students, and in some instances, the quality. Because let's face it. When we've marked 95 assignments, are we really able to provide the same quality feedback on the 96th one as we were on the first one? Let's be realistic. I think there could be some quality improvements as well, and also some improvements for the faculty member in terms of their time in which they can spend engaging with supporting students in meaningful ways based on that feedback. Do you see what I mean?

I think there are particular use cases like that and many others as well. Whether it's content creation, whether it's adaptive learning platforms, whether it's adaptive assessment, there are so many potential use cases where AI can be extremely useful and help address the needs of a diverse population of students, including welfare needs, mental health needs. There's lots of possibilities. But I think it's really important to look at that strategic piece.

This echoes something you said earlier, too. We need to be really thoughtful about where the AI is used, how it's used, what it's used to do. So at a university level, there needs to be a clear strategy about what is happening with AI. And not just in terms of, how are we dealing with the fact that students are using it for their assignments and that's impacting on our assessment? I mean thinking strategically across the institution. OK, what does AI mean for us as an institution? We have our vision. We have our mission. How does AI help us to achieve that mission more effectively, or for more individuals, or more quickly? Whatever it is. And to then see the different use cases. Each is a learning experience that feeds into that strategic vision. So I think you do need to have specific examples that you can identify that can help your particular community of undergraduate students.

And what you do for one community in a university that's perhaps a very research-intensive university may be very different to what you do for a group of students who are in a non-research-based university. Because there'll be different students and they'll have different needs. And what the AI can do needs to be thought about carefully as to for this student group, what are the real challenges that they are facing, that we're facing with respect to them, and how can AI help? Then you have your use case.

But you need to think strategically as well. So you come at it from the bottom, if you like. Let's try something out within a set of parameters. And then from the top, what's our strategy? As an organization, what is our strategy with respect to AI? How is AI going to help us to achieve our mission, our vision for the students that we are educating? Do you see what I mean, Kimberly?

There's a real, big piece of work that needs to be done in terms of that organizational response and planning. But I think the organizations who really grasp that strategic nettle, so to speak, will reap the benefits, and therefore their students and staff members will also reap the benefits.

KIMBERLY NEVALA: And what comes through fairly clearly in what you're saying, Rose - although you should correct me if I'm wrong - is that you really see these AI-enabled systems as tools. So they're tools for the educator. They're tools for the learner. For the student. They aren't the educator themselves.

And again, I think this is a little bit of blinders in the current system. Where we are saying, how do we get more kids through the system without increasing the cost of faculty and staff? As opposed to taking a bigger step back and asking, what is our purpose and our imperative? Maybe the objective here isn't to reduce the cost of faculty and staff but we actually want to lean into the value and celebrate the value of faculty and staff as educators. Maybe we should be hiring more of them because we know that now we need to rethink. Maybe we've been using things like essays or standard multiple-choice tests to vet understanding or comprehension. And we know that those don't really work or don't work the way that we wish they might have when they came about in the first place, because we just didn't have better tools. But now we have better tools.

And maybe we should be having more discussions with our students live and in person. And doing some of these other things as secondary activities which can be greatly expedited and made more effective and more efficient with the use of AI. But this requires a bit of a holistic rethink to some extent about what the outcome that we're looking for is, and a focus on the student and the student's outcome. The learner's outcome as opposed to, perhaps, the cost function of the institute as a business.

ROSE LUCKIN: Yes. I certainly hope, though I suspect I'm going to be disappointed, that this won't be a reason to cut the amount of spending on education. I think it's the opposite. It's an opportunity to improve the education that we can provide for people. And I think, yes, I do see AI as tools and assistants, and certainly not replacements. But I also think we have to look at AI in two other ways.

Firstly, as something we need to help people understand more about. We've got to educate people about AI. We need training for everyone, but particularly in education. You know, continuing professional development training is really important to help people understand. In a non-technical sense, they don't need to know how to build the AI, but they do need to understand some of the basics.

And then secondly, in addition to educating people about AI and seeing AI as a tool, there is the really big piece, which you really engaged with earlier. That is, what does the existence of AI mean for education? The fact that we have these technologies that are changing the workplace, are changing our lives, are a reason for us to want to be smarter, more intelligent. What does that mean for our education systems in terms of what and how we teach?

Regardless of whether we use AI in that teaching, what do we need our students to learn now that we have all of these smart technologies increasingly becoming ubiquitous in the workplace? What is it that we need to enable our students to be able to do, enable them to be able to understand? And I think it is much more sophisticated thinking, for example.

So it's a moment of significant change in multiple ways. How we use the AI and have strategies to help us know how to do that. What the implications of the existence of the AI are for how and what we teach. And then the need to educate people about AI so that they can reap the benefits and reduce the risks.

KIMBERLY NEVALA: Yes, it's an interesting and exciting point in time, for sure. So as I've been poking my fingers in what may be mostly sore spots, you've been in the midst of this really thinking, researching and practicing what you preach with these technologies. What initiatives or emerging potentials are you most excited about today?

ROSE LUCKIN: Well, I'm very excited that people are now engaging with artificial intelligence and wanting to understand it. That's such a huge transition that I have to be excited about that and I am.

I'm also really excited about the potential that these technologies have for meeting the needs of diverse populations and reducing inequality. Now I have to confess that I've been excited about that for 29 of the 30 years I've been working in this area. And this last year I've become more worried about it because what we're seeing is being driven by big tech companies who want to make money. And that rarely equates to increasing equality.

But nevertheless, I am still excited by the potential these technologies have to meet the needs of diverse populations in a way that actually, as individual humans, we can't. We are however many million teachers short in the world. Surely these technologies can help us to address that need. Not by replacing teachers, but by helping us to assist existing teachers. And in some instances, yes, providing tuition. I'm not going to say teaching, but tuition and assistance to students directly, because that means they get something rather than nothing.

So there's a huge possibility to really increase the educational outcomes of the world, if you like, and the diversity of populations who are able to access high-quality education. That is exciting. It's what gets me out of bed in the morning, to be honest with you.

But I think we've got to fight for it. It's absolutely possible, but we really have to fight for it. That's why I'm so keen on helping educators to understand more about artificial intelligence. Because then they will have more of a voice in what happens with the way AI systems are allowed to enter the education system. And, hopefully, more of a voice with the people who are building the systems although that is of course a big challenge. But I really feel that it would be unethical if we don't try to make this a reality, this increased equality that AI could bring about. So that's what gets me excited.

KIMBERLY NEVALA: I will second that hope and that sentiment.

ROSE LUCKIN: Thank you.

KIMBERLY NEVALA: Thank you so much. In the absolute cacophony of voices and opinions around AI today, there is one common thread. It comes through on almost every podcast we do, on a variety of topics and facets around AI, and that is the importance of raising our educational sights. So I really appreciate your insights and the work you're doing to help transform this critical space. Both in terms of learning and ultimately training as well. Hopefully we can get you back on in the future. We'll take a temperature check and see how and where we are making progress.

ROSE LUCKIN: Always happy to do that. No, it's been great talking to you, Kimberly. Thank you so much. And I'm really pleased that you're doing this podcast. Hopefully it will help more people engage with the conversation.

KIMBERLY NEVALA: To continue learning from thinkers such as Rose about the real impacts of AI on our shared experience, subscribe now.
