LLMs and Beyond w/ Mark Bishop

KIMBERLY NEVALA: Welcome to day four of "Insights and Intuitions With Pondering AI." In this episode, Mark Bishop reflects on Large Language Models and beyond.

[LIGHT MUSIC]
It's great to see you again, Mark. We spoke back in February and March of this year – 2023 for the record. Boy, has it been wild since. How would you characterize what has happened since we last talked?

MARK BISHOP: Well, interesting. There have been a few big developments since we last talked.

Some of the big headline-grabbing developments have been driven by, and focused on, the amount of money that's been sucked into this field. Huge sums are being invested in any company that can attach a ChatGPT or Large Language Model tag to its products, and that has attracted a lot of investment.

Riding on the back of that, and perhaps prompted by it, we had in March and May of this year two very prominent letters: one from the Future of Life Institute and one from the Center for AI Safety which is, I think, based in the States. Both effectively painted a very doom-laden scenario.
And one in particular, I think the one Elon Musk signed, announced that the future of humanity itself was at risk because of Large Language Models. It painted a picture based on what's become known as the technological singularity, wherein AI gets cleverer and cleverer until it outperforms humans on all possible tasks, and that point of singularity is reached.

And it isn't a surprise that many of the people who signed that letter are strong believers - and I use that religious terminology advisedly, because it does seem to me that it's a cult, with certain resonances with bizarre religious beliefs - in the idea that these systems will become more powerful than humans.

That, in turn, led, at least in the UK, to the British Computer Society launching a third letter pushing back on these supposed dangers. Myself, I was concerned about this and took quite a cynical view of these letters. Because if you looked at the early signatories, they were people like Sam Altman and Elon Musk. There was a unifying factor: a lot of the early signatories were people with huge investments in this AI technology. And when you paint this picture - oh, my God, these AIs are going to be so clever, they're going to be better than humans in all areas, and the future of humanity is at stake - that in itself leads the public to have an overinflated idea of what these things can do.

And I see it, in a sense - to use quite a strong term - almost like stock market manipulation. By releasing these letters, they lend credence to the idea that these systems are much better than they actually are. Furthermore, it gives the public a false narrative about what these things can do, so the public more easily believes these scare tales. And it just creates an unwarranted climate of doom.

It's particularly annoying because there are real things to be concerned about with the rise of AI and Large Language Models. Personally, I'm particularly interested in areas around privacy, bias, and manipulation of political elections, for example, or issues around copyright infringement. So there's certainly a lot we ought to be worried about. But an AI taking over is fairly far down my list, if it ever enters it at all. So that was one big thing that happened.

Building on from that, fairly recently we've had a number of major authors - George R.R. Martin, the Game of Thrones author, John Grisham, Jonathan Franzen, and Elin Hilderbrand, to name a few - taking litigation against companies like OpenAI, alleging their copyrighted works were used without permission to train foundation models. These cases are just beginning to roll out, but perhaps prompted by them, Microsoft was obliged in September to issue a statement saying it would meet any litigation costs that users of its products - the office suites that have these new AI copilot tools in them - are subject to. Microsoft would pay any legal costs incurred in defending copyright claims arising from the use of its products. So that whole group of stories is linked together.

Purely in terms of what I found interesting about the rollout of Large Language Models and the ChatGPT/OpenAI systems, something I didn't necessarily see coming a year ago has been the move into music. We've got quite powerful generative music systems now - there's one called Stable Audio - and we've also got generative speech systems. If you've heard the latest text-to-audio systems, they really do give a sense of narrative and a sense of emotion to the text they're reading. I hadn't quite anticipated they would get so good so quickly. So those are some of the things that have interested me since we last spoke.

KIMBERLY NEVALA: Now certainly LLMs and Generative AI seem to be taking up all the oxygen in the room. Are there other consequential developments or areas of R&D in AI that you think have been overlooked or downplayed due to this hyper focus?

MARK BISHOP: That's a really interesting question. It's astonishing how much news there's been around LLMs and Generative AI over the last year. But two interesting stories stand out.

Just this month, DeepMind announced that the launch of their Gemini AI system is going to be happening very, very soon. Demis Hassabis is claiming that it will give Large Language Models, such as those from OpenAI, a really strong run for their money, because it's going to integrate, at an architectural level, some of the techniques deployed in the AlphaGo system that so astonishingly beat Lee Sedol, one of the world's best Go players, a few years ago.

So I'm kind of excited to see how they're going to integrate Monte Carlo tree search and other techniques like that into this AI. I gather they're going to be using a mixture-of-experts approach: instead of one monolithic large model, perhaps a group of smaller models that are then mixed together somehow. Hassabis didn't say which particular element of AlphaGo's technology they would be deploying, but my intuition is it's going to be something based around Monte Carlo tree search, and how that all fits together should be quite interesting. The story didn't get as much traction as it might have warranted.
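For readers unfamiliar with the mixture-of-experts idea Bishop alludes to, here is a minimal, purely illustrative sketch in Python: a small gating network weights the outputs of several "expert" networks for each input. The shapes, the single-layer experts, and all names are assumptions made for brevity; nothing here reflects DeepMind's actual Gemini architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Three small "expert" networks, each reduced to a single linear map here;
# real experts would be full sub-networks.
experts = [rng.normal(size=(8, 4)) for _ in range(3)]

# A gating network that scores how relevant each expert is for a given input.
gate = rng.normal(size=(8, 3))

def mixture_of_experts(x):
    weights = softmax(x @ gate)          # one weight per expert, summing to 1
    outputs = [x @ W for W in experts]   # each expert's own prediction
    return sum(w * out for w, out in zip(weights, outputs))

x = rng.normal(size=8)
print(mixture_of_experts(x))             # combined prediction, shape (4,)
```

In practice the gate often routes each input to only its top-scoring experts so that most of the model stays inactive per query, which is one reason the approach is attractive for very large systems.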

On a deeply foundational level, I found it interesting that vision transformer architectures - built from the same transformer modules that are the building blocks, if you like, of ChatGPT - have become dominant in computer vision. Just a few years ago, the models that were exciting everyone in computer vision, and that really moved the field forward a big step, were convolutional neural networks. And it's interesting that just a month or two ago some new research came to the fore in which people re-engineered convolutional neural networks so that they now reclaim the crown of best neural networks for computer vision on the ImageNet databases. I think that was work from Zhenzong Wu and colleagues in Korea.
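To make the contrast Bishop is drawing concrete, the sketch below shows, in deliberately simplified NumPy, the two ways of looking at an image: a vision transformer flattens fixed-size patches into tokens that attention layers then process, while a convolutional network slides a small filter across the pixels. All sizes and variable names are illustrative assumptions, not taken from any of the systems mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 3))      # toy 32x32 RGB "image"

# Vision-transformer style: cut the image into 8x8 patches, flatten each
# patch into a token, and project the tokens into an embedding space.
patches = image.reshape(4, 8, 4, 8, 3).transpose(0, 2, 1, 3, 4).reshape(16, 8 * 8 * 3)
embed = rng.normal(size=(8 * 8 * 3, 64))
tokens = patches @ embed                  # 16 tokens of width 64, fed to attention layers

# Convolutional style: slide a small 3x3 filter over the same image instead.
kernel = rng.normal(size=(3, 3, 3))
feature_map = np.zeros((30, 30))
for i in range(30):
    for j in range(30):
        feature_map[i, j] = np.sum(image[i:i + 3, j:j + 3, :] * kernel)

print(tokens.shape, feature_map.shape)    # (16, 64) and (30, 30)
```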

So those two things I thought were quite interesting and things to watch. Because obviously computer vision itself is the technology that's used in a wide range of AI applications.

KIMBERLY NEVALA: So as we look forward into 2024, how do you expect all of this to play out? What should we expect to see next?

MARK BISHOP: Well, my expectation, because of the monies being invested, is that we're going to see more of the same. The alternative technologies - I listed two that interested me - I don't think will attract the same level of investment, possibly with the exception of DeepMind and Google's work, which is clearly going to have a lot of money behind it. But there are enormous sums of money and commercial interest going into LLMs and the technology around them, and I see that field continuing to dominate the news headlines next year. So that would be my prediction: more of the same.

KIMBERLY NEVALA: And if you had it your way, what would you like to see happen next?

MARK BISHOP: Well, if I had it my way, I'd like to see more research that looks at the downsides of AI: privacy implications, how to mitigate bias in these models, how to mitigate misinformation and market manipulation, and examining where society wants to sit vis-a-vis issues like copyright. I can understand authors being upset that their works are being used to train Large Language Models, but there are some deeply interesting philosophical questions here. If it were a human who read their books and became an expert on them, would they be annoyed that a human was reading their text to better understand their fiction? I'm glad the courts are going to be looking at that question and not me, as I think it could be a difficult one to unpick.

KIMBERLY NEVALA: There's certainly a question of scale and distribution that's a bit different between the human and the machine for sure. But it'll be interesting to see.

Thank you so much. I really appreciate the time and the insights. It's good to have you back.

12 days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.

Creators and Guests

Kimberly Nevala
Host
Strategic advisor at SAS

Professor J Mark Bishop
Guest
Professor of Cognitive Computing (Emeritus), Scientific Advisor - FACT 360