Your (Personal) Digital Twin with Dr. Valérie Morignat PhD

Dr. Valérie Morignat PhD ponders the outsized influence of ancient cultures on technology today, AI’s penchant for amplification, how to avoid opening Pandora’s box and why hybridization is the future.

[MUSIC PLAYING]

KIMBERLY NEVALA: Welcome to Pondering AI. My name is Kimberly Nevala and I'm a Strategic Advisor at SAS. I'm so excited to be hosting our second season as we continue to talk to a diverse group of researchers, policymakers, advocates, and doers all working to ensure our AI enabled future puts people and our environment first.

Today, I'm thrilled to bring you Dr. Valérie Morignat. With apologies, by the way: she's been very gracious about my butchering the French pronunciation. So, thank you Valérie. I did try but the French ‘R’ has KO’d me.

In any case, Valérie is the CEO of Intelligent Story, and a leading advisor on the creative economy. She is a true polymath, and she's going to talk to us about how AI may change our reality, virtually and in real life. Thank you for joining us, Valérie.

VALÉRIE MORIGNAT: Thank you so much for hosting me today, Kimberly. And I think your French is just perfect.

KIMBERLY NEVALA: Oh, the French are very sweet and, possibly, not honest.

Anyway, I found among your numerous public accolades this one, which I think really encapsulates what I believe is your superpower. And it was this: ‘Valérie is a creative unicorn who has a unique talent for taking ideas to market’. And you do have a unique background. So, can you share a little bit about your early work in the arts?

VALÉRIE MORIGNAT: Thank you so much. I love that question. I hear it often because, indeed, my background is not at all in AI, although I've been working on AI and VR technologies for over two decades. My PhD is in art and art sciences.

My professional life really began as a doctoral research and teaching fellow at the Sorbonne in Paris. And at that time, I was really exploring the transversality of the creative process through the ages. That led me to work on immersive and interactive arts as much as on ancient arts; for instance, I collaborated with the Louvre Museum in Paris on the work of Leonardo da Vinci.

So, my research interests were as much about virtual reality as they were about ancient arts, and about how ancient cultures inform technologies and innovation today. I pursued this journey by becoming a tenured associate professor of cinema and interactive arts. At that time, I taught over 2,000 hours of courses on topics such as the sociology of the future, digital cinema aesthetics, applied futurology, and virtual reality design.

So, it was really interesting, because this gave me the opportunity to work with my students both on the theoretical approach to AI, robots, and virtual reality as they are represented in fiction, and on how, as technologies, they were transforming cinematic arts, and transforming the way we relate to technologies and to our own lives.

So that's what I've done on the academic side. But that also influenced my work as an artist, since I've also been an underwater photographer for a little over a decade. That's an aspect I rarely get to talk about when I'm interviewed about AI. And yet, everything's connected. Because every time I take underwater pictures of mythological scenes and creative environments, it's always a reflection about immersion: how immersion shapes our understanding of the world and our perception, and how it expands our imagination.

I also worked in health care design, particularly in the design of virtual reality and interactive experiences. I led more than 300 projects in that area.

So, everything's connected: theory and practice. And everything is inspired as much by ancient cultures as by our innovation today.

KIMBERLY NEVALA: There's no doubt that art influences life, and vice versa. And that has obviously been true since well before AI came on the scene. But we do have this tendency - I've been leaning into this a little bit this season - to think that some of the problems or issues confronting us with AI are somehow entirely new. I would love it if you could help us zoom out a little bit and discuss the broader historical context in which we can better frame some of the issues we're confronting today.

VALÉRIE MORIGNAT: Yeah, and I love that. An example I rarely use, but that is really relevant to this conversation: if you want to talk about a technology such as virtual reality, some people think VR has been around for five years. Others know it's actually been around for 50 years. But others will tell you, oh, but in the 19th century, we already had goggles designed to expand the perspective of an image and make people travel to distant landscapes. And of course, none of this was using the modern technologies of today.

But I can take you even further back in time, to prehistoric caves. We know that fire was used at the center of those caves to create a moving light that transformed the way the paintings on the walls were displayed. They were moving and morphing depending on how the light moved and reflected on those walls.

So here, we can see that there was an attempt not only to use storytelling as a means of expanding reality, expanding the experience of the world and communicating it to others, but also to use the technology of fire, combined with storytelling, to create an immersive and, before its time, almost interactive experience. So, the ancestor of VR is extremely old.

Now, when we talk about robots and AI, we also tend to think that those are new technologies. And I think the words we use tend to shape that perception. We also think of all the science fiction movies that represent AI and robots in ways that don't match the reality of the industry.

The industry is not that advanced. We don't have humanoids walking the streets. We do not have AIs that are smarter than us. In very, very specialized tasks, AI can achieve outcomes that are, in a way, more intelligent than ours. But AIs are not as intelligent as human beings. So where do they come from?

Actually, I find ancestors of AI and robots very far back in time. One of the first ancestors we can think of is the robot Talos, who was created by the god Hephaestus in Greek mythology. And he was actually a security guard, designed to protect the island of Crete against invaders, against intruders. So, he was the first security robot, right, back in the 12th century BC.

I have another example of someone who was an absolute genius: the Arab polymath al-Jazari. He published a book containing blueprints for more than 100 automated devices. And they were really close to what ambient computing is trying to create today, right: those environments that react to your movements. They are going to know what you want, and colors are going to pop on your walls. The exact music you want to hear is going to play, and the fountain-- Anyway, you can think of all sorts of applications. And al-Jazari thought about those in the 12th century.

And if you think of Leonardo da Vinci, you'll have the Mona Lisa in mind, and his extraordinary drawings. But pretty much no one knows that Leonardo da Vinci was actually the inventor of the very first autonomous vehicle. He designed a cart intended to deliver vegetables in the city of Florence. And that design inspired the rover that is on Mars today.

So, the history of those technologies is very long. And that's why I often say, I don't think we're going through an AI revolution, or a robotics revolution. We are going through the realization of a very ancient evolution, something that started thousands of years ago and is coming to fruition today, because we are building those things. But the desire to expand our reality, the desire to transfer part of our identities onto objects, the desire to build other forms of intelligence into objects, is a very ancient one.

KIMBERLY NEVALA: So, what can we learn as we look back in history - really far back in history - about some of the innate tensions and conflicts that come up when we think about our tools, and about our perceptions of our relationships with tools? Is there a perspective that helps us better understand or frame some of the debate happening today?

VALÉRIE MORIGNAT: Yes, absolutely. I think one of the questions that is rarely addressed is the influence of cultural legacies on the way people adopt technologies. The pace of adoption, and the challenges associated with adopting those technologies, are directly related to the cultural legacies, the mental models, if you will, that exist in our society and in users themselves. And I'm going to give you a very concrete example.

There's a huge difference between the way robots, or AIs, are represented in fiction in the Western world and in the Far East. In the Western world, we always have those very dystopian scenarios: the robot overlord character who is inevitably going to destroy humanity, or a character whom we can't trust because he's very deceitful. His appearance is deceitful. In the Western world, there are a lot of existential questions associated with those technologies.

And all this comes from the fact, again, that very ancient symbols exert a very, very old influence that is still at play here today. For instance, in the Judeo-Christian tradition, it is really a transgression to try to create a creature that looks like a human being. Why? Because the world, nature, the universe is allegedly the creation of God. So, every time you do something that looks like you're competing with the work of God, you are going to be punished. The creature will turn against you and destroy you. So, there's a very negative connotation associated with that.

And even if we go beyond the realm of religion, we can see that, historically, those negative associations have stained the image, or the prospect, of the robot. We find them, for instance, in literature through the figure of the double. Every time a character sees his copy, that's a prophecy of his upcoming death. The double is a threat because, by definition, the devil is the division of what needs to remain one. The individual is supposed to be one, in one place.

And we see how those representations also speak to situations that we live through as human beings who have to manage a huge variety of digital identities on a huge variety of platforms. We are more and more fragmented. We have to manage more and more identities here and there. And that can be very disconcerting. That can lead to existential questions, too.

So, just to complete the picture, and this will be interesting to the business innovators listening to this conversation: in the Far East we have a very different influence. It is the influence of Shinto, of Taoism, of Buddhism. And we are not in that doomsday scenario where technology is opposed to nature and is necessarily associated with destruction. On the contrary, in those societies, the energy that lives in a living being is the same that, quote unquote, lives in a technology, in an artificial creature. So, there's no opposition between technology and humans.

And why is it interesting to the business innovators or investors listening to this show? Well, it's interesting because if you look at which countries are ahead in terms of technological adoption in robotics, they're all in the Far East: Japan, Korea. So, there's a direct correlation between the past, which we should not ignore at all, and business innovation today. If we want to adopt technologies, and if we want to move forward at the pace we want to, we need to address people's fears, people's mental models, people's representations, because they are highly, highly influential for businesses.

KIMBERLY NEVALA: Yeah, and it's interesting. Because a lot of times, when we're talking about, for instance, AI ethics or responsible technology, and the differences in ethical codes or mores, or just the approach to adoption and the willingness to adopt these technologies - even things like surveillance tech - we tend to look to the near past, if you will. And we tend to lean into more political influences and assume that this is all about (recent) political history and culture.

But you're really taking this all the way back to how people fundamentally view their place in the universe, and to some of those things that have come through over time. And I suppose this circles back to our relationship with our tools, if you will. Although it sounds perhaps odd to say we have a relationship with our tools…

VALÉRIE MORIGNAT: We do, though. We do. It's absolutely accurate to talk about a relationship with our tools.

KIMBERLY NEVALA: And I think you've said our behavior influences our tools, and our tools influence our behavior. So why is it important for us to think about that relationship, and to think critically about how these tools are influencing us and how we influence them? And are there any unique factors today that are accelerating that dynamic?

VALÉRIE MORIGNAT: Again, I love that question. And what you just said makes me think of Marshall McLuhan, the media theorist. He said that we shape our tools, and thereafter, our tools shape us.

And I think that is really at the core of what you just said. Because every time we create something, in particular a tool that has a power of amplification, such as AI, we need to recognize that at some point, using that tool is going to influence us on many, many levels. It's going to influence the way we think.

And since I'm also a photographer, I'm going to take a photography example here. When digital cameras became a thing, and when they became more and more accessible, I heard lots of photographers saying, oh, people are going to make awful photos, because they are going to think that this is cheap. Now they can take tons of photos, and photography is not an art anymore.

And, actually, I thought, well, I don't think it's going to happen that way. I think that because people are going to be able to take a lot of photos, they are going to improve their skills. There will be a human augmentation that comes from using a technology that is cheap, that no longer has that sense of exclusiveness, and that can be used for personal training on a daily basis.

The same thing is going to happen with the AI applications we use every day. We don't realize it, but we already write better and better, by virtue of having automated correction of what we write. Soon, we are going to be able to do a lot more things with an additional layer of intelligence layered onto every system we use.

KIMBERLY NEVALA: As you talked about photography and the way AI is becoming a pervasive overlay, my mind jumped to recent advances in facial recognition and analysis. How are the legacies of the past influencing the development and adoption of these technologies?

VALÉRIE MORIGNAT: Those are very important topics. And I also work with policymakers on how to regulate those technologies in various geographies - not just in the US, but in other countries as well. And it's a sensitive topic everywhere.

Why is it sensitive? Not only because, of course, there are demonstrated discriminatory biases in those technologies. That's a huge issue, and we can hope those issues will be resolved soon, that those technologies will work better. But I see the problem elsewhere.

To me, the problem with that technology is that it is the legacy of a classification system that started a very long time ago. Again, I can find examples in Aristotle - and that's a very long time ago, right - where there is the assumption that by looking at the surface, by looking at people's appearance, you can infer somebody's personality, and you can infer somebody's intent.

We're close to the scenario of a movie like Minority Report here. Are we one day going to jail people based on intent, based on a system that has inferred that a person may be a criminal? Well, that's a thought that many people had in the past, in the Middle Ages. The virtue of people was allegedly readable on their faces. And those faces were caricatured to serve as templates. So those tools, those thoughts, were already there. They just weren't using the power of AI to be deployed at scale, and to amplify what could already be a bias.

So, I think what's specific to AI, and what we need to pay attention to, is its amplification power. And if we pay attention to that, and focus on the reasons why we want to use AI, then we realize that we can use that amplification power to do more good, or we can use it to aggravate issues that have been there for a very long time.

KIMBERLY NEVALA: Yeah, as technologists, I think we're often - sometimes justifiably - criticized for always saying, well, we'll figure it out, right? The technology is not perfect now, but we'll figure it out. Technology will come to technology's rescue. And I think you're also implying here that there might be scenarios in which we aren't going to figure it out, or maybe we shouldn't figure it out.

You've also mentioned your work in AR and VR, including your early work in cinema. And then you did some work in health care too, I think, thinking through applications of augmented or virtual reality. I'm wondering if you can talk a little bit about the idea of responsible innovation through the lens of some of the opportunities and challenges - both obvious and un-obvious - that you came across in that work.

VALÉRIE MORIGNAT: I love it. As you can imagine, I always love the un-obvious, because I think this is where we should look further. So, I'm going to start by sharing with you what I consider the pillars of responsible design. I get asked that question sometimes - not often enough, but sometimes - because that's also something I teach as a professor of AI and responsible design.

And I'm going to connect that to VR, and to where I think it's going to be really important in the very near future. Because the very near future is the metaverse. It's really happening now. I started working on it 20 years ago, theoretically, but now it's beginning. And things take time. What is new takes a long time. That's the theme of today.

KIMBERLY NEVALA: Yeah, that's true.

VALÉRIE MORIGNAT: So, for responsible design, really, to me, the very first question is: what are we building innovative, emerging technologies for? What problems are they intended to solve? And the very first pillar, to me, is relevance and beneficence.

Is this going to be beneficial? And if it's beneficial, for whom? Is there equity in that benefit? Is this benefiting only a very small number of stakeholders, or is this technology likely to really make an impact? So being socially beneficial and relevant is a very important pillar.

The second one is really to avoid harmful outcomes. And you said something important when you mentioned that it's not just about the imminent challenges; it's about those we don't necessarily see yet. Before social media, we didn't realize that social media could drive the next disinformation pandemic, to use terms that are very current.

We didn't realize that those platforms could actually distort behavior. Those were unforeseen challenges. And I think now we have enough distance from those technologies to realize that it's extremely important to pay as much attention to the imminent challenges as to foreseeing the ones that haven't emerged yet. So, I think we should absolutely pay attention to that and try to expand our vision here.

Those tools should always be built for safety, and they should be tested for safety. They should be accountable to people and society, too. Because we know that tools shape us, and they also have the power to change society. We should never discard human agency. Not everything should be automated.
Do we want to live entirely in virtual reality tomorrow? No; there are lots of things we still want to experience in the sensory realm of the physical world. Do we want to automate everything? No, because there are lots of tasks that are actually beneficial to people.

So even if AI can do them, well, maybe it's better to keep them with people, so they can learn more. We need to design for human augmentation. Whatever we design, the goal is not to make us completely passive. The goal is to always function with a model that I call the centaur model: a hybrid model where human intelligence compounds with artificial intelligence and intelligent design. And that's really the triad here.

So, I could go on and on about responsible design. But I think we want to talk briefly about VR. VR today is still dependent on technologies that are not advanced enough to make us fully forget that we have those heavy goggles on our foreheads. I have several VR headsets here, as you can imagine. And they always make me smile.

Because 20 years ago, when I would teach those topics to my students, we would watch those movies where you think the characters are living in the real world, so you're experiencing the same deceit they're experiencing. But after a while, you realize that, actually, they're not living in the real world. And we don't know where authenticity is.

We don't know where the real world is, because the real world is completely lost. And these characters have to go through an initiatory journey in the movie to finally access the real. So those people, all along, were trapped in a virtual world. And every time I would work on those topics from a philosophical perspective with my students, they would tell me, oh, this is very frightening. In less than five years, none of us will know what's real and what's not. And I told them, maybe, but I'm pretty sure it's not going to happen because of virtual reality.

Maybe it will happen because the science of the real will be confused with fallacies and alternative realities. And today, we can see that this is happening. People have to fact-check everything. Deepfakes are used to change situations by literally enacting scenarios that never happened. That's not because of VR. That is because of the way all those tools are used to create mistrust in information.

So, it's our relationship to technology that needs to be investigated here, more than those tools, which are not yet able to completely deceive us.

KIMBERLY NEVALA: You also mentioned the quest for reality shifting into the quest for authenticity, or maybe verification. And certainly, we see that in terms of disinformation and things like that. But when you start to think about what you project in the digital realm-- and now, I'm not even talking about bad actors who are purposely putting information out there.

But as these digital environments and situations get more realistic, as we put more of ourselves out there, are you seeing a shift, or a tension, between wanting to make things more and more real, and now looking more and more for authentic relationships, engagements, and experiences?

VALÉRIE MORIGNAT: Well, authenticity is a very, very interesting theme, and a very deep one for philosophers. I taught philosophy for many years in the context of my tenure at the University of Montpellier. And my experience as a cinema professor really helps me understand what's happening today.

Because 15 years ago, when computer-generated images were used more and more in cinema, when we could literally create entire landscapes from scratch - which is very common today, but back then was really something quite extraordinary, an accomplishment - we kept running into something really interesting. When things looked too real, too hyper-realistic, that is when they started to look fake. So, we had to downgrade those images, because the skin was too shiny. It looked fake, almost uncanny. What we learned in the cinema industry, and in the video game industry too, is that if things look too real, they look false. And that's something some filmmakers used to their advantage to create horror movies that became really, really popular at that time.

I'm referring here to The Blair Witch Project, a movie made with only about $10,000. It was one of the smallest budgets in the film industry, and it created an enormous business success. What they did is, with that very low budget, they took very simple, very low-key cameras and filmed almost as if we were witnessing all those scenes through security cameras, or through a cheap camera that teenagers would have with them. And that gave the movie a feel of authenticity so intense that at the end of the movie, people went on the internet to investigate, because half of them thought it was actually real footage.

The entire movie is actually complete fiction. But it looked so real because it looked so authentic, by virtue of looking so poorly executed. And today, we can see that all those videos produced by people, and sometimes by organizations, that push false narratives are very, very low-key. They use very cheap cameras. They don't use big budgets. Because the more low-key it looks, the more authentic it feels.

So, the signs of authenticity are being exploited here. All the defects and flaws are what actually embody the real today. So interestingly, just to sum this up by looking at past centuries: in the past, people were searching through the real world for the signs of a superior dimension. They were searching for the signs of the spiritual inside reality, within their reality.

Today, people are investigating the virtual to look for the real. You can see that there's really a dimensional shift here; we're in a very different world. I don't know about you, but I spend a copious amount of time fact-checking what I'm seeing now, because it's extremely difficult to know what's real and what's not.

KIMBERLY NEVALA: Yeah, it's wild. It's fascinating. And again, I think you have such an interesting perspective in helping us understand where some of those tensions come into play. I did want to take a little bit of a left turn, but still related to your work in the digital world, and as a creator in the digital realm.

A lot of the applications and platforms out there today allow us to really become creators. And as you mentioned with digital cameras, now we can all take a ton of pictures. Someone might say, well, that just devalues photography. But maybe it actually raises the value of other types of unique perspectives. There are people who are able to ride that wave; it's a rising-tide-lifts-all-boats kind of perspective.

But there are some rather - to use your own words - complicated decisions to be thought through in terms of ownership and design in the digital realm. You mentioned, in an earlier conversation, things like avatars. And if I use those (avatars) to create, or I create my own avatar, or if I create a piece of digital music or digital art, who owns it? Who should? Can you talk to us a little bit about that aspect of the digital economy?

VALÉRIE MORIGNAT: Yes, I think it's a fascinating one. There's a case that was adjudicated recently in Australia where IP was attributed to an AI for copy it had written, I think. And that's a very important question. I'm having discussions with intellectual property lawyers on that topic.

Because think of models that are extremely powerful in natural language processing - I'm thinking of GPT-3, which was created by the startup OpenAI. It is today one of the most robust models in the world for text generation. And this is going to really revolutionize the way we write. It's going to influence the way we write. But it's also going to produce tools and applications that will most likely write hundreds of pages of books, whether they be academic reports or novels.

Does it mean they are going to become autonomous enough to replace a human author anytime soon? No. Let's remember that those models are very good at generating text; it doesn't mean they understand what they're writing. And we've realized that there are limitations to that.
However, it does raise a question when a writer uses those tools to, for instance, start writing a chapter and let the AI write two-thirds of it. Who owns this text? Who is the author? Well, earlier, I mentioned the centaur model. I love hybrids. I think they are really interesting mythological figures; I actually wrote my master's thesis on them.

I think we have an interesting metaphor here. Because the future is not going to be about AI and robots replacing us. Nor is the future going to be, well, us deciding that we don't need AI after all. The future is really going to be about the hybridization of human intelligence and, frankly, other forms of intelligence.

Even though human intelligence is used as the primary model in the definition of artificial intelligence, I do believe that AI is actually other than that. It's another form of intelligence. So, who will own the IP on AI-generated creations? I'm not sure. But we're already seeing those situations in court.

KIMBERLY NEVALA: Yeah, we'll see, right. The question now will be, does the platform - whether it's an AI enabled platform or an AI solution - own the content? Or does the user, the wielder of that platform or tool, own the content?

VALÉRIE MORIGNAT: If I can add something to what we've just talked about, because you mentioned avatars and how people create now. I think one of the most interesting aspects of those tools is their ability to enhance people's knowledge of what they can do and how they want to exist in this world. People are going to learn more. The learning capabilities of people are going to be enhanced by that.

And as a result, people will become more and more creative. I profoundly believe that AI is an extraordinary opportunity to expand creativity, starting from teaching us that, well, creativity is not only a human thing. AI can do it too. Animals are very creative too. And it's going to put in our hands a potential that we didn't have before. I'm sure that if Leonardo da Vinci were alive, he would have his own avatar. He would have a collection of avatars. And he would most likely live in the metaverse already. I am sure of that.

KIMBERLY NEVALA: Well, as you were talking, it struck me. You were talking about different types of intelligence, different versions of yourself, other realities. Or even creativity: that technology can be creative. Does that require us as humans to have a bit more humility? Or is it less about humility and more about having an appreciation that there's room for a spectrum of different things, and different inputs?

And it doesn't always have to be-- we want to be, I think, human-centric. And as you said, look at human agency, and make sure that we're shaping the tools in such a way that they're shaping us in a way we think is appropriate. But how do we contextualize that? Where's the rubber going to hit the road for people in terms of how they decide whether this is comfortable to adopt, and to accept?

VALÉRIE MORIGNAT: Yeah, so there are multiple questions in what you're raising here. And I really love that.

The first question is: is AI going to teach us a lot about ourselves? The answer is yes. And I'm not the only person to say that; lots of eminent researchers are saying it. We are already learning a lot about ourselves. We already knew we were biased. But maybe not everybody realized the magnitude of our biases, and how bias compounds with other issues.

So, I think that today, we're already learning a lot with AI. AI is - I often use this metaphor - like a magnifying glass. It shows us things we thought were not really important, (that) we could do without, or that we didn't really need to address now. And ethics could be an afterthought: we'll take care of problems after they arise.

Now, we know that we shouldn't, and definitely no company should. The risk for a company in case of an AI ethics scandal is huge. A company can lose up to 25% of its market value in less than 12 months following an algorithmic scandal. So that's definitely something that should raise a question.

Now, how do we make sure that people benefit from those technologies? How do we approach those innovations in a way that lessens the fears and creates enthusiasm? I think the first responsibility companies have is to create tools that are safe to use, tools that have cybersecurity guardrails, so people don't open Pandora's box in their home. See, mythology is full of good metaphors.

So, I think it's about building safe technology, and really thinking about the things that are hard to think about. For instance: is this technology likely to influence a person in a way that distorts their behavior? Like I said, are those objects vulnerable to cyberattacks? Does the data collected for that technology contain any bias, or is it representative of the diversity of the world?

All sorts of questions that involve the very important topics of ethics. Principles of ethics such as fairness, human autonomy, and beneficence must be embedded in those technologies. They must really be applied not just as ethics principles, but as design principles. And if people can feel and know that those issues have been addressed, if they know how those predictions are going to be used, and if they have agency over those tools, I think we will see adoption moving forward at a faster pace.

But companies must address that from an organizational standpoint. The C-suite must really understand the applications and implications of AI: what they can and cannot do with it. They must receive an executive education in AI ethics to fully understand those technologies and how they impact people and society, so they can put in place the right risk management, assessments, and policies.

And you said the words human-centered. You know, when we started designing websites - and I've been a full-stack designer for many years, so I code and everything; I've been working on those things for years too - we were not thinking in terms of user-centeredness. We were just thinking in terms of having a web page to deliver information. We were almost approaching those websites as just another blank page where we were throwing information. And we created websites 15, 20 years ago that were undecipherable. They looked like the menu of a restaurant with 25 buttons. That was not at all effective. Today, no one would build a website like that. And today, websites also reflect laws about data privacy and people's consent.

So, we are doing things today that are user-centric and human-centric. And the same thing is going to happen for AI, particularly as AI products are going to be everywhere in our reality in a very short amount of time.

KIMBERLY NEVALA: And bringing all of those bits together brings us full circle to the beginning of our conversation. If we do those things well, if we're transparent and open about having some of these really difficult discussions, if we really engage in that conversation publicly and privately as organizations, then hopefully we can also start to shift this narrative in which AI is something that is happening to us, something coming at us that we don't have control over. Because at the end of the day, we do in fact have control. These are our tools, and we can shape what they ultimately do.

VALÉRIE MORIGNAT: Absolutely, absolutely. And the key words here are really trust and benefit. People need to feel that they can trust those technologies, and they will feel that by experiencing how those technologies benefit them. That's why, in an organizational context, when we want to adopt a new technology, the best way to create trust in employees - and to avoid misuse and false assumptions - is to incrementally have employees use AI tools that make their jobs easier, and that make them happier in what they do.

And the more we do that, the more those technologies look different, appear different, and are perceived differently by people, the more we change the mental model. And that's where we should always start: going back to people's representations.

KIMBERLY NEVALA: Wow, your energy and your enthusiasm are infectious. And clearly, I could talk to you for a very long time. But one last question: from your unique vantage point at the intersection of art and technology, and with your history many, many years in the making now, what are you most excited to see or watch happen next?

VALÉRIE MORIGNAT: I am most excited about how we can use AI to help the planet. I think this is really problem number one. I'm really, really excited about how we already use AI to improve our relationship with energy - the way we manage energy. I'm thinking of smart grids, new types of energy we can leverage with AI, cost savings, and everything. So, I think there is huge potential in using AI for conservation, and to help the environment, which, as we know, is suffering a lot.

I'm also very excited about how AI will enhance people's learning capabilities, and how AI will create more autonomy for people. I think AI can actually bring a lot of equity. But to be successful in doing so, we really need to make sure that we first distribute AI evenly in the world. Not everybody even has access to electricity today. So those issues are still there.

AI is not going to solve everything. But once AI is there for everyone, it has the potential to do a lot of good things. I do believe in it.

KIMBERLY NEVALA: Yeah, that's such a striking point to end on: a reminder that we are highly privileged in so many, many ways. And you're right, not everyone has electricity or running water. We're talking about a lot of these very advanced things while there are a lot of really basic problems in the world.

So, thank you, Valérie. This was a refreshing and inspiring conversation. I really enjoyed thinking more about the interplay between art and life, what we can learn about the future by looking back, and how AI can in fact be a positive force for humanity. Thank you again for joining us.

VALÉRIE MORIGNAT: Thank you so much. Thank you, Kimberly.

KIMBERLY NEVALA: Excellent. Now, in our next episode, we're going to continue this discussion of how we can build a better tech future for all with David Ryan Polgar. David is a leading Tech Ethicist, a Responsible Tech Advocate, and the Founder of All Tech Is Human. You're not going to want to miss that one either. So, subscribe now to Pondering AI in your favorite podcatcher.

[MUSIC PLAYING]

Creators and Guests

Kimberly Nevala
Host, Strategic Advisor at SAS

Dr. Valérie Morignat PhD
Guest, CEO Intelligent Story, MIT Professor AI Strategy & Ethics