Public Interest, Politics and Privacy with Paulo Carvão
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host Kimberly Nevala. Thank you for joining us as we continue to ponder the reality of AI with a diverse group of innovators, advocates, and data professionals.
Today, we are joined by Paulo Carvão. Paulo is a global tech executive, investor, and Fellow at Harvard's Advanced Leadership Initiative. We're going to be discussing safeguarding the public interest, policy, and politics in the age of AI. Welcome to the show, Paulo.
PAULO CARVÃO: Hey. It's great to be here. And thank you for the invitation.
KIMBERLY NEVALA: Let's start with a quick recap of your professional journey through to your current roles and recent studies at Harvard.
PAULO CARVÃO: Fantastic. So first, a little bit of a personal perspective. I'm a Brazilian American citizen. I've lived half of my life in the global south, in Brazil, where I grew up and started my professional life. And the other half of my life here in the US, where I am also a US citizen.
I've spent a long time, more than three decades, in the technology business working for IBM, where I graduated from last year. For the last couple of years, I've been an investor, mentor, and advisor to dozens of venture capital startups, both in Brazil and here in the US. I spent the last year here at Harvard as a Fellow in the Advanced Leadership Initiative, where I'm focusing on technology and democracy. Which is the role that social media and AI have played in driving polarization, radicalization, and, potentially, the erosion of some of our democratic institutions.
KIMBERLY NEVALA: Did that study come about from personal interest? Or was it the result of some of the work that you have done before you graduated from IBM and in some of your venture capital work?
PAULO CARVÃO: Yeah, no. I think it was mostly personal curiosity. Evidently, I've had a professional life in which I've helped clients implement these technologies and infuse artificial intelligence into their business processes.
But at the same time, in my personal relationships, I've seen friends and family go to some dark places and start really changing their opinions. And that made me reflect on what was going on: how much of this was just a societal phenomenon, and was technology playing a role in it? So I'd say it's a little bit of a mix of having had a professional career in the area but also being immersed in the reality that we live in today.
KIMBERLY NEVALA: So let's talk a little bit about the issues that artificial intelligence and the current systems that we are using bring to the fore when it comes to the public interest. Perhaps a good place to start is by defining the public interest as we will be discussing it here. What is the public interest?
PAULO CARVÃO: Even before we dive into the public interest, if we may, I think we need to take a step back and also reflect on the good. This set of technologies that we're talking about here has had a massive transformative impact on society. They've changed the way we work, the way we interact, the way we frankly now create and generate knowledge. They can have a massive impact on productivity, both personal and business productivity, and they can add trillions of dollars to the economy. So I think there is an important component of good associated with this, including how it's been used to create social advocacy and mobilize political movements. All of this is the positive.
But it also carries some potential issues related to biases, especially when we're talking about generative AI. It creates this false illusion of certainty and, with that, makes the spread of false information or misinformation, even if unintentional, a little bit more effective. It also lowers the bar in terms of the cost of generating some of this bad impact if you have ill intent.
So it's important that we tackle the issue looking at both perspectives: not only some of the problems that we're dealing with but also some of what's very good about the technology, so that we strike the balance. And this raises the question of how we got here and what we should do going forward.
On the public interest, I would start by thinking about the model we are in today. Primarily, we have built an attention economy based on monetizing data. Personal data that is considered public domain, once it is worked, organized, or processed by platforms, becomes private property and therefore can be monetized. And so this can potentially generate a series of privacy issues and not necessarily serve the overall public interest, the interest of society in general, but rather the profit interests of large corporations or the dominant players in this space.
KIMBERLY NEVALA: There is an interesting dichotomy there. Because if you are using information that is in the public domain - and we are defining that, therefore, as public data - and you train your own systems on it, then you're claiming that the information that comes out is now private. Is that the crux of the issue that you're talking about there? Because it seems a little off balance if we look at it from that perspective.
PAULO CARVÃO: Yeah. I'm not sure it's the crux of the issue, but it's certainly one of the geneses of the issue. And what is interesting, if we look at the history of how we got here, is that it's not a coincidence. This has been litigated and has been consciously constructed through regulatory innovation - regulatory entrepreneurship, as it is sometimes called. The result is this construct in which personal data, the data that you and I generate, is public domain, and then, once manipulated, once processed, it becomes private property. This is one of the foundations, if not the foundation, for a data economy where you can then monetize, through advertising, this data that you have processed and that has therefore become your property.
So starting to look into that from a privacy perspective as well as from a regulatory or even an antitrust perspective may give us some indication of areas to examine as we pursue this balance between fostering and accelerating innovation and, at the same time, protecting ourselves against some of these issues and protecting the public interest in general.
KIMBERLY NEVALA: Again, it's not necessarily obvious, at least to me, that when folks are putting information out, whether it's on the internet or sharing information in exchange for services, they've intentionally exposed it to be sort of hoovered up in these mechanisms.
You mentioned regulatory - I think you said entrepreneurship - I'm not sure whether that's regulatory capture stated in a nicer way or not. You can define those terms for us. But is part of the reason this has happened that we didn't have a regulatory framework addressing these issues, because back in the day we didn't necessarily predict or project what's happening now? Or is it just a whole mass of issues that have led to this?
PAULO CARVÃO: There are a few layers of the onion for us to peel here. Starting with the beginning of your comment: when we make our data available, are we making a conscious decision to let that data be monetized? There is an implicit trade-off that sometimes is a conscious trade-off, but sometimes we're making that trade-off without being aware. In exchange - back to the good that comes with the technology - we can get much-enhanced services. We can get benefits. We can get a much-improved experience if that experience is personalized to my needs, your needs.
So it starts there. There's this temptation that, quite frankly, most of the time, I fall into myself, saying: who cares? I have nothing to hide. I want the better experience. I want to make sure that the recommendations suit my needs, that the time estimates when I'm driving match my driving style, or that the restaurant recommendations are in accordance with my tastes, et cetera. And you're making these trade-offs on a daily basis, sometimes on an hourly basis.
But at the same time, you're not thinking about the aggregate, no pun intended, consequences of that and how this is being used to create a profile of you that can be used not only to enhance your personal experience but also to target you. And this targeting is where things may start to get a little complicated.
KIMBERLY NEVALA: You've also said - I think in one of your writings; in fact, I wrote it down - you were talking about one of the tensions that we're currently dealing with. Not necessarily capitalism itself as a good or a bad, but some of the ideas of capitalism. Where you said, “in the conception of capitalist progress, dominant winners are the path to innovation, which benefits us all. And that's in direct conflict with the view that monopolies, which a dominant winner would in theory be, leverage their power to deliver less value while extracting greater rents from consumers”. Can you talk a little bit about that statement and that tension? Is it just more of the same of what we've been talking about? Or does it touch on a different element?
PAULO CARVÃO: I think in some of those I was either quoting or paraphrasing the likes of Peter Thiel and some of the others - not investors, but academics in the area - who lay out these two contrasting points of view and make that tension very explicit. And it's good to go back to the beginning and see how we got here.
I think there are three components that have driven this. One is a mix of ideology and culture. Another one is around business model innovation. And then finally, very intertwined with this, the funding of this business model and what has happened in terms of entrepreneurship.
On culture and ideology, I would go back to, really, the very beginning of Silicon Valley in the late '60s and '70s. It was a very different world. That was a world in a lot of tension: cultural tension, cultural revolution at that point in time. Those were the early days of the internet, the early days of what eventually became a computing revolution, et cetera. The early days of Silicon Valley were happening then, and that was the genesis of the hacking culture. And frankly, that still carries over today in a lot of this libertarian streak that permeates the Valley: very, very focused on freedom and self-determination.
So let's say that first we have this cultural and somewhat ideological streak where things started. Then fast forward a few decades into our current millennium, where we have seen this data economy emerging that we were talking about. It emerged, as we discussed, through litigation, through some of this regulatory capture or regulatory entrepreneurship - that is, creating regulation that entrenches, in the form of capture, this construct in which personal data is a common good, a public good, and once processed and organized, it becomes private property. This was the foundation for the attention economy that we have today, in which platforms, which monetize this data via advertising, try to maximize the time that you spend on the platform. Which generates all sorts of other issues that we talk about.
Then, compounding the ideological streak and the underpinnings of the business model, this has been financed primarily by venture capital, which has been a very homogeneous set of players: both the technologists and the investors. Many of the investors were originally technologists, and they come from a very homogeneous group from a racial, ethnic, and gender perspective. And they have a focus on exponential growth - a focus sometimes with blind spots around the ethical use of technology. This focus, as in the quotes, on monopolistic power, on a winner-takes-all approach, was, I think, best captured in Zuckerberg's motto of 'move fast and break things.' And so we've broken a few things.
KIMBERLY NEVALA: Yes. Well, hopefully, we're learning from that. So you're in a unique position because you are in the VC space. What is your perspective on VCs' role in the current AI hype cycle and the hyper-adoption of some of these newly emerging technologies? In some cases with very limited guardrails or understanding of the implications.
PAULO CARVÃO: Yeah, yeah. So first, as I mentioned, I am a believer in the technology. I'm a believer in the economic and productivity benefits and some of the positive social impact of the technology.
Now, as we were just discussing, we have broken some things. And you can pick your poison here: from teenage and adult mental health issues to biases to political polarization. I'm not saying that AI-infused social media has been the only reason for this. We have many, many other societal issues related to the topic. But it is certainly a catalyst for a lot of this.
And then there's this concept of winner takes all, very focused on advertising and monetization, and now this acceleration into the current discussions on AI. Acceleration without safeguards is a problem. It's a problem that may carry within itself the seeds of deceleration instead of acceleration.
Because imagine if we hit some big issues related to this. I'm not in the camp of existential safety issues - that the paperclips will kill humanity, et cetera. I'm not there. But we have some clear and present ethical issues that need to be addressed, and also some decisions to make on where to use the technology in high-risk applications where we can potentially generate problems. If this technology is being used to run a nuclear plant and things go awry, or if there's an issue with the power grid and you bring down a hospital for an extended period of time - if some catastrophic event like this, well short of exterminating humankind, happens, I think it's going to be bad for the industry.
So I am for first principles of ethical development of the technology. There's a whole generational shift and a lot that we can talk about regarding education in this area. But I'm also about building those safeguards. I know it's a little bit of an overused example, but I really like the example of the brakes in the Formula One car. We introduced fantastic brakes for race cars. This allowed drivers to brake very close to the turns and drive faster and, with that, reduce lap times.
So I see the need for safeguards. And I see some level of regulation as a way to allow the technology to flourish and to allow us, as investors and technologists, to continue driving the benefit.
KIMBERLY NEVALA: Just to linger on that point for a moment, because certainly the regulatory landscape right now relative to AI is as active as it's ever been. We're just coming out of the back end here of some agreement with the EU AI Act and the Trilogue: still some work to be done there.
But certainly, there remains a bit of a perhaps false dichotomy between regulation and innovation. And it seems to be just this endless debate: not just what and how much we should regulate, but that perceived conflict between those two elements. What I hear you saying very strongly is, no, in fact, this doesn't need to be a negative tension. It may very well be a positive tension. Is it fair to go so far as to say this could very much level the playing field while also making it possible for us to accelerate adoption safely?
PAOLO CARVÃO: Yes.
KIMBERLY NEVALA: OK. Now, there are a few different thoughts about how to approach the regulation and what we should be regulating. And one of those debates comes down to - to grossly oversimplify and then you can expand for the edification of the audience: are we regulating the technology itself or are we regulating the risk posed by applications of the technology? Is that a fair simplification of that debate? And can you tell us a little bit more about the two sides of that discussion or those two perspectives?
PAULO CARVÃO: Yeah, I think that's a good framework for us to look at the problem. So, yes, I think regulation and safeguards are important to make sure that we continue moving fast.
Now, when it comes to how we're going to do this, I'm in the camp that regulating the technology in itself is a fool's errand. I think the best example of this is to imagine that, over a long period of time, we had designed the perfect regulation and announced it as of October of last year - just before the launch of ChatGPT and without including considerations for large language models and generative AI. It would immediately be obsolete. And this will happen again. There are several people already talking about what's coming next, et cetera.
In the history of technology, and especially when you look at the last 12 months, we've proven that that's probably not the best approach. Therefore, I do like a construct in which you're looking at risk and use cases, focusing more on the outcome that the technology is driving, and trying to act upon that.
Another potential advantage of that approach is that there is existing legislation that we can build upon to do that. It's one of those cases of what a difference a year makes, from a technology perspective but also from a regulation perspective. Because if we go back 12 months, it was a much more immature discussion about regulation. Now the lines are very clear.
It's almost as if there are three main approaches now. One is what's going on here in the US, another is the European approach, and the third, I would say, is the China approach, which is more of a state-driven point of view that I don't think would have any chance of being adopted here. So having the debate about what's best from what we have seen with the US approach and the European approach can guide our next steps.
KIMBERLY NEVALA: So what are the…maybe the commonalities isn't the right question. Maybe the question I should ask is: what do you see as the strengths of those two approaches? And what are the key differences? Feel free to throw China in there if you'd like for illustrative purposes.
PAULO CARVÃO: So starting with China, and we won't spend a lot of time on it, but at least to put the frame there. It's government controlled; it's a state-controlled approach to most of, if not all, economic activity. It aligns with a national development plan - and, by the way, even the term "national development plan," quote unquote, is already a little bit anathema to the way we do things here. But one may say that if your objective is to advance your country strictly from a national development perspective, it may be an effective way of doing it. And I recommend that we don't simply dismiss the existence of this approach. Because, yes, things are going to happen in Europe and things are going to happen here in the US, but there's a whole global south out there. And it's looking at these three models and may say: I don't mind taking an approach like the one China is taking, a government-controlled model, which can be very problematic from a human rights perspective and could stifle innovation. So these are the main characteristics - in a very oversimplified way - of how China is approaching the problem. And I would only recommend we do not dismiss it, because there are other players outside of the US and Europe that might be observing this.
Then, going back to comparing and contrasting Europe and the US. I think first we need to acknowledge and recognize that we have very different constitutions, whether for the European Union or the United States. We have very different legal systems. And we have very different societal norms and normative sets of values between the two continents or parts of the world. Whenever considering what to adopt and how to adopt and implement it, these components are very important. Because at the end of the day, if this were a black and white subject, we wouldn't be having so many discussions. There are a lot of gray areas in which you will have to make normative calls based on what your society wants to happen.
So in the US, it's a very laissez-faire type of approach. It is centered around freedom of speech and our First Amendment, going back to how we manage social media, Section 230, and all the things that have or have not worked in the web 2.0 era. It's also a bit more of an industry-centric approach. And what I mean by industry-centric is the fact that most of the very large platforms are here in the US or in China, with very few nascent ones now in Europe. So it's a little easier, or maybe appropriate, for the US to defer a little bit more to the industry. The US is also one country only, so it's easier to have a discussion about how to implement this on an all-country basis, despite all the discussion about what will happen in each one of the states. But it's very different from operating across multiple countries in the European Union.
That's the way I see the US. And I wouldn't discount the fact that leveraging the industry can be beneficial. You want to be super attentive to not betting everything on self-regulation and to being conscious of the potential - more than the potential, the temptation - for regulatory capture.
But the reality is that a lot of the expertise is in the industry. So there are positive reasons why you should be looking at this.
Now Europe, on the other hand, starts with the normative principles that govern society. While we focus on freedom of speech here, there they focus primarily on privacy and human rights or dignity. Not that we don't have a focus on this here, but it's a question of priorities.
So it starts from that perspective. They've introduced this concept of risk-based regulation. Let's remind ourselves that the EU AI Act, which just got preliminary approval in the Trilogues, was proposed, I think, in early 2021. So it's been a long journey to get here. It's risk based. It's more centralized. I like certain features that have come out of this Trilogue agreement, including some innovation in regulation: the creation of areas where you can - they're announcing the first one in Spain - simulate or run the technology in a less exposed environment and, with that, design safeguards.
So there is a lot of good innovation, both in terms of risk and in the regulation itself. In addition to that, a third point that is very important is that it is very inclusive, creating mechanisms for academia, civil society, and industry to take part in the deliberation process.
However, it can be very heavy and very cumbersome. It builds upon what happened with GDPR for data protection and privacy, and it creates a very, very heavy set of institutions that will enforce the mechanisms.
So the question is how we thread the needle - especially as we move into the final stages of approving the European legislation, but also as we potentially codify some legislation here in the US - to pick up what was best in the executive order that was announced a couple of months ago here in the US as well as in this preliminary European agreement.
KIMBERLY NEVALA: Do you think that this fundamentally requires a new regulatory approach or framework or institutions? Or, when you say we can thread the needle, are there existing foundations we should be drawing on rather than trying to reinvent the wheel here?
PAULO CARVÃO: I'll go back to what a difference a year makes. At the beginning of the year, we were dealing with Gonzalez v. Google going to the Supreme Court here in the US, and there was also not a lot of clarity in terms of how to tackle the issue.
KIMBERLY NEVALA: I'm sorry to interrupt. For folks who aren't familiar, can you give a quick description of what that case was all about?
PAULO CARVÃO: Fundamentally, we've been using this term Section 230 freely here, but it is a piece of a law called the Communications Decency Act from 1996. Most of the act is no longer in effect, with the exception of Section 230, which is what went to the Supreme Court. It provides a complete liability shield for internet platforms as it relates to third-party-generated content.
So picture yourself being, say, YouTube (owned by Google), or Facebook, Instagram, et cetera - whatever platform. Most of the content that is there is third-party-generated content, content that you and I are generating. Under Section 230, the platforms have a liability shield, so they're immune from any liability related to the content that any of us have posted there.
The cases that went to the Supreme Court - and I'm really oversimplifying here - I think in the February time frame, were basically a challenge to that model. Gonzalez v. Google arose from an unfortunate terrorist attack that happened in France and killed an American student. The family was suing, saying that YouTube had played a role in radicalizing the perpetrators - ISIS, in this case - leading to that event. And the Supreme Court punted. Basically, they said they're not in a position to rule and moved the subject back to Congress. But, frankly, I think that's where it belongs.
With the whole discussion about Section 230 and this liability shield, you can see both sides of it. Because, on one hand, it can give the platforms a free pass to go and do whatever. But also, imagine if we start restricting speech there. Most likely, whenever you have restrictions on free speech, the first communities or populations to be affected are minorities or underrepresented populations. So a lot of people on the civil liberties side are for the maintenance of Section 230.
Going back, then, to what a difference a year makes. We went from a very narrow discussion of platform liability as it relates to third-party-generated content to a much, much broader discussion now in the context of artificial intelligence, as we can see in the executive order that the White House announced in late October and in the European Union AI Act. In those, there is a much greater degree of maturity on how to tackle the issue and much more involvement of society at large. It is now being codified in Europe, and we need to make the decision on how much of this we're going to codify in the US, or whether it will be a set of voluntary commitments like the ones announced here in the US earlier in the year.
KIMBERLY NEVALA: One of the interesting points - and this may also exist in the executive order here in the States - is that it does take a risk-based approach. The question that still raises, to some extent, is: does that preference organizations running out to try things and then asking for forgiveness if the outcomes or the risks that result are too high or too harmful? Versus really being methodical - I don't want to say asking for permission - but thinking things through and not blithely progressing down every path?
There's a secondary component that categorizes models of a certain size. Which implies something about an assumed capability just because the model is of a set size, as opposed to looking at the application of the model. Do you see that as a contradiction in terms? Or does that make sense?
PAULO CARVÃO: I think it is a little bit of a contradiction. But it goes back to that discussion: we all wish this were a very black and white area. I feel this is a very honest attempt at striking a balance. Will it be perfect? No. It's not going to be perfect. You just hinted at some of the issues that can be raised related to this. So the proof will be in the pudding and in the implementation. They say the Act will be adaptable into the future: for example, the size of models, measured in the FLOPs used to train the model, is a threshold that can be adjusted later, et cetera.
So I would encourage us to take a step back and look more at the broad lines and the framework that are implicit there, rather than necessarily pick a fight with each one of the details. We're not going to get 100% of the details right. But this is probably the best attempt so far at trying to strike this balance.
If you look at what they're trying to do: first, they have created this more horizontal, risk-based concept with different categories of risk, including some very high-risk situations in which they're saying these things are simply prohibited and are not going to be permitted in any of the European Union countries. That's where you also get into a normative judgment. For example, biometrics and face recognition, et cetera, are elements where they're being very explicit in saying they're not going to be permitted in Europe. We're not saying that biometrics are necessarily going to exterminate the human race. But it is a normative judgment, based on privacy, on whether we want this or not.
So the first is a horizontal way of looking at risk and identifying certain categories that are going to be prohibited. The other one, which was late-breaking news as an outcome of the Trilogues, was a set of law enforcement exemptions, which is something I don't think existed before. Going back to some of these technologies, including face recognition, et cetera, they are including exemptions saying that in extreme cases, including terrorism and a very loosely defined notion of serious crime prevention, these technologies are going to be permitted.
But again, looking at this more from a framework perspective: it is degrees of risk, certain exemptions for specific areas, and then getting into areas like transparency and fundamental rights. There you're going to make sure that these models have a degree of transparency, and decide how we're going to enforce that and how we're going to engage with the industry to have a discussion about that.
Another late-breaking conversation - which, frankly, as far as I understand, was the reason there was a last-minute delay - was about general purpose AI systems and foundation models. There we get into these definitions of the size of the company, trying to use size as a proxy for risk. Also, there was a lot of discussion about how to protect national champions (my word) like Mistral AI in France, which has been talked about a lot recently.
Then finally, in the bucket of regulatory innovation, they created what they call the governance architecture, with the AI Office, a scientific panel of external experts to advise, an AI Board, and an advisory forum. So, in a sense, it is, again, an honest attempt at including as much of civil society as possible. But, at the same time, it is a very convoluted structure for enforcing this across all of these institutions and multiple countries.
So we'll see what will happen as this moves from preliminary agreement to final law. But I think it is important to look at it as a framework and then start to pick the best pieces of it.
KIMBERLY NEVALA: We should certainly, I feel safe saying, preference some progress over perfection. And also understand that there is a level of uncertainty that no regulation will ever be able to remove.
So certainly, the things that we can reasonably project we should be testing, including the things that we can tax our imaginations to project or predict. We can put some safeguards in place. But regulation in and of itself is not a silver bullet.
I think you've said this as well, that we can't regulate our way completely out of this. We - way back at the start of the conversation - talked a little bit about people's awareness and their ability to have agency, to engage, and even just to be mindful. But it does not seem that we are arming the collective with the information they may need to be active participants in this economy that's being developed and in the digital world, nor to ask the right questions.
So, what role does education play moving forward? And how are we doing on that today?
PAULO CARVÃO: Yeah, no. Absolutely. And thank you for going there, because you're right on. So first, regulation is not a silver bullet. Regulation will never be perfect. There is this trade-off of how to do it right so that you do not damage innovation.
But it all starts with developing critical thinking. Going back to the discussion of the trade-offs that we are making in terms of experiencing an enhanced service versus privacy, we need to be conscious of that.
I'm a believer, frankly, that children are going to be our best defense against a lot of these problems. On one hand, they are potentially the most exposed victims of some of the evil that can come with these technologies. But they can also be our best line of defense, as long as we develop digital literacy in them: so that they understand what the tools should be used for, so that they are aware of some of the dangers and some of the trade-offs, and so that they become, over time, more aware of the ethical decisions that need to be made. Because the children of today are going to be the developers, the regulators, the investors of the future. And we have a chance to educate this generation not only in a lot of the multidisciplinary skills that are needed to develop the technology, but also to apply normative judgment and/or create law that will regulate this technology - to do the right thing.
I think I wrote a piece about this: that I'd rather be an optimist than a cynic. I don't think that we have the answers for a lot of the questions. But the excellent news is that we're starting to ask the right questions. If, in addition to this, we educate future generations that are going to be the ones in charge (just a few years from now), I think we can have a much better future.
KIMBERLY NEVALA: So is part of educating that future generation - I say future as they're growing up around us at the moment - not just better technical or digital literacy? Is this also about the ability to have a nuanced discussion, to ask better questions, to be able to engage in a conversation with an understanding that there is not going to be a single right answer or opinion? That seems to be something we're not really good at today, at least in the public sphere.
PAULO CARVÃO: There's a skill of holding two truths in your head at the same time that seems to be a disappearing skill.
KIMBERLY NEVALA: If you were directing the educational agenda, what are the key components you think we should be incorporating into that agenda? And what could we be doing as folks in the industry or as civilians today to help promote that literacy and that education?
PAULO CARVÃO: I want to preface this by saying that school systems, especially here in the United States and, I dare say, around the world, are already overloaded. It's a huge challenge just in terms of trying to put yet another societal responsibility on the laps of completely overwhelmed and underpaid teachers. So let me start there.
But putting that aside for a brief moment, I'd say, first, I feel that we need to be almost bilingual. We have had a moment in which we put extreme focus on STEM education. In a sense, I'm a product of that. I'm an electrical engineer.
But we need both STEM as well as very solid liberal arts education to deal with some of these societal issues. Going back to, I think, where I started in my personal journey this year. I arrived here thinking that we can combat a lot of the evils in technology with technology. Let's use tech to do content moderation and data provenance, et cetera, et cetera. And very quickly, you figure out that there are many other legal, societal, political issues associated with that.
So the first element is this emphasis on a bilingual or bimodal education that will blend STEM skills - computer science, engineering, software engineering - with political science, economics, law, et cetera. It's a very different approach when you're talking about higher education now.
Now in K to 12, which is where we have this group of very overwhelmed public schools and teachers, especially here in the US, it's also very important to have an age-appropriate curriculum. Evidently, the way you treat or expose a five-year-old versus a 17-year-old to some of these topics is radically different. I would go as far as to say that maybe below 10 we should avoid exposing them to a lot of this technology.
But have an age-appropriate curriculum that first develops basic skills for critical thinking, for civil disagreement, for intellectual curiosity. Really create that foundation and then invest in digital literacy - maybe not necessarily creating tech-specific curricula, but infusing some of the technology into the existing curricula.
If we start early and create these bilingual or multimodal higher education skills, we'll have a better chance. As I said, I think we're starting to ask the right questions.
KIMBERLY NEVALA: Excellent points and a very optimistic yet pragmatic take, which I definitely appreciate as someone who, I will have to admit, can veer towards cynicism. Thank you so much, Paulo. I really appreciated your time today and your insights into what is a very fluid and multifaceted space. Thanks again.
PAULO CARVÃO: Thanks a lot, Kimberly. It was awesome to be here with you guys today.
KIMBERLY NEVALA: To continue learning from thinkers such as Paulo about the real impact of AI on our shared human experience, subscribe now.