The Privacy Paradox with Dr. Eric Perakslis, PhD
[MUSIC PLAYING]
KIMBERLY NEVALA: Welcome to Pondering AI. My name is Kimberly Nevala. I'm a strategic advisor at SAS and I'm so excited to be hosting our second season. Once again, we're going to be talking to a diverse group of researchers, policymakers, advocates, and doers all working to ensure that our AI-enabled future puts people and our environment first.
Today, I'm so happy to bring you Dr. Eric Perakslis. He's the chief science and digital officer at the Duke Clinical Research Institute. We're going to discuss the intersection of data-driven technologies like AI with medical research and practice amongst other things. So welcome, Eric.
ERIC PERAKSLIS: Thanks for having me.
KIMBERLY NEVALA: As I understand it, your current role sits right at that intersection of policy, research, and technology. Can you talk to us about how you've come to this work? And how you've become not just a consumer of information technologies in your work, but also an advocate for their safe and appropriate use?
ERIC PERAKSLIS: Sure, my career started a long time ago as a hospital engineer. My undergraduate and my master's degrees are both in engineering. So, I've been bringing technology to medicine for 35-plus years now if I'm being honest about it. It's been a while.
What I'm now doing at Duke really is the culmination of the 17 years I spent in industry working my way through learning drug discovery and development, a few years at FDA learning something about technology and government and regulation, as well as all my academic interests, which have always focused on how to solve basic problems in science and in medicine.
What attracted me to Duke was really a chance to fundamentally rethink the approach and the value and even - I guess - the personas of clinical trials. What is the purpose of the trial? Who really benefits from it? Is it really safe and effective? Is it better?
There's lots of products that could work but then aren't necessarily better than all the other products on the market. So, there's lots of dimensions of health. There's lots of dimensions of value. And the value part is especially difficult. Because, in the US, we more or less have a pay-for-procedure health system, in which value's tough. There's a lot of smart people trying to study what value means in medicine. And I don't think there's a working definition yet.
KIMBERLY NEVALA: So, there's a lot of areas I want to explore with you. Obviously, in the work that you do and in the research that you do, you lean into a lot of data and information. You recently co-authored an article in the New England Journal of Medicine, I believe, regarding what you called the hidden dangers of, and I quote, "the free flow of information within the medical-industrial complex."
I'm wondering if you can talk to us a little bit about the benefits and where we're using information well. But also, what are some of those key dangers? And why was it important for you to sound the alarm or maybe, better stated, raise awareness now?
ERIC PERAKSLIS: Sure, one thing that's always been true about medicine, and it actually exists for good reasons, is that it's always easier to focus upon the benefits of an intervention. People are enthusiastic, they want to help, right? That's a natural thing.
And sometimes, you don't really look before you leap: you don't realize what some of the unintended consequences of that intervention are, right? They can be very sensitive. The classic example of this would be an arthritis drug that worked really well but had long-term cardiovascular toxicity effects.
It was studied in the skeletal system and the immune system very well. It wasn't really that studied in the cardiovascular system, right? And so, you find out that later. And so, if you think about medicine, technology, and the internet, I think we know in advance what a lot of the harms are.
So, one of the places I spend a fair amount of time is in cybersecurity. And if you listen to cybersecurity people or hackers talk, they'll talk about known-knowns, and unknown-unknowns, and known-unknowns, et cetera. What we try to do in the New England Journal piece was really talk about the known-knowns.
The fact that there are a lot of less than benevolent data brokers out there that are vacuuming up lots of data. And there are lots of legal uses for data that aren't necessarily beneficial, or are even harmful. As an example, people that are studying demographics in an urban neighborhood could look at: what's the percentage of people that shop at that supermarket that cash their paycheck there? And how often do they cash their paychecks there? OK, these are people that have week-to-week paychecks. Maybe we should open a payday loan store on that block. That's something that may be helpful to some but could be very predatory to a lot of others, right?
So, the supermarket loyalty cards that are collecting this data, they're not thinking about that. [LAUGHS] And it's kind of not their fault. But this is what I mean by people are really smart and inventive. Both for good and sometimes not for good. And so, data can be used in really unexpected ways.
And then there's the other truism about the internet. Once it's out there, it's out there. You can't get it back.
KIMBERLY NEVALA: Can't get it back.
ERIC PERAKSLIS: You can't get it back. So, what we were really saying, especially in that piece, was that we wanted health care institutions simply to think about who they authorize to use their data.
KIMBERLY NEVALA: There were a couple of really interesting points. And one thing that jumped out to me is that this idea of ‘first, do no harm’ is a well-established medical principle - as I think you just alluded to. And we are accustomed - we've had these conversations - to thinking about harm resulting from data-driven systems or AI in healthcare in medical terms. And certainly, those algorithms can, and they do, exhibit biases. They make diagnostic mistakes. They might correlate things that are not causal, so on, and so forth.
But you've been really among the first, I think, to argue that the risks of applying these technologies and gathering this data extend beyond just health care outcomes. So, can you explain why we need to expand our scope and think in terms of outcomes that are not necessarily traditional health care outcomes?
ERIC PERAKSLIS: Sure, first of all, if you think about health care data, it's actually a somewhat unique and arbitrary slice. If you look at my EHR record, you'll see I broke my leg snowboarding at 53 years old. And you'd say, well, maybe this guy's not that bright. I don't know. What's going on [LAUGHS]?
KIMBERLY NEVALA: Or he's very adventurous.
ERIC PERAKSLIS: Or he's very adventurous, right?
KIMBERLY NEVALA: [LAUGHS]
ERIC PERAKSLIS: But you get these really weird episodic pieces of your life. And it's almost always private, right? And let's look at something like pediatrics, or let's look at something like domestic violence, or something like that where there's a lot in people's health records that's personal.
I mean, we saw - and this isn't new - if you think back to HIV in the 80s and how difficult that situation was for people. It wasn't that they wanted to keep their HIV status secret. But they were worried about losing their jobs. So, they kept it secret. They were actually putting other people at risk.
So, these aren't new concepts, for sure, right? And so, the idea is that some of these things can be quite personal. The other thing is that the way data is often regulated, and the way products are regulated, is by category. So, there's health care data. And there's internet data. And there's consumer data and things like that. There's very little legal protection against all of that being pulled together.
So, for example, you think about your medical record. One of the interesting public-domain data sets that data aggregators can get would be, for example, your over-the-counter buying at someplace like CVS, where you get those 13-foot-long receipts printed out every time you check out, right?
Pregnancy tests and such: there's a lot that goes on in health that is very private, very personal, and can be very misinterpreted. At the same time, data is actually very poorly available within these institutions. That's one of the other points that we make in this piece: a lot of these institutions are selling the data to make a profit. But they're not using their data or any of the proceeds from selling that data to actually improve data within the hospital.
So, for example, there are lots of institutions that have large Epic installations, Duke being one of them. This may have changed since, but as of a year ago, there was no way to link a parent and a child in Epic. And if you talk to any pediatrician, they will tell you that 90% of the wellness of a child is tied to how the parents are. If the parents are insecure, the child's insecure. If the parents are out of work, and stuff like that. But that's all blind to a pediatrician. They don't know that situation at home. It could be a grandmother that brought the baby in or something like that, right?
There are so many interoperability problems within the bubble of health care data. [LAUGHS] And there are lots of people that falsely, in my opinion, are saying, well, the way to fix those interoperability problems is to push the data outside the health care bubble. And it's like, well… I don't understand how that helps given that what I see is our clinicians with a baby and a screen in front of them. And they're trying to put the pieces together. And I don't know how going to a third party that's aggregating data's helpful with that.
KIMBERLY NEVALA: Yeah, that brings up another interesting observation. And I think this is true across industries. And that is that organizations often project value-- sometimes, more value than there is - and default to using the data they have in lieu of maybe thinking critically about the data they need. Which may or may not exist, right? It may not be there and/or matter.
Is that a risk you're calling out: that the proliferation of things like electronic medical records - and other data-collecting devices - that might have been intended to collect information in one context are being used to do things like research?
ERIC PERAKSLIS: It is, but there's also been a lot of progress. I mean, we hear all the time social determinants of health-- a term that you barely heard five years ago, right? But now, it's ubiquitous. That's a good thing.
It's a good thing to realize that we are not our EHRs - our Epic record. There are a lot of other things about us - our health status, our family situations, our vulnerabilities, and our strengths - that are best found from other things.
Like I said, you can find out a lot more about a person from their CVS over-the-counter buying and their grocery loyalty card than you possibly will find out from an instance or two of their medical records, right?
Say, what are they really eating? What are they really buying? What are they taking over the counter? So, I think that's not a bad thing that you're able to build these comprehensive pictures for research. I think they're important.
And I think we're seeing more implementation science research, especially, that's taking ivory tower concepts and making them useful to people on the ground trying to do good health service work. I think all that is good. But at the same time, health care data is not the new oil. [LAUGHS]
And I wrote a piece about this years ago when that phrase came out. In a piece that I wrote with Andrea Coravos, we said it's not the new oil, but it may be the new blood. And one way that I really encourage people to think about this, since we wrote that piece in The Lancet, is that data about you, I think, should be treated like a digital specimen. Like you donate your blood to a blood bank, and you have protections about that. And there are uses of that.
To me, that's a closer alignment with what would make sense in ethical terms: these are actual digital specimens that 100% can and should be consented to and used for research in certain circumstances. But then there should also be a very clear list of misuses of them.
And it's interesting. I'm not sure when this will air. But we got several letters to the editor-- to the New England Journal piece-- that we've since replied to. None of them, none of them, brought up any possible harms of doing this.
KIMBERLY NEVALA: Interesting.
ERIC PERAKSLIS: They never addressed it - never, ever addressed the harm. It was all, oh, in the future, technologies will do blah, blah, blah. Fine, but they never addressed the vulnerability of a patient.
And I've got to tell you, I mean, you start a medicine, and it gives you a rash. And the rash gets bad, and you call the doctor. And they switch your medicine. That's what's called an adverse event. You know what? Well, these days, a bad credit score is an adverse event. And it's a lot worse than a rash that takes 72 hours to resolve. If you've ever tried to get something off your credit report--
KIMBERLY NEVALA: Oh, yeah.
ERIC PERAKSLIS: Because legally, they can put anything on there. And it's almost impossible for you to prove it's wrong and get it taken off. And I think that's the paradigm that we need to be scared of.
KIMBERLY NEVALA: Yeah, and it's interesting. I am that really irritating patient in front of you who asks to print the privacy policy and then scratches things out. And I'm pretty sure they don't do anything with it. But I do it anyway. It's annoying for all involved, except for possibly me. So, a little knowledge goes a long way.
But it is interesting that it's not that easy, necessarily, for patients to really understand and consent. It seems like it would be reasonable for people to assume that when folks say, 'we're going to share your information with other providers (likely with good intent and good objectives), but the data will be de-identified; it'll be anonymous,' that guarantees a level of privacy and security. Why is that not the case?
ERIC PERAKSLIS: We got to this in this article, and I've talked about this. This is actually a really complex thing. And it's a little like those letters to the editor of the New England Journal I mentioned. People will feel that the harms are unlikely. Even though they actually happen and can be quantified.
So, an example of this is if you look at the California Consumer Privacy Act. I forget what the acronym is for that. One of the things that went in there – and there was a lot of debate in that legislation about de-identified data. And making re-identification illegal or some form of liability protection if a patient was re-identified. And the argument that the data-stock-market industry made was, but our de-identification is perfect, right? And I'm like—
KIMBERLY NEVALA: Is it?
ERIC PERAKSLIS: OK-- well, first of all, no, it's not. But even if they believed it was, why wouldn't they provide protections? Because if it was really perfect, they should be able to provide some assurance, right?
So, it's like fracking doesn't hurt your water supply. Let us frack in your backyard. It's like, well, guarantee our water supply for 20 years. No, fracking doesn't hurt your water supply. [LAUGHS]
So, the circular conversation doesn't resolve. And I think that's where we have to - at least in my research, I'm going to come down on the consumer protection side. I counsel people that I know and say, unless they're giving you some protections, ask how it's going to benefit you.
And I recently saw something in the last week in the UK, where the National Health Service was going to do a very large data release to private industry. And I believe a bunch of consumer organizations got together and stopped it.
I don't love the outcome there either. Because they've got GDPR. And I'm like, well, there was probably some really interesting research that now won't get done.
Why couldn't that have been handled in a way where they negotiated the right type of outcome so that the data could have gone out? Instead, it was treated as either it's good or it's bad. And I think that nothing in health is that easy. We know things are bad. [LAUGHS] It's hard to know what's good.
KIMBERLY NEVALA: The other thing I had read around anonymized information is that anonymization or de-identification is not a panacea, even when the data seems to be good - even if it was perfect. Because using anonymized data doesn't always lead, you said, to high-quality research. There are other things that can happen. What are some of those other implications when we're leaning into or depending on just anonymized or de-identified data?
ERIC PERAKSLIS: Yeah, so first, the way a lot of these anonymization schemes work-- they work by stripping out certain amounts of information. The imperfection there is that even though you've taken some of that information out, that same information may be available somewhere else, like a supermarket loyalty card. And it would allow people to then reconstruct the data sets, right? So that's where it's imperfect.
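(A minimal illustrative sketch of the kind of linkage Eric describes, using pandas and entirely made-up records; the column names and values are hypothetical, not drawn from the conversation. A "de-identified" health extract that retains quasi-identifiers can be joined to an outside data set, such as a loyalty-card file, that shares those same fields.)

```python
import pandas as pd

# Hypothetical "de-identified" health extract: names stripped, but quasi-identifiers
# (ZIP code, birth year, sex) kept for "research utility".
health = pd.DataFrame({
    "zip": ["27701", "27701", "27510"],
    "birth_year": [1968, 1990, 1968],
    "sex": ["M", "F", "M"],
    "diagnosis": ["hypertension", "pregnancy", "HIV"],
})

# Hypothetical loyalty-card extract: names attached, same quasi-identifiers.
loyalty = pd.DataFrame({
    "name": ["E. Example", "J. Doe"],
    "zip": ["27701", "27510"],
    "birth_year": [1968, 1968],
    "sex": ["M", "M"],
})

# A plain join on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = loyalty.merge(health, on=["zip", "birth_year", "sex"], how="inner")
print(reidentified[["name", "diagnosis"]])
```

The point is not the code but the join: any fields retained for utility that also live in another data set become the re-identification key.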
Now, what that also does though - and we saw this early in COVID with several high-profile paper retractions from top-tier journals - is that people that are aggregating these data sets may be misaligning the things that they're aggregating. Large instances of Epic probably have 40 or 50 different versions of hemoglobin listed in their system. Amy Abernethy, who's brilliant - people know who she is now - has a great quote where she said, "you never really know a data set until you work with it."
KIMBERLY NEVALA: Yeah,
ERIC PERAKSLIS: Right, and I'm like, OK, well, then how do you know, if 20 data sets were linked based on some anonymization or tokenization scheme, that you're actually comparing apples to apples? Or even apples to fruit, when it comes down to it?
So, I think there needs to be more work done on that. I don't think that's a reason not to do it. My understanding is that there are researchers right now who are doing that type of research: taking the full data sets and just seeing what it would be like. So, there are quality issues.
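(A minimal sketch of the apples-to-apples problem Eric is pointing at, again with hypothetical, made-up records: two linked sources encode hemoglobin under different local codes and units, so naive pooling would compare incompatible numbers until they are harmonized.)

```python
import pandas as pd

# Hypothetical lab records from two linked sources; codes and units follow local conventions.
site_a = pd.DataFrame({
    "patient": [1, 2],
    "code": ["HGB", "Hgb (venous)"],
    "value": [13.5, 14.1],
    "unit": ["g/dL", "g/dL"],
})
site_b = pd.DataFrame({
    "patient": [3, 4],
    "code": ["HEMOGLOBIN", "HGB"],
    "value": [138.0, 9.2],
    "unit": ["g/L", "mmol/L"],
})

# Curated maps (assumed here; a real project would build and validate these):
code_map = {"HGB": "hemoglobin", "Hgb (venous)": "hemoglobin", "HEMOGLOBIN": "hemoglobin"}
to_g_dl = {"g/dL": 1.0, "g/L": 0.1, "mmol/L": 1.611}  # approximate conversion factors

labs = pd.concat([site_a, site_b], ignore_index=True)
labs["concept"] = labs["code"].map(code_map)                    # collapse local codes to one concept
labs["value_g_dl"] = labs["value"] * labs["unit"].map(to_g_dl)  # normalize units

# Only after this kind of harmonization does pooling the linked sources compare like with like.
print(labs[["patient", "concept", "value_g_dl"]])
```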
The other thing is that, in general, anonymized data research is not governed by institutional review boards.
KIMBERLY NEVALA: Ah.
ERIC PERAKSLIS: Right, so real simple, anonymized-- huge gap, right?
KIMBERLY NEVALA: So, this is interesting because we have the propensity, or what I observe is a propensity, for people to believe that because some of these technologies are new, or the techniques are new, the mechanisms to govern that technology also must be new. But health care does have a rich ecosystem of safeguards from things like the FDA and CDC all the way down to institutional review boards. Should we or could we be leveraging these more?
ERIC PERAKSLIS: I think when it comes to regulation, the more that you can make things similar, the less you're going to hinder innovation. And it's one of these things where, if you look at things like the Google Brain work and some of the things Facebook did, I believe those might have been interesting studies. I just wish they had gone to an IRB, gotten the right approvals, and run it. They would have saved themselves a scandal. And the world might have benefited from the research. It's like, you know what? It's six weeks. Just do it. [LAUGHS]
KIMBERLY NEVALA: For those that aren't familiar with the concept of an IRB - because I think that does exist in other industries under different names - what is it, and what is it intended to address?
ERIC PERAKSLIS: Sure, so Institutional Review Boards for human subject research are basically multistakeholder, multidisciplinary groups of people whose job it is to establish that research is ethical, legal, and all those things. And ethical sounds like it's soft. But in medicine, it's actually not soft, right?
For example, are the people doing the research qualified to do the research? Have all measures been taken to protect the subjects of that research from harm, right? Is everybody licensed? Is the facility licensed? Is there a valid statistical plan, so that when the research comes out, the conclusions hold up?
Because we see that happening, right? I mean, it's fascinating. Because I just turned 55 last week, and I've been doing science for 35 years, and I don't know if I'm supposed to eat an egg. Because at least 30 times in my life, eggs have been good or bad. [LAUGHS]
KIMBERLY NEVALA: That's right. [LAUGHS]
ERIC PERAKSLIS: So, I don't know if… I eat them anyway. But you don't know if you're supposed to eat an egg. And I think the idea of IRBs is that you can't have that. You can't just have 'do this, don't do that; do this, don't do that.'
And so, it's built, of course, to protect the institution. That's why they call it an institutional review board. But they really are about protecting the subjects. Ethics, legal review, clinical review, engineering review: all the different disciplines that need to say this is a good study are there. So, if you're an independent person trying to do health data research, and you want to prospectively collect data, you can absolutely go to an IRB like Western IRB. And they'll walk you through how you get IRB approval, or they'll tell you that you get an IRB exemption and you don't need an IRB. That's the hook.
They always say you can have an exemption when you're anonymizing the data, right? So again, it's a little bit of a parlor game: people aren't doing it because they say it's unnecessary. But they're not proving it's unnecessary.
KIMBERLY NEVALA: That's interesting. So, in some ways we're sidestepping the issue entirely. And again, I can't imagine that it's a lack of caring. Is it a lack of awareness that there may be implications for a patient beyond the medical or research implications? Is it a lack of mechanisms and literacy? Is it a combination of all those things, or something else?
ERIC PERAKSLIS: Yeah, having been involved in this type of thing and patient security for so long, the way I view it is that when something bad happens, it's almost always a mistake. But there actually are a few real bad guys out there.
And so that's the way I look at it, right? Most of these measures exist - and need to exist - to keep people from making well-intentioned mistakes.
But there are some very, very predatory actors out there. I mean, look at the cyber threat in this country right now. This is another thing that those letters to the editor didn't address. We've seen this since our election last November. We have seen foreign entities probing and testing and simultaneously taking down large portions of our national infrastructure, including health care. We're in a warm war, if not a hot war - a cyber war - right now internationally. And this industry is acting like that's not happening when it comes to what they're doing. [LAUGHS]
So, I would look at it and say: in cybersecurity, we talk about attack surface, right? It's all the different ways someone can get in. By aggregating these massive data sets outside the institutional protections, you're setting up huge targets for misuse.
KIMBERLY NEVALA: This idea of customer centricity and thinking about harms, intended use, applications, outcomes - and in the case of health care, that's patient centricity - that's been a focus, right? It's been a rallying cry for, it's got to be, a decade. And I'm not going to tell you how old I am, but I'm not far behind you. Is taking a patient- or customer-centric view easier said than done in this realm?
ERIC PERAKSLIS: I actually think it's easier said than done in every realm. And it's not for lack of good intentions.
KIMBERLY NEVALA: Fair.
ERIC PERAKSLIS: What is it, the Henry Ford thing: if I had listened to my customers, I'd be making faster horses?
KIMBERLY NEVALA: Yeah.
ERIC PERAKSLIS: Right? [LAUGHS] It's that today, in some ways. But at the same time, I do think that it has moved a lot in medicine: you're seeing that a lot of these institutions have patient advisory councils and things like that to which they can bring their ideas. We're going to do this big deal with Google - what do you think of it?
But one of the things you most often will hear is that an institution had a large patient advisory council and didn't use it before they did that data deal. That's kind of self-inflicted. So, I do think it's getting better.
I also think that people are far more shrewd consumers about this stuff than we think they are, right? I mean, I think most people that have significant online activity have made conscious decisions about it. Meaning that they've chosen to be very, very closed down and very hidden and very private, or they assume none of it's safe anyway, so they've been very open. [LAUGHS]
KIMBERLY NEVALA: I'm not sure which is scarier, honestly.
ERIC PERAKSLIS: I had a banker once who told me he actually uses his Social Security number as his password for everything. Because, he said, it's the one thing about me that probably everybody already knows - which isn't the right way to do it.
But I think when it comes to health care, people don't necessarily know the harms unless they've been denied employment or something like that. And I think where this comes down is that the concept of privacy is really challenging, right?
Because, as I often write about privacy, I actually don't believe that privacy in and of itself is that important. It really doesn't do anything for you. It's really about the absence of harm. What I think we need to do is possibly follow the Genetic Information Nondiscrimination Act, GINA, which isn't a privacy law. It's a nondiscrimination law.
KIMBERLY NEVALA: OK, interesting.
ERIC PERAKSLIS: So, when the human genome came out, and everybody was all about how the genome was going to transform health, regulators and lawmakers were very smart. And they said, we're not going to hinder the use of this data. We're going to tell you what you can't do with it.
And in some ways, that's what I almost think would be a great solution for what we're seeing. Make the bad things illegal, and people won't do them. And then it matters a little bit less that your privacy doesn't exist.
KIMBERLY NEVALA: While we work to shift some of those perspectives in the landscape, are there discrete steps that we can take as patients or as organizations working with the state or as advocates to increase the safety and security-- I don't know if those are the right words, really-- of these technologies and the associated data?
ERIC PERAKSLIS: Yeah, I mean, there are. And I advise several groups like the Light Collective, which has a large group of administrators of breast cancer forums on Facebook and other things - 30,000 breast cancer patients - that are doing a lot to really try to find out how their data is being used and to ensure that it's being used productively.
If you're an internet security guy like me, and you're thinking about the vulnerable-- I'm thinking about the young mother who just had a mastectomy and is debating whether she should do a reconstruction. And she goes online and starts talking to different people in these groups. And she shares a picture of her scar. And it's all being scraped for pornography.
I know what happens out there. It's not OK. It causes real harm to people. And then, somehow, that pops up when an employer is looking at her profile, right? And it's like, you can't get it back.
So, I think education is really important - without saying they should shut the internet down. [LAUGHS] In some ways, you really have to-- I think getting together is important. Working with your institutions-- even what you said about scratching things out on the form. I bet you when you do that, everybody that touched that form remembers that you did it. Even if they didn't use it, they remember that you did it. And it left an impression, right?
So, I'm a big believer in all the small things and the big things. But I do think this idea is really pressing - I don't have any optimism about the US coming up with a far more modern privacy law. But I do think we should be pressing for non-discrimination law.
KIMBERLY NEVALA: That's an interesting perspective. And it'll be interesting to see how that really plays out over the next couple of years.
The other thing that I take away from those comments is that in the same way as in the retail realm, we've seen this merger. And I think most people-- although maybe not as many as we would expect-- are aware of the integration between their offline and online digital worlds when it comes to retail.
We also need to be talking more about health care being part of this bigger ecosystem. It's not just in the privacy of your doctor's office anymore - it doesn't just stay within that office. And I'm not sure we talk about that conceptually enough.
ERIC PERAKSLIS: We lean into the technologies, which I think is great. I'm a huge fan of telehealth, like a lot of people. But I do work with groups like the National Institute for Domestic Violence and Mental Health, where we know that people who are being hurt at home, and were only safe talking about it outside the home, don't have any place to talk about that when it's a telehealth session.
KIMBERLY NEVALA: Yeah.
ERIC PERAKSLIS: Things like that. So, does that make telehealth bad? No, it doesn't. But you've got to look at these tools and say, OK, so what is the fallback? Do we need clinics or something where these people can go? Because now they're not safe, and now they're dropping out of care.
All these technologies are great. But you've really got to look at them in aggregate. And you've got to look for those gaps. Not because pointing them out is in any way intended as criticism of the technology. But because if you don't address them, they eventually will hold the technology back.
I never started my career with an intent to actually be all that interested in privacy or security. But I had a lot of outlandish research ideas. [LAUGHS] And so over time, I established expertise and authority so I could do cutting-edge research safely. I never wanted to go to law school and all that stuff. But it was a matter of: that's what I wanted to do, so I became an expert in it.
And I think that's the opportunity for people. People should lean into this. People should lean in and say, look, this will be even better if we do this. And the folks that discount it or are too focused on the fast buck, I think we have to assume they're a little bit dangerous.
KIMBERLY NEVALA: We have these conversations, and I always come back to this point: that the folks we’re having these discussions with - like yourself - who raise these issues, who raise these concerns, are not the detractors. You're advocates. We see and believe in the power of these technologies.
So, to end things on a forward-looking note, what emerging research or developments are you most excited to watch develop in the space over the next…
ERIC PERAKSLIS: The one that should be the simplest that I'll end with, is clinical trial participation. Can people be in a trial no matter where they work or live?
Today, they can't. It's primarily white men. It's primarily within a mile radius of someplace like an MD Anderson, and stuff like that. And so that single mom, that young dad, or someone like that are left out. The blogs that I did in Health Affairs are an example, and I think we can change that.
All the technology works, it's there to do it. We need to innovate on how to do safety. Because that'll be the one thing: how do you keep track of safety when you're not seeing the patient regularly or something like that?
But if we can do it for the drugs that have known safety profiles first? I mean, you've got to chip away at things when it comes to regulation. Everybody talks about real-world evidence as being important in health care. And it is. But if you look at where the FDA has accepted real-world evidence instead of prospective data, it is for things like rare diseases where there simply aren't enough patients to randomize, right? So, go to the hard things and solve a hard problem and make a point. And then the other stuff will look easy after that.
And so, I mean, for me, it's basics of equity. Because I think if we can crack basic equity, we'll actually, really invent better products. I mean, I was recently asked about how I talk to people about research.
Because there are a lot of people, especially now with misinformation and disinformation and lack of trust in research. And the simple way to say it is that if you know certain populations don't participate in research, they'll eventually have something prescribed for them, without it ever having been tested in them, by people who don't even know it was never tested in them. [LAUGHS] So it ends up being harmful long-term.
KIMBERLY NEVALA: Wow, wow, you heard it here, folks. We will certainly be following and watching Eric's work and those like him in the field.
Thank you so much. That was an incredibly insightful look into these very complex interactions and considerations in that intersection between health care policy, research, and data-driven technologies.
ERIC PERAKSLIS: Thanks so much for having me.
KIMBERLY NEVALA: Next up, the discussion continues with Yonah Welker. Yonah is a tech explorer leaning into the future of learning, wellbeing, and human-centered innovation. They think differently and will raise your awareness about the importance of neurodiversity, why inclusion is not enough and the role of social AI. You are not going to want to miss this thought-provoking discussion so subscribe now to Pondering AI in your favorite pod catcher.
[MUSIC PLAYING]