KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. Thank you for lending us your ear as we continue to ponder the realities of AI, for better and worse, with a diverse group of innovators, advocates, and doers.
Today, I'm so pleased to be joined by Dr. Christina Colclough. Christina is the founder of The Why Not Lab, where she consults on the future of work, the politics of digitization, and data rights and governance in the workplace. We will be discussing what it takes to ensure the digital future works for all workers.
Thank you for joining us, Christina.
CHRISTINA COLCLOUGH: Thank you so much. It's a real pleasure to be here.
KIMBERLY NEVALA: I'm so looking forward to this. So let's start off: can you tell us a little bit about your own work history and why you founded The Why Not Lab?
CHRISTINA COLCLOUGH: Yeah, I mean, it's a long, winding road for many of us.
But if I look at immediately before founding The Why Not Lab, I worked for many years for the regional and global trade union movement, where the last job I had was to really build their future-of-work policies and strategies. And this is going back quite some years. And it was fascinating because, as we were doing that - the general secretary then and me - people were looking at us like, what planet have these people fallen off? I mean, it was still very much science fiction. It was still very much, what has this really got to do with us and the world of work, and so on.
As part of that, I started digging around into the AI narratives, the data narratives, and very, very quickly saw how many gaps there were in relation to the regulation of the labor market. So this was back in 2017. I wrote what then became the union movement's first principles on workers' data rights but also on the ethics of AI.
But then suddenly the exponential growth of all of this happened, and I wanted to dig deeper and deeper into it. And in order to do so, I actually left the union movement and created The Why Not Lab to be able to serve as this alternative voice, really. So I'm now helping and supporting progressive governments, organizations, and labor organizations across the world in building this critical understanding.
KIMBERLY NEVALA: Yeah, it's been mind-opening. When I found your work, I started to really look at what is out there publicly. And there are a lot of narratives around the future of work. But the explicit discussion of the future of workers is notably absent once you start to look deep.
So I'd love to start with getting some perspectives: as you said, early on you were looking at the onset of digitization and the digital future, and people were saying, why is this going to matter? But digital transformation has now been on every company's agenda and on every analyst's lips for, I think it's fair to say, decades now. The last 10 to 20 years, for sure, it's been top of mind. How have you seen that narrative evolve? And has there been a change in either the explicit or implicit objectives or outcomes people are talking about?
CHRISTINA COLCLOUGH: So I think there's something we need to distinguish here. We need to distinguish between what's happening in the realms of the C-suite, markets, and so on, and then what's happening on the ground. What's being perceived or not on the ground? And here, I'm old enough to have lived before the mobile phone and before apps and all of these things. And if we stay within the realm of the public right now and look at how rapidly we went from mobile phones - which you almost had to hold with two hands - to the introduction of apps and all of this, it has gone really, really quickly.
And my generation has a lot to answer for. We kind of sleepwalked into all of this. Fascinated by first the color screen, and then the apps, and then all of these services offered for free. And we all know - in hindsight, of course, we're so clever - that there's no such thing as a free lunch. But, you see, we never critically addressed this. The ordinary public just found, oh, this is really handy, Google Maps on my phone, or this and this. And now I can track my steps, and now I can order pizza online, and on, and on, and on.
It is what Tim Wu so brilliantly called the tyranny of convenience. We fell for the convenience of this without really questioning how does this work. So that's one thing. And I think to be brutally honest with you, I think it was a deliberate strategy to keep the public fascinated but uncritically so.
And then you have, as you say, the discussions in the C-suite. And led in particular by the big consultancy companies around the need for digital transformation, whatever that may then be. But the promises of productivity and efficiency, which again, of course, has an exponential growth in relation to how digital tools would be the answer to that in more recent years.
So those two discussions are not necessarily a shared narrative between the public, the workers, and then the C-suite.
KIMBERLY NEVALA: And what do you think are the most obvious or important dichotomies or differences between those two narratives?
CHRISTINA COLCLOUGH: Well, one is, OK, we can really pick this apart now.
So this is the promise of productivity and efficiency. Now, you've got to try and trace where that promise comes from. Well, of course, it comes from the capitalist drive of increasing productivity, efficiency, and all of that. But it has also come at a time in our capitalist development when workers' productivity growth is actually in decline - and the OECD has a graph on this. Productivity is still rising, but at a lower rate, at the same time as we've had a massive introduction of digital technologies.
So you've got to start thinking about what's behind this: this massive uptake of digital technologies and, at the same time, the pinning of these technologies as being the answer to many of our problems - the need for new solutions, climate change, and so on.
Those two narratives together have led to almost a tech determinism. You know: as long as we do this, we will be OK. The competitors are introducing these technologies; we'd better as well - without really questioning or critically assessing the underlying functioning, the depth and breadth of the impacts of these technologies.
So it's a double whammy. It's a truth that has become self-sustaining - in a certain way uncontested because of the complexity of these digital technologies. Boosted by the fact that the world needs new solutions because, let's face it, our planet is burning. And there are huge problems we're not managing to address. So I think it's that mix that has really led us onto this very tech-deterministic path.
KIMBERLY NEVALA: And I want to come back to some of the dichotomies you point out that this tech determinism raises. But, in thinking about how we speak even on digitization: whether it's just digitization of business or digital transformation. Very often, that conversation does go hand in hand with, quote unquote, "innovation." Although, when we peel back the covers, as you said, the real objective seems to be productivity and efficiency.
And there is an underlying narrative or change, then, I think in the way that we view the nature and the value of work and of workers themselves - human workers themselves. Is that fair? Have you observed that as well and what has been the change?
CHRISTINA COLCLOUGH: Well, so again, you're pointing at such an important issue here, because who is a worker? When I say "worker"-- and I wish I could see people listening to this podcast. I would ask them, who do you see? Who do you see in front of you? Are you a worker?
And fascinatingly - and you can see there's one of my speeches online from Chile this January - where I asked the audience in the room, who in this room is a worker? And a certain amount of people put their hands up. And a little bit later, I asked, who in this room is paid, is a wage earner in an organization or company? Many more people put their hands up. So we no longer self-identify necessarily as workers, even though we are wage earners.
KIMBERLY NEVALA: Aha.
CHRISTINA COLCLOUGH: So you can be a highly paid, very privileged wage earner. You are still a worker. So our own identification as workers has kind of collapsed. And so has the regulation: if you look at AI regulation, the principles of ethical AI and all of that, they relate to businesses, to governments to some extent, and to consumers.
Where are the workers? Where's the labor market? We are regulating the market far more than we're regulating the labor market in relation to digital technologies. Yet, there can be no economy without the labor market. So there's a lot of weird stuff going on here. But workers and the respect for the role of decent work in our economies has somehow slipped off the radar of especially our politicians, but also many others.
KIMBERLY NEVALA: Mhm, mhm. And that framing of, we are all workers - although many of us, as we're thinking about the use of technologies, think about that for other people and not ourselves. I can see many, many ways that goes bad.
You mentioned the strong narrative built on tech determinism. There's a lot of dimensions to this. We see this today in the conversations, the fury and furor of conversations around large language models. And are GPT systems going to take over the world? And it does seem that if you're not for it, you're against it. And there's a very broad divide, which I find concerning. Is that something we should be concerned about?
CHRISTINA COLCLOUGH: Deeply concerned. So I can't tell you, Kimberly, the amount of times where I've been giving a speech and where I'm told, Christina, you're a Luddite. Or, Christina, you're against change, because I have a critical angle, because I say, I actually believe that technology could be put to good purposes, but the current trajectory we are on is not good.
And I'm automatically put into the camp of, you're conservative, you're against change, or you're a Luddite or whatever that may be. Where I say to people, can you name any other topic in the world, which is so black and white? Either you love technology, and if so, uncritically, or you're perceived as somebody who doesn't like technology, as backward-looking, and so forth.
And I usually say to people who call me a Luddite, I say, thank you, that's a great compliment. Because the Luddites actually wanted the technology, but they wanted it to be implemented in a different way. They wanted to break it down and build it up so it served the people. Wonderful. That's exactly what I want too. I want to put people and planet before profit, whatever technological change we have.
So, yes, it is extremely black and white. And somebody who has a critical voice like mine is not always appreciated, because it's a bit of a disturbance to the whole party of, off we go, you know? [LAUGHS] So it's very fascinating.
KIMBERLY NEVALA: And maybe that off-we-go attitude is part of that first dichotomy that comes about with tech determinism? We've got an issue of we want to get down the path fast. But perhaps doing this right requires some time. So is that a fair assessment? And then where is that leading us astray?
CHRISTINA COLCLOUGH: So [LAUGHS] again, Kimberly, great.
So, yes: let's go out - oh, this is so fascinating, look what we can do, exponential growth, exponential change, radical change. Whoa, whoa, whoa, whoa, whoa. And then they realize that to get this right, to prevent harms to marginalized groups or any form of harm, it will take time. And that time - the time to assess the systems, to adjust them, to make them culturally sensitive, et cetera, et cetera - is seen as a burden. And so is regulation: regulation is a burden.
We hear this time and time again that regulation will stifle innovation and all this nonsense, right? Yet, some of the most regulated economies in the world are also the most innovative. That is the narrative we have to break apart, that regulation does not stifle innovation. On the contrary, it can create a level playing field, very clear rules of the game. But that's the narrative.
So the optimism - oh, let's just go and push this out into the world - even though we know the systems are not mature, or the systems haven't been tested. Let's send it out there, put a disclaimer on it: if you use this, it's at your own responsibility, as they're doing now with the large language models.
But to get it right takes time, and this is one of the dichotomies of our current day. On the one hand, the breakneck, fantastic, [gasps] amazing speed and efficiency that's promised; on the other, getting that right takes time. And it's that time - getting people's heads around the governance, the necessity of doing human rights impact assessments, of adjusting the algorithms, et cetera, et cetera - that's what we need to overcome.
KIMBERLY NEVALA: Mhm. And you've also said that because these are bright and shiny objects, we all want to get in on them - corporations and individuals alike. People want to get their hands on them. And the systems - both the marketing of the systems and the use of the systems themselves - are, in a lot of cases, somewhat manipulative.
And so they're working at an individual level. But to counteract all of this, we need, you said, collective action. Can you talk about that tension between individual - I was going to say targeting, but I think that might sound a little more forceful than I intended - and the need for collective understanding and action?
CHRISTINA COLCLOUGH: So this is, again, one of these things, if you don't know how these systems work, you don't know how you're being affected. So I typically say, we don't know what we don't know. And this is because we've been kept in the dark. We've been sold the internet and all of the apps, the digital platforms, as emancipatory. This is setting us all equal.
But what they've indeed done is wall us off into separate gardens. The world I see through the algorithmic manipulations that I'm subject to is not the same as yours. If you've been profiled as somebody whose vote they could swing, then you see a version of the world which is very different from mine. So the story of freedom and equality is just a mask. The reality is we've been divided.
On top of that, if we look at some of our regulations - data protection regulations - they are, at least in America, Canada, Europe, the Western part of the world, based on the individual's rights. So it's the individual's right. It's the individual who's being targeted. Yet, to really say, listen, we want to tap into the potential of digital technologies, we want to solve some of the grave problems facing humanity - climate right now - we will need a collective response. That will mean we first need to realize we've been individualized - that we've been compartmentalized relative to one another - and then find the organizations and the means by which to collectivize again.
It's not that all collectivity is dead, of course. It's not. We see young people today burning for ideology. But on a grander scale, we need to realize that the forces who don't want us to collectivize have become so embedded in everyday life that we need to overcome that manipulation to really collectivize.
KIMBERLY NEVALA: Do you think that's an awareness that people have or that all of us, the collective workers, have today? And are the regulations and the way we're approaching this, even in the context of AI, serving to raise that awareness?
CHRISTINA COLCLOUGH: No. I mean, again, you don't know what you don't know. And, luckily - as I look back at how we went from science fiction to now - a lot of labor unions across the world are engaging on these issues. They are learning what they need to know. They're learning to see, OK, how do these systems work, where should we put guardrails in, and so on, and so forth.
But it's a long way from knowing that to changing practice - to also changing their practices as private people who have become dependent on various social media or whatever it may be. Or who, as in the Global South, are almost forced to use some of the big tech social media, because it's bundled as free into the data packages in countries where data is extremely expensive. So you really have no choice.
Anyhow, back to your question: do we realize that this individualization has to be overcome to create the collective response? Yes, I think on a theoretical level, but on a practical level, it's very, very difficult. There's no money in doing good tech-wise at the minute. If there were, there would be lots of apps and services. You could imagine an algorithm that trawls through a company to ensure that it at all times was in compliance with the collective agreement. There's no such system. Why? Well, it doesn't pay to be good, to develop good.
KIMBERLY NEVALA: Mhm. You've also mentioned the approach to digital rights or digital safety. Here in the States, we have something called OSHA, which covers the physical safety and rights of employees in workplaces or places of employment. We don't, to my knowledge, have the same level of safety protocols or regulations in the digital realm - or do we?
CHRISTINA COLCLOUGH: Well, I mean, the US is doing some amazing stuff on certain levels. But there's the very fact that you, for example, don't have a federal data protection bill - that you have this on a state-by-state basis, which is really, really setting the states apart, with some of them undercutting or undermining each other. Others - California, for example - go way beyond the average, even in relation to what is perceived as the best general data protection regulation in the world, the European GDPR. California has just adopted, in its amendments, the only data protection regulation in the world that includes an anti-commodification clause - i.e., that you as a citizen or worker have the right to say no to the selling of your data. This is amazing. This is not in the GDPR. It's not anywhere else. But you have that in the US at a state level. Then you have other states with very weak data protection regulations or none at all. I mean, for the US to really take this seriously, to take human rights seriously, it's about time you get a federal data protection regulation, absolutely.
KIMBERLY NEVALA: We see lots of emerging regulations and approaches to governance of artificial intelligence. You have said that a lot of those approaches still favor what I think you call a market access approach. Can you explain what you mean by that and what the implications of that are?
CHRISTINA COLCLOUGH: Yeah, so you get one of these hunches. It was a couple of years ago. I was on the board of the Global Partnership on AI, which is an intergovernmental cooperation. And I was in lots of governmental arenas, and I picked up more and more mentions of certifications and standards. And then came the draft EU AI Act, which, again, says we will certify AI systems through certifications as a market access guarantee. So if a tool has been certified, it has access to, in this case, the European market.
Now my hunch has been kind of confirmed: in the EU-US Trade and Technology Council, they're also talking about discussing and agreeing upon standards and certifications - not necessarily the same ones, but the contents, the requirements in these certifications. The same you now hear in the OECD, the G7, the G20, and so forth. So what they want to do, in short, is that a system gets certified, and if it is, it's allowed into the market.
But there are some fundamental problems with this. Let me take just some of them, because this is worth debating. Number one is: how do you certify something that, by nature, is changeable and fluid - a machine learning system, a deep learning system, a large language model? You ask ChatGPT something today, you ask the same question tomorrow - the answer is not going to be exactly the same. So how do you certify something that, by nature, is changeable? The only way I can get my head around allowing it into the market on that basis is if you then required periodic, constant governance of the tool. Is it doing what it says on the tin? Or, as we've seen in numerous examples across the world, is the AI system actually doing something totally different from what it says on the tin? But that requirement of governance is not really included.
Interestingly, in the EU AI Act, there are different requirements for ex-ante governance depending on whether it's a consumer-facing product or a worker-facing product. For workers' automated management systems, management can self-assess. They don't have to show anyone, and the authorities have no right of access to this assessment. It's total self-assessment, whereas there are other requirements if it's consumer-based. So you have to ask: why is this happening?
Then, back to the whole problem with standards and certifications: if you look at any of the standards bodies, I would say they're 99.9% industry-dominated. This is not representative - not representative of various groups in society, various cultures, thought patterns, beliefs, and so on, and so forth. They are also highly closed. If you want an ISO standard, you're going to have to pay tens of thousands of dollars to get it - i.e., it's not transparent and open to the public.
So we don't have the governance requirement. We have a big push that it's these certification bodies who will do the work. And then, ultimately, you must ask, is this a privatization of regulation?
KIMBERLY NEVALA: Hmm.
CHRISTINA COLCLOUGH: Is this moving regulation out of the realm of democracy into the hands of a more or less closed group of experts? So that's a very, very big problem. And then, of course - and I'll be talking about this in the Danish parliament next week - there's this curiosity around the difference between regulating automated management systems versus systems that face the public or consumers. Weird.
KIMBERLY NEVALA: I think that could be a whole other episode. We might have to have you back. I'm going to resist the urge to go down that, but I'll encourage folks to follow that discussion.
You mentioned audits and impact assessments. And, certainly, there's more and more of a call for those. There's organizations that will come and do those. Are there some implicit assumptions or gaps baked into this approach as well that you think leave some ground uncovered?
CHRISTINA COLCLOUGH: That leave a lot to be desired?
KIMBERLY NEVALA: Yeah. Mhm.
CHRISTINA COLCLOUGH: Yes, I do. No, I mean, I'd rather turn it around and say: if we really wanted to assess the impact on various human rights, social rights, social groups, et cetera, et cetera, the only way we truly can do that is to have the voices or representatives of those who are subjects of these systems at the table.
So imagine this is an automated system to do with social benefit payouts. Well then, we should have representatives of those groups at the table. If it's for disabled people, or children, or young families, or whatever, their associations should be there. Because the impact of these decisions is enormous - as we have seen, the whole Dutch government had to step down because they applied in their social services a system which was fundamentally discriminatory. Now, if there had been representatives at the table, we would have been able to see and flag this discrimination much earlier and then remedy it.
So this is what we need. We need inclusive governance, as I call it. And in these assessments, we would therefore need to ask for the perceptions of those who are subjects of these systems. But that's not the case. Right now, a lot of these assessments are either done by technologists - in a very flowcharty type of manner: tick a box, yes or no. Or they're done by very well-intentioned and probably also very clever academics, who nevertheless have a very different sense of the reality of the work on the ground than the minority group here and there.
So we need ex-ante governance - before a system is put onto the market, we need governance that's inclusive. And then we need that inclusive governance continued after the system has been introduced, and periodically as we go along. And right now, that's not the case.
KIMBERLY NEVALA: So two thoughts came to mind. Doesn't effective governance also require that the folks who are deploying these systems - making decisions about when and where to use them, and a lot of them are going to be bought, not built - actually, really understand how the systems work?
CHRISTINA COLCLOUGH: Yes, Kimberly. That, they do need.
Here, unfortunately, a lot points in the direction that right now that's not the case. As you say, the majority - and it's very hard to put a firm estimate on this, but those who have a good idea tell me between 90% and 95%, 96% - of all AI systems being used in companies today are third-party systems. So, OK, the typical labor-management relation suddenly has a third bedfellow here, right? The developers, the vendors of these technologies.
Now, for some reason, the deploying company decides to use this system. This can be because competitors have it, or a persuasive consultant said, you should be using this, or whatever it may be. So the C-suite says to the IT department: do the scoping, buy the system. But it's the local or central HR department who's supposed to be using it, let's say, in the hiring process.
What we see in many companies is that this is new. The depth and the breadth of the quantification of people, the data gathering, the instructions to the algorithm - all of that. Who's responsible for governing it? For keeping an eye on the system - that it isn't inappropriate for our context, our culture, that it isn't learning the wrong things? It's falling between the seats of these various managerial layers. So this is what I call managerial fuzz.
Who is actually responsible for what? This should be governance 101, very basic: that this is made clear. If it's an automated hiring system or a scheduling tool, then the workers should know: this is the responsible managerial department. They are assessing this tool in this and this way. We will welcome your opinions, as the subjects of this scheduling tool, on these and these occasions. Please flag - here's a whistleblower line - if you sense any form of unintended or intended negative consequences, and so forth. That would all be made very clear. But this is not the case.
Companies are using systems where, literally - as one CEO said to me - our managers don't have time to learn how to govern these tools, because we're introducing, on average, a new system every four months. So there you go. There's no responsible AI if you do not ensure the governance layer.
Which also requires new competencies amongst managers. How does a technologist identify human rights impacts, potential as well as real? How does the HR department understand what the instructions to the algorithm were? Or the difference between training data, historical data, and then the outcome - the data that's output? You know, new competencies are needed. And here, all impact assessment and audit models seem to assume that managers know what they're dealing with. And the reality is that they don't.
KIMBERLY NEVALA: That might be the most terrifying statement I've heard in a while, and I've heard a few. I might be easily scared. But the idea that we're putting systems in so fast that we don't have time to manage them. Which, by extension, means you don't know how they're being used or the implications for the people using them.
Interestingly, as we've been talking, I was thinking about something like participatory design - having the folks who are going to be impacted by the system involved. There's a lot of discussion about that in terms of marginalized groups. Or, in a larger social context, folks the system might impact, or be used for and/or against, depending on your perspective.
As I think about that, we're not always that explicit about workers or employees being involved as part of that discussion. Are you seeing that happen? And how are you seeing that happen? And then - don't let me forget - I have a question: you talk about the importance of co-governance and then we talk about participatory design. Are those things the same or are they related, or not at all?
CHRISTINA COLCLOUGH: You want me to answer that one first?
KIMBERLY NEVALA: Sure, yes.
CHRISTINA COLCLOUGH: OK, so co-governance is that you have this inclusive governance. You have the voices of representatives of the subjects around the table. Participatory design has to do with a really bottom-up co-discussion and determination of what system we need, for what purposes, who needs what - and together, we design that.
But as we've discussed before, lots of technologies being deployed in companies today are third-party systems. So what's the likelihood that we can design them? Probably quite small. But in a participatory design workshop, which I just co-hosted, we discussed with some of the theorists behind participatory design as a method whether we should be talking about participatory deployment rather than design.
So in the deployment of standardized software, it could be up to the individual workplace or the sector to say, OK, we would welcome this tool to improve and govern the health and safety of the workers - to allow managers to know very quickly where the miners or the drivers or the home care workers are. But in the deployment, we will put guardrails down so that this data cannot be used for employee evaluation purposes, for example. Or, if a worker has to take a device home with them - because it's either in their car or on a device they're wearing - that data will not be collected after working hours, for example, to ensure the privacy of the worker. So the co element, the participatory element, here is in the deployment: in setting and deciding on the guardrails and, therefore, also on the purposes we will allow this technology to be used for in our companies.
And that, I think, is fascinating, because what this literally says is: how can we tap into the potentials of this technology without invasive surveillance of the subjects, without the hoarding of data because we might one day use it? Instead, we embody and live the principle of data minimization. And we do this out of respect for the core value of the human worker, right? That, for me, is essential. Let's bring dialogue back into vogue.
KIMBERLY NEVALA: There's something potentially subtly revolutionary - some might call it incendiary - involved there. Because it requires moving away from - again, I'm going to use the term, and with, I think, its negative connotation…this is where my lack of coffee is catching up with me this morning…
Where there's some level of coercion to say, we're going to use this to monitor your performance for your own good. Versus talking to a worker and saying, how could we use this in a way that is actually helpful to you? Those are fundamentally different actions…
CHRISTINA COLCLOUGH: Very different approaches.
KIMBERLY NEVALA: Mhm.
CHRISTINA COLCLOUGH: Very, very different approaches.
And this, again, is maybe culturally embedded, but that doesn't mean it can't be changed. Just to give you a sort of metaphor for this: before I joined the union movement, I was a researcher at a university, interviewing workers and managers across the world. And I noticed that when I was doing interviews in the UK and France and the US, the shop stewards or the workers would say "they" about the company or the management. They want to restructure. They want to cut down. They want to do this. But if you went to Denmark, Scandinavia, Holland, Germany, the main narrative was "we." We have to cut down. We have to do this and this.
And this was fascinating because "they" reflects what a researcher called the boxing model. If you as the management gain something, I lose. If I gain something, you lose. So it's the boxing match.
But then you have the more dancing relationship, which you have in Scandinavia, Germany, Holland, where we dance away. We step on each other's toes now and again. But we agree to disagree every two or three years when we renegotiate the collective agreement, right?
But that "we" model is essential for this culture of dialogue, which you have in many parts of Europe, at least. In the United States, that culture of dialogue is weak. Management is perceived as the enemy of labor, and labor is seen as the enemy of management.
And again, if we look at the powers, the potential oppressive powers, of the developers of these technologies, which are third parties. These are powers over the deploying companies, as well as the workers. So it should be in the interest of the deploying companies to sit together with their workers and say, how can we make this technology work for us? Without unnecessarily transgressing human rights, boundaries, privacy, and all of that, right?
KIMBERLY NEVALA: That also or, let me ask this as a question. Does that imply that we ascribe some fundamental value and worth to the work that humans do, regardless of their level and where they are in an organization. As opposed to, again-- we could go on a whole other diatribe or down another rabbit hole - about this idea that we're going to help you by making it possible for you to not work. Which is a weirdly - I don't understand it - that narrative just doesn't sit well with me.
CHRISTINA COLCLOUGH: It doesn't sit well with me either and I could say a lot of not-very-nice things about the American labor market. But especially around your private health insurance, which is a way to trap workers into insidious working conditions because, otherwise, they would lose their health insurance, which they in no way could afford to pay themselves. That's a different story.
What you were talking about there is what value we put on human labor. This is such an essential question because, right now, we're heading in the opposite direction. We're heading towards the quantification of labor. Where what is seen is no longer the labor of the worker, everything that worker is - the glue that sticks the organization together, the idea maker, the change maker, the heart - but only the results, the productivity measures. So this quantification is making us blind to the attributes of the human.
And again, an anecdote there: my aunt was a director of a big conference center, and they had some financial difficulties and got the advice from a consultancy company to fire one of the more senior but also older people in the accounting department. And then my aunt said, are you absolutely mad? Because she is the one who holds that whole department together. She's the one who shows concern: oh, how's your wife, Paula, you told me she was ill? Or, oh, congratulations on your children. She was the one who kept that unit together, right? OK, her productivity scores might have been lower, but the significance of her being was so much higher.
And this is where the stupidity of algorithms - as Cathy O'Neil writes in Weapons of Math Destruction, around that horrific story of teachers who got fired - the narrowness of them, many of them, could mean that the whole flourishing human, everything we are, the jokes, the fun, the emotions, the temper, becomes insignificant. That's surely not a world any of us truly want, regardless of whether you perceive yourself as a worker or not. You are being quantified right now. And surely, this is not what we want.
KIMBERLY NEVALA: I surely hope not. I will mirror your sentiment there.
So looking forward, what would you like to see happen? And do you have any advice for whether it's employees, employers, workers, which is all of us, to help start to shift this conversation in a more - I was going to say productive, but now that's a loaded term - in a better direction?
CHRISTINA COLCLOUGH: In a better direction. I think more sustainable direction, respectful direction.
No, I think we need to demand very clear lines of responsibility. A company that decides to deploy a third-party software is responsible for the harms, the outcomes of using that tool. I.e., we should have, as part of the legal requirements, that they are responsible for governing this tool. So that has to be number one, put into law. Again, in this AI regulation, it has to be part of that. And that governance cannot, in any truthful or meaningful way, be done by a junior compliance officer in the corner there. It has to be inclusive. So that's number one.
Number two is we should demand transparency. The majority of us don't know what's going on, what systems we're subject to. So if we think of the workplace, it should be a legal requirement - or part of collective agreements, depending on where you are - that transparency is ensured. Now, we have that in the GDPR in Europe. Yet, in actual fact, there are many cases where workers say to me, we don't know what digital tools are being used. So that's a breach of the GDPR. So we need to enforce the transparency. We then need, as I said, the co-governance structures and requirements in place.
And then we all need to start discussing with our friends, with our family. Somebody who brings an Amazon Alexa into their home - you know, start doing a 360 on this. Well, you're actually inviting a private company, on a device that can hear, into the heart of your privacy. Is that a good idea? What is the cost of convenience, right? How do we be careful around our cell phones: how much we use them, and so on? Right now, everybody's up in arms over TikTok and the Chinese government getting hold of all the kids' data. Well, come on, let's face it. Facebook ain't much better, right? It's just a commercial company rather than an authoritarian state. But the fact remains the same: they're leaching our data, creating all of these profiles about us.
So one thing is, let's kick up that conversation. If we and our family and our circle of friends don't have the answer, let's demand from our council, our schools, to put some public events on, some debates. You know, just in two weeks - and this is so funky - there's a little, small town in Denmark that's holding a two-day tech fair. Where they're going to be discussing: are we tech savvy? Do we know what we need to know? There are going to be panel debates, specialist debates, and instructions on how you can safeguard your privacy and blah, blah, blah, blah. Two days!
KIMBERLY NEVALA: That's awesome.
CHRISTINA COLCLOUGH: This should be everywhere, right? Everywhere. So those are the things.
Start by demanding things from our politicians, who I think have been hopelessly behind on all of this. And this is across the world.
Really demand that there is no economy without labor, so let's protect labor. Let's ensure that there are quality jobs and stop this exploitation. I read, of course, with large language models, it came out that, oh, 300 million jobs will be lost. Well, we surely only lose what we let get lost. This technological determinism cannot decide over us. We will then say, OK, that job description might change. So we will put policies in place that will ensure that that worker or those groups of workers get the skills they need in this new reality. But just throwing out figures like that, sort of doom and gloom, no responsibility - that's not a solution.
So just repeating that. Make sure we have a voice: transparency is key. Responsibility, and defining that. Protecting human rights is absolutely key. And then let's collectively overcome the ignorance that we have been kept in by kicking up this discussion.
KIMBERLY NEVALA: Fabulous, fabulous. That is a fantastic and strong note to end on. So thank you so much, Christina.
CHRISTINA COLCLOUGH: You're very welcome.
KIMBERLY NEVALA: I will reiterate what I said at the top. We often hear about the future of work. We hear about impacts and implications in a very broad, conceptual way. I think we are talking far too little about explicit impacts and implications for workers ourselves. And I say "ourselves" because we are all workers. So thank you so much. I really appreciated the insights. You've got my mind whirling, and I'm sure that's true for the listeners as well. I appreciate the work you're doing to ensure that the digital future will truly work for everyone. Thanks again.
CHRISTINA COLCLOUGH: Thank you for having me, Kimberly.