COVID transformed the world of work, but AI’s impact will be much bigger.
“It’s the first time in the history of humanity that we have to rethink what it means to be human. It’s no longer, ‘I think, therefore I am’. Most of our thinking can be outsourced to machines.”
Artificial intelligence is about to transform the world of work, says Tomas Chamorro-Premuzic, Chief Innovation Officer at ManpowerGroup and the author of ‘I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique’.
He looks at the huge changes COVID and home-working have already wrought, and how we can cope with the even bigger AI revolution.
Check out all our podcasts on wef.ch/podcasts:
Join the World Economic Forum Podcast Club
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Tomas Chamorro-Premuzic, Chief Innovation Officer, ManpowerGroup: It's, like, the first time in the history of humanity that we have to rethink what it means to be human. It's no longer 'I think, therefore I am' - most of our thinking can be outsourced to machines.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them. This week: if all the thinking is done by artificial intelligence - what’s left for us?
Tomas Chamorro-Premuzic: The IQ battle is probably lost against machines. But the EQ battle is where we have a unique opportunity to differentiate.
Robin Pomeroy: Tomas Chamorro-Premuzic is Chief Innovation Officer at ManpowerGroup and the author of a new book called I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.
He says you don’t have to love the AI revolution, but you’d be a fool to ignore it.
Tomas Chamorro-Premuzic: If you're dismissing it and you're not interested, or you are just assuming in a utopian way that it will make people happier, healthier and more productive, but you don't understand the pros and cons, you're missing out on something that will definitely play a big role in the interactions and experiences people have with work.
Robin Pomeroy: And Manpower’s Chief Innovation Officer has advice for how human resources managers should approach AI.
Tomas Chamorro-Premuzic: You should minimize feedback that seems creepy, Orwellian or Big Brother-esque. AI is difficult sometimes because it's either crappy or creepy, right?
Robin Pomeroy: Subscribe to Radio Davos wherever you get your podcasts, or visit wef.ch/podcasts where you will find all our back catalogue and other shows.
I’m Robin Pomeroy at the World Economic Forum, and with this look at what it is to be human in a world of AI...
Tomas Chamorro-Premuzic: I do feel somewhat optimistic that it will create that appetite for the analogue world.
Robin Pomeroy: This is Radio Davos.
And Radio Davos once again is looking at artificial intelligence. It continues to hit the headlines. Last week, as I record this, a study predicted 300 million full-time jobs could disappear thanks to AI. Days later, 1,400 AI leaders, including Elon Musk and Steve Wozniak, called for a six-month pause in AI development. They say governments and policymakers need to create proper governance before AI effects what they call 'a profound change in the history of life on Earth'.
In this episode, we speak to the author of a book called I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. He's called Tomas Chamorro-Premuzic. He's a psychologist and he's the chief innovation officer at Manpower.
My colleague Gayle Markovitz started by asking him: in a world of work transformed by COVID and now set to be even more revolutionised by artificial intelligence, what does the ideal employee look like today?
Tomas Chamorro-Premuzic: The ideal employee looks a lot more like the freelancers of pre-2019, or people who are self-employed or work for themselves, who have been working at home for a while, who have no interest in going to an office, or who see interacting with others in an office as a distraction and a reduction in their productivity and efficiency.
So I think, potentially, you're better off now if you're introverted, if you can manage yourself, if you are self-disciplined, and if you don't need somebody working behind your back or behind your screen telling you, 'Oh, come on, come on, what are you doing now?' and micromanaging you - which has never really been the mark of an ideal boss. But I think now there's definitely a need to be more independent, and you're more adaptable if you can basically manage yourself.
Gayle Markovitz: You speak about managers. How do you manage effectively in that context?
Tomas Chamorro-Premuzic: Yes, it's a lot harder than before.
Let's start with people who were really strong on the people skills side of things, who maybe were emotionally intelligent, had a natural ability to understand people, connect with them, see them, interpret how they're feeling, etc. And now, suddenly, all of that is cut off from you, right?
So you basically have to think of how people are doing. You can't see them. You have to revert to digital means of communication. You have to meet people where they are. Some people might like Slack, others WhatsApp, others email, others Teams. Some people on virtual meetings might not want to put the camera on and you have to respect them.
So, you know, suddenly it's a lot more complicated. The intuitive aspect of seeing people goes away. The one-size-fits-all part, which always makes things easier, goes away. You have a team of five people and everybody has different needs and everybody sees freedom and flexibility in a different way. Some people want to be in. Maybe you as the boss don't want to be in, but people are saying, 'Hey, where are you? You know, I'm at the office.'
So you have to customise and personalise, which makes your work much harder. And then, fundamentally, I think the hardest thing for managers to do is to basically get their act together to evaluate what people produce and not, you know, their levels of activity.
I always recall one of the early anecdotes from the beginning of the pandemic, when people were sent home, and a colleague of mine, after her office shut, said: "But without the office, how will I pretend to work?"
Which is a very cynical but wonderful line, because a lot of people go to the office to perform the performative act of working, and managers like that. If you're a boss and you walk around and you see your people seemingly busy, even if they're watching Netflix or YouTube, you're like, "I must be a good boss because they all seem so focused."
Now that's gone. Of course, pre-2020, or 2019, there were already reasons to actually evaluate what people produce and the value they contribute to a team or organisation. But now you have no alternative, because that's the only way in which you can treat people like adults and give them freedom and flexibility.
You have to be more rigorous and try to be as objective as possible, evaluating things like their KPIs and what they deliver, and giving them feedback on that.
And that's hard work, especially when you have very skilled people. In simple, blue-collar or predictable jobs - for an Uber driver, say - it's very easy to know whether somebody is performing or not. But if you're a management consultant or a CEO, it's not so easy. So there's this paradox in management that the more you get paid for doing your job and the more skilled you are, the harder it is to know whether you're actually doing a good job or not.
Gayle Markovitz: You alluded also to digital distractions - workers might be watching Netflix. And I know from your book that knowledge workers account for 60% of GDP and that digital distraction costs as much as $650 billion every year in lost time. Do you think that's worse when people are at home, or no different?
Tomas Chamorro-Premuzic: Great question. Yeah, I mean, we don't have data on that, right? But that $650 billion a year is staggering. And I think I note in the book that it's something like 15 times more than the performance detriments or deficits due to sick leave and health-related issues, which are obviously on the rise as well. Which I think explains why productivity went up in the first phase of the digital revolution, from about 2000 to 2008-10, and then started going down.
If 50% of the time that you're at work - when you are, or are supposed to be, working - is wasted on your phone or a computer, that's not very productive.
But I think you're making a really interesting point, because possibly now going to the office is less distracting. If somebody stops you in the corridor and you connect with them and you do small talk and you find out how it's going, etc., that lubricates the social ties and actually enhances collaboration. And at least you're not looking at your screen. Whereas at home, the temptation of all these algorithmic nudges and AI-infused platforms to just refresh and check again and get distracted, under the illusion that you're multitasking - the temptation is huge and, you know, nobody's stopping you from doing that.
That's why I go back to the point that you need to have a lot of self-control and self-discipline to be productive working from home - even after the time you save commuting, the time you save getting ready for work, and the time that your colleagues would spend distracting you at work.
Gayle Markovitz: You mention multitasking, and I know that you've said it's a nice idea, but also a great myth. Can you talk a bit more about that?
Tomas Chamorro-Premuzic: There's research on this going back to the sixties or seventies. You know, basically we never multitask. We just switch tasks. And every time you switch, there is a cost and a deficit in attention, in concentration, in reconfiguring it.
And there are recent studies that have looked at that specifically in the context of digital distractions. It's something like this: let's say you're about to write an article, or you're preparing a client proposal or a presentation. You're thinking, you're doing that, and after five or ten minutes an email or a message comes in, you go to WhatsApp, you respond, you're having fun, etc. When you go back, there's something like a 15 to 25 minute reset to actually achieve the same level of focus and be where you left the task.
So imagine: every time you switch, the switch itself is very quick, but actually there's a reset that sets you back 20-25 minutes. And, you know, that's why I think in the early age of social media we had all these blockers, etc. And I find that myself: I tend to read a lot of books at the same time, so typically when I travel I have everything on my iPad, but I never go online on my iPad. Because if you're reading, or trying to read, when you're having some downtime, and then you allow all these messages or refreshes to come in, you're not going to do more than two pages in a row.
So you have to control yourself and you have to treat it as a real distraction. We don't know, by the way, what this is doing to our brains. But it's not unfeasible to think that in 20 years' time we will have brain scanning studies that actually show what spending 21 years of an average lifetime on your screen does to the brain.
And it's interesting that now people are attacking TikTok and there's a lot of attention on TikTok as digital crack cocaine or whatever, because the algorithms are so ruthless and it's so addictive. But actually any of these platforms relies on AI to hijack attention, to compete for attention, and to make people come back. And, you know, they are very, very good at doing this because they can optimise for what we want to see and what's distracting in the first place.
Gayle Markovitz: One last thing on workplace before we talk about AI in full. Tracking software, workplace tracking software. You said that it's expected to be adopted by 70% of large firms in the next three years. What is tracking software and what are companies trying to find out about their workers?
Tomas Chamorro-Premuzic: Most large companies already have something in place under their cybersecurity governance. You want to monitor phishing emails, spam, dangerous hacking, threatening activity, etc. With that comes the ability to see, at least at the aggregate level, whether people have good digital hygiene or not. And with that comes a lot of information. In the pandemic, you know, lots of companies started asking: are people actually logging in and out when they say they are? Because if we can't see them, how many hours are they spending, and what will that do to productivity?
And then you can actually get more granular. Of course, you can do this at the individual level, even though mostly this information is anonymised and preserved. But I think the opportunity is to transition from a place where you're just monitoring hacking and dangerous activities to actually getting aggregate insights that predict things like employee engagement, well-being and productivity. And I think the opportunity with AI is to actually give individuals some feedback on what their daily patterns of activity say about them.
So imagine that we scrape your email metadata or your Slack metadata or whatever, and we tell you, 'Hey, today you connected with people outside of your main central network. That's great, because you're collaborating with people who have a different function, who are in different places, and that can give you ideas.' Or we even scrape the type of language that you use, because natural language processing can be used to translate your patterns of communication into morale and motivation. It can tell you, 'Congratulations, today you've emailed like a high-potential employee, using words that are very enthusing or motivating.' And so I think any feedback that can help people understand how they're doing and how they can do better - sort of like an automated or virtual coach, not different, by the way, from the wearables and the quantified-self tools that we already use in our phones or smartwatches - can really help.
And I think the way to do it is in an ethical way. You should definitely make sure that people understand what's being done with what data and for what purpose. You should allow them to opt in if they find it useful or not. You should minimise feedback that seems creepy or, you know, Orwellian or Big Brother-esque. And I think, you know, AI is difficult sometimes because it's either crappy or creepy, right? Either it doesn't work and you're like, 'Oh my God, these updates are telling me nothing.' Or sometimes it's like, 'Oh my God, how does it know this about me?'
And then, of course, there should be a benefit to the employee or the user, whether it's, you know, getting to understand themselves better, improvements in morale, productivity, etc. And I think if you do that, it can be not so much a surveillance tool but a productivity tool, or even a tool for understanding people, which is the main problem that organisations need to solve, right? Helping employees understand themselves and helping managers or leaders understand their talent.
Gayle Markovitz: I'm surprised, actually, that you've painted such a positive, potentially positive picture.
Tomas Chamorro-Premuzic: I know, I am a little bit naive - a bit naive in my optimism.
Gayle Markovitz: But, I mean, I'm not surprised because it is surprising. I'm just surprised because your book is quite negative. Or perhaps just realistic. There's a lot of hype around AI, and just yesterday there was that report that said that 300 million full-time jobs will be impacted. But I know that your angle is much more about how it affects relationships and wellbeing and social behaviour. So, what are the behavioural tendencies that you think AI has unleashed?
Tomas Chamorro-Premuzic: So here we go on to the dark side, and that's probably the bleakest part of the book.
And again, you know, I'm not trying to make predictions about the future. I'm talking about what we have seen so far, because no one has data on the future and we've seen many times predictions like the one you mentioned - 300 million jobs will disappear or whatever. Yeah, let's see. Usually these predictions are sensationalist and off.
What we have seen, however, is that in the last ten years the majority of people have interacted with AI through the platforms that it inhabits, where, via very, very impressive and quite predictive algorithms, it fuels traffic, visits and revisits to those platforms and hijacks our attention. And, you know, these are basically tools that minimise the effort required to make decisions. It's like we have this digital concierge called AI and we outsource a lot of our decision-making to it: what movies we watch, what music we listen to, whom we date, where we dine, where we go on vacation, what we buy, etc.
And I'm very interested in the behavioural impact this is having - spending so much time on these platforms and under the influence of AI. And there's evidence, to me, that it has made us more unfocused or distracted, more impulsive, more biased, more narcissistic. I mean, these platforms have basically normalised or democratised narcissism very, very clearly. It's not that they have made us narcissistic - we were already quite narcissistic - but it's gasoline on the fire. And also more boring and predictable, because we optimise all of our days and life to not think. It's like the first time in the history of humanity that we have to rethink what it means to be human. It's no longer 'I think, therefore I am'. We're now asking ourselves what it means to be human in an age where most of our thinking can be outsourced to machines.
And when that happens, the AI or the machines, in order to sell more accurate predictions to investors and marketeers and others, have an incentive to actually constrain the range of behaviours in which we invest.
It's just like with your friends. If you have a friend who is really unpredictable and you don't know what to buy them for their birthday or Christmas, where to go on vacation, what they would like, whatever, it's a pain. You have to think a lot, and every time it's a trial-and-error situation. But if you have a friend who is a bit OCD and predictable - you know, like Jack Nicholson in As Good as It Gets - it's very easy to relate to them, and you know where they're going to sit and what they're going to order. So AI is pushing us in that direction: the very monotonous, predictable, boring, repetitive direction.
We're quite happy to spend most of our day training AI to understand us better: just scrolling up and down, liking this, sharing that, resharing articles without even reading them, having AI autocomplete our messages, and now having a co-pilot at work that does all the thinking for us. And what do we do with the time we save? How do we ensure that we upskill and reskill? And are we actually becoming smarter and more creative, or is our intelligence becoming some kind of latent, dormant, passive muscle that we don't need to use anymore? Those are the questions, I think.
Gayle Markovitz: Do you think there's a role for employers to strategize about this?
Tomas Chamorro-Premuzic: Definitely. Let's start with HR, the people we mostly deal with commercially and professionally. They are HR directors, or people who are in the business of helping organisations understand people issues, which are the main problems that organisations have.
Today, if you're an HR professional, you need to have a minimum level of curiosity to understand AI - right now, take ChatGPT, which is just one tool. If you're dismissing it and you're not interested, or you are just, you know, assuming in a utopian way that it will make people happier, healthier and more productive but you don't understand the pros and cons, you're missing out on something that will definitely play a big role in the interactions and experiences people have with work.
And the same goes for this: just like 20 years ago, if you were in HR and you only did payroll or admin and you didn't understand the philosophical aspects of what's talent, what's potential, what's leadership, what's reskilling and upskilling, you were at a loss. Today you need to understand how people interact with technology.
With that come what I think are the two biggest imperatives for organisations today. The first is how to ensure that we reskill and upskill people, so that when old jobs go and new jobs are created, the people who lose their previous jobs have access to the new ones. It's not automatic, right? If you're a brick-and-mortar store manager and people start shopping online, you can't just become a cybersecurity analyst or an AI ethicist talking about the impact of AI on behaviour. So reskilling and upskilling is really important. In Europe it's addressed more by governments; in the US it's addressed more by companies.
And then the second one is, I think, to re-humanise work, to ensure that people actually have some fun, some joy, so that work can come close to doing the things that we actually claim it should: people finding a sense of purpose, identifying with a work persona, feeling proud of what they do, thriving. That's not going to happen if you just focus on productivity and optimise work for machines and not for humans.
And that's a big, big imperative. The opportunity, of course, is to use data to be more meritocratic and to sanitise some of the politics and nepotism that exists in organisations. But even if you do that, you have to do it with a human and humane touch. And this is very, very difficult because we can't expect HR leaders or executives to be moral philosophers or to be psychologists, but actually an understanding of these issues is, I think, very critical today.
Gayle Markovitz: And for individuals, how do you think they can navigate it? How do I keep hold of my humanity? I have to admit I asked ChatGPT what I should ask you today, and it wasn't a bad list of questions. So, what should I do, especially in an industry that's definitely under threat?
Tomas Chamorro-Premuzic: Yes, I think journalism and media is a really good one to pick.
What should you do? Let's take you and what you do as an example of a job. Number one: definitely don't dismiss this. Don't engage in self-enhancing reality distortion, saying, 'Oh, my God, I tried that and these are the mistakes. I'm so much better.' Because then you really are at risk, right?
And relatedly, don't see this tool as a competitive threat, but see it as an instrument that you can use to become more creative.
Third, find ways in which you reinvent what you do and how you work that provide the human touch and the human value.
To me, basically, ChatGPT is the intellectual equivalent of fast food. The fast food industry rose, is still very successful, and has made it very easy for people to stay in their chairs, order in, and eat vast amounts of highly caloric, not very nutritious or healthy food for a low amount of money. And, at the same time, it has given us the farm-to-table movement. It has not killed but enhanced demand for Michelin-star chefs. And it has basically highlighted the importance of eating healthily and understanding what we eat - to the point that even McDonald's would put the calories up and have salads.
So I think we have the ability to self-regulate on an individual level and a collective level. And fundamentally, what this means for most people, is to really focus on developing or harnessing the qualities that AI won't emulate or won't replicate: empathy, general curiosity, a deep desire to understand things and learn things. I'm always sad when the term 'deep learning' is associated with machines and not with humans because it should be a human quality.
And also, kindness, respect, caring for others, self-awareness. You know, if you ask ChatGPT whether it has self-awareness, it will say, 'No, I'm just a large language model,' which paradoxically makes it very self-aware, aware of its lack of self-awareness.
I worry about humans and their self-defensiveness. When I hear that people are just reporting the inaccuracies, let's remember this is version 4. What about version 14 or 15, if this keeps improving?
And people say, 'Oh yeah, but it's not creative, it's not funny, it's not self-aware, it's not empathetic.' My answer is always: neither are most humans by the looks of it.
So maybe this is a bit of a wakeup call for us to harness these qualities and understand that the IQ battle is probably lost against machines. But the EQ battle is where we have a unique opportunity to differentiate.
And of course, we need to find ways to benefit from this technology and leverage it to create something different.
When photography is invented, it doesn't kill visual artists. Visual artists start to use photography and you get pop art and Andy Warhol. When synthesisers are invented, it doesn't kill orchestra conductors and so on and so on and so on.
I have seen a lot of positive activity with this tool, whereby people share funny things they make it do. Obviously, writing the right prompts requires an understanding of how it works, creativity, ingenuity, etc. So, you know, I think that's the glass half full. And then there is the glass half empty, which we covered as well.
Gayle Markovitz: I'm always fascinated by how things don't always play out to the logical conclusion. And I just wondered if you foresee, I know you don't like predicting the future, but if you do foresee any curveballs. There's a lot of hype with generative AI now. Do you think some of these language models are going to be just a little blip in the history of AI, or do you think it's really going to fundamentally change things?
Tomas Chamorro-Premuzic: This is a really, really smart question. And again, I'm uncomfortable predicting things, especially if they're less than, you know, 50 years out - because in 50 years everybody will be dead and nobody will check, so it's fine. But if it's in my lifetime, I don't want to do it.
But look, it's a really, really smart question. And just out of intuition, I would say it is overhyped. I think mostly because the data science that underpins it is not groundbreaking. It's not a tipping point. It is a combination of different things that were there. And what it has done really well is the user interface and the user experience. It's very Her-like, like the movie, right? Minus Scarlett Johansson, of course, which you know, is disappointing for a lot of people.
But I think that probably in terms of accuracy rather than speed, it has a ceiling. We may not have reached it, but it's not going to be, oh my God, artificial general intelligence or singularity. I don't think so.
But I do think that - and we're not going to become Luddites or technophobes either - this is going to be there. And of course there is a business interest in pushing it, and Microsoft is already there; it's a brilliant move from a commercial perspective. I do feel somewhat optimistic that it will create that appetite for the analogue world, because we will value the things that we can't do on these platforms and with these tools.
So maybe, you know, maybe it's a utopian vision of the future where there is a little space in the universe made of 3D objects and physical kind of things far away from the metaverse, where nuanced analogue people meet and have deep discussions about what it is to be human and enjoy their company without checking their phones or multitasking or having Neuralink in their brains.
I mean, that's what I'm hopeful for. But maybe I'm of a certain age - maybe I'm showing that I'm already in my mid-life crisis and nostalgic for the analogue world that I grew up with. And, you know, when I tell this to my students, they're like, 'What is this guy going on about?'
Robin Pomeroy: Tomas Chamorro-Premuzic, Chief Innovation Officer at ManpowerGroup. His book is called I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. He was speaking to Gayle Markovitz.
We'll have lots more on the two topics raised in this episode, artificial intelligence, and the world of work, in forthcoming episodes of Radio Davos, including coverage of the World Economic Forum's Future of Jobs report, that's out in a few weeks.
Ensure you don't miss any of that by subscribing wherever you get your podcasts. And if you like what you hear, we would really appreciate it if you could click to give us a rating and maybe even write us a small review. And to discuss anything you heard here or to discuss any of your favourite podcasts, please join us on the World Economic Forum Podcast Club. That's on Facebook.
This episode of Radio Davos was written and presented by me, Robin Pomeroy. Studio production was by Gareth Nolan. We'll be back next week, but for now, thank you for listening and goodbye.