There’s a conversation I often end up having with innovators and product developers, and it goes something like this:
“Our product does what it’s designed to, and it’s safe. Yet the public isn’t scientifically literate and rational enough to understand this. Our biggest fear is that they’ll reject it because they don’t understand it”.
I have some sympathy for this perspective - as a scientist myself, I can appreciate how easy it is to assume that a lack of “scientific thinking” leads to an irrational rejection of technology.
Yet this assumption is dangerously misleading. It suggests the false conclusion that whether or not we accept technology is simply a matter of understanding, and that the solution to the problem is more and better education.
The reality is far more complex. Failing to understand why some people are concerned about emerging technologies, and how those concerns might be effectively addressed, can spell disaster for new technologies and the products that use them.
The fear of public fear
In 2002, Michael Crichton’s fast-paced techno-thriller Prey was published. In it, out-of-control nanotechnology threatens life as we know it. It’s a familiar trope that stems back to Mary Shelley’s Frankenstein: an ambitious, naïve scientist tinkers with a technology they don’t understand, and chaos ensues.
I was a part of the U.S. Federal Government’s National Nanotechnology Initiative (NNI) at the time, and vividly remember deep concerns that the book would undermine public trust in nanotechnology. Things got even tenser when it was announced that 20th Century Fox had bought the movie rights (although the movie was never made).
As it turns out, these fears were largely unfounded—there wasn’t a strong public backlash against nanotechnology following the book’s publication. But this didn’t stop proponents of nanotechnology worrying about it.
We are all members of “the public”
The story sticks with me because it so closely echoes concerns around many other emerging technologies, from autonomous vehicles and artificial intelligence, to gene editing and the Internet of Things.
In part, I suspect, fears like this are fueled by a long history of technological "mishaps" including asbestos, thalidomide, the chlorofluorocarbons that caused a hole in the ozone layer, and global warming as a result of CO2 emissions. Innovators convince themselves that such blunders won’t happen on their watch, but they’re not sure how much others will believe this.
There’s also the underlying assumption that new technologies are too complex for members of the public to understand. Of course, it’s true that if you ask someone on the street to explain the science behind gene editing, or the fundamentals of natural language processing, you’re not likely to get an accurate answer. But then again, if you were to walk into a random academic department at a top university, chances are you’d get a similar response.
And this underlines the fact that, outside our immediate area of expertise, we’re all members of this entity we like to call “the public”.
Lessons from Michael Crichton's Prey
In reality, scientific and technological understanding are poor indicators of someone’s ability to play a role in responsible decision-making.
Here, what is more important is how an emerging technology might potentially impact people, and their ability to both understand this, and have a voice in developing technologies that are responsive to their needs and concerns.
This has less to do with scientific literacy, or level of education (although both have their place), and more to do with different groups and organizations working together to make sure that new technology is genuinely useful.
In the case of Michael Crichton’s Prey, advocates of nanotechnology made the mistake of treating the public as a barrier to success, rather than as potential partners in success. The same could be said of the disastrous introduction of genetically modified crops into Europe in the 1990s, when they were promptly labelled as “Frankenstein foods”.
Instead, if technology innovation is approached as a process of co-creation with different groups - including the public - things begin to look very different.
For a new technology to be accepted and adopted within society, communities that are potentially impacted by it need to understand the benefits and risks to them, and how they can play a role in increasing the former while reducing the latter.
Engaging people in technology innovation
Effective engagement like this requires members of the public and others to have access to clear, relevant and trustworthy information on the potential risks and benefits of a technology - including who stands to gain and lose, and how this balance might be shifted.
It also needs to be built on multi-way engagement between developers, manufacturers and stakeholders that enables individuals and communities to have a say in how new technologies are developed and used. And it demands respect between those who are developing new technologies, and those potentially impacted by them.
The bad news is that most people are too busy to be actively involved, which raises a number of challenges. Here, we’re still learning what works and what does not. But there are some basic lessons to build on, including the importance of listening to all stakeholders, working with organizations that specialize in bringing different groups together, and ensuring that accurate information on the impact of new technologies is available.
Connecting with casual learners
To me, this last point is especially interesting. Although most people are too busy to be formally engaged in technology innovation, many are nevertheless casually interested in what’s coming over the horizon, including what new tech does, how it works, and how it might affect them.
Ensuring these “casual learners” have access to credible, trustworthy, relevant and understandable information is essential to getting people involved. And yet, such resources are often hard to come by.
I came up against this recently in the area of nanotechnology. For nearly two decades, billions of dollars have been invested globally in designing and engineering materials at close to the scale of individual atoms. This is science, engineering and technology that has the potential to profoundly impact our lives - in fact, this year’s Nobel Prize for Chemistry was awarded to three scientists working in the field. Yet as I describe in the journal Nature Nanotechnology, it’s surprisingly difficult for casual learners to find clear and useful information online about what the technology can do, and what the implications might be.
Creating better resources for casual learners would be a major step toward building bridges between new technologies and members of the public. But it’s just one step toward developing new technological capabilities that are successful because they are socially responsive.
The reality is that people are pretty savvy when it comes to accepting or rejecting technologies based on the perceived value to themselves and their communities. Recognizing this, and finding better ways to partner with members of the public rather than fighting them, is likely to pay substantial tech innovation dividends in the long run.