Self-driving cars: a spy on every street?

David Lindsay

Autonomous vehicles, or self-driving cars, are likely to be seen more widely on roads in 2015.

Already, legislation authorising the use of autonomous vehicles has been introduced in the US states of Nevada, Florida, California and Michigan, with similar legislation being planned for the UK. To date, these laws have focused on legalising the use of autonomous vehicles and dealing, to an extent, with some of the complex issues relating to liability for accidents.

But as with other emerging disruptive technologies, such as drones and wearables, it is essential that issues relating to user privacy and data security are properly addressed prior to the technologies being generally deployed.

Understanding autonomous vehicles

There is no single, uniform design for autonomous vehicles. Rather, it is best to understand an autonomous vehicle as a particular combination of applications, some of which – such as adaptive cruise control, lane departure warnings, collision avoidance and parking assistance – are already part of current car design.

The best-known prototype, Google’s self-driving car, uses a variety of technologies, including a laser range finder (LIDAR) that generates a detailed 3D map of the environment, radar, cameras for detecting traffic lights, and GPS. Other projects, including prototypes being developed by Mercedes-Benz, Volkswagen, Toyota and Oxford University, use different combinations of technologies.

This means that the privacy and data security problems arising from autonomous vehicles depend upon the precise technologies applied in any particular design. Some generalisations are, however, possible.
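
As a concrete illustration of such a configuration, the sketch below (in Python, with entirely hypothetical class and field names – no vendor's actual software is implied) shows how readings from a LIDAR, radar, cameras and GPS might be fused into a single snapshot of the vehicle's surroundings.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LidarScan:
    """A 3D point cloud from the laser range finder (x, y, z in metres)."""
    points: List[Tuple[float, float, float]]

@dataclass
class RadarTrack:
    """A moving object detected by radar: bearing, range and closing speed."""
    bearing_deg: float
    range_m: float
    speed_mps: float

@dataclass
class EnvironmentSnapshot:
    """One fused view of the surroundings at a given moment."""
    timestamp: float
    gps_position: Tuple[float, float]       # latitude, longitude
    lidar: LidarScan                        # detailed 3D map of the scene
    radar_tracks: List[RadarTrack] = field(default_factory=list)
    traffic_light_state: str = "unknown"    # from the camera pipeline

def fuse(timestamp, gps, lidar, radar_tracks, light_state) -> EnvironmentSnapshot:
    """Combine per-sensor readings into a single snapshot for the planner."""
    return EnvironmentSnapshot(timestamp, gps, lidar, radar_tracks, light_state)
```

Every field in that snapshot is also a record of where the vehicle was and what surrounded it – which is where the privacy questions begin.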

The relationship between the virtual and the real

The rules (or “code”) governing the online world have always differed from those that apply offline. For example, online activities invariably generate digital traces, including metadata, which can be used to build profiles of users.

With emerging technologies, such as drones, wearables and autonomous vehicles, we are increasingly seeing the transposition of virtual models onto the real world. One consequence of the range of sensors and data collection devices being deployed (and interconnected) is that our offline activities can leave traces at least as extensive as those generated online.

One way to distinguish types of autonomous vehicles is by reference to the kind of data collected and the ways in which that data is processed. For instance, autonomous vehicles often incorporate event recorders, or “black boxes”, to provide essential information in the event of an accident. This raises questions about who holds rights in this data and who may access it.
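
As an illustration only, the sketch below assumes a simple design in which the recorder keeps a short rolling window of the most recent records; the window size and the trigger for freezing it are invented for the example.

```python
from collections import deque

class EventRecorder:
    """A 'black box' keeping only the most recent N records in memory.

    Who may read dump()'s output (the owner, the manufacturer, an insurer,
    a court) is exactly the access question raised above.
    """
    def __init__(self, window_size: int = 600):
        self._buffer = deque(maxlen=window_size)  # old records drop off automatically

    def record(self, snapshot) -> None:
        self._buffer.append(snapshot)

    def dump(self) -> list:
        """Freeze the current window, e.g. when a collision is detected."""
        return list(self._buffer)
```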

Anonymising data

There is an overlap here with questions of liability, as insurance companies have clear incentives to collect as much data about user behaviour as possible. The potential for intrusive surveillance of personal activities is particularly jarring, as the car has been an archetypal space of personal privacy and freedom.

A fundamental distinction must be drawn between self-contained autonomous vehicles, in which the data collected from sensor devices installed in the car are stored and processed in the vehicle itself, and interconnected vehicles, in which data is shared with a centralised server and, potentially, with other vehicles.
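
This distinction can be expressed as a design choice. In the hypothetical sketch below, both vehicle types implement the same storage interface, but only the interconnected variant ever transmits data off the vehicle.

```python
from abc import ABC, abstractmethod

class TelemetryStore(ABC):
    @abstractmethod
    def store(self, record: dict) -> None: ...

class SelfContainedStore(TelemetryStore):
    """Self-contained design: data never leaves the vehicle."""
    def __init__(self):
        self._local = []

    def store(self, record: dict) -> None:
        self._local.append(record)  # on-board storage only

class InterconnectedStore(TelemetryStore):
    """Interconnected design: data is also pushed to a central server."""
    def __init__(self, upload):
        self._local = []
        self._upload = upload  # e.g. a function posting to the operator's server

    def store(self, record: dict) -> None:
        self._local.append(record)
        self._upload(record)  # this hop is what exposes the data to third parties
```

The in-car behaviour is identical in both cases; the privacy exposure is radically different.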

Regardless of whether a vehicle is self-contained or interconnected, design decisions must be made about whether the data collected is anonymised or linked to individual users. If the data is not anonymised, it poses serious surveillance threats, especially with interconnected vehicles. After all, once the data exists, and especially if it is connected to a server, it is vulnerable to access by third parties.
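
As a sketch of what anonymisation might amount to in practice (the field names are hypothetical), the function below strips direct identifiers and coarsens location before a record leaves the vehicle – noting that even this is no guarantee against re-identification from movement patterns.

```python
def anonymise(record: dict) -> dict:
    """Remove direct identifiers and coarsen location before any upload.

    Rounding GPS to two decimal places (roughly a 1 km grid) and dropping
    the VIN and user ID reduces, but does not eliminate, the risk of
    re-identifying a driver from their traces.
    """
    lat, lon = record["gps"]
    return {
        "timestamp": record["timestamp"],
        "gps": (round(lat, 2), round(lon, 2)),  # ~1 km grid cell
        "speed_mps": record["speed_mps"],
        # deliberately omitted: "vin", "user_id", exact route history
    }
```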

It is possible to envisage implementations of autonomous vehicles where data about a particular user is linked to other data sources, such as an online profile, for purposes such as tracking or marketing. This might take the form of personalised advertising displayed in the car, or even adjusting a vehicle’s route so that it passes retail outlets which match a user’s imputed preferences.

What else is at stake: human autonomy and hacking

We are now familiar with online technologies, such as predictive search, that attempt to predict what we want to do and make more or less persuasive suggestions.

It is likely that some versions of autonomous vehicles will implement predictive technologies. In any case, the progressive delegation of human decisions to machines raises system-wide questions about the cumulative impact on human autonomy: the more people are habituated to decisions being made for them, the less likely they may be to make their own decisions.

We are also now depressingly familiar with the vulnerability of computer systems to malicious third parties. Just as effective data security is essential to online safety, autonomous vehicles must be designed with a high level of data security, especially given the potentially calamitous consequences of hacked vehicles. As interconnected data processing systems are progressively rolled out in applications such as wearables and autonomous vehicles, we seem likely to see an offline version of the perpetual guerrilla warfare already played out online between information security practitioners and hackers.
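
One baseline defence, sketched below using Python's standard-library hmac module (key provisioning and transport security are out of scope, and the key shown is a placeholder), is to authenticate every message exchanged between vehicle and server so that tampered or spoofed commands are rejected.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"example-only-not-a-real-key"  # in practice, provisioned per vehicle

def sign(message: dict) -> bytes:
    """Compute an HMAC-SHA256 tag over a canonical encoding of the message."""
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify(message: dict, tag: bytes) -> bool:
    """Reject any message whose tag does not match; tampered commands fail here."""
    return hmac.compare_digest(sign(message), tag)

cmd = {"vehicle": "demo-1", "action": "set_route", "to": "depot"}
tag = sign(cmd)
assert verify(cmd, tag)                              # authentic message passes
assert not verify({**cmd, "to": "elsewhere"}, tag)   # altered message fails
```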

Protecting privacy at the design stage

Autonomous vehicles promise significant social and economic benefits, especially in potential improvements to road safety. There are, nevertheless, considerable legal and regulatory challenges. As with other emerging disruptive technologies, it is vital that privacy and anonymity be properly protected at the design stage.

To date, in the face of significant challenges relating to the legality of autonomous vehicles and liability issues, the privacy rights of users have been relatively neglected. But unless the era of artificial intelligence is to be accompanied by our sleepwalking into ubiquitous surveillance, we must recognise that safety and security need to be balanced against the legitimate rights of people to control their own data and to retain their fundamental rights to privacy.

Published in collaboration with The Conversation

Author: David Lindsay is an Associate Professor in the Faculty of Law, Monash University where he teaches cyberlaw, copyright and trusts. He is the General Editor of the Australian Intellectual Property Journal (AIPJ) and a board member of the Australian Privacy Foundation (APF). David has published widely in the areas of technology law, copyright, privacy and data protection, and media law.

Image: A sensor is seen spinning atop a Google self-driving vehicle before a presentation at the Computer History Museum in Mountain View, California May 13, 2014. REUTERS/Stephen Lam
