It was the data breach that launched a thousand think pieces.

A sea of ink has flowed, and billions of bytes have streamed, in response to the recent Facebook/Cambridge Analytica scandal. The practices of both companies reveal a pervasive problem with how our data is collected, shared and used to manipulate us.

But the issues raised go beyond these two companies, and beyond how we are marketed to by corporations and profiled by political parties. The data we disclose with our clicks, links, posts and emails is also used to shape important human-resource decisions – who is hired, fired or promoted, what pay incentives we receive, and more. So much so that some are asking: does big data take the human out of human resources?

Just take Vera, the AI software designed by the Russian start-up Stafory to cut recruitment costs, interview candidates and narrow the applicant pool to the top 10%.

Or consider this example from a US call centre, where employees of an international banking giant reported that analytics software monitored every word they spoke and scored them on the friendliness of their tone. The bank used this information to judge workers' performance, and a bad score could lead to financial penalties, discipline or even dismissal. Workers said there was little to no recourse to correct mis-scorings, even when the software demonstrably failed to recognize certain speech patterns, accents or impediments.

As workers, we generate data, and this data has value: our CVs, biometric data such as fingerprints or iris scans, our geographical location, our interpersonal networks, and the abundant data mined as employers monitor our workflows, breaks, routes and even keystrokes. All of it can be sold and analyzed for use in marketing, advertising and HR.

Yet workers across the world – with the exception of those in the EU, who will soon benefit from the provisions of the upcoming General Data Protection Regulation – have no control over the mountains of data their employers collect on them. Nor do workers know what data employers actually use to hire, fire or discipline them. Are your health records, social media connections, union membership, financial history or political beliefs determining factors in your employment prospects?

Filling the regulatory gap

To address these issues, combat insecurity and propose sustainable ways forward, UNI Global Union has issued the "Top 10 principles for workers' data privacy and protection". One key principle is that workers must have the right to access the data collected on them, including the right to have that data corrected, blocked or erased. This data should also be portable – a demand especially important for platform workers, who have hundreds of hours of equity in their ratings.

Another critical principle is the "right to explanation", meaning that workers must be able to see what data employers are collecting and how it informs key management decisions. Without this right, there are inadequate checks and balances on management decisions, and no way to verify whether employers are using data in ethical, non-discriminatory ways.

UNI also offers guidelines on biometric data, data transparency, the use of location tracking through so-called wearables, and the establishment of company-wide data governance committees.

UNI General Secretary Philip Jennings comments: “Data collection and artificial intelligence are the next frontier for the labour movement. The new generation of tech-savvy union members and leaders are not willing to let inhuman algorithms set the new rules. Just as unions established wage, hour, and safety standards during the Industrial Revolution, it is urgent that we set new benchmarks for the Digital Revolution.”