Words by Ana Obradovic
We know our data is increasingly being collected, combined and algorithmically analysed. The Snowden leaks demonstrated how deep the spying (sorry, “data collection”) goes. Just like in Minority Report, predictive policing, targeted advertising, and facial and iris recognition abound.
The predictive capacities of new technology can infer information about us to such an extent that the tech seems to know more about us than we do. Take, for example, the revelation that Target can, by tracking customer purchases, work out whether a customer is pregnant. In an infamous case, the department store correctly inferred that a teenage girl was pregnant before she had chosen to reveal that information to her family.
A more serious example is predictive policing. Just like in Minority Report, police can now use algorithmic analysis of crime statistics to choose which districts to target, and which citizen “profiles” are most at risk of offending. This is undemocratic, and it disproportionately targets the poor and underprivileged. Poor, black, and from a “bad” neighborhood? Increased “random” checks and comparatively harsher punishments for the same crimes are now your daily reality.
These decisions are not based on careful assessment of personal character. In the pursuit of effective governance (here, the ideal of lowering crime rates to zero, as in Minority Report), algorithms are used to make inferences about possible future behavior. Unfortunately, these inferences rest on the profiles of strangers who, for opaque reasons, are deemed similar to one’s “type”; in this case, they decide whether the person in question is likely to reoffend.
The point here is that new technology assembles information in ways that can produce discriminatory or unfair conclusions. These inferences can have serious effects on opportunities in life, and are often based on arbitrary correlations. An employer may choose not to hire a candidate simply because they submitted their resume via Internet Explorer instead of Chrome (apparently, workers who don’t use their computer’s default browser are more “proactive”). A bank can refuse a loan based on arbitrary inferences that mark an otherwise respectable client as a “risk”. Health insurers may raise premiums for the same service based on private information gathered from sources like FitBit data (how active is the client?) or even private medical records. This is why we should be critical of data centralisation initiatives like My Health Record.
Governments, corporations and researchers currently take advantage of Big Data’s potential in the apparent pursuit of better governance, services and public goods. The logic is simple: the more we know about the world around us, the better we can apply that knowledge to make improvements.
But more data collection in increasingly wider fields also means more surveillance. And while surveillance can be validated in the name of providing better security (national or otherwise), it’s important to ask: who really benefits when such capabilities go unchecked?
Understanding how something (or someone) functions makes it easier to control. Just like in Minority Report, advertising can be tailored to precisely fit your digital profile. In our capitalist society, mental manipulation regulates consumer choices, while algorithmic filtering of the news stories seen online fosters misinformation amongst citizens who, unwittingly, find themselves in informational “filter bubbles”. The result is less informed citizen participation in democratic decision-making, and less effective scrutiny of those in power. Transparency and accountability, two essential features of democratic governance, are lost.
Our interactions, and our private information, leave permanent digital traces that can be tracked and combined to create extensive profiles of who we are, what we do, and, most disturbingly, how to get into our heads. Often we actively sign that information away in private legal contracts (end-user license agreements, or EULAs) that we don’t actually read and can’t do much about anyway: the choice is between accepting creepy and invasive terms of service, or abstaining entirely from platforms like Facebook, LinkedIn and Gmail. Considering these services have become so integral to modern daily life, it’s worth asking whether we have a choice at all. If we can’t choose the terms of our “consent”, and are effectively coerced into consenting by the near-compulsory nature of such platforms, should such unfair terms of service be allowed to exist?
The European Union’s General Data Protection Regulation (GDPR) attempts to address these issues. But while this legislation marks a move towards a more democratic use of Big Data, there’s a long way to go, especially in Australia, where it does not apply.
A pertinent case is the My Health Record initiative, which raises several serious issues. First, as an opt-out service (rather than opt-in), it undermines the democratic necessity of informed consent. How many citizens now have permanent records of personal health information, collected without their even realizing it or understanding the extent of what is kept?
Second, the extensive data collected will be made available for research. The problem is that several studies have shown that anonymising personal data for research purposes is complicated and often ineffective: supposedly “anonymous” records can sometimes be cross-referenced with other datasets to reveal the identity of the person to whom they belong.
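To see how re-identification can work in practice, here is a toy sketch of a “linkage attack”: matching an “anonymised” dataset against a public one on shared quasi-identifiers (postcode, birth year, sex). All names and data below are invented for illustration; real attacks of this kind have used electoral rolls and similar public records.

```python
# An "anonymised" health dataset: names removed, but quasi-identifiers
# (postcode, birth year, sex) retained for research use.
health_records = [
    {"postcode": "2000", "birth_year": 1985, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "3121", "birth_year": 1972, "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g. an electoral roll) listing names alongside
# the very same quasi-identifiers.
public_roll = [
    {"name": "Jane Citizen", "postcode": "2000", "birth_year": 1985, "sex": "F"},
    {"name": "John Voter", "postcode": "3121", "birth_year": 1972, "sex": "M"},
]

def reidentify(anonymous, public):
    """Match records on quasi-identifiers; a unique match recovers an identity."""
    key = lambda r: (r["postcode"], r["birth_year"], r["sex"])
    matches = []
    for record in anonymous:
        candidates = [p for p in public if key(p) == key(record)]
        if len(candidates) == 1:  # unique match -> identity recovered
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(health_records, public_roll))
# → [('Jane Citizen', 'diabetes'), ('John Voter', 'asthma')]
```

The point of the sketch is that no single field identifies anyone, yet the combination of a few innocuous attributes often does, which is why stripping names alone is not real anonymisation.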
Finally, the security of the information is questionable. Given the recent hack of the Australian Parliament House computer network, how confident can citizens be that their information will be kept safe? When information is centralised like this, a single successful attack is enough to expose every private record at once.
My Health Record marks a further loss of citizen control over private information. The overall trajectory of modern society is towards deeper and deeper information collection, which amounts to more invasive citizen surveillance. We know mass surveillance affects citizen freedom and autonomy: life opportunities may be limited, and citizens manipulated into behaving in certain ways.
Predictive governance, whether in the form of overtly predictive, Minority Report-style policing or of more insidious governmental and corporate coercion, affects us all. We should all be paying attention. It’s important that we remain critical of the undemocratic implications of modern Big Data initiatives, including those with apparently benevolent intentions, like My Health Record.