The state has always been engaged in security-related activities, but these have changed over time, and at an accelerating pace in the twenty-first century. Such activities include surveillance, understood here as any acquisition and analysis of personal data for purposes of management, influence or entitlement. Today, state activities cannot be considered without noting the role of data flows between private corporations and government agencies, and the part played by new technologies themselves — especially as artificial intelligence is promoted to a leading role.
The upshot is that public trust is threatened as governments become preoccupied with issues that do not strike citizens as central to their own security, and as data breaches and undemocratic practices proliferate. New methods of data analytics demand new approaches to how data is framed, analyzed and used. A duty of care is vital, encompassing the sources of data and their curation, the algorithms used for analysis, and the uses permitted for those data.
Back Story
In the long history of surveillance, the state has always been the key player. Some notion of security has been a central rationale. Externally, surveillance relates to geopolitical and military purposes or commercial advantage. Internally, surveillance might be pursued for the pacification and administration of the population — from electoral rolls to health care and welfare provision.
Since the mid-twentieth century, surveillance by state agencies has expanded enormously, driven by the Cold War, anti-terrorism activities, and new enabling technologies. The very technologies invented for military use became the backbone not only of state surveillance, but also of industrial enterprise and everyday commercial and personal activity.
The internet, invented as a Cold War communication network, became public and commercialized in the 1990s. Web 2.0 followed, facilitating user-generated content. The network soon extended into the Internet of Things, embedding surveillance in all kinds of objects — from buildings and cars to vacuum cleaners and fridges. Data is now "skimmed off" from mundane practices including driving, shopping, and sending messages.
"The very bedrock of democratic involvement — trust, based on an informed citizenry — was being eroded from within."
Since the late twentieth century, with the rise of neoliberal policies, the relationship between state agencies and commercial corporations has deepened. Corporations supply the know-how and equipment for monitoring, while the data desired for policing and intelligence originates in ordinary online exchanges, searches, interactions, and phone calls. Public-private partnerships that have been developing since the 1980s and 1990s are now normalized and commonplace.
Rapid Developments
The events of 9/11 represented a crucial shift. The rapid securitization of many aspects of government and everyday life in the name of anti-terrorism is now seen as normal. Much of this depended on the intensified deployment of information technologies from companies that, at the end of the twentieth century, had been feverishly seeking new markets. Biometrics, for instance, which had been languishing as an idea without an application, suddenly appeared to offer vital and reliable support for identifying suspects.
The Edward Snowden disclosures of June 2013 demonstrated beyond doubt that the global security-surveillance network was in high gear. Government agencies were making extensive use of personal telephone and internet data, and it was increasingly hard to distinguish between internal and external surveillance. The crucial category was metadata — the details of where and when communications or transactions occurred, between whom, and so on. Trust was further eroded by official denials that metadata was consequential, even though it comprised the same data a private detective would seek: highly personal, just not of the older date-of-birth or street address type.
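To make concrete why metadata can be as revealing as content, consider a minimal sketch (all names and records here are hypothetical, not drawn from any disclosed program): even a handful of call records containing only who contacted whom, and when, lets an analyst reconstruct relationships and infer highly personal facts.

```python
from collections import Counter

# Hypothetical call-metadata records: (caller, callee, hour_of_day).
# No message content is present, yet patterns are easy to recover.
records = [
    ("alice", "clinic", 9), ("alice", "clinic", 9), ("alice", "clinic", 10),
    ("alice", "bob", 22), ("alice", "bob", 23),
    ("carol", "bob", 14),
]

# Who contacts whom, and how often: the social graph falls out directly.
pair_counts = Counter((caller, callee) for caller, callee, _ in records)
print(pair_counts.most_common(1))   # -> [(('alice', 'clinic'), 3)]

# When contact happens is equally telling: repeated morning calls to a
# clinic suggest an ongoing medical matter, without a word of content.
clinic_hours = sorted(h for caller, callee, h in records if callee == "clinic")
print(clinic_hours)                 # -> [9, 9, 10]
```

Run over billions of rows of national telephone records, the same two queries yield social graphs and behavioural profiles for an entire population — which is why official assurances that "only metadata" was collected did little to restore trust.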
In Canada, the Communications Security Establishment adopted a new analytic method from about 2012, described in "scientific revolution" terms. The switch was made from suspicion-led to data-driven approaches, heavily dependent on computing power and algorithmic analytics. Communications were to be monitored and analyzed to discover patterns producing actionable intelligence.
Big Data Surveillance
New relationships mean that once-distinct public and private entities now shade into each other. Government works closely with businesses and research groups, and there is also a sense in which the technological systems themselves participate, especially as AI and machine learning become more significant. Such developments challenge conventional modes of scientific and technological practice, and the time-honoured approaches to policing and security.
Today, huge amounts of data are sucked into systems that store, combine, and analyze them, to create patterns and reveal trends used for security, health, marketing, governance, and many other areas. This is a worldwide trend, seen in the business models of global IT companies — now often described as surveillance capitalism — and also in the programs and activities of national security agencies. This major shift from causation to correlation, from explaining why something happens to merely detecting what tends to co-occur, raises many questions for privacy and data protection regimes, which sometimes seem to be sidelined.
Each key feature of big data surveillance relates to further issues: how big data practices exploit loopholes in current privacy laws, how security is mobilized as a permanent rationale for increased surveillance, and how new channels of power and influence disproportionately disadvantage certain population groups. This is clear in "predictive policing," in which scrutiny of those already under suspicion is intensified, and the influence of race, class, and neighbourhood is magnified through big data practices — with deleterious effects on public trust.
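The feedback dynamic behind this magnification can be sketched in a few lines. This is a toy model with hypothetical figures, not any agency's actual algorithm: if patrols are sent wherever the most incidents are already recorded, and incidents are recorded only where patrols are present, a small historical imbalance between otherwise identical districts becomes self-confirming.

```python
# Two districts with the same true incident rate; only the historical
# record differs (hypothetical figures throughout).
records = [12, 10]      # recorded incidents per district so far
true_rate = [10, 10]    # identical underlying incident rates

for period in range(5):
    # "Hotspot" rule: patrol only the district with the most records.
    hotspot = records.index(max(records))
    # Incidents are recorded only where officers are present to see them.
    records[hotspot] += true_rate[hotspot]

print(records)  # -> [62, 10]: a gap of 2 has grown into a gap of 52
```

District 0 is never actually more crime-prone; the rule simply keeps confirming its own past output. Where historical records already reflect race, class, or neighbourhood bias, this loop entrenches it.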
"How far can such new methods be trusted, especially when they carry such heavy freight of responsibility for people's choices, life chances and even human life itself?"
Confronting New Questions
The large question to be addressed has to do with data governance. This is closely connected with questions of trust and ethics, in relations with both the state and corporations, in all their early twenty-first-century complexity. Trust has been deeply damaged in both corporate and governmental domains, due to data breaches, surveillance overreach, unfair outcomes in policing and security, and disturbingly protective secrecy. Data governance should not be seen in only a technical or legal sense; data justice in data governance would align it with human flourishing and the common good.
Countries cannot expect to advance their strategic and economic interests, let alone foster human flourishing, without rebuilding trust. If citizens suspect that security in practice centres on governmental, economic, or technological activities and systems alone, trust is once again threatened. However, if those interests are seen to be under an umbrella of human security — where personal, communal, and environmental protection are the focus rather than states or national security — this will help to recover trust.
As far as analysis is concerned, duties of care are required both in data collection and curation, and in the use of algorithms central to any analysis. Both internal audits and external reviews should be guided by the duty of care. If analysis involves profiling and/or automated decision making, tight regulation is called for. Democratically organized oversight functions are vitally needed at each level.
Conclusion
The "state and surveillance" is a far more complex equation than it may at first appear. Developments in political economy — neoliberal public-private partnerships — and in new data-enabled practices of analytics, machine learning, and AI all complicate relationships. This makes it hard to know what exactly transpires within the agencies — security and policing — that are early adopters of new technological and analytic styles of operation.
While genuine benefits may well emerge from new analytic methods or from predictive policing, current trends indicate there is a significant trust deficit and a sense of unfairness, in both procedures and outcomes.
Requirements for data justice, as well as for greater transparency, accountability, and oversight, need to be part of programs to ensure appropriate data governance. This, too, is only a means to other, societally more significant aims — those of seeking to deepen trust and thus human flourishing and the common good.