The question of how to calibrate a proportionate response to potential or incipient security threats is one of pressing significance for contemporary politics and democracy.
Among legal practitioners, for example, the question of proportionate security measures is a complex one, with human rights lawyers concerned that “proving something beyond reasonable doubt is in tension with security systems that learn to act where there is doubt”. Here a gap is widening between technological security capabilities and the conventions and language of public and juridical discourse. In our times these questions have often found a public response in ideas of privacy and data protection – of “keeping safe” the relation between a data subject and a body of identifiable data.
In many contemporary data-led systems, however, the data become decoupled from the person, often stored in a ‘passive’ database and retrieved for the anonymised writing of new algorithmic rules for future people. Put simply, long after my data ceases to be strictly my own, it lives on to have effects on others. What, then, is the space of the public in a world of fragmented and reassembled data elements?