As a reminder, this is part of a series of write-ups based on a class I’m teaching at Tufts University called Data and Power: Deconstructing Surveillance. If you would like more of a description of what the class is, check out the post from Week 1. If you want to keep up with this series, you can subscribe to receive email updates or follow along on Medium, where the write-ups are posted!
This week is a different one: instead of reading new content and grappling with new concepts, we are taking the week to create small projects analyzing surveillance systems. Students are picking an interesting surveillance measure and deconstructing it along the lines we have studied thus far: its capacity for exploitation, how it was justified, its normalizing power, any tendency to normalize along oppressive lines, and more. Next week, I will share back some of the projects that especially interest me, and I cannot wait to see what everyone comes up with.
But this short break from new content presents an important opportunity for reflection. Students are taking the time to synthesize all that we have examined so far, and packaging it into an analysis. We can take the same opportunity to make connections between the concepts and examples we have studied up to now, and even pose some questions that may be answered down the line as we continue.
What is technological surveillance?
We have read so many different accounts of surveillance technologies: algorithmic surveillance on social media, panoptic surveillance, Lantern Laws, the Amazon warehouse scanner. They are all tied together by their capacity to allow some to observe others. But Browne reminds us that surveillance is not just observation; it is viewing of the less powerful by the more powerful. Foucault noted its frequent use in a process of normalization: making individuals mold themselves to a certain expectation, or “norm,” for fear of disciplinary action against them. And we noted how many of these normalizing pressures and disciplinary actions fall along typically oppressive lines, where those disadvantaged by race, class, gender, and more are unjustly examined and judged as needing discipline.
When we get at the core of surveillance, we see that it is a key piece of a process that exerts power. Technology is also a tool that lends power: power to overcome “natural” barriers by manipulating oneself or one’s environment. Browne’s citation of Gary Marx’s theorizing about the “new surveillance” is notable here: digital systems radically shift the barriers that previously constrained large surveillance systems, including limits on information storage, the speed of sharing information, the ability to remember, and even the raw capacity to perceive, once bounded by human eyes. In this sense, we must recognize that we live in a world whose technology enables an incredibly pervasive system of surveillance never before possible. Those who create and wield surveillance technologies are powerful in ways previously incomprehensible.
The consequence of such pervasive and powerful surveillance is that we are all subject to new pressures of normalization through disciplinary power. Companies constantly try to mold our purchasing behavior by manipulating our emotions through advertising. Governments keep tabs on every individual in ways previously thought impossible. Police and organizations like Immigration and Customs Enforcement (ICE) enforce their orders with the aid of huge databases and tracking technology. Our media diets are custom-tailored, shaping our worldview based on what the algorithm feeds us. We even police each other’s viewpoints more often through social media.
Now, this is not to say that all of these systems have perfect power over us. Microtargeted political advertising has been shown to be less effective at changing voter opinion than mainstream media. Governments do struggle to make meaning and “actionable intelligence” out of the huge amount of information they ingest. We do not believe everything we see, so simply being exposed to certain information on social media does not guarantee we are being manipulated. There are clear limitations to the disciplinary steps of examination and normalization, even if the surveillance piece has been radically empowered.
But one conclusion we can draw is that there are more ways for powerful entities to influence us than there were before digital technology. We can also argue that if means of examination (making sense out of all the data) and normalization (molding thoughts or behavior based on those examinations) improve, then we really stand to be in some trouble.
Take, for example, the Amazon warehouse scanner. Because of the ease of examination (tracking your location and your time to scan items) and the disciplinary power Amazon wields (firing you if you do not adhere to its norms), its surveillance tool is much more dangerous. If other surveillance practitioners adopted such effective disciplinary tactics, their systems would become much scarier.
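To make the shape of that loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the thresholds, names, and penalty logic are invented for illustration, since Amazon’s actual system is proprietary.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds; the real system's values are not public.
MAX_SECONDS_BETWEEN_SCANS = 90
WARNINGS_BEFORE_TERMINATION = 3

@dataclass
class WorkerRecord:
    worker_id: str
    warnings: int = 0
    scan_times: list = field(default_factory=list)

def record_scan(worker: WorkerRecord, timestamp: float) -> str:
    """Apply the surveillance-discipline loop: observation (a scan
    timestamp) flows straight into examination (comparing the gap
    to a norm) and discipline (warnings, then termination)."""
    status = "ok"
    if worker.scan_times:
        gap = timestamp - worker.scan_times[-1]
        if gap > MAX_SECONDS_BETWEEN_SCANS:
            worker.warnings += 1
            status = ("terminate"
                      if worker.warnings >= WARNINGS_BEFORE_TERMINATION
                      else "warn")
    worker.scan_times.append(timestamp)
    return status

worker = WorkerRecord("w-042")
for t in [0, 60, 200, 340, 480]:  # repeated long gaps accumulate warnings
    print(record_scan(worker, t))
# prints: ok, ok, warn, warn, terminate
```

The point is the structure, not the specifics: once the observation is machine-readable, examination and punishment can be automated end to end, with no human overseer required in the loop.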
There are some surveillance systems, discussed in more popular discourse, that edge toward this effective disciplinary power. Insurance companies want to use data from car sensors to custom-tailor prices to drivers, but their pricing criteria encode their own definition of a “good” driver. If you stray from those criteria (even when breaking a law avoids a greater harm, like running a stop sign to avoid hitting an animal), you may be penalized through higher rates or even having your car disabled.
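Purely as an illustration of how such pricing encodes a norm, here is a small sketch; the event weights and base premium are invented, standing in for whatever criteria an insurer actually uses.

```python
# Hypothetical event weights encoding an insurer's "good driver" norm.
EVENT_PENALTIES = {
    "hard_brake": 0.02,
    "speeding": 0.05,
    "ran_stop_sign": 0.10,  # context-blind: evasive maneuvers count too
}
BASE_PREMIUM = 120.00  # invented monthly figure, in dollars

def monthly_premium(events: list[str]) -> float:
    """Price a driver by how far their sensor log strays from the norm."""
    multiplier = 1.0 + sum(EVENT_PENALTIES.get(e, 0.0) for e in events)
    return round(BASE_PREMIUM * multiplier, 2)

# A driver who ran a stop sign to avoid hitting an animal pays the
# same penalty as one who ran it recklessly; the sensor cannot tell.
print(monthly_premium(["hard_brake", "ran_stop_sign"]))  # 134.4
```

Note how the “norm” lives entirely in the weights: whoever sets them decides what a “good” driver is, and the driver’s only recourse is to conform.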
Facial recognition systems have been used in cities to enforce traffic laws, including pedestrian traffic. Think of how many times we jaywalk for the sake of efficiency, or simply because the coast is clear anyway. A camera system that recognizes you and identifies your lawbreaking may issue you a ticket or otherwise penalize you. Eventually, you will likely start using more crosswalks.
We may conclude that the most insidious systems have the most effective loops of surveillance and discipline. Perhaps the systems described above trigger your sense of injustice, and you feel they should be stopped. We can also reflect on how we judge surveillance technology, and why it incenses us in its most effective cases.
Do we agree with technological surveillance?
We have tried, through this course, to start to identify and put words to the feelings that arise when we read about surveillance technologies. Why do they often make us so mad? And what about the complicated feelings that arise when a technology seems to be doing partly good and partly bad? These are very important questions to answer if we are to develop the analytical skills to judge and act for or against certain technologies.
The Amazon warehouse scanner is again an illustrative example. Students in class were fairly unanimous in their hatred of the scanner, feeling that it violated so many aspects of justice that they thought workers should be entitled to. They identified problems in its ability to track workers and route them away from one another, its timing of every task and the harsh penalty system for being too slow, and how it pushes workers into labor so exhausting that the company offers free painkillers in vending machines as a “solution.”
The sense we get when feeling indignant at the scanner is that workers should have some basic rights: to be free from physical pain, to talk with coworkers, to work without such regimented oversight and the constant threat of firing. When surveillance is part of that type of disciplinary system, it is entirely unjust. Notably, we can identify that surveillance is key to this disciplinary process: without tracking worker location and time to scan, there would be no means to push workers so hard or to punish them if they “slack.” Moreover, the technology enables something that human eyes would be hard-pressed to do: the warehouses are so large that hiring enough overseers to do what the scanner does would be financially infeasible, if not outright impossible. Here, the technology is undoubtedly bad, and easily identifiable as such.
A different example was less cut-and-dried in class: customizing content in social media apps through surveillance capitalist practices. Students were quite split in their judgments of these systems (some had no problem with them, others strongly disliked them), and several individuals were even split within their own minds. There is a sense that surveilling individuals to serve them custom content that helps them, whether by bringing informational utility or just joy, is an acceptable practice. But there was a pervasive feeling of “creepiness” that almost all students shared: a sense that being watched and tracked feels gross.
This example, with its complicated reaction, is useful to break down and learn from. It gets at the idea that the outcome of surveillance matters to people. Privacy seems like a fine cost to pay for some benefit. The disciplinary power at play in targeted advertising or content filtering does not seem so bad as to be entirely unjust.
However, if there were more risk of being disciplined unjustly because of this data collection, the judgment might seem less complicated. If it were easily demonstrable that advertising powerfully shaped our thoughts about the world, or was very effective at making us buy things we do not want, then targeted ads might be more angering. If filtered content were making people racist, sexist, or otherwise hateful or ignorant, then that would be a more easily identifiable harm. Moreover, if your data were being sold to organizations whose disciplinary capacities were more powerful (e.g., your online browsing being used to judge how risky you are as a driver, or whether you qualify for a loan), that too might trigger feelings of injustice.
At one level, surveillance’s capacity to discipline us, and the conflict between those acts of discipline and our sense of justice, is key to understanding why we judge some systems as okay and others as clearly bad. When we dig into our judgments of disciplinary action, we may learn that we think workers deserve some semblance of rights, that we deserve the freedom to drive however we want (within reason), or that hateful ideologies have no place in our society. These become pillars of our ethics that we can use to evaluate disciplinary power and surveillance technologies’ capacity to enact it.
But there is an even deeper level at which we should evaluate surveillance practices. For example, even if advertisements are not so effective in swaying consumer behavior, is part of that “gross” feeling the fact that we simply do not like the attempted manipulation? Even if content is filtered toward funny memes or making fun of others, do we get a hint of a feeling that our time is being wasted in order to glue us to our devices for longer?
There are aspects of our social lives that we also have to judge as justifiable or not. It may be trickier to get at this deeper level, but we all must grapple with whether or not we think it’s justifiable for companies to try to make us buy their products, knowing that they manipulate our emotions to do so. We have to figure out whether we agree with governments collecting data on citizens, and why, even if we do not feel the immediate effects of the disciplinary actions it enables.
This layer of ethics is closer to the realm of virtue than of consequence: are we conducting ourselves in a way that makes us proud, or ashamed? It may be more difficult to assess these feelings, because we have to dig deeper. But in class, we have tried to practice this type of thinking, asking ourselves several rounds of questions to get to the root of our feelings. This almost-Socratic style of questioning is crucial for critical thinking, and a key part of ethical development.
Problem to Practice
In sum, we can see that the problems related to surveillance technology can be quite varied. Its role in disciplinary systems can be shockingly unjust, and even its underlying logics can be critiqued as not in the interest of a virtuous society.
This is not to say that we have figured out all the problems with these technologies, or have discussed all we can discuss regarding forming an ethic for how technologies should be created and governed. There is so much more we could deconstruct. Truly, this type of work is continuous, the work of a lifetime. But the work that we have done to question these systems has begun a process, exercised a muscle that we may not be used to using.
For next week
Community announcements
None! No announcements were sent in this week.
Readings and reflections
For next week, we will not read anything! Enjoy the break for your brain. I will even spare everyone a reflection.
Look forward to hearing about student projects next week!
Thanks for reading along, and as a reminder, if you want to keep following these weekly updates or share them with friends, either follow my blog on Medium, or subscribe to the email list via this Google Form.