We use social media all the time — and I really mean all the time. Applications like Facebook and Twitter have transformed into more than just platforms to share daily updates or post funny pictures of cats. What started as an honest attempt to connect individuals over time and distance has since become a trusted source of information for many. People all over the world now get their news from social media, and run certain parts of their lives based on the content with which they’re presented. In the not-too-distant past, news organizations prided themselves on discovering truth among rumor, and standing as the vanguards of fact. Social media has since replaced many of these institutions as it markets its “news feeds” and “trending news”. While we put so much trust in them, are these platforms always looking out for our best interests? If they weren’t, would we even be able to stop them?
What powers do we have?
Let’s say you’re browsing Facebook. You scroll through your news feed and notice that somebody has posted some extremely hateful content. Perhaps something disparaging Jews as secret conductors of world politics, or the all-too-common racist hate speech post. Maybe you even see several news reports incorrectly describing an event that you witnessed.
Social media sites often allow people to flag posts such as these in an effort to keep inappropriate content off people’s feeds. Flagging and reporting content has been a staple of Internet forums and social platforms for well over a decade now. This feature is nice, and may help solve our problem; but when we flag something, how important is our request? If we truly saw something horrible happening on social media, and attempted to alert some authority within the company, would we really be able to help stop it? How much can we actually change our social media feeds?
When we see something wrong with our social media feed, our request for change most likely ends up in front of some sort of support team. In some companies, an algorithm may filter and prioritize flagged content. Unfortunately, support teams most likely can’t directly help us, so the rest of the company has to get involved. If a user found something truly horrible happening online, the support request would have to be passed through multiple hands, and picked up by a development team at the company. Even if thousands of users all requested the same change, the inquiries would have to bounce among managers, senior managers, and perhaps even the CEO before being considered valuable enough to tackle — at which point the task trickles back down the corporate structure.
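To make the triage step above concrete, here is a minimal sketch of how flagged content might be filtered and prioritized before a human ever sees it. Every field name, reason category, and threshold here is invented for illustration; real platforms don’t publish these rules.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class FlaggedPost:
    # Lower priority number = reviewed sooner; only `priority` is compared.
    priority: int
    post_id: str = field(compare=False)
    reason: str = field(compare=False)
    flag_count: int = field(compare=False)

def triage(post_id, reason, flag_count):
    """Assign a review priority from the flag reason and volume (hypothetical rules)."""
    severity = {"hate_speech": 0, "misinformation": 1, "spam": 2}.get(reason, 3)
    # Many flags on the same post bump it up the queue.
    boost = 1 if flag_count >= 100 else 0
    return FlaggedPost(max(severity - boost, 0), post_id, reason, flag_count)

queue = []
heapq.heappush(queue, triage("p1", "spam", 3))
heapq.heappush(queue, triage("p2", "hate_speech", 250))
heapq.heappush(queue, triage("p3", "misinformation", 12))

# Reviewers pop the most urgent report first.
first = heapq.heappop(queue)
print(first.post_id, first.reason)  # the hate-speech report surfaces first
```

Notice that even in this toy version, most flags simply wait in a queue: the algorithm decides what a human reviewer sees first, which is exactly where the user’s influence ends and the company’s begins.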
This is really only the case for requests that come from the bottom of a corporate structure, or from the users of its product. The same problem arises if many employees of the company see an issue and attempt to communicate it up the chain. On the other hand, if the board of directors at a company like Facebook or Twitter wanted something done, they could easily set it as a high priority item and send it all the way down the chain — superseding any existing priorities. Anyone who has worked for a corporation knows that this is the case. Sometimes the decrees from on high make no sense, or are completely against the attitudes of workers or users at the bottom. It’s a lot of explicit power for a small number of people to wield.
This paradigm holds true for all corporations. It seems that we as users, or even lower-level employees, have little to no power over what social media companies do, while executives have much more. Most people operate under the assumption that corporations do have their best interests at heart. Facebook, until recently, may as well have been a utopia for friendly business people to usher in the future, guided by a benevolent nerd turned magnate. Unfortunately, even the motto of Google’s parent company Alphabet, “do the right thing”, is subject to someone’s interpretation of “the right thing”. It’s easy to imagine a scenario where this corporate power can be abused because, sadly, we see it all the time.
What powers do they have?
It’s not that far-fetched to imagine that sometime in the future, a wealthy and influential new Facebook CEO believes something horrible, like that all Muslims are evil. The top levels of the company could covertly order divisions to censor information advocating for Muslim rights, and users would have no way of knowing — especially if Facebook is their main news source. In big companies this could even be accomplished without employees knowing: an algorithmic change to news feed recommendations could require the work of multiple teams, each understanding only a part of the change.
Even more frighteningly, teams could be directed to morph recommendation algorithms in a way that ends up recommending seemingly credible news furthering this racist agenda to everyone. Right now, Facebook’s news recommendation algorithm may harbor some sort of bias that won’t be discovered for another year. A social media company’s potential power of censorship and propaganda is immense.
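As a toy illustration of how small and inconspicuous such a change could be, consider a hypothetical feed-ranking function. Every article, topic label, and weight below is invented; the point is only that a single buried multiplier can quietly reorder what everyone sees.

```python
# Toy news-feed scoring; all data and weights are hypothetical.
articles = [
    {"title": "Local charity drive", "topic": "community", "engagement": 0.6},
    {"title": "Muslim rights march draws thousands", "topic": "muslim_rights", "engagement": 0.9},
    {"title": "Celebrity gossip roundup", "topic": "entertainment", "engagement": 0.7},
]

# A quiet per-topic multiplier buried in one team's config acts as covert bias.
TOPIC_WEIGHTS = {"muslim_rights": 0.1}  # near-silent suppression

def score(article):
    # Rank by engagement, scaled by the (hidden) topic weight.
    return article["engagement"] * TOPIC_WEIGHTS.get(article["topic"], 1.0)

feed = sorted(articles, key=score, reverse=True)
for a in feed:
    print(f"{score(a):.2f}  {a['title']}")
# The most-engaging story now ranks last, and nothing in the output says why.
```

The team maintaining the weights table and the team maintaining the scoring function could each see only their half of this change, which is exactly how a directive like this could pass through a large organization unnoticed.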
We have seen actions like these taken by Facebook throughout its lifetime. Certain news sites’ pages have been temporarily shut down — with Facebook citing vague reasoning — only to be turned back on days later. Even specific articles have been deemed “Fake News” by Facebook’s team of news curators, for reasons that some think were politically motivated. Most of the time, decisions like these are made by only a few individuals at the company. In the case of deciding which articles are “Fake News” at Facebook, the team is composed of only a handful of individuals from different outside organizations.
Social media companies themselves, however, are not the only ones to use their platforms in immoral ways. Wealthy companies and governments have used both Facebook and Twitter to orchestrate campaigns of influence aimed at manipulating popular attitudes. In one instance, using Facebook data sold to them, the technology company Cambridge Analytica was able to figure out which voters were on the fence in the 2016 U.S. presidential election, and target those users with advertisements tailored to their psychological profiles in order to nudge their opinions toward a certain candidate. The company used the same mechanism of targeted advertising to help sway the Brexit vote.
Propaganda and Censorship
Certainly, social media companies can’t be the first to do things like this. Other institutions must have structures with top-down direction, a monopoly on their industry, and almost no way for regular people to influence them, right? There must be some historical instance of systems of power that censor and propagandize in secret. Well, yes — but unfortunately the most popular example of such a system is the authoritarian government.
Though all corporations share the same type of power structure as autocracies, this is lauded as a virtue by the business world. Single-mindedness brings clarity of vision. Top-down power dynamics serve the purpose of efficiency, as nobody gets tangled up in time-consuming debate over what to do.
A key difference in this case, however, is that few authoritarian governments have systems for monitoring their citizens that are as effective as social media sites. The amount of information about ourselves that we voluntarily give up to these corporations would be a surveillance state’s dream. Most dictatorships would have to work very hard to do what social media companies do. So the censorship and propaganda of, say, the Egyptian government pales in comparison to what social media companies could accomplish with ease.
When the power and potential influence of social media mixes with these top-down power dynamics, we can see a pretty scary picture being painted. Without some sort of limits on the extent to which social media companies can reach, the only thing standing between us and covert manipulation is the benevolence of whomever happens to be in charge at the time.
Checks and Balances?
If push came to shove, would existing governments be able to do anything to curb the power of social media companies? Well, we’ve recently seen the United States Congress give it a try. They attempted to question and admonish Facebook for its role in election manipulation due to its sale of vast amounts of private user data. However, the inquiry fell short of being effective because, among a host of other reasons, the committee doing the questioning didn’t have a good understanding of the underlying technology.
Even big tech companies that have been fined for misdeeds, such as Google, make back the amounts in a matter of days. In 2017, Google was fined $2.7 billion by the EU for anti-competitive behavior. Based on its 2017 revenue of $109.65 billion, the company would effortlessly recoup the cost in just nine days. So far, big technology and social media companies have gotten away with wielding enormous power, completely unchecked.
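The nine-day figure is simple to verify from the numbers in the text: divide the annual revenue by 365 to get daily revenue, then divide the fine by that.

```python
# Back-of-the-envelope check of the figures above (values taken from the text).
fine = 2.7e9             # EU antitrust fine, USD
revenue_2017 = 109.65e9  # Google's 2017 revenue, USD

daily_revenue = revenue_2017 / 365
days_to_recoup = fine / daily_revenue

print(f"Daily revenue: ${daily_revenue / 1e6:.0f} million")
print(f"Days to recoup the fine: {days_to_recoup:.1f}")  # ≈ 9 days
```

At roughly $300 million of revenue per day, a record-setting antitrust fine amounts to about a week and a half of sales.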
Given all of this, is there anything that could be done? It appears that we’re safe so far because executives seem to have our best interests in mind — for now.
One possibility is that we could capitalize on this favorable setup to request that social media companies put in protections against potentially malevolent future CEOs or board members. Public pressure can sometimes do a lot to change company actions. To take a similar political example: few people took issue with President Barack Obama’s expansion of executive power during his presidency. To call him out for setting dangerous precedents was not a popular opinion to have, but it proved to be a consideration that should have been taken more seriously. It was only once President Trump benefited from those same executive powers that some Obama supporters realized the folly. Mistakes such as these could prove a great teacher in thinking about the power of monopoly corporations.
If all else failed, an effective way to squeeze monopolies in the past has been economic pressure through boycott. Deleting our accounts wouldn’t stop companies from selling our data, but we could at least put an end to their gathering of new data. Theoretically, social media companies would respond to boycott on a mass scale, but organizing one would be difficult. In light of their addictive quality and how ingrained they are in everyday life, social media services may be harder to boycott than traditional monopolies.
Governments came up with democracy as a way to manage power to some extent. It would be interesting to see a corporation come up with a similar solution to give some power back to the people it influences. Very few companies even allow their employees to elect managers or executives, let alone give the general public a vote. We as people do a pretty good job of calling for ends to autocracies when they exist in government, but so far we have let the corporate sector freely pursue nearly as much power as it can get its hands on — with a few exceptions.
We could also encourage engineers, designers, managers, and all who work at big tech companies to feel empowered to speak up against misuse of their skills. Employees at Google have already used their power of speech and resignation to influence the company’s recent decision to forego development of artificial intelligence for U.S. military drones. Software engineers share responsibility for the projects they work on and the effects those projects have. Clarity about this responsibility, amidst the free snacks and high salaries, is rare among some of the most influential workers of our time.
It’s important for us, as users of such powerful technologies, to have a voice in how they’re used — especially when they have the power to affect us without our consent or knowing. Our voices can be louder than we imagine, and should be heard not just by our governments but by powerful companies too. Social media has the capability to be a tool for much more good than bad. Together, we should collectively focus on building the social media that we ourselves would be proud to use.
Thank you very much for reading! If you enjoyed the piece, please follow me so you’ll know when the next one is released.
If you’re interested in hearing or reading more about this topic, check out the long-form written, or podcast version of my discussion of social media.