A Modern Day Take on the Ethics of Being a Programmer
June 09, 2015
Eighteen years ago a woman walked into the hospital. She was having complications with her pregnancy. At 23 weeks pregnant, her cervix wasn’t strong enough to hold the growing baby inside her uterus and a foot was starting to poke out. Her doctor at the time said it seemed like a hopeless case. Based on the problems she was having, he thought it very likely that she would lose the child. But his job and the job of his staff was to do their very best no matter how hopeless a case seemed. The trick here was to get the amniotic sac fully back inside the uterus without rupturing it and then do a cerclage, a procedure that helps the cervix stay closed during pregnancy. In this case, the best efforts of her doctor and his team were enough; they were able to save the baby and the mother. The woman, who was on bed rest for the remaining 4 months of her pregnancy, carried to full term and delivered a healthy baby boy. Just last year the doctor was invited to a graduation party to meet the 6’5” star athlete and honor student who they didn’t think would survive past 23 weeks.
I first heard this story at the dinner table as a child because the doctor in the story is my dad. While this is a heart-warming tale with a happy ending, it’s important to note that my dad didn’t do his best just because he's a good person. Doctors are both legally and ethically required to follow certain protocols when treating patients: they are obligated to try their absolute best to save every patient who walks through the door, or they can be sued or lose their license to practice.
As an adult, I have often wondered what my ethical obligations are as a software engineer. Doctors swear to the Hippocratic Oath, lawyers have codes of professional responsibility, and engineers in Canada are given an iron ring that reminds them of their ethical responsibility to build with humility [1]. But what do we have as software engineers? Since I started programming, discussions about ethics and responsibility have been rare and sporadic. Programmers have the ability to build software that can touch thousands, millions, and even potentially billions of lives. That power should come with a strong sense of ethical obligation to the users whose increasingly digital lives are affected by the software and communities that we build.
History of Free Speech and Privacy on the Internet
The beginning of modern computing happened during a time of large-scale international war. The first electronic computers, nuclear power, jet engines, and cybernetics were all invented during World War II [2]. A great deal of early computing was linked to military defense, and the militarization of computers shaped early software engineers' ideas about their ethical responsibilities. Norbert Wiener, the father of cybernetics, was also one of the early pioneers of ethics in computer programming and was vehemently against political and military interference with scientific research [3].
The 60s in the United States saw widespread civil unrest, which included anti-Vietnam War protests and the Civil Rights movement. This set the stage for a period in history where the government and large institutions played the role of the antagonist. As a result, early engineers were concerned with building computer systems that protected users from these institutions. Ethical discussions during that time focused mainly on free speech and privacy and were widely influenced by the anti-war movement, the civil rights movement, and the hippie/free-love movement. It’s no accident that, shortly following this period, Richard Stallman started the Free Software Movement, which encourages people and companies to share the software they create for free [4].
The approach to ethics taken by early software engineers makes one very big assumption: that the biggest threat people face on the internet comes from governments and institutions. The software engineering community today needs to realize that, for many users online, the biggest threats to privacy and safety aren't the government and large institutions; they're other users.
Programming Today
Today, software products touch billions of lives. The internet is pervasive throughout most of the world, and people have access to more content and information than ever before. Programmers and software companies can build much faster than governments can legislate. Even when legislation does catch up, enforcing laws on the internet is difficult due to the sheer volume of interactions. In this world, programmers have a lot of power and relatively little oversight. Engineers are often allowed to be demigods of the systems they build and maintain, and programmers are the ones with the power to create and change the laws that dictate how users interact on their sites.
Given the immense power and freedom programmers experience, the community is relatively benevolent and ethically minded. Otherwise, the world would have gone to hell in a handbasket by now because the tools we build manage people's money, allow them to share photos and intimate moments with others, and allow people to communicate in new and novel ways. Most of our discussions about ethics focus on free speech, privacy, security, and user agency, which are all very important topics. But as the tools we build continue to expand, programmers need to start talking about ethical obligations towards online users and communities experiencing harassment, stalking, and privacy violations from other users within their online community.
Online Harassment and the Misuse of "Freedom of Speech"
Twitter, Reddit, 4chan, and countless other websites are havens for online harassment. While many people experience harassment on the internet these days, either through social media or gaming, recent targets of severe online harassment have largely been women, people of color, or people in the LGBT community [5]. If you’re bored and want to see in real time the types of things that are tweeted at people, you can search for this on Twitter (be sure to click on the Live tab when viewing the results):
bitch OR cunt OR whore @spacekatgal
For those who don't want to take a look first hand, there are a bunch of messages calling @spacekatgal any or all of the names listed. One video I found tweeted at @femfreq was actually pornographic. So how did this happen? How did a group of normal software engineers build a platform where hate speech, death threats, and gross sexual objectification are a significant percentage of the interactions these users experience every day? I live in fear of what I call the internet’s Eye of Sauron ever looking at me; the mob of internet trolls who descend on people and make their life on and off the internet a living hell. My brief foray into the Twitter reply feed of users like @spacekatgal and @femfreq has further solidified the validity of that fear.
The Conundrum of Free Speech
People regularly use "free speech" to justify a lot of the terrible things said on Twitter, Reddit, and other online forums. Free speech, at least in America, is one of the most misused concepts. It's common in recent times to call out companies for "censoring" speech when they try to regulate the comments that people make on their platforms [6]. But free speech has only ever applied to public spaces and has always been regulated. In the United States, free speech has a list of exclusions that include inciting imminent lawless action, obscenity, true threats, defamation, child pornography, and fighting words [7].
Additionally, private companies, like Twitter and Reddit, are ill-equipped to regulate free speech on their platforms because they are not technically public spaces. A private website claiming to be a space for free speech is like a shopping mall masquerading as a public square. A privately owned institution is lawfully allowed to regulate what is said or done in their space, and should not be thought of as a forum for free speech. Furthermore, private institutions don’t have the history of understanding and upholding free speech that the government and judicial system have. These private spaces are playing pretend judge and jury and doing an incomplete job. But the government is also not fully equipped to protect free speech in online forums because of the volume of communications and the potential anonymity of users. So, it seems, we are at an impasse. Companies are unwilling to regulate user speech on their platforms and governments are incapable of policing it.
The Right to Not Listen
Historically, free speech has referred to the right that someone has to speak an opinion. But there is another side to free speech we forget to talk about: the right to not listen. The right to not listen is so ingrained in the Western idea of freedom that we talk about it very little. It's the right a person has to remove themselves from situations where they are uncomfortable, unhappy, or feel unsafe. Forcing someone to stay and listen is at best harassment; at worst it is kidnapping. The right to not listen means people can choose their news sources, thus avoiding a totalitarian "Big Brother"-esque government that forces its citizens to listen to one source of propaganda. In fact, the anti-vaccine movement could be seen as people invoking their right to not listen...to science. And while the right to not listen has its boundaries (which should hopefully include vaccinating children), it is undeniably a freedom that most of us enjoy in our day-to-day lives.
Software engineers and programmers have been so focused on codifying freedom of speech into the internet that we forgot one of the fundamental freedoms people enjoy in the real world: the right to not listen. We’ve given so much power to anonymity and free speech on the internet that hordes of people can band together online to attack and harass individuals who have little or no recourse for fighting back. Being @-tagged is the digital equivalent of someone saying things to your face, yet users are unable to control the hate tweets and death threats sent to them directly. The fact that Twitter and many other sites give users very few tools to regulate the messages sent to them does not mirror a person's ability to escape a situation in the real world.
So what can we do about it?
User Defense Tools
The first thing we can do is simple in theory: give users the right to remove themselves from, or defend themselves in, online interactions. Some of the tools that let users protect themselves from unwanted communications already exist. Email is an example of a technology that had an early problem with unwanted information overload. Without spam filters, there’s a good chance email never would have reached its current level of popularity, estimated to be about 2.6 billion users worldwide by the end of this year [8]. Email providers use many different techniques to detect spam, and engineers have been honing spam detection and filtering systems since email was first created [9]. Spam filters are by no means perfect, but most people would prefer them to the onslaught of unwanted messages they would otherwise receive, especially given that more than 92% of the 4 billion emails sent every day are spam.
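To make the spam-filtering idea concrete, here is a minimal sketch of a naive Bayes text classifier, one of the classic techniques real email filters build on. The class name, training messages, and labels are all hypothetical; production filters combine many more signals than word counts.

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Tiny naive Bayes spam classifier with Laplace smoothing."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def score(self, text, label):
        # log P(label) + sum of log P(word | label), Laplace-smoothed
        total_msgs = sum(self.msg_counts.values())
        log_prob = math.log(self.msg_counts[label] / total_msgs)
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        denom = sum(self.word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            log_prob += math.log((self.word_counts[label][word] + 1) / denom)
        return log_prob

    def is_spam(self, text):
        return self.score(text, "spam") > self.score(text, "ham")

# Hypothetical training data
f = NaiveBayesFilter()
f.train("win free money now", "spam")
f.train("free prize claim now", "spam")
f.train("lunch meeting tomorrow", "ham")
f.train("project notes attached", "ham")

print(f.is_spam("claim your free money"))   # spam-like words dominate
print(f.is_spam("notes from the meeting"))  # looks like normal mail
```

The point of the sketch is that even a few dozen lines of statistics can push most unwanted messages out of a user's view, which is exactly the kind of leverage harassment filtering could borrow.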
Companies and software engineers should build platforms that give users access to features like filtering, spam and harassment detection, and blocking, not as afterthoughts but as necessary features for users in online communities. Additionally, users should have the right to regulate, remove, or unpublish content that links directly to their online persona.
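As a sketch of what "not as afterthoughts" could mean, here is a minimal user-side defense layer: per-user block lists and muted keywords applied before a message ever reaches someone's feed. All of the names here (User, Message, should_deliver) are hypothetical, not any platform's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    handle: str
    blocked: set = field(default_factory=set)      # handles this user has blocked
    muted_words: set = field(default_factory=set)  # words this user never wants to see

@dataclass
class Message:
    sender: str
    text: str

def should_deliver(msg: Message, recipient: User) -> bool:
    """Drop messages from blocked senders or containing muted words."""
    if msg.sender in recipient.blocked:
        return False
    words = set(msg.text.lower().split())
    return not (words & recipient.muted_words)

# A user exercising their right to not listen
alice = User("alice", blocked={"troll42"}, muted_words={"bitch", "whore"})
print(should_deliver(Message("troll42", "hello"), alice))   # blocked sender
print(should_deliver(Message("bob", "nice post!"), alice))  # delivered
```

Even something this simple shifts control to the recipient, which is the structural change the essay is arguing for; real systems would layer detection and reporting on top of it.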
Take User Problems Seriously
The second thing we can do is realize that our current efforts are not enough. We need to internalize the fact that we, the software engineering community, are responsible for our users. That we have an obligation to take user issues seriously, and to do our best to fix their problems with harassment. Even when the problems are really hard and even if we might fail.
Dick Costolo, Twitter’s CEO, recently admitted this in a companywide memo. Twitter rolled out a new suite of anti-harassment tools that mostly just improved people’s abilities to report harassment to Twitter and to the police. Reddit recently released a new anti-harassment policy for their site. These are all good steps but not quite enough.
An anti-harassment policy without tools to fight harassment is like a constitution without a judicial branch and a police force: a piece of paper with some pretty ideas on it. Without regulation and oversight to enforce those ideas, it’s difficult to change or regulate user behavior.
Conclusion
To be clear, I don’t think that programmers at Twitter or Reddit are bad or evil. Nor do I think that the leaders and executives at these companies should be condemned or vilified. We as a community need to talk about our ethical responsibilities when building software so that we have a standard of behavior. So that we can encourage our peers in the community to address problems with their software because we have an understanding of what it means to be responsible for our users.
So I ask you, is it our responsibility as engineers to fix harassment on software platforms we create? Is it the responsibility of the company and its engineering team, from an ethical perspective, to address these types of problems? Is a user experiencing extreme harassment on our platform the equivalent of a patient walking into the ER with a serious medical issue? When someone on a platform that we helped build and are responsible for maintaining is being hurt by other people on our platform, through our platform, are we ethically obligated to do everything in our power to try to fix it? Even if it’s really hard? Even if we might fail?
Resources
- [1] Iron Ring. In Wikipedia. Retrieved June 8, 2015, from http://en.wikipedia.org/wiki/Iron_Ring.
- [2] Top10contributor. Top 10 Inventions Discovered During World War II. top-10-list.org, 2012.
- [3] Wiener, Norbert. In Wikipedia. Retrieved June 8, 2015, from http://en.wikipedia.org/wiki/Norbert_Wiener.
- [4] Free Software Foundation.
- [5] Duggan, Maeve. Online Harassment. Pewinternet.org, 2014.
- [6] Twitter And Censorship: What Does Freedom Of Speech Mean In The Social Media Age?. Cbc.ca, 2012.
- [7] Freedom of Speech in the United States. In Wikipedia. Retrieved June 8, 2015, from http://en.wikipedia.org/wiki/Freedom_of_speech_in_the_United_States#Exclusions.
- [8] Radicati, Sara. Email Statistics Report, 2014-2018. The Radicati Group, Inc., 2014.
- [9] Anti-spam Techniques. In Wikipedia. Retrieved June 8, 2015, from http://en.wikipedia.org/wiki/Anti-spam_techniques.
- [10] Newton, Casey and Nitasha Tiku. Twitter CEO: 'We suck at dealing with abuse'. The Verge, 2015.