Ethical Software
I've been thinking a lot lately about ethical software. I'm not sure exactly what it means yet, or how to tell people about it, but I'm convinced it's important. Software has become critical to the way the world functions, and (more importantly) to the way people live. It's no longer possible to participate in society without using software. It's built into our phones and cars. It manages our friendships and hides our affairs. We tend to think of viruses and adware when someone says "malicious software," but what about software that invades our privacy or exploits our social links? Shouldn't we call those malicious as well?
Doctors, lawyers, and members of other high-impact professions follow codes enforced by society. "First, do no harm" doesn't actually appear in the Hippocratic Oath, but it binds nonetheless. Social convention, law, and morality, woven together into a code of ethics, bind medical practitioners. We collectively choose this because the harm a trained doctor could do by misusing her knowledge far outstrips anything a layman could manage. The danger doctors pose is eclipsed only by their utility, so we limit their decisions for the good of all.
Just as there is a code of ethics for doctors, there is an ethical code for medicines. We've drawn lines in the sand: they define acceptable side effects, and a route from development through testing to use. We enforce these rules through peer review, regulation, and a strict approval process. The children of thalidomide have sunk deep into our collective memory. Because of this failure, and others, we hold medicines and the other tools of the doctor's trade to extremely high standards, because the potential for harm is deeply terrifying.
Software developers are dangerous too, though this is perhaps not yet well-known. A code of ethics for software engineers does exist, but it is neither well-known nor well-taught. Developers hold the power to break lives and expose secrets. We trust them with the smooth functioning of every industry and every life. We trusted the software developers at Target, Sony, and Ashley Madison to protect us. For convenience, we call malicious developers "hackers." We want to pretend they're somehow different from the friendly geek down the hall, but the distinction is semantic at best. Every developer is human, and every human is just one really bad day from breaking. We've barely begun to explore the harm a few individual developers could do.
We define ethically acceptable behavior in order to minimize harm to humans. Ethical codes bind our most trusted professionals, for the safety of everyone. We have strict rules about the ethics of our foods and medicines. But there is no similar ethical code for software, even though it now gnaws at the roots of our society. Nobody knows what is unacceptable, so everyone presumes that everything is permissible. Very little thought is given to whether, or how, software harms us.
As software digs deeper into our lives, economies, and psyches, the dangers posed by rogue developers will only grow. Not just the danger to our credit card information, but to our employability, our economy, and our selves. We're already trapped in filter bubbles run by companies like Facebook and Google. The information on which we rely skews further and further from reality as intelligent algorithms show us what we want to see, rather than what we need to see. How can we make good decisions when we're confined to a tiny fraction of the available information?
This is true malware. It sneaks insidious fingers into your social interactions: tracking your location, mapping your social graph, hijacking your identity to influence your friends. Maybe your tweet, placed just right, will help sell someone shampoo or cars or the latest bird-themed phone game. It infiltrates our thoughts and actions. Do we Instagram our meals because we want to? If some faculty of our brain compels us to apply faux-vintage filters to food photos, it grew there very recently.
Software already harms individuals in ways we may never fully understand. In the future, as it subsumes more of society, its potential to harm will grow, a twin of its potential to heal. We have a moral imperative to decide which functions of software are acceptable to society, as we have done with medicine via regulation and with weapons of war via the Geneva Conventions. After all, the danger software poses is eclipsed only by its utility.