The Technical Problem of Evil
I went to a Bitcoin meetup in San Francisco last week. Zooko Wilcox was presenting about Zcash, which launched just a couple of days later. It was a great session, driven mostly by audience questions. We covered the basics of zk-SNARKs, the origins of the project, its interactions with other software, the parameter generation ceremony, and several other interesting parts of Zcash. Towards the end of the night, someone asked a question that seemed very simple: "What if someone uses this for evil?"
This is actually a deceptively complex question. We tend to see evil as simple: black and white. We know evil when we see it, and we assume others view it the same way. Evil is interpreted through the cultural lens of heroes and villains. Because we have an intuitive grasp of evil, but no real knowledge of it, we can only ask "what if we use this for evil?" from a position of innocence. To answer the question, we have to unpack those assumptions and figure out what evil really is. We have to eat from the fruit of the tree of knowledge, as it were.
It turns out that most people don't understand evil. We think we have a very good grasp of it, but in practice we demonstrate otherwise. For example, when confronted with difficult moral propositions, like the ever-relevant Trolley Problem, very few of us display consistency across our own moral decisions. This is because most people never examine their own notions of good and evil. It's hard for us to challenge ourselves this way.
So when a developer asks another developer, "what if people use this for evil?" it's not a simple question at all. Neither of them knows what the other means by evil. Neither has a way to measure it. They both share a nebulous sense that they should generally prevent evil, but no concrete tools for figuring out how to do that. We all understand that this software is capable of doing harm, but we have no idea how to approach the problem. This is where philosophy comes in.
As it happens, a small number of very intelligent people have been working on problems like this for millennia. The discussion has continued from Socrates to Kant and down to the present day. There are dozens of competing branches and theories of ethics, and many well-reasoned arguments. There are concrete methods for determining the ethical quality of one's actions, and for deciding what responsibilities we have to others to prevent evil.
It turns out that most philosophers don't understand evil either. There is no majority view among philosophers. This means that no matter what position is "right" (if there even is a single answer), most philosophers are "wrong." So sure, we can't say that ultimate truth is out there, just waiting for us software guys to find. But we can point to the thought of thousands of educated and intelligent people, thousands of years of thought, and thousands of books devoted to solving the problems that we're struggling with. We can draw concrete lessons from them and compare them to our own lives and projects. It can only benefit us.
We all understand that these questions are important. We all want to minimize the evil done using our software. It's incredibly irresponsible to ignore all the research ever done in that space. Software developers have a pressing need to read philosophy.