Technology is all around us. In some areas it is so ubiquitous as to be almost invisible; in others, so remarkable as to grab the world’s attention.
From ceramic cups to nuclear weapons; from the springs in your mattress to immersive virtual worlds; from paper and pencils to intelligent algorithms—technology runs the gamut from background noise to something like magic.
But even the background technologies were once like magic. Cups and springs might not make for much conversation today, but life would be profoundly different had they never been invented. Perhaps AI, virtual reality, and other developing technologies will follow a similar path.
One philosophical conundrum at the heart of these developments is the question of moral value: Can we say that technologies are good or bad, or are they neutral? Are our values embedded in the technologies we build, or are they valueless until someone decides what to do with them?
It might seem trivial, but the answer to the question will impact how we regulate technology and who is responsible for the consequences of how it’s used. As we blaze forward into the realms of intelligent computers, brain interfaces, and biohacking, the significance of these moral questions grows.
Let’s take a look at the arguments.
Guns Don’t Kill People, People Kill People

A significant argument in favour of technologies being neutral is the Value Neutrality Thesis: no moral valence can be ascribed to a technology itself; value only appears when someone puts that technology to use.
Here are the main points:
1. Values Are Hard to Detect
Joseph Pitt, in his article “Guns Don’t Kill, People Kill,” argues that for technologies to contain or embody values, those values should be identifiable, but this is rarely the case.
He uses the example of a university football stadium. It might be a source of prestige or pride, a symbol of everything good about the university; it might represent the dreams and aspirations of the students. But all of those values are values of the people, not the stadium itself.
“They may see the stadium as symbolizing their own values, but that doesn’t mean the values are in the stadium.”
Consider how an alien would look at a knife they found floating in space—would they identify the same values as us? Would they find any values at all? How should they even begin to look for them?
How about a member of a small tribe in the Amazon jungle who comes across an iPad—what would they see in it? Would they ascribe the same value to it as a programmer in Silicon Valley?
2. Different People, Different Values
Tools only have value when they are possessed by a creature with values, and so are dependent on the value system of that individual. Within that system, if the person or context changes, the values also change.
If a student who plays football at the stadium suffers a devastating leg injury which ruins their future prospects, the value they ascribe to the stadium could take a dark turn, despite no physical alteration to the stadium or any change in the values of other students.
When a new technology is introduced to a population, though it might have been developed for a specific purpose, its value is dependent on the function and purpose that each individual discovers.
If the value of our tools is dependent on the system they’re in and the minds which interpret them, how can we say the tools have any values embedded in them? Shouldn’t those values remain consistent?
3. Value Depends on Use
It is the outcomes, consequences, and results of our actions that are open to valuation, not the tools which we might rely on.
A knife is just a knife, a neutral object; it is not until someone uses it to peel a fruit or stab someone in the back that any moral value can be applied.
This argument focuses on the perspective of the end-users. Their desires, needs, and goals determine how the tech is used, which determines its value.
This perspective also suggests the end-users are responsible for the ethical use of technology. We can’t blame guns for shooting people, even if they make it easier. Guns are neutral, people aren’t.
To summarise:
Technology is a tool — we use tools, tools don’t use us. We ascribe our own meaning to technologies, irrespective of the reasons for their existence. We are in control; we are free to use our tools how we want. And those who make poor choices are responsible for their actions.
When All You Have Is A Hammer, Everything Looks Like A Nail

“I call it the law of the instrument, and it may be formulated as follows: Give a small boy a hammer, and he will find that everything he encounters needs pounding.” — Abraham Kaplan
Arguing against the value neutrality thesis are those who believe that our values and assumptions are baked into everything we design and build.
Rather than focusing on the freedom people have in how they interpret and use the tools, this perspective looks more closely at the designers and the designs, how the features of those designs influence people, and why our being able to identify the values isn’t necessary for those values to exist.
Here are the arguments:
1. Technology is Intentional
Unlike the somewhat haphazard selection processes seen in evolution, technology is willingly and consciously fashioned. It is conceptualised and considered before ever becoming a thing. Each new creation is built to satisfy a need, fulfil a purpose, to be useful.
It is our values that determine the technology — along with the values the designers expect their future users to hold. While the users’ actual values will determine how the tech is used, the fact that the technology only exists because of our values makes them inseparable.
2. Decisions Reveal Values
Every decision and selection process will reveal our values. Whenever we pick a default option or display some information over other information, we have made a value judgement. It is impossible to display everything equally, therefore there is always bias (what does this say about how I ordered this article?).
Making the “buy now” button more prominent, writing the terms and conditions in obtuse language, and using red over blue in your brand identity are all decisions that reflect values. But just because they reflect values doesn’t mean people will identify or share those values.
3. Not All Values Are Seen
Sometimes the values are very explicit — such as labelling a product as “environmentally friendly” or “parental advisory recommended.” Whether or not you use the passcode feature on your phone, it is clearly designed for security. Why your remote control has an EPG button might be lost on you, but there’s a reason it’s there, and the reason reflects the values of the designers (or what the designers expect the values of the users to be).
The fact that people can’t always identify the values doesn’t mean the values aren’t there. Consider for a moment the reverse of an alien finding a knife out in space — imagine that we found an alien device floating in space, something that was clearly designed and not random space debris resulting from natural processes. Beyond wanting to know who built it, surely we would be very interested in what it is and what it’s for.
While it would be an incredibly difficult task to find the purpose, given we can’t comprehend the mind that designed it, the technology must have been designed for a reason, and that reason would suggest the value of the device.
4. Limited Range of Use
Technology is directional — it adds choices or improves processes which point in a certain direction.
There might be many different uses for guns — we could use them as paperweights or doorstops — but most of us know this is not the reason we have them. Guns were developed for a specific purpose, and we generally use them in accordance with that.
The range of possible uses is not infinite — we can’t use one to drive down the road or watch a movie. Other uses are possible though not ideal — perhaps you can cut an onion with one, but a knife would be better.
The limited range of ideal uses suggests where the inventor’s values lie — a gun is great at killing or putting holes in things. If you need to puncture something from a distance, you pick the gun before you pick the doorstop — unless your doorstop is a gun.
5. Technology Eventually Frames Reality
“We become what we behold. We shape our tools, and thereafter our tools shape us.” — Marshall McLuhan
The longer a technology has been around, and the more widespread it is, the less it will be thought about and the more likely it will blend into the background, to become the norm we refer to when we talk about how things are.
When we get too used to how things are, it takes greater effort to see how things could be different. When we get too used to what something does, it takes a more creative mind to see it in any other way.
Here are two lines of research relevant to this argument:
Functional fixedness highlights the struggle of finding uses for objects outside of the way they’re traditionally used. The Einstellung effect describes learning to solve a problem with one method and then failing to notice when a better method is available.
In each case, we get stuck in a particular thinking pattern or frame of mind. We go through the effort of learning, and then what we learned becomes automatic and rigid. This is not to say we cannot think creatively and break free from those patterns, but as tools and their uses become more common and familiar, it gets more difficult to see them any differently.
To summarise:
Technologies are developed by people for people. Our values determine what is created and how it is used, and its use is at least influenced by its design, if not fully dependent on it. While people can use tech creatively beyond its original purpose, there is a narrow band of possible functions which suggests the original purpose and therefore the values. As time passes and familiarity grows, the technology and its function become so ingrained as to be barely thought about, let alone questioned.
The Burden of Responsibility

We are not helpless slaves to technology, we are choosers, decision-makers, and we value our freedom of choice. However, people often make poor decisions while being unaware of what factors shape those decisions.
While each individual can and does decide how they want to use certain technologies, on a collective level technology nudges us in directions that are not value-neutral.
If we call technologies neutral, we absolve the creators of any blame for how those technologies alter the world. But then we need to ask whether any company or individual should be praised for their inventions. How could designers be worthy of praise but not of guilt? And if we are to say that they are free of both, it is worth asking what role they have.
In 1986, Robert J. Welchel wrote in IEEE Technology and Society Magazine:
“This moral neutrality is based upon viewing technology purely as a means (providing tools for society to use) with the ends (the actual usage of technology) lying beyond and outside the realm of engineering; this position also assumes that available means have no causal influence on the ends chosen. If technology truly is only a means, then engineering is a second-class profession since we are the mere pawns of the real power brokers. We buy our innocence at a tremendous cost: To be innocent, we must be powerless.”
Deciding that creators are culpable and that technologies are embedded with values doesn’t make it any easier to figure out which technologies embody good values. For that we have to collectively agree on what values are good and set a standard for what we consider a violation of those values; only then can we decide how to respond to such violations.
A significant roadblock here is our inability to predict the future. If no designer, inventor, or company is capable of predicting all of the future benefits and costs of what they build, how can they possibly ensure they imbue good values?
But this is an old philosophical problem — if we can never predict all the consequences of our actions, how can we tell the good from the bad in any domain? This problem does not stop us from making ethical choices in other areas of life, so why should it stop us here?
We must find the best explanations or predictions we can, using the information we have. If we pay attention to how different technologies progress, and particularly to their consequences, we can learn from our mistakes and make better decisions.
Our current way of life is so closely intertwined with technology that the two could be considered one and the same. It seems absurd to think we can go backwards and untangle much of it, but we can be more careful in how we weave future technologies into our lives.
This is important, as future technologies are likely to be far more powerful and consequential than today’s. When intelligent machines make their own ethical choices, it will make no sense to say that technology is neutral, and it will become tremendously important to align our values.