
We must develop ‘techno-wisdom’ to prevent technology from consuming us

Modern surveillance cameras can use artificial intelligence to identify targets. Shutterstock

This is the fifth article in a series in which philosophers discuss the greatest moral challenge of our time, and how we should address it. Read part one here, part two here, part three here, and part four here.


My first real awareness of our psychological attitudes to technology came from an unusual source: the British comedian Eddie Izzard. Izzard describes two diametrically opposed attitudes: techno-fear and techno-joy.

Those with techno-fear are hesitant and blundering, and worry that technology will cause the end of the world. Those with techno-joy are blindly optimistic about what technology can do. Izzard explains his own techno-joy:

When I get a new machine I think, “Yes! This machine will save my life, I’ll never work again!” … And the first thing you do if you’ve got techno-joy is you get the instructions and throw them out the window!

One of the great moral challenges of our time will be to find something between the categories of techno-joy and techno-fear. We need to find something resembling “techno-wisdom” (though I doubt it would make for good comedy).

It’s going to take a lot of people working together to map out exactly what this techno-wisdom looks like. Happily, lots of different academics and organisations have been working on versions of this for a while.

Argumentative themes

Most arguments about technology centre around three distinct themes:

  • technology overcomes: technology will either save the world by overcoming our greatest challenges or it will overcome us. An example is the debate around lethal autonomous weapons systems.

  • technology influences: technology will either free us up to focus on what matters or it will distract us from what matters. Negative examples appear in just about every episode of the television series Black Mirror. More optimistic versions can be found in the debate over “ethical nudging”.

  • technology amplifies: with technology we’ll either be able to do great things quickly, efficiently and at scale, or we’ll be able to do horrible things in the same fashion.




The parameters of the debate are set and nobody seems to be budging in their opinions. But this impasse itself generates ethical challenges. The opportunities are too great to ignore technology, but the risks are too high to allow it to proceed unrestrained.

Understanding technology is vital

In Izzard’s comedy, ignorance and ineptitude drive those who fear technology. Interestingly, though, he paints those with techno-joy in much the same way. Neither understands technology. This is where our techno-wisdom should begin: understanding what technology is and how it works.

Philosophers of technology such as Martin Heidegger, Jacques Ellul and Albert Borgmann have argued that technology reflects a distinctive way of seeing the world around us. It tends to reduce the world to a series of technical problems to solve and an assortment of things to use, measure, store and control.

On this understanding, technology isn’t value-neutral. It encourages us to seek control, values efficiency and effectiveness over other considerations, and reduces everything to a unit of measurement.




There are countless examples to prove this point. Online technology is challenging traditional journalistic values in favour of speed and reach. Dating apps commodify our potential romantic partners and try to free dating from the perils of rejection or unwanted advances. Computer-generated porn allows you to make your favourite celebrity crush do whatever you want. She doesn’t have to consent. She doesn’t even have to know.

If this is the values system behind technology, are we comfortable with it, even if it does make life incredibly convenient? If not, what should we do about it?

Focus on means

Despite being polar opposites, techno-fear and techno-joy have a common ethical thread: a focus on outcomes. Each side agrees that ethical technology must lead to positive change in the world (or at least, not create more problems). They disagree about whether technology will end up being a force for good or ill.

However, focusing on outcomes blinds us to another dimension of technological ethics: the means by which those outcomes are achieved.

Many people are thinking about technological processes and their ethical implications, but often only because those processes have produced bad outcomes. The discussion then becomes just another battlefield for the debate about outcomes.




For instance, debate about COMPAS – the data-driven risk-assessment algorithm used in sentencing decisions that was the subject of a widely read ProPublica investigation – focused on the fact that it tended to produce racially loaded outcomes. That’s important. But it’s also important to understand how COMPAS worked, even if the outcomes weren’t so evidently problematic.

Let’s imagine we knew an algorithm like COMPAS was 100% effective at predicting an offender’s likelihood of re-offending. Let’s also imagine that the reason it was so accurate was because its data set was so comprehensive. It included every piece of private communication an offender had produced in the past ten years. Every text message, Facebook post, email, phone call, webpage view – all of it. This data enabled a crystal-clear psychological profile of the offender and incredibly precise predictions of re-offending.

There would still be reason to object to this technology, not because it achieved awful results, but because it achieved good results in a way that undermined our commonly held principles of privacy and civil liberty. That’s where an exclusively outcome-driven philosophy becomes a real problem.

Humans first

Technology is likely to be part of the solution to most of our great moral challenges. But not alone. One of technology’s functions is to amplify human activity. This means humans need to get their own house in order before tech can be helpful.

We also need to get the technological process right. We need to change our standard of what counts as “excellent” technology away from the logic of speed, effectiveness and control. If we don’t, technology is likely to become our next great moral challenge. More worryingly, by then we may have ceded too much power to the machines to be able to do anything about it.
