Cyberbullying is a form of technology misuse that is a problem not only in schools, but in the wider community.
A serious case in point is that of the two Australian Defence Force Academy (ADFA) cadets who used Skype to transmit and receive images of a sex act without the knowledge or consent of the female partner.
Last week, at the trial, the cadets were able to convince the court that they did not know what they did was a crime. This led Australian Federal Police detective superintendent Nigel Phair to call for a major education campaign to ensure all Australians knew how to be “good digital citizens”.
Ongoing investigations suggest the ADFA Skype case was not an isolated incident. It prompted Australia’s army chief David Morrison to issue a strong warning – see the video below – to all defence force members: anyone who degrades another will be given their marching orders.
As Morrison bluntly says:
If that does not suit you, then get out! You may find another employer where your attitude and behaviour is acceptable, but I doubt it.
The ADFA Skype case is an example of how new technology can be misused, and also how new behavioural protocols become established – albeit in a top-down command way that is possible in the military but more difficult in the civilian world.
We can probably all think of examples of people using new technologies in ways that range from outright criminal all the way down to merely impolite.
Eventually, but not soon enough for some, society evolves rules of acceptable use that become established as standard behaviour.
Principles for ethical technology use
Is there a set of general rules for ethical technology use that everyone can use? Arguably, there is.
These guiding principles are based on the work of philosopher Immanuel Kant whose ideas continue to exert a strong influence on the study of ethics today.
They are simple enough and general enough to work in the virtual world as they have in the physical world. At the very least, they are a starting point for discussion.
At the risk of oversimplifying Kant’s ideas, I’m suggesting that his categorical imperatives (unconditional requirements that are always true) be adapted as guiding principles for ethical technology use:
Before I do something with this technology, I ask myself, would it be alright if everyone did it?
Is this going to harm or dehumanise anyone, even people I don’t know and will never meet?
Do I have the informed consent of those who will be affected?
If the answer to any of these questions is “no”, then it is arguably unethical to do it. These rules are based on rational principles and apply the same standards in the virtual world as in the physical one.
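The three questions above amount to a simple conjunctive test: an action passes only if every answer is favourable. As an illustrative sketch (the function and its parameter names are my own shorthand, not part of Kant's system or any formal ethics framework):

```python
def ethical_to_proceed(universalizable: bool,
                       harms_or_dehumanizes: bool,
                       informed_consent: bool) -> bool:
    """Kant-inspired checklist: all three answers must be favourable.

    universalizable      -- would it be alright if everyone did it?
    harms_or_dehumanizes -- will it harm or dehumanise anyone?
    informed_consent     -- do those who will be affected consent?
    """
    return universalizable and not harms_or_dehumanizes and informed_consent

# The ADFA Skype case fails on at least two counts: the act harmed
# the woman involved and was done without her consent.
print(ethical_to_proceed(universalizable=False,
                         harms_or_dehumanizes=True,
                         informed_consent=False))  # False
```

The design mirrors the prose: a single “no” anywhere is enough to fail the whole test, which is exactly how a conjunction of conditions behaves.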
Technology as a force for good
While it is true that technology has the potential to harm people, let us not forget that at its finest, technology also has the potential to help people to become the fullest expression of their human potential.
Imagine Mozart in a world before the technology of the piano had been invented, Van Gogh in a world before inexpensive oil paints or George Lucas before the technologies of film. Today, there are millions of children being born for whom their technology of self-expression has not yet been invented.
Why we need a code of technology use
Technology is becoming more and more a part of our lives. More than just hardware, technology is an extension of our body and mind. Computers allow us to extend our ability to think and process information beyond our biological brain. Most of us would be devastated if our computer had a fatal accident. Suddenly losing that memory and processing power would be not unlike having a stroke.
This ability to push our minds out into the world did not begin with information technology. We have been doing it for at least a hundred thousand years, probably much longer.
Cognitive scientist Andy Clark describes how brain scans show that if you were to pick up a tool – such as a garden rake – and you start to use it, within a short time your brain will map the tines of the rake to be extensions of your hands.
With our technological tools being an extension of our biological brain, people have a more personal relationship with their computers than they realise. As millions of extended minds reach out and merge with each other we can observe a remarkable phenomenon, the formation of a new layer of cognition in the world: the internet.
As Wired magazine founding editor Kevin Kelly observed in his excellent 2007 TED talk (below), the internet is a neural network that at the time approximated one human brain in complexity.
Kelly suggested the internet is doubling in computational power about every two years. He predicted that by 2040, the total processing power of the internet will exceed that of six billion human brains. A staggering number, by any standards.
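Kelly's growth claim can be sketched as simple compound doubling. A minimal sketch, taking the one-brain baseline (2007) and two-year doubling period from the talk (the function name is my own); note that pure doubling from that baseline yields only on the order of 10^5 brain-equivalents by 2040, so the six-billion figure evidently also assumes growth in the number and power of connected devices:

```python
def brain_equivalents(start_year: int, end_year: int,
                      doubling_period_years: float = 2.0,
                      baseline: float = 1.0) -> float:
    """Compound doubling: baseline * 2^(elapsed years / doubling period)."""
    return baseline * 2 ** ((end_year - start_year) / doubling_period_years)

# Doubling from a one-brain baseline in 2007 through 2040
# gives 16.5 doublings:
print(round(brain_equivalents(2007, 2040)))  # 92682
```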
The singularity approaches
This burgeoning growth in complexity and computational power is leading us towards an event horizon some time around 2045 that influential futurists like Ray Kurzweil and Vernor Vinge have called the singularity.
With exponential growth in computing power coupled with advances in artificial intelligence, the singularity is the point at which a superintelligence comes into being. Some may dismiss this as mere science fiction, but doing so would underestimate how seriously the underlying trends are taken.
The singularity will come about when a critical mass of computational power, vast amounts of accumulated data and advanced artificial intelligence capable of intelligently organising the data all combine in a spontaneous moment of creation.
In his book The Singularity is Near, Kurzweil predicts this moment will occur about 20 years from now. Even if Kurzweil is the ultimate techno-optimist, his predictions are based on rigorously quantitative measures, so his conclusions are certainly worth thinking about.
Whether you love or loathe this vision of the future, one thing is for sure – if we do not adopt a code of ethical technology use, the consequences in this brave new world will be very interesting indeed.