The Conversation receives a lot of comments each day and you can’t read everything. That’s why we occasionally end the week with a selection of community highlights: comments we enjoyed or thought interesting. Read on for five comments and discussions I thought worth highlighting.
Les Morrow shared their concerns about suffering and euthanasia:
Watching Q and A on Monday night and reading comments on today’s article, I am surprised that there seems, so far, to have been no effort to tease out people’s understanding of what is meant by “suffering” in this context.
For me suffering may entail far more than physical pain and discomfort, but also includes mental and emotional anguish. It seems to me that those who opposed a right of choice to take charge of the process of our own dying equated suffering with physical pain and seemed to make the assumption that palliative care was all that was necessary to alleviate suffering. My understanding is that palliative care is provided to relieve physical pain, but the issues of mental and emotional anguish do not seem to be directly confronted.
Personally I attach a great deal of importance to the maintenance of my “independence” and my ability to maintain a large degree of control over the circumstances of my life, and I equate a loss of ability to maintain that control with a sense of emotional suffering. I recognise that there are circumstances during life when that control may be greatly diminished on a temporary basis, when my usual state of “independence” can be expected to be restored, but the idea that there would be no possibility of returning to that state represents a great deal of “suffering” to me. Others may not experience that sort of suffering and may not wish for a choice to end life in that situation, but why should my well-considered position be ignored and overridden? I accept that others may be able to offer me relief from physical pain, but they cannot get inside my head and relieve my mental anguish.
I wish for the ability to obtain a prescription for a substance, such as Nembutal, which I can self-administer at a time of need to relieve my own experience of suffering or, in the absence of an ability to self-administer, can pass on to another person who may be willing to administer it to me to relieve that suffering. Is that too much to ask? Is it too difficult to provide sufficient safeguards to enable the fulfilment of my wishes? If so, I would like to hear from anyone who would oppose my having such an ability and would explain why, because I believe human ingenuity is sufficient to maintain safeguards that would prevent any misuse of such a process.
I do not wish any other person to make those decisions for me against my express will. Why is that not my right?
Georgina Byrne offered some ideas and advice for finding happiness in later life (although younger people may be wise to take it all to heart):
Interesting and rather sad article, Mark. Could it have something to do with the concept that having the wherewithal to indulge oneself in food, drink, endless entertainment and the acquisition of “stuff” is the main, if not the only, aim in life? Those who are happiest in old age, or pretty much at any time, it seems to me, are those who retain some kind of connection with nature… gardening, fishing, bush walking, small-scale farming or even just walking the dog in a park (dog ownership itself being a connection with nature). I wonder if recreational swimmers and surfers are amongst those resistant to the dangers you’re concerned about? Green and blue have tremendous healing and calming properties associated with them. People in inner city areas tend to behave much better if they have access to attractive outdoor green spaces, especially if they’re able to interact with others in them.

Music making is another very positive activity, it seems to me, especially in company. Those who form or join choirs, bands or groups and play/sing for fun and pleasure are probably OK too. None of these things is connected with the always stressful business of profit making/taking or “keeping up with the Joneses”. Forgetting that they are merely animals, albeit extremely clever and sociable animals, but still in need of natural environments, is a very big problem for a great many people. Maybe ageing women tend to do better because activities like gardening, engaging in creative projects and playing with children are seen as socially more appropriate for them, and because they tend to form/join supportive non-competitive groups like book clubs, sewing groups and gardening/conservation groups.
Andrew Holliday and the article’s author Brendan Gogarty had a discussion about how driving A.I. can or should minimise harm to humans, and the work of sci-fi author Isaac Asimov:
To some extent the answer we want to put in place here depends on whether we want the software to replicate the more immediate proximate human reactions (avoid the child on the bicycle, swerve and (perhaps) hit the bus), or apply that more detached utilitarian approach (hit the bicycle, but avoid the bus). I strongly suspect the proximate response will be the way to go. Not only will it ‘feel’ right (most of the time) but will probably be easier to put into practice technically.
We would do well to remember that all this has been explored, at some length, by Isaac Asimov and his laws of robotics. There were initially three of them:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, to address the very issues raised here, a fourth law (but numbered 0, to reflect a hierarchical series) was added:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

This was your ‘greatest good of the greatest number’ amendment. And the problems with it were recognized immediately:
Trevize frowned. “How do you decide what is injurious, or not injurious, to humanity as a whole?”

“Precisely, sir,” said Daneel. “In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction.” (taken from Foundation and Earth)
Which brings us back to programming the software to deal with proximate crises, sequentially (avoid the bicycle and then do the best possible with whatever follows) - just as we do. As courts (and most ethics classes addressing the trolley problem) tend to understand and accept.
P.S. The legal precedent linked to in this piece is addressing a coolly planned piece of utilitarian calculus - not a response to an immediate and unpredictable threat. To that extent it isn’t an appropriate comparison. To the extent that it reflects the difficulties for programmers - it answers its own question.
Thank you Andrew. I do love the Foundation series! I just re-read it actually. Have you read Asimov’s Robot series?
I’m not sure we’re up to Asimov’s positronic brain yet, nor the ability to make such complex hard-AI decisions.
Courts actually don’t accept the utilitarian response to the Trolley Problem. There is no necessity defence in criminal law. Hence, if there are five people aboard a stranded vessel and the only way to survive is to kill and eat one person, it’s still manslaughter or murder (depending on the circumstances). The focus is on the intent or recklessness as to harm.
As to the sequential nature of harm, an AI would ordinarily be dealing with non-linear decision trees. It must make a decision between the cyclist and those in the car; the cyclist, those in the car and the oncoming traffic; the cyclist, those in the car, the oncoming traffic and the pedestrians on the side of the road. That’s pretty complex but arises from a pretty obvious set of circumstances.
Ultimately, where a situation is predictable and will result in harm, the law imposes a duty on those who could exercise control over that harm to do so. If they don’t, or are reckless, then they will either be criminally liable (if they made a conscious/prospective decision) or negligent (if they should have made a decision but didn’t, or made an unjustified decision). That has traditionally meant that if you did nothing in the trolley problem you were, at best, liable in negligence (unlikely); but if you pulled the lever you were criminally liable for murder. Hence the trolley problem was really just a philosophical one. The law would never make someone pull the lever. My feeling is that AI programming for future harm is equivalent to pulling the lever, which means we’re faced with a real trolley problem on the road for the first time.
Thanks for the response. Interesting stuff.
P.S. Perhaps we’re all getting a bit complex here. Won’t the easiest solution simply be to include the usual tick box prior to the software install saying ‘I agree to the terms and conditions…’ which will include the occupant/owner of the vehicle being ultimately liable for whatever happens (in a footnote to subsection 37 on page 512). Sorted.
PPS. Read all of Asimov’s sf. Many years ago mind you…. :)
I think you need to author a book “Shrinkwraps for killer cars” … maybe more Philip K Dick than Asimov, but a bestseller I’m sure.
Erik Hoekstra discussed the problems he faces when trying to hire teachers and, by extension, the problems faced by students:
Where do you start? When I am interviewing prospective teachers, I ask them what is the key to good classroom management. I get a bevy of answers relating to behavioural issues. Occasionally, I get what I’m after: interesting, well planned and well executed lessons.
The key to engagement is that the curriculum needs to be relevant and well presented. It has to relate to students’ lives and it has to lead them to understandings they create for themselves.
The model of schooling that is leading to this disengagement has to change. See any of Ken Robinson’s TED talks about schools destroying creativity, about the use of standardised testing driving teachers to boring and unfulfilling lessons, and so on.
There are some schools in Australia that are attempting to address this. For example, Parramatta Marist High in Sydney uses Problem Based Learning where students are not taught subjects in separate boxes but look at real life problems and come up with solutions using learning from a wide variety of areas of study.
All in all, the solution is simple but there is too much vested in the current system and it is unlikely to change in the near future.
Suzy Gneist touched on similar issues when she shared the schooling experiences of her sons:
I too am interested in changing a system that seems to not work for so many, including the bright, creative ones.
Both my sons find school a boring chore. The oldest, completing year 12 this week, views it with sarcasm - he ticks the boxes, jumps through the hoops, yet considers the tasks set too easy, not challenging or not relevant.
My youngest is of more concern to me since he has a vivid interest in practical projects, builds anything he fancies by teaching himself skills via the internet or learning from our engineer neighbour how to use different tools. Yet, still in year 8 now, he just does not want to go to school, sit around watching someone do stuff at the front, fill out bits of paper with crosses and prescribed problems - it all seems a waste of time to him when he just wants to learn and do some more challenging projects.
It has not helped that he was identified as ‘gifted’, since this only extends to the opportunity to attend more STEM events, most of which continue to involve sitting and watching someone talk or do something!
I am concerned that he may leave school early because it just drains his creativity, although I can see him excelling in a field such as engineering, with new ideas that we sorely need for our future.
So how can we change the system to engage the students that are being failed?
And, finally, here’s my unofficial “word play of the week” comment. This week it goes to Christopher Fowler:
A relatively good article on a subject with gravitas… Thank you indeed Krzysztof ;¬)
Read a comment you thought interesting? Let me know during the week. You can leave a comment below or send me an email.