If you had the opportunity to vote for a politician you totally trusted, who you were sure had no hidden agendas and who would truly represent the electorate’s views, you would, right?
What if that politician was a robot? Not a human with a robotic personality, but a real artificially intelligent robot.
Faith in mainstream politics is clearly under strain, but this is not to say that people have lost interest in politics and policy-making. On the contrary, evidence of growing engagement in non-traditional politics suggests people remain politically engaged but have lost faith in traditional party politics.
More specifically, voters increasingly feel the established political parties are too similar and that politicians are preoccupied with point-scoring and politicking. Disgruntled voters typically feel the big parties are beholden to powerful vested interests and in cahoots with big business or trade unions, and that, as a result, their vote will not make any difference.
Another symptom of changing political engagement (rather than disengagement) is the rise of populist parties with a radical anti-establishment agenda and growing interest in conspiracy theories, theories which confirm people’s hunch that the system is rigged.
The idea of self-serving politicians and civil servants is not new. This cynical view has been popularised by television series such as the BBC’s Yes Minister and the more recent US series House of Cards (and the original BBC series).
One alternative is to design policy-making systems in such a way that policy-makers are sheltered from undue outside influence. In so doing, so the argument goes, a space will be created within which objective scientific evidence, rather than vested interests, can inform policy-making.
At first glance this seems worth aspiring to. But what of the many policy issues over which political opinion remains deeply divided, such as climate change, same sex marriage or asylum policy?
Policy-making is and will remain inherently political and policies are at best evidence-informed rather than evidence-based. But can some issues be depoliticised and should we consider deploying robots to perform this task?
Those focusing on technological advances may be inclined to answer “yes”. After all, complex calculations that would have taken years to complete by hand can now be solved in seconds using the latest advances in information technology.
Such innovations have proven extremely valuable in certain policy areas. For example, urban planners examining the feasibility of new infrastructure projects now use powerful traffic modelling software to predict future traffic flows.
Those focusing on social and ethical aspects, on the other hand, will have reservations. Technological advances are of limited use in policy issues involving competing beliefs and value judgements.
A fitting example would be euthanasia legislation, which is inherently bound up with religious beliefs and questions about self-determination. We may be inclined to dismiss the issue as exceptional, but this would overlook the fact that most policy issues involve competing beliefs and value judgements, and from that perspective robot politicians are of little use.
A supercomputer may be able to make accurate predictions of numbers of road users on a proposed ring road. But what would this supercomputer do when faced with a moral dilemma?
Most people will agree that it is our ability to make value judgements that sets us apart from machines and makes us superior. But what if we could program agreed ethical standards into computers and have them take decisions on the basis of predefined normative guidelines and the consequences arising from those decisions?
If that were possible, and some believe it is, could we replace our fallible politicians with infallible artificially intelligent robots after all?
The idea may sound far-fetched, but is it?
Robots may well become part of everyday life sooner than we think. For example, robots may soon be used to perform routine tasks in aged-care facilities and to keep elderly or disabled people company, and some have suggested robots could be used in prostitution. Whatever opinion we may have about robot politicians, the groundwork for this is already being laid.
A recent paper showcased a system that automatically writes political speeches. Some of these speeches are believable and it would be hard for most of us to tell if a human or machine had written them.
Politicians already use human speech writers so it may only be a small step for them to start using a robot speech writer instead.
The same applies to policy-makers responsible for, say, urban planning or flood mitigation, who make use of sophisticated modelling software. We may soon be able to take humans out altogether and replace them with robots that have the modelling software built in.
We could think up many more scenarios, but the underlying issue will remain the same: the robot would need to be programmed with an agreed set of ethical standards allowing it to make judgements on the basis of agreed morals.
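To make the underlying issue concrete, here is a purely illustrative sketch (not from any real system) of what "judgements on the basis of agreed morals" might look like in code: options are scored by predicted benefit, but only after passing hard normative rules chosen in advance. Every rule, option and weight here is a hypothetical assumption.

```python
# Toy sketch of a "robot policy-maker": score policy options by predicted
# benefit, but reject outright any option that violates a hard ethical rule.
# All rules, options and numbers below are hypothetical illustrations.

def score_option(option, rules, weight=0.5):
    """Return a score combining rule compliance with predicted benefit."""
    # Any option failing a hard rule is rejected outright.
    if any(not rule(option) for rule in rules):
        return float("-inf")
    return weight * option["predicted_benefit"]

# Hypothetical hard rules that humans would have to agree on in advance.
rules = [
    lambda o: not o["harms_minority"],  # no option may harm a minority group
    lambda o: o["within_budget"],       # the option must be affordable
]

options = [
    {"name": "ring road", "predicted_benefit": 0.8,
     "harms_minority": False, "within_budget": True},
    {"name": "toll road", "predicted_benefit": 0.9,
     "harms_minority": True, "within_budget": True},
]

best = max(options, key=lambda o: score_option(o, rules))
print(best["name"])  # the rule-compliant option wins despite lower benefit
```

Even this toy example exposes the article's point: the rules themselves (what counts as "harm", which weight to use) cannot come from the machine, and must still be agreed by humans.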
The human input
So even if we had a parliament full of robots, we would still need an agency staffed by humans charged with defining the ethical standards to be programmed into the robots.
And who gets to decide on those ethical standards? Well, we would probably have to put that to a vote among various interested and competing parties.
This brings us full circle, back to the problem of how to prevent undue influence.
Advocates of deliberative democracy, who believe democracy should be more than the occasional stroll to a polling booth, will shudder at the prospect of robot politicians.
But free-market advocates, who are more interested in lean government, austerity measures and cutting red tape, may be more inclined to give it a go.
The latter appear to have gained the upper hand, so the next time you hear a commentator refer to a politician as being robotic, remember that maybe one day some of them really will be robots!