A recent open letter calling for a temporary hiatus in artificial intelligence development is more concerned with hypothetical future risks than with the issues right in front of us.
Longtermism may be derided for focusing on implausible sci-fi scenarios of space colonisation and robot apocalypse, but it raises philosophical questions that are hard to dismiss.
Your great-grandchildren are powerless in today’s society, but the things we do now shape their lives, for better or worse. What happens when we take them into account in the decisions we make today?