Danielle Williams, Arts & Sciences at Washington University in St. Louis
Enthusiasm for the capabilities of artificial intelligence – and claims that humanlike prowess is within reach – has followed a boom-and-bust cycle since the middle of the 20th century.
People can trust each other because they understand how the human mind works, can predict people’s behavior, and assume that most people have a moral sense. None of these things are true of AI.
In the future, our computer may be able to produce long-term forecasts in areas such as climate change, bushfires and financial markets – while being cheaper and more accessible than supercomputers.
Metaphorical black boxes shield the inner workings of AIs, protecting software developers’ intellectual property. They also make it hard to understand how the AIs work – and why things go wrong.
Artificial intelligence tools are making waves in almost every aspect of life, and astronomy is no different. An astronomer explains the history and future of AI in understanding the universe.
We’ve seen AI systems write text that is indistinguishable from human writing. Some can even render impressive 3D artworks from short text prompts. But that doesn’t mean they can ‘think’ like us.
Our research on a recent Australian court case shows how experts and lawyers can overcome opaque AI technology. But regulators could make it even easier by requiring AI companies to document their systems.
New software that can generate images and text on command may deliver ‘good enough’ creativity in advertising, copywriting, stock imagery and graphic design.
Emotions play a key role in many types of spontaneous thoughts. Even microemotions – which are often fleeting and unconscious – can affect thoughts and influence attention.
Understanding when and how neurons die is an important part of research on neurodegenerative diseases such as Lou Gehrig’s, Alzheimer’s and Parkinson’s.