From open letters to congressional testimony, some AI leaders have stoked fears that the technology is a direct threat to humanity. The reality is less dramatic but perhaps more insidious.
The flood of misinformation on social media could actually be worse than many researchers have reported. The problem is that many studies analyzed only text, leaving visual misinformation uncounted.
The government faces legal restrictions on how much personal information it can gather on citizens, but the law is largely silent on agencies purchasing the data from commercial brokers.
Dramatic improvements in computing, sensors and submersible engineering are making it possible for researchers to ramp up data collection from the oceans while also keeping people out of harm’s way.
Visual artists draw from visual references, not words, as they imagine their work. So when language is in the driver’s seat of making art, it erects a barrier between the artist and the canvas.
The Saudi government is using digital technology to help the hajj run smoothly and safely – the latest chapter in a 200-year history of technology shaping the pilgrimage.
If you drop advanced AI into a dumb organisation, it won’t make it smart. It will just help the organisation do dumb stuff more efficiently – in other words, faster.
Public comment could soon swamp government officials and representatives, thanks to AI, but AI could also help spot compelling stories from constituents.
Large language models have been shown to ‘hallucinate’ entirely false information, but aren’t humans guilty of the same thing? So what’s the difference between the two?
Antitrust suits against Google for its advertising practices center on the technology for buying and selling online ads. A computer scientist explains how these ad networks work.