Record-breaking technology can sequence an entire human genome in a matter of hours. The work could be a lifeline for people suffering from the more than 5,000 known rare genetic diseases.
Social media platforms have enabled wildlife traders to connect as never before. Some operate legally, within the boundaries of international laws. Others are less scrupulous.
The increasing use of artificial intelligence and machine learning in public decision making is raising critical issues around fairness and human rights.
A UK controversy about school leavers’ marks shows algorithms can get things wrong. To ensure algorithms are as fair as possible, how they work and the trade-offs involved must be made clear.
Once analysts gain access to our private data, they can use that information to influence and alter our behaviour and choices. If you’re marginalized in some way, the consequences are worse.
You have evolved to tap into the wisdom of the crowds. But on social media, your cognitive biases can lead you astray, something organized disinformation campaigns count on.
Peng Zhang, The Rockefeller University and Yuzong Chen, National University of Singapore
Many features of proteins are analogous to music. Mapping these features together creates new musical compositions that help researchers learn about proteins.
Applications of artificial intelligence have been shown to include discriminatory practices. This creates a need for meaningful rights-based regulations to ensure that AI will not exacerbate inequalities.
Researchers have long tried to unravel the puzzle of Jan van Eyck’s use of perspective in his masterpiece, the Arnolfini Portrait. New research suggests he may have had help from a novel machine.
Politicians of all stripes, computer professionals and even big-tech executives are calling on government to hit the brakes on using these algorithms. The feds are hitting the gas.
Robots are more likely than people to misclassify emotions when reading faces that are partially covered. This could lead to unexpected behaviours when they interact with people wearing masks.