When algorithms are at work, there should be a human safety net to prevent harm to people. Artificial intelligence systems can be taught to ask for help.
Algorithms used by social media networks expose users to divisive content, separating them into bubbles. And the way these algorithms are trained amplifies the filter-bubble effect.
What do the Carlos Ghosn scandal, the rising power of algorithms and the “gilets jaunes” have in common? The need to extend the spatial and temporal definitions of responsibility.
New research suggests media organisations that rely on Facebook to build audience are trapped in an attention economy that delivers traffic but no money.
If you check a website multiple times for a flight on a specific date, the seller might assume this is the only date you’re interested in and increase the price on offer.
A legal loophole could grant computer systems many of the legal rights people have – threatening human rights and dignity, and setting up some real legal and moral problems.
Luc Meunier, Grenoble École de Management (GEM) and Sima Ohadi, Grenoble École de Management (GEM)
Automated portfolio-allocation software can provide financial planning services tailored to clients’ financial situations and future goals. But can it help investors make more rational decisions?
The Achilles’ heel of legal technologies: training. Only 10% of such initiatives are aimed at law students, so how should this gap be addressed to win the AI race?
Gola Romain, Institut Mines-Télécom Business School
Large-scale data collection and analysis can be used to target consumer behaviour. Given the risk of abuses, the transparency and ethics of algorithms become paramount.