Methods stemming from decades of research on disordered materials are used to describe algorithmic phase transitions, and to design new algorithms in machine-learning problems.
A man was recently sentenced to six years in prison, at least in part on the recommendation of a private company's secret proprietary software.
Crime data reflect only the crimes police identify – not all the crimes that occur. So decisions based on crime data are necessarily biased and incompletely informed.
The Productivity Commission’s report on data availability and use is disappointing for consumers, who won't be able to stop firms collecting their data or challenge automated decisions made using it.
Relying less on fossil fuels is one of the key challenges of energy transition, and taking weather variations into account can help increase the overall efficiency of a renewable-energy system.
Algorithms can have enormous consequences on people's lives, yet a federal law prevents us from studying whether they may be biased, unfair or discriminatory.
A European Union law will require human-understandable explanations for algorithms' decisions. A team of researchers has found a way to provide that, even for complex calculations.
How do you know your search results or social media feeds aren't being manipulated for political purposes? It's not a crime to do so. But we believe it should be.
Up to 50% of people who take the antiretroviral efavirenz react particularly badly to it and need to change drug regimens.
The ethics and psychology of trust suggest ways we might learn to understand self-driving cars, but also show why doing so might be more challenging than we expect.
Politicians want to regulate the software that decides if we get a loan or a job, but existing laws can already protect us – if we know how to use them.
If the site is increasingly where people are getting their news, what could the company do without taking up the mantle of being a final arbiter of truth?
Algorithms that learn from large data sets can pick up inherent social biases. That could perpetuate the biases, or even worsen them.
Business Briefing: trusting an algorithm with investment decisions.
Financial advice was once the realm of bankers and brokers; now startups are developing digital platforms to take advantage of how trusting we are of investment advice from computers.
Algorithms can discriminate, even when their designers don't intend that to happen. But they also can make detecting bias easier.
Machine learning is being used to see if it's possible to predict whether someone will commit a crime some time in the future. But does this risk condemning people for a crime they haven’t committed?
Data-driven algorithms drive decision-making in ways that touch our economic, social and civic lives. But they contain inherent biases and assumptions that are too often invisible to the public.
Software is eating everything in this online, digital world. We need to design code that uses as little energy as possible.
Math isn’t prejudiced, goes the argument. But these programs can learn bias from the data fed into them by human beings, leading to unfair treatment and discrimination.
Humans are no longer the only judges of creativity. Computers can perform the same task – and may even be more objective.