I'm studying for a DPhil in Engineering Science with Mike Osborne at Oxford. My research concerns the expected behavior of generally intelligent artificial agents. I am interested in designing agents that we can expect to behave safely.
I started studying AI safety after reading Nick Bostrom's Superintelligence, which convinced me that the default outcome of creating an agent smarter than we are is the extinction of biological life. I then completed a master's in computer science with Marcus Hutter at the Australian National University, where I studied the actual mathematics behind general intelligence. That study has only deepened my concern: as of 2019, of the handful of published algorithms for general intelligence (all far too slow to run), every one would attempt to gain arbitrary power over humans. There is little reason to think that fast approximations of these algorithms will be safe by default.