In their quest for AGI safety, researchers are turning to mathematical proof

Stephen Downes

Knowledge, Learning, Community

This post summarizes what is quite a good paper (17 page PDF) by Max Tegmark and Steve Omohundro called "Provably Safe Systems: The Only Path to Controllable AGI". As the authors note, a strong approach is needed not because AI is inherently dangerous but because potential misuse by humans will likely override weaker safeguards. But there's an approach that can be followed: "Before worrying about how to formally specify complex requirements such as 'don't drive humanity extinct', it's worth noting that there's a large suite of unsolved yet easier and very well-specified challenges" such as provable cybersecurity, a secure blockchain, secure privacy, and secure critical infrastructure. The approach (as I read it) focuses less on the AI itself and more on the AI's output, for example, by creating a specification that generated code must satisfy and then generating a proof that the code meets that specification. Via Martin Dougiamas.
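To make the spec-plus-proof idea concrete, here is a minimal sketch in Lean, not taken from the paper: the names `double` and `double_is_even` are hypothetical stand-ins, where `double` plays the role of generated code, and the theorem is a machine-checked proof that it meets a stated specification ("the output is always even").

```lean
-- Toy illustration of provably safe generated code (names are invented):
-- the "generated" function is `double`, and the specification is that
-- its result is always even.
def double (n : Nat) : Nat := n + n

-- The specification, with a machine-checked proof: for every n,
-- double n equals 2 * k for some k (here k = n).
theorem double_is_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by unfold double; omega⟩
```

In this scheme, a verifier only needs to check the proof against the specification, not trust the AI that produced the code.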



Stephen Downes Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 22, 2024 03:30 a.m.

Creative Commons License.
