We read here (21-page PDF) that "On a test set of 30 latest olympiad-level problems, AlphaGeometry solves 25, outperforming the previous best method that only solves ten problems and approaching the performance of an average International Mathematical Olympiad (IMO) gold medallist." So much for the argument that AI is bad at math. To train the model, the developers needed examples of other proofs in the same domain, a dataset that is hard to come by. So they used an expert system to generate a synthetic data set containing "100 million synthetic theorems and their proofs, many with more than 200 proof steps." A language model was pretrained on this synthetic data and, working with the symbolic engine, performed nearly as well as an average IMO gold medallist. Via Rose Luckin.