Good honest reflection of what goes wrong with predictions, with a nod to Nate Silver. One thing that struck me in the column was this criticism of Silver's methodology: "O'Neil argues that Silver assumes that 'the only goal of a modeler is to produce an accurate model,' something that might hold true for some topics — topics in which Silver happens to have expertise, like baseball, gambling, and polling — but that doesn't hold true for other areas he covers in his book, including medical research and financial markets."
But from my perspective, this is just the same as saying that medical research and financial markets are unscientific, because that's how science is practised these days: we don't make one-off predictions, we make models. This isn't just metaphorically true; it's literally true. When I tinker with gRSShopper (which I am constantly doing) I'm making a model of what I think learning looks like. And you'll find models deeply embedded in medical and financial sciences. Now these models are not perfect: they are not reality, and we cannot draw inferences from them alone (the evidentiary basis for science is still evidence; that was the core proposition in my Master's thesis, Models and Modality). And there is the intractable problem of selecting between models. But if you're not making models, you're not doing science.