Can supercomputers make simulation answers more accurate?
Computer modelling now supports nearly every area of scientific research and, in many cases, is the most viable (or only) method of generating useful predictions with which to explore phenomena and test hypotheses. Given this pervasive role of computer models in scientific research, their correctness is critical to scientific advancement, perhaps even more so than other key characteristics such as ease of use, speed or cost. In short, results matter.

That correctness trumps ease, speed and cost is an interesting contention, and even as I write it I'm unsure. But consider: however simple, quick and cheap a prediction is, it can never advance science if it is wrong or, just as bad, if it is unclear how close to the truth it is. Hard-to-use, slow-to-run or expensive models affect whether it is possible to run a simulation at all; correctness affects whether it is even worth running.