Why is the “auto-scaling” in Pervasive DataRush so important?
Traditional notions of “scaling” usually mean using parallelism to handle ever-larger data volumes, shrinking time windows, or both. The important point about the status quo is that most traditional solutions built on parallel techniques are notoriously “brittle”: they were hand-engineered by rare and expensive concurrent-programming specialists with detailed knowledge of the hardware configuration current at the time. While the result is highly optimized for that specific scenario and configuration, in almost all cases it will NOT automatically scale as additional hardware resources are thrown at the problem.

This can be very frustrating for users and buyers, who go through a fairly painless hardware expansion only to find that the software application or solution does not scale to the new hardware and instead has to be “tuned” at length, or in some cases rewritten altogether. Of course, with the gathering speed of advanc
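To make the contrast concrete, here is a minimal sketch in plain Java (DataRush itself is a Java dataflow framework, but this example deliberately does not use the DataRush API). The class name `ScalingSketch`, the hard-coded `THREADS` constant, and the two summing methods are hypothetical illustrations: the first wires the degree of parallelism to an assumed four-core box, so adding cores later buys nothing; the second discovers the available parallelism at run time, which is the spirit of auto-scaling.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.*;

public class ScalingSketch {

    // Brittle approach: parallelism hard-wired to the hardware the code was
    // originally tuned for. Adding CPUs later does not make this run faster.
    static long sumBrittle(List<Long> values) throws Exception {
        final int THREADS = 4; // assumed "current-at-the-time" configuration
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        try {
            int chunk = (values.size() + THREADS - 1) / THREADS;
            List<Future<Long>> parts = new java.util.ArrayList<>();
            for (int i = 0; i < values.size(); i += chunk) {
                List<Long> slice = values.subList(i, Math.min(i + chunk, values.size()));
                parts.add(pool.submit(() -> slice.stream().mapToLong(Long::longValue).sum()));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    // Auto-scaling approach: the degree of parallelism is discovered at run
    // time, so the same binary uses whatever cores the machine actually has.
    static long sumAutoScaling(List<Long> values) {
        return values.parallelStream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) throws Exception {
        List<Long> data = LongStream.rangeClosed(1, 1_000_000)
                .boxed().collect(Collectors.toList());
        System.out.println("Cores available:  " + Runtime.getRuntime().availableProcessors());
        System.out.println("Brittle sum:      " + sumBrittle(data));
        System.out.println("Auto-scaling sum: " + sumAutoScaling(data));
    }
}
```

Run on a 4-core machine and a 32-core machine, the first method performs identically on both, while the second automatically puts the extra cores to work; that, in miniature, is the difference the auto-scaling claim is about.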