What is Grid Computing?
Grid computing is the practice of sharing tasks across multiple computers. Tasks can range from data storage to complex calculations, and they can be spread over large geographical distances. In some cases, computers within a grid are used normally and act as part of the grid only when they are idle; the grid scavenges unused cycles on any computer it can access to complete its projects. SETI@home is perhaps the best-known grid computing project, and a number of other organizations rely on volunteers offering to add their computers to a grid. Together, these computers form a virtual supercomputer. Networked computers can work on problems that were traditionally reserved for supercomputers, and such a network is more powerful than the supercomputers built in the seventies and eighties. Modern supercomputers are themselves built on the principles of grid computing, incorporating many smaller computers into a larger whole.
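To make the cycle-scavenging idea concrete, here is a minimal sketch of how a volunteer client in a SETI@home-style project might behave: it fetches work only while the host machine is idle, computes, and reports back. The server URL, endpoints, work-unit format, and idle threshold are all illustrative assumptions, not any real project's API.

```python
# Sketch of a cycle-scavenging volunteer client. SERVER, its
# endpoints, and IDLE_THRESHOLD are hypothetical, for illustration.
import json
import os
import time
import urllib.request

SERVER = "http://example.org/grid"   # hypothetical work server
IDLE_THRESHOLD = 0.2                 # 1-minute load average below this = "idle"

def machine_is_idle() -> bool:
    """Treat the host as idle when its recent CPU load is low (Unix-only call)."""
    load_1min, _, _ = os.getloadavg()
    return load_1min < IDLE_THRESHOLD

def fetch_work_unit() -> dict:
    """Ask the (hypothetical) project server for a chunk of work."""
    with urllib.request.urlopen(f"{SERVER}/work") as resp:
        return json.load(resp)

def compute(unit: dict) -> int:
    """Stand-in for the real analysis, e.g. scanning a slice of signal data."""
    return sum(unit["samples"]) % 97

def report(unit_id: str, result: int) -> None:
    """Send the result back so the server can aggregate it."""
    body = json.dumps({"id": unit_id, "result": result}).encode()
    req = urllib.request.Request(f"{SERVER}/result", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

while True:
    if machine_is_idle():
        unit = fetch_work_unit()
        report(unit["id"], compute(unit))
    else:
        time.sleep(60)  # owner is using the machine; back off
```

The key design point is that the client, not the owner, decides when to contribute: the machine serves its user normally and donates cycles only when the load check says no one needs them.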
Over the last few years, Grid Computing has evolved dramatically from its roots in science and academia, and it is currently at the onset of mainstream commercial adoption. But with the recent explosion of commercial interest in grid, we're seeing some industry confusion about what the term means. This is partially because the true definition has been muddied by the onslaught of marketing hype around the category. So what is grid? I created a three-part checklist with my colleagues back in 2002. According to this checklist, a grid:
• Coordinates resources that are not subject to centralized control. A grid integrates and coordinates resources and users that live within different control domains: for example, different administrative units of the same company, or even different companies. A grid addresses the issues of security, policy, payment, membership, and so forth that arise in these settings.
• Uses standard, open, general-purpose protocols and interfaces. A grid is built from multi-purpose protocols and interfaces that address such fundamental issues as authentication, authorization, resource discovery, and resource access. It is important that these protocols and interfaces be standard and open; otherwise we are dealing with an application-specific system.
• Delivers nontrivial qualities of service. A grid allows its constituent resources to be used in a coordinated fashion to deliver various qualities of service, so that the utility of the combined system is significantly greater than the sum of its parts.
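As a toy illustration (not part of the original checklist), the three tests can be encoded as predicates over a description of a system; the field names and example values below are assumptions made for the sketch.

```python
# Toy encoding of the three-point checklist; the System fields and
# example values are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class System:
    control_domains: int        # distinct administrative domains involved
    protocols_standard: bool    # built on standard, open, general-purpose protocols?
    qos_nontrivial: bool        # delivers coordinated qualities of service?

def is_grid(s: System) -> bool:
    """A system counts as a grid only if it passes all three tests."""
    decentralized = s.control_domains > 1   # no single center of control
    return decentralized and s.protocols_standard and s.qos_nontrivial

# A single-domain cluster fails the first test even with open
# protocols and good service; a multi-site system passes all three.
print(is_grid(System(1, True, True)))   # False: centralized control
print(is_grid(System(3, True, True)))   # True
```

The checklist is conjunctive on purpose: a cluster under one administrator, or a multi-site system glued together with proprietary interfaces, fails the definition no matter how capable it is.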
For a number of years, the economics of high-performance distributed computing have been changing dramatically. Servers and storage have continued to improve rapidly in price for performance by leveraging new innovations and manufacturing efficiencies, and the same trend has finally taken hold for bandwidth. The effect is to transform distributed computing into a competitively priced commodity. At the same time, TCP/IP has become the only networking protocol suite seriously considered, and UNIX or Linux has become the operating system of choice for scientific computing. In contrast to the commoditization of technology, skilled people have remained scarce, and concerns about security and quality of service have increased. The logical response to these changes is to move from a model based on discrete infrastructure components to a distributed computing model that fully leverages the computing capabilities of the infrastructure.
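A minimal sketch of what that shift means in practice: rather than binding each job to a specific machine, jobs are submitted to a shared pool that dispatches them to whichever worker is free. Using a local process pool to stand in for remote machines is an assumption made to keep the example self-contained; a real grid would dispatch over the network.

```python
# Sketch of pooling compute instead of binding jobs to specific
# machines. Local worker processes stand in for remote resources.
from concurrent.futures import ProcessPoolExecutor, as_completed

def job(n: int) -> int:
    """Stand-in workload: any CPU-bound computation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # The caller never names a machine; the pool decides where each
    # job runs, which is the shift this paragraph describes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(job, n) for n in (10**5, 10**6, 10**7)]
        for f in as_completed(futures):
            print(f.result())
```

The economic argument in the paragraph maps directly onto this design choice: once hardware and bandwidth are cheap commodities and skilled administrators are the scarce resource, it is cheaper to manage one pool than to manage each component separately.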