Can CUDA Language Open Up Parallel Processing?
EE Times Europe (04/30/08) Holland, Colin

More universities ought to offer courses in programming for massively parallel computing, and more graphics processor (GPU) vendors should consider supporting NVIDIA's Compute Unified Device Architecture (CUDA) programming language on their devices, said NVIDIA chief scientist David Kirk in a lecture at Imperial College, London. "Massively parallel computing is an enormous change and it will create drastic reductions in time-to-discovery in science because computational experimentation is a third paradigm of research–alongside theory and traditional experimentation," he said, adding that massively parallel computing also has the potential to effect a democratization of supercomputing. Education must emphasize massively parallel computing for every scientific practitioner, not just computer scientists and electrical engineers, Kirk stressed. NVIDIA created CUDA to operate on its own GPUs, and the language has gained a l
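For readers unfamiliar with CUDA, the sketch below (not from the article; the kernel name and sizes are illustrative assumptions) shows the programming model Kirk is referring to: a single kernel function launched across thousands of GPU threads, each handling one element of an array.

#include <cuda_runtime.h>

// Kernel: each GPU thread scales one array element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                          // one million elements (illustrative)
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);  // thousands of threads run in parallel
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}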