
SUPERCOMPUTER

Introduction:

     A supercomputer is an extremely fast computer that can perform hundreds of millions of instructions per second and is used for jobs that require massive amounts of calculation, such as weather forecasting, engineering design, nuclear simulations, and animation. It has greater processing power than other computers of its day; supercomputers typically use more than one processor core and are housed in large clean rooms with high air or water flow to permit cooling.

Characteristics:
• Is the fastest, largest, and most powerful type of computer
• Is one of the most powerful computers available in the world at a given time, with vast capabilities
• Has enormous processing capacity and is built with several multiprocessors
• Is the most expensive type of computer designed
• Can perform the same operation on all the items in a vector at once (see the sketch after this list)
• Relies on the best available hardware, systems software, and applications software
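
Below is a minimal sketch in C of the "same operation on all the items in a vector at once" idea. The OpenMP simd directive is one common way to ask the compiler for vector instructions; the function name, array names, and the scaled-add operation are illustrative assumptions, not taken from any specific machine.

 #include <stddef.h>
 
 /* Minimal sketch: apply one operation (a scaled add) to every element
  * of two vectors.  The "omp simd" hint asks the compiler to emit
  * vector (SIMD) instructions so several elements are processed per
  * instruction; the loop body is illustrative only. */
 void scaled_add(double *restrict y, const double *restrict x,
                 double a, size_t n)
 {
     #pragma omp simd
     for (size_t i = 0; i < n; i++) {
         y[i] = a * x[i] + y[i];   /* same operation on all items */
     }
 }

Compiled with, for example, gcc -O2 -fopenmp-simd, the loop can be turned into vector instructions.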

Why We Need Supercomputers:
• To be at the frontline of processing capacity, particularly speed of calculation, for high-level scientific research
• To run intensive number-crunching programs such as weather forecasting or high-quality graphics

Problems Supercomputers Must Solve:
• A supercomputer generates large amounts of heat and must be cooled; cooling is a major problem for most supercomputers.
• Information moving between two parts of a supercomputer incurs latency, which must be kept as low as possible.
• A supercomputer consumes and produces massive amounts of data in a very short period of time, so storage and I/O must keep pace.

Structure of a supercomputer

Overview
• Supercomputer applications
• Architectural constraints
• Generic structure
• Network types
• Implications for programmers

Supercomputer applications

Examples:
• Communications intelligence
• Meteorology
• Nuclear explosion simulations
• Computational fluid dynamics

     Common characteristics

• Lots of numbers to crunch
• Large, monolithic datasets

Architectural Constraints:
• Need multiple processors in order to get adequate performance
• Cannot fit all the required memory on a single memory bus (see the sketch below)
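
The sketch below shows one common way to work around the memory-bus constraint: split the dataset so each process owns only a local slice and combine partial results at the end. It uses standard MPI calls; the global size and the local work (a simple sum) are illustrative assumptions.

 #include <mpi.h>
 #include <stdio.h>
 #include <stdlib.h>
 
 /* Minimal sketch: a dataset too large for one node's memory is split
  * so each process holds only its local slice.  The global size and
  * the per-element work are illustrative only. */
 int main(int argc, char **argv)
 {
     MPI_Init(&argc, &argv);
 
     int rank, nprocs;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
 
     const long global_n = 100000000L;        /* illustrative global size */
     long local_n = global_n / nprocs;        /* each process owns a slice */
     double *slice = malloc(local_n * sizeof(double));
 
     double local_sum = 0.0;
     for (long i = 0; i < local_n; i++) {
         slice[i] = 1.0;                      /* stand-in for real data */
         local_sum += slice[i];
     }
 
     /* Combine the per-process results into one answer on rank 0. */
     double global_sum = 0.0;
     MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
 
     if (rank == 0)
         printf("global sum = %f\n", global_sum);
 
     free(slice);
     MPI_Finalize();
     return 0;
 }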

Generic structure

Network types


1. In most large computers, processors spend most of their time waiting for operands.
2. Goal: get the operands to the processor as soon as possible, which requires:
• High bandwidth
• Low latency
• Minimum number of hops
A back-of-the-envelope model of these three factors follows.
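
The small C program below models transfer time as (latency per hop × hops) + (bytes / bandwidth), showing why a tiny operand is dominated by latency and hop count while a large block is dominated by bandwidth. The numbers are illustrative assumptions, not measurements of any real machine.

 #include <stdio.h>
 
 /* Back-of-the-envelope model: time to move a message across the
  * network is (latency per hop * hops) + (bytes / bandwidth).
  * All figures below are assumed values for illustration. */
 int main(void)
 {
     double latency_per_hop = 1e-6;   /* 1 microsecond per hop (assumed) */
     double bandwidth       = 1e9;    /* 1 GB/s link bandwidth (assumed) */
     int    hops            = 3;      /* switches between the two nodes  */
 
     double t_small = hops * latency_per_hop + 8.0 / bandwidth;   /* one 8-byte operand */
     double t_large = hops * latency_per_hop + 1e6 / bandwidth;   /* a 1 MB block       */
 
     printf("8-byte operand: %.2e s (latency and hops dominate)\n", t_small);
     printf("1 MB block:     %.2e s (bandwidth dominates)\n", t_large);
     return 0;
 }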

Implications for programmers

1. Latency to global memory is significant
• Code should be structured to make efficient use of global memory accesses
2. Programming is necessarily concurrent, shared-memory, and multi-process
• Use appropriate protection
• Use process synchronization points
• Minimize communication overhead
• Use shared memory rather than message passing wherever possible
A minimal shared-memory sketch follows this list.
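
The sketch below illustrates the shared-memory style recommended above: threads work concurrently on one shared array, and an OpenMP reduction serves as the synchronization point, so no explicit locking or message passing is needed. The array size and contents are illustrative assumptions.

 #include <stdio.h>
 #include <omp.h>
 
 /* Minimal shared-memory sketch: threads concurrently sum parts of one
  * shared array.  The reduction clause is the synchronization point;
  * size and contents are illustrative only. */
 #define N 1000000
 
 static double data[N];
 
 int main(void)
 {
     for (long i = 0; i < N; i++)
         data[i] = 1.0;               /* stand-in for real data */
 
     double sum = 0.0;
     #pragma omp parallel for reduction(+:sum)
     for (long i = 0; i < N; i++)
         sum += data[i];              /* each thread keeps a private partial
                                         sum; OpenMP combines them at the end */
 
     printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
     return 0;
 }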


