The fact that HPC is on the lagging edge is a culture shock for our community. We haven't yet internalized what that means, especially in terms of leveraging the products and services being offered for the Internet. A model for us to consider is the Web Data Center. This model could give us distributed services (compute, I/O, visualization, ...) and multiple networks (data, management, ...). We could capitalize on the colocated farms and distributed networks of the Internet content providers. (No one will host their own Internet content, but everyone will host their own Intranet content.) This buys us servers, big and small, that are robust and well managed. Component compatibility, integration, and system testing would be done 'for us.' We wouldn't have to worry about installation and maintenance, and we could leverage a large professional staff. Currently we have to build our own clusters with nodes from vendor A and interconnects from vendor B; this shift could give us all of that ready-made, but we'll still have to do our own scalable software for HPC, and that software needs to be smart and adaptable.

In a perfect world, we would never buy a new machine again; rather, we would configure the machine we need for a particular application out of these piece parts. The hardware would be fast and lean, in the sense that it would have only enough smarts to tell the network what its characteristics were, plus the hooks to let the system software adapt the appropriate hardware to the given application (a sketch of this idea appears below). Alternatively, we would have a given set of hardware and the software could configure itself on that hardware for the particular application.

From another point of view, we heard that the outlook for terascale computing looked bleak: "the applications will run slower than expected, the system will fail before the applications complete, but it won't matter because we can't recover anyway." The conclusion, here again, was that we needed to concentrate on terascale software.
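To make the "configure the machine out of piece parts" idea a bit more concrete, here is a minimal sketch, in Python, of self-describing hardware plus adaptive system software that assembles a machine to fit a particular application's needs. All of the class, field, and function names here are hypothetical illustrations of the concept, not any existing system or API.

```python
# Sketch only: hypothetical names, illustrating "lean" hardware that merely
# reports its characteristics and system software that adapts hardware to
# the application.

from dataclasses import dataclass


@dataclass
class Node:
    """A piece part whose only smarts are reporting what it is."""
    name: str
    cores: int
    mem_gb: int
    net_gbps: float

    def describe(self):
        # The hook the system software uses to learn this node's characteristics.
        return {"cores": self.cores, "mem_gb": self.mem_gb, "net_gbps": self.net_gbps}


@dataclass
class AppRequirements:
    """What a particular application says it needs."""
    total_cores: int
    mem_gb_per_node: int
    min_net_gbps: float


def configure_machine(pool, req):
    """Adaptive system software: gather suitable nodes from the pool until
    the application's core count is satisfied."""
    chosen, cores = [], 0
    for node in pool:
        d = node.describe()
        if d["mem_gb"] >= req.mem_gb_per_node and d["net_gbps"] >= req.min_net_gbps:
            chosen.append(node)
            cores += d["cores"]
            if cores >= req.total_cores:
                return chosen
    raise RuntimeError("not enough suitable piece parts for this application")


if __name__ == "__main__":
    pool = [
        Node("fat-0", cores=32, mem_gb=256, net_gbps=10.0),
        Node("thin-0", cores=8, mem_gb=32, net_gbps=1.0),
        Node("thin-1", cores=8, mem_gb=64, net_gbps=10.0),
    ]
    app = AppRequirements(total_cores=40, mem_gb_per_node=64, min_net_gbps=10.0)
    machine = configure_machine(pool, app)
    print("configured machine:", [n.name for n in machine])
```

The point of the sketch is the division of labor: the hardware only describes itself, and everything adaptive lives in software, which is exactly where the terascale software effort would have to go.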