Along the lines of Cloud and High Performance Computing (HPC), Amazon is pushing its web services into research applications: “With Amazon Web Services businesses and researchers can easily fulfill their high performance computational requirements with the added benefit of ad-hoc provisioning and pay-as-you-go pricing.” The image above links to a tutorial on setting up an HPC environment in less than 10 minutes, demonstrating the setup of a 7-node virtual cluster and running a molecular dynamics simulation with CHARMM.
Similarly, the power of current many-core (>500 cores!) general-purpose graphics processing units (GPGPUs) can be awesome; of course, the code has to be adjusted to take full advantage of the architecture. The release of the CUDA (Compute Unified Device Architecture) Toolkit 4.0 in April already simplified parallel programming
and foreshadowed future CPU-GPU architectures. Think of yesterday’s Beowulf cluster shrunk to a single card that fits into your desktop machine. Depending on the application, GPUs seem to pack at least 10x the punch of a comparable CPU; for Amber, this factor seems to be about 20x. NVIDIA has a test drive of Amber available … Simulators, start your engines! There are also a couple of standard bioinformatics applications readily available for GPUs. Just last week, they announced “New NVIDIA Tesla GPU Smashes World Record in Scientific Computation”. Allegedly, this was achieved on only 4(!) GPUs, which fit into a midi-tower under the desk. Imagine stuffing a full-height 19-inch rack with these beasts: you wouldn’t quite reach something like the Nebulae or Tianhe-1A, but you would definitely land somewhere among the TOP500 supercomputers in the world.
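To give a flavor of the kind of restructuring GPU code requires, here is a minimal CUDA sketch of the classic SAXPY operation (y = a·x + y). Instead of one CPU loop over a million elements, the work is split across thousands of lightweight GPU threads, each handling a single element; the kernel name and sizes here are my own illustration, not taken from any of the packages mentioned above.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread computes exactly one element of y = a*x + y.
// This "one thread per element" pattern is what lets the code
// spread across hundreds of GPU cores.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // guard: the last block may overshoot n
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;              // ~1M elements
    const size_t bytes = n * sizeof(float);

    // Host-side input data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate on the GPU and copy the inputs over.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);       // expect 2*1 + 2 = 4

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

SAXPY is almost embarrassingly parallel, which is why the GPU speedup is so dramatic here; applications with heavy branching or irregular memory access need much more invasive rewrites to see the 10x–20x factors quoted above.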
And finally in this category, Google’s App Engine (GAE) is also developing nicely, although I haven’t yet found a bioinformatics-related pet project that would motivate me to test it more thoroughly. “App Engine enables your application to scale automatically without worrying about managing machines.” Yep, that’s the spirit.