Posts Tagged HPC

Europe’s science cloud: Helix Nebula

… a consortium of leading IT providers and three of Europe's biggest research centres (CERN, EMBL and ESA) announced a partnership to launch a European cloud computing platform. 'Helix Nebula – the Science Cloud' will support the massive IT requirements of European scientists, and become available to governmental organisations and industry after an initial pilot phase.
The partnership is working to establish a sustainable European cloud computing infrastructure, supported by industrial partners, which will provide stable computing capacities and services that elastically meet demand.

Building an efficient scientific cloud infrastructure in Europe is a good thing, considering the onslaught of data from genomics, high-energy physics and satellites. But I can't quite shake off the uneasy feeling that the big-science flagship projects no longer leave any room for grassroots developments, i.e. movements like the WWW when it took off in the mid-nineties. Along these lines, I'd rather (or at least equally) see Linked Open Data (as advocated by Tim Berners-Lee for several years) being pushed forward aggressively and funded appropriately; the pay-offs are hard to overestimate. But anyway, here are some links to make up your own mind:


Speeding up molecular shape comparison

GPGPU acceleration is steadily contributing to progress in the computational life sciences, for example in rational drug design. OpenEye Scientific Software announced a performance increase of about two to three orders of magnitude (100x–1000x!) for their molecular shape comparison tool ROCS by using GPUs. With this they won the Best of Show award at the 2011 Bio-IT World Expo.

Now FastROCS processes 2 million conformations per second on a quad-Fermi box.

This enables all-vs-all shape comparisons across entire compound libraries. Below is an interview with Joe Corkery at Bio-IT World, and they also have a couple of interesting posts on their blog.
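
Under the hood, ROCS-style shape comparison typically models each heavy atom as a 3D Gaussian and scores two molecules by their Gaussian overlap volume. To illustrate why this maps so well onto GPUs (every conformer can be scored independently), here is a minimal CUDA sketch – all names and constants are my own assumptions, not OpenEye's FastROCS code, and it skips the orientation-optimisation step entirely by assuming pre-aligned conformers:

```cuda
#include <cuda_runtime.h>
#include <math.h>

#define MAX_ATOMS 64     // per-conformer atom cap, chosen for this sketch
#define ALPHA 0.836f     // shared Gaussian exponent (carbon-like), assumption
#define P     2.7f       // shared Gaussian amplitude, assumption

struct Conf {
    float x[MAX_ATOMS], y[MAX_ATOMS], z[MAX_ATOMS];
    int n;               // number of heavy atoms
};

// Gaussian overlap volume of two rigid, pre-aligned conformers:
// sum over atom pairs of P^2 * (pi/(2*ALPHA))^1.5 * exp(-ALPHA/2 * r_ij^2).
__device__ float overlapVolume(const Conf &a, const Conf &b)
{
    const float pref = P * P * powf(3.14159265f / (2.0f * ALPHA), 1.5f);
    float o = 0.0f;
    for (int i = 0; i < a.n; ++i)
        for (int j = 0; j < b.n; ++j) {
            float dx = a.x[i] - b.x[j];
            float dy = a.y[i] - b.y[j];
            float dz = a.z[i] - b.z[j];
            o += pref * expf(-0.5f * ALPHA * (dx * dx + dy * dy + dz * dz));
        }
    return o;
}

// One thread scores one database conformer against the query.
__global__ void shapeTanimoto(const Conf *query, const Conf *db, int nDb,
                              float selfQ, const float *selfDb, float *score)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nDb) return;
    float oAB = overlapVolume(*query, db[t]);
    // Shape Tanimoto in [0,1]: overlap normalised by the two self-overlaps.
    score[t] = oAB / (selfQ + selfDb[t] - oAB);
}
```

Launched as `shapeTanimoto<<<(nDb + 255) / 256, 256>>>(…)`, every core chews on its own conformer – exactly the embarrassingly parallel regime a library-vs-library screen provides; the real software additionally optimises the relative orientation of each pair before scoring.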


News from the parallel multiverse

The International Supercomputing Conference (ISC'11) in Hamburg ended just yesterday, and there is plenty to check out in terms of a video blog, social media feed, live stream etc. A lot is happening in terms of hardware development in high-performance computing (HPC), also with respect to applications in the life sciences. Probably the main headline is that there is a new no. 1: the new Japanese supercomputer K (at RIKEN) now packs more of a punch than the next five systems on the TOP500 list combined, displacing the Tianhe-1A, which took pole position last October.

The K computer system at Riken's laboratory in Kobe, west Japan. Photograph: Riken/EPA

However, some things have not changed:
* Linux is still the dominant OS
* Big Blue (IBM) still dominates the market, followed by HP and Cray
* The trend towards GPU acceleration continues (although the “K” doesn’t use them)
* Massively parallel processing (MPP) systems continue to increase their share

For more in-depth info, see http://www.hpcwire.com/


Production Release: CUDA Toolkit 4.0 and Parallel Nsight 2.0

Parallel Nsight 2.0 Released

The latest development tools for CUDA (Compute Unified Device Architecture) GPU programming have just been released by NVIDIA:

CUDA 4.0 – Features include Unified Virtual Addressing (UVA), the Thrust C++ Template Performance Primitives Library, and GPUDirect 2.0 GPU peer-to-peer communication technology. Download at www.nvidia.com/getcuda
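
To give a feel for the Thrust part: it is an STL-style template library bundled with the toolkit, so a sort-and-reduce over millions of elements on the GPU needs no hand-written kernels at all. A minimal sketch, compiled with nvcc like any other CUDA source:

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdlib>
#include <iostream>

int main()
{
    // 16M random floats on the host.
    thrust::host_vector<float> h(1 << 24);
    for (size_t i = 0; i < h.size(); ++i)
        h[i] = rand() / (float)RAND_MAX;

    // A single assignment copies host -> device.
    thrust::device_vector<float> d = h;

    // Both calls run as CUDA kernels under the hood.
    thrust::sort(d.begin(), d.end());
    float sum = thrust::reduce(d.begin(), d.end(), 0.0f);

    std::cout << "sum = " << sum << std::endl;
    return 0;
}
```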

Parallel Nsight 2.0 – Features include full support for Microsoft Visual Studio 2010, CUDA 4.0, PTX/SASS Assembly Debugging, CUDA Debugger Attach to Process, CUDA Derived Metrics and Experiments, as well as graphics performance and stability enhancements.

A comprehensive webinar series is now open for registration. For their latest hardware developments, check out this post from last week.


Molecular Dynamics Simulation for Drug Design

Comparison of X-ray structures (blue) and results of MD simulation (red) of villin (A) and FiP35 (B).

In a preview of his upcoming keynote at CHI (Cambridge Healthtech Institute) and Bio-IT World’s Eleventh Annual Structure-Based Drug Design conference, David E. Shaw, Chief Scientist of D. E. Shaw Research, talks with Bio-IT World editor Kevin Davies about a specialized supercomputer, called Anton, that has simulated the behavior of proteins for periods as long as two milliseconds. Excerpts from some of these simulations, showing events such as drugs finding their own binding sites, will be shown during his upcoming keynote address – “Millisecond-Long Molecular Dynamics Simulations of Proteins on a Special-Purpose Machine.”

As a sneak preview of the keynote, the podcast “Anton: Molecular Dynamics Simulation for Drug Design” is available for download at http://bit.ly/mutTaM; for full details see http://bit.ly/ktxLs0.

In the podcast, D. E. Shaw discusses the combination of improvements in hardware and software that enabled them to go for such long simulations – “… many of the kinds of phenomena that are most interesting from the viewpoint of drug binding take place over longer timescales than was previously possible, even on the world’s fastest supercomputers, to simulate“. The co-development of algorithms and specialised hardware results in Anton being a machine that is “so highly specialised that it wouldn’t be very useful for pretty much anything else“.
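
For readers who have never looked inside an MD code: the inner loop that Anton accelerates is conceptually tiny – evaluate forces, then advance positions and velocities by a femtosecond-scale timestep. A generic velocity-Verlet half-step written as a CUDA kernel might look like the sketch below (purely illustrative; Anton does this in custom silicon, not CUDA):

```cuda
// One thread per atom; x, v, f are float4 arrays (w component unused),
// invMass holds 1/m per atom, dt is the timestep (typically a few fs).
__global__ void halfKickDrift(float4 *x, float4 *v, const float4 *f,
                              const float *invMass, float dt, int nAtoms)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nAtoms) return;
    float c = 0.5f * dt * invMass[i];   // half-kick coefficient (dt/2)/m
    v[i].x += c * f[i].x;               // v(t+dt/2) = v(t) + (dt/2) f(t)/m
    v[i].y += c * f[i].y;
    v[i].z += c * f[i].z;
    x[i].x += dt * v[i].x;              // x(t+dt) = x(t) + dt v(t+dt/2)
    x[i].y += dt * v[i].y;
    x[i].z += dt * v[i].z;
}
// Forces are then recomputed at the new positions and a second
// half-kick completes the step.
```

The scale of the achievement lies in the step count: at a 2.5 fs timestep, one millisecond of dynamics is about 4 × 10^11 of these iterations, each preceded by a full force evaluation – hence the case for special-purpose hardware.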

The podcast links were posted on LinkedIn by James Prudhomme, Marketing Manager at Cambridge Healthtech Institute (CHI), hence this post should be marked as “advertisement” and treated (pretty much like anything else) with caution and a critical mind. On the latter, I recommend Bosco’s excellent article “Thousands of hours of Molecular Dynamics saves you minutes of a Monte Carlo calculation” and, more recently, “Purity in the atomic force-fields of molecular dynamics simulations“.

References: here is a link to the article in Science:
Science, 15 October 2010: Vol. 330, No. 6002, pp. 341–346. DOI: 10.1126/science.1187409

and to the full list of D.E. Shaw publications


News from the parallel multiverse

Using all the processing power of your computer while it is idly waiting for you is a great idea – especially when it’s used for advancing medicine and science. It has become simple, secure and fun – so here are some of the recent developments in the “parallel multiverse”:

Open-Source Software for Volunteer Computing and Grid Computing.

BOINC 6.12.26 released to public
The next version of BOINC is now ready for public use. Check the release notes and version history for details. (17 May 2011)
BoincTasks 1.00 released
Version 1.00 of BoincTasks (a Windows program for managing BOINC clients) has been released after 2 years of hard work and with the help of many volunteers. (19 May 2011)
BOINC Workshop
The 7th BOINC Workshop will be held 18-19 August 2011 in Hannover, Germany.



Speeding through the CLOUD

Video tutorial to set up your HPC environment in less than 10 minutes.

Along the lines of cloud and high-performance computing (HPC), Amazon is pushing its web services into research applications: “With Amazon Web Services businesses and researchers can easily fulfill their high performance computational requirements with the added benefit of ad-hoc provisioning and pay-as-you-go pricing.” The image above links to the tutorial on setting up an HPC environment in less than 10 minutes – demonstrating the setup of a 7-node virtual cluster and running a molecular dynamics simulation with CHARMM.

NVIDIA unveiled the Tesla M2090 GPU this week. Equipped with 512 CUDA parallel processing cores, it delivers 665 GigaFLOPS of peak double-precision performance and 178 GB/sec memory bandwidth...
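
A quick sanity check on that headline number: 512 CUDA cores × 1.3 GHz shader clock × 2 floating-point operations per fused multiply-add gives roughly 1331 GFLOPS in single precision, and Fermi executes double precision at half the single-precision rate – which lands right on the quoted ~665 GFLOPS.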

Similarly, the power of current many-core (>500 cores!) general-purpose graphics processing units (GPGPUs) can be awesome – although, of course, the code has to be adjusted to take full advantage of the architecture (see the sketch below). The release of the CUDA Toolkit 4.0 (Compute Unified Device Architecture) in April already simplified parallel programming and foreshadowed future CPU-GPU architectures. Think yesterday’s Beowulf cluster shrunk to a single card that fits into your desktop machine. Depending on the application, GPUs seem to pack at least 10x the punch of a comparable CPU; for Amber this factor seems to be about 20x, and NVIDIA has a test drive of Amber available … Simulators, start your engines! There are also a couple of standard bioinformatics applications readily available for GPUs. Just last week, they announced “New NVIDIA Tesla GPU Smashes World Record in Scientific Computation“. Allegedly, this was achieved on only 4(!) GPUs – that fits into a midi-tower under the desk. Imagine stuffing a full-height 19-inch rack with these beasts – you would not quite get to something like the Nebulae or Tianhe-1A, but you would definitely land somewhere among the TOP500 supercomputers of the world.
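
What “adjusting the code” means in practice: a GPU wants the work expressed as tens of thousands of fine-grained threads rather than a handful of fat loops. The canonical toy example is SAXPY (y = a·x + y) as a grid-stride CUDA kernel – a minimal sketch, not tuned production code:

```cuda
#include <cuda_runtime.h>

// y = a*x + y, with each thread striding over the array so that
// any grid size covers any problem size.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));   // zero-fill for a runnable demo
    cudaMemset(y, 0, n * sizeof(float));

    saxpy<<<256, 256>>>(n, 2.0f, x, y);    // 65,536 concurrent threads
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Real speedups on scientific codes then come from keeping those threads fed – coalesced memory access, shared memory, enough arithmetic per byte – which is exactly the porting work behind figures like the 20x for Amber.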


And finally in this category, Google’s App Engine (GAE) is also developing nicely, although I haven’t yet found a bioinformatics-related pet project that would motivate me to test it more thoroughly. “App Engine enables your application to scale automatically without worrying about managing machines.“ Yep, that’s the spirit.

