Posts Tagged parallel
Just recently, I found this REAL bug sitting on the edge of my screen while coding – the (admittedly quite nerdy) irony of it is hard to miss. Rest assured, I ‘guided’ it away from ‘the system’ to the outside as gently as possible, resisting any impulse to squash it with the keyboard on the spot. You know the rule, “Never touch a running system”, and unfortunately double-clicking and pressing <DEL> didn’t seem to work here.
A funnier (and nerdy) take on debugging code is this video by Atlassian called “Software Bugs”, which made my morning:
“All bugs welcome! … create some buzz, … and when the spider gets here, I guess we can start talking web development”
Some more in-depth understanding of the issues involved is provided in this talk by Prof. Stephen Freund on “Stopping the Software Bug Epidemic” – he also touches on the halting problem, memory leaks and parallel code execution.
Although the talk is very informative throughout while presenting the basic issues in an entertaining way, I wonder why he didn’t mention the “Dining Philosophers Problem” – I guess deadlocks are hard to trace with automated checkers? In addition, he only refers to the (ancient) waterfall model of software engineering. Some comments on how more modern development philosophies (eXtreme Programming, agile etc.) fit into the picture would have been nice. Anyway, Happy deBugging!
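For the record, the “Dining Philosophers Problem” deadlocks in just a handful of lines, and the classic resource-ordering fix is just as short. Here is a minimal sketch in Python (the names and the particular fix are my own illustration, not anything from the talk):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]  # one fork (lock) between each pair
meals = [0] * N

def philosopher(i):
    # Naive version: everyone grabs the left fork first, then the right.
    # If all five do this at once, every fork is held and nobody can
    # proceed – a circular-wait deadlock.
    # Classic fix (resource ordering): always acquire the lower-numbered
    # fork first, which breaks the cycle.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(100):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # "eating"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher eats 100 times; no deadlock
```

Automated checkers struggle precisely because the deadlock in the naive version only appears under one unlucky interleaving of the five threads.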
The International Supercomputing Conference (ISC’11) in Hamburg ended just yesterday, and there is plenty to check out in terms of a video blog, social media feed, live stream etc. Quite a lot is happening in terms of hardware developments in High Performance Computing (HPC), also with respect to applications in the life sciences. Probably the main headline is that there is a new No. 1: the new Japanese supercomputer K (at RIKEN) now packs more of a punch than the next five systems on the TOP500 list combined, displacing the Tianhe-1A, which took pole position last October.
However, some things have not changed:
* Linux is still the dominant OS
* Big Blue (IBM) still dominates the market, followed by HP and Cray
* The trend towards GPU acceleration continues (although the “K” doesn’t use them)
* Massively parallel processing (MPP) systems continue to increase their share
For more in-depth info, see http://www.hpcwire.com/
So far, when dealing with hu-Hu-HUGE networks, the data cannot be processed in the memory of a single machine. Usually, we store the network in database tables (or similar, but worse: Excel spreadsheets) describing the nodes and edges. Then you have to implement the graph algorithm of your choice on top of this framework, which usually leads to sub-optimal performance (putting it mildly). A straightforward optimization would be, for example, that when addressing a single node, the database already loads the adjacent edges into memory (cache), so the immediate next steps do not require additional access to the disk drive. Also, you might want to distribute parts of the network across several machines. Of course, a carefully handcrafted and optimized object-relational mapping with tuned indices can do little wonders when you get it right, but the nagging thought remains that this can – and has to! – be dealt with in a better way. By now, not only bioinformaticians and Google employees feel the occasional need to crunch BIG GRAPHS.
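The edge-table-plus-cache idea above can be sketched in a few lines. A toy version in Python with SQLite standing in for the real disk-backed database (the table layout, index, and function names are my own illustration, not any particular framework):

```python
import sqlite3

# Edge table for a graph too big for memory; in-memory SQLite here
# stands in for a real on-disk database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
db.executemany("INSERT INTO edges VALUES (?, ?)",
               [(1, 2), (1, 3), (2, 3), (3, 4)])
db.execute("CREATE INDEX idx_src ON edges (src)")  # the 'tuned index'

cache = {}  # node -> adjacency list, loaded once per database hit

def neighbors(node):
    # Touching a node loads its whole adjacency list into the cache,
    # so the immediate next steps need no further disk access.
    if node not in cache:
        rows = db.execute(
            "SELECT dst FROM edges WHERE src = ? ORDER BY dst", (node,))
        cache[node] = [dst for (dst,) in rows]
    return cache[node]

print(neighbors(1))  # hits the database
print(neighbors(1))  # served from the cache
```

Even this toy shows the pain point: every graph operation has to be rephrased as SQL plus hand-rolled caching, which is exactly what dedicated graph stores try to take off your hands.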
CUDA 4.0 – Features include Unified Virtual Addressing (UVA), the Thrust C++ template performance primitives library, and GPUDirect 2.0 GPU peer-to-peer communication technology. Download at www.nvidia.com/getcuda
Parallel Nsight 2.0 – Features include full support for Microsoft Visual Studio 2010, CUDA 4.0, PTX/SASS assembly debugging, CUDA Debugger Attach to Process, CUDA derived metrics and experiments, as well as graphics performance and stability enhancements.
Using all the processing power of your computer while it is idly waiting for you is a great idea – especially when it’s used for advancing medicine and science. It’s become so simple, secure and fun – so here are some of the recent developments in the “parallel multiverse”:
BOINC 6.12.26 released to public
The next version of BOINC is now ready for public use. Check the release notes and version history for details. (17 May 2011)
BoincTasks 1.00 released
Version 1.00 of BoincTasks (a Windows program for managing BOINC clients) has been released after 2 years of hard work and with the help of many volunteers. (19 May 2011)
The 7th BOINC Workshop will be held 18-19 August 2011 in Hannover, Germany.