Parallel Computing

The present evolution of commodity hardware and the increase in computing-intensive applications in CBC call for a strengthened focus on parallel and high-performance computing. Three activities will be pursued. First, we must continue experimenting with new architectures, such as multicore systems interacting with graphics cards, with an emphasis on the algorithms arising when solving partial differential equations. International collaboration with Scott Baden (UCSD), Garth Wells (Cambridge), and David Ham (Imperial College) will be important in this regard, as will a more strategic extension of the parallel computing activity at our host institution.

Second, we need to improve the parallel performance of the FEniCS libraries and enable GPU computing. This means taking the most promising results from the experimental work with new architectures and generalizing the approaches so that the FEniCS form compilers can generate a variety of hardware- and problem-specific code. The core idea of FEniCS, namely compiling the definition of a PDE problem, expressed in Python UFL code, into specialized high-performance C++ code, puts FEniCS in an excellent position to become the tool that gives users trivial access to the most modern hardware and the associated efficient algorithms. Realizing this potential is resource-consuming, but it would yield an extremely useful service to the international computational science community.
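
To illustrate this workflow, the sketch below states a Poisson problem in the high-level Python/UFL syntax of the DOLFIN interface; when it is run, the form compiler generates and just-in-time compiles specialized C++ code for the two variational forms behind the scenes. The mesh resolution, right-hand side, and boundary condition are arbitrary choices made only for the example.

    # Minimal sketch using the DOLFIN/UFL Python interface: the Poisson
    # problem -div(grad(u)) = f is stated at the PDE level; the FEniCS
    # form compiler turns the forms a and L into specialized C++ code
    # at run time.
    from dolfin import *

    mesh = UnitSquareMesh(32, 32)                # example resolution
    V = FunctionSpace(mesh, "Lagrange", 1)

    u = TrialFunction(V)
    v = TestFunction(V)
    f = Constant(1.0)                            # example right-hand side

    a = inner(grad(u), grad(v))*dx               # bilinear form in UFL
    L = f*v*dx                                   # linear form in UFL

    bc = DirichletBC(V, Constant(0.0), "on_boundary")
    uh = Function(V)
    solve(a == L, uh, bc)                        # generated C++ kernels do the work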

Third, the general overlapping and non-overlapping domain decomposition algorithms described in the original proposal have yet to be realized in a generic Python framework that makes it easy to apply such algorithms in any FEniCS-based PDE solver. We plan to continue this work, since these algorithms represent an effective alternative for parallelizing PDE solvers, as well as a way of handling multi-physics and multi-numerics problems.
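
Since no such generic framework exists yet, the sketch below only illustrates the core idea of an overlapping method, a classical multiplicative (alternating) Schwarz iteration, on a small 1D Poisson system assembled directly with NumPy; the grid size, the two subdomain index ranges, and the tolerance are arbitrary example parameters, and the code is not tied to FEniCS.

    # Illustrative sketch (not an existing FEniCS framework): classical
    # overlapping multiplicative (alternating) Schwarz iteration for a
    # 1D Poisson problem discretized with finite differences.
    import numpy as np

    n = 100                                      # interior grid points (example)
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n))
         + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1)) / h**2
    b = np.ones(n)                               # example right-hand side

    # Two overlapping subdomains sharing 20 grid points (example choice).
    subdomains = [np.arange(0, 60), np.arange(40, n)]

    u = np.zeros(n)
    for sweep in range(100):
        for idx in subdomains:
            r = b - A @ u                        # current global residual
            A_local = A[np.ix_(idx, idx)]        # restriction to the subdomain
            u[idx] += np.linalg.solve(A_local, r[idx])  # local solve and update
        if np.linalg.norm(b - A @ u) <= 1e-10 * np.linalg.norm(b):
            break

    print(sweep + 1, "sweeps, residual", np.linalg.norm(b - A @ u))

In a full framework, each local solve would instead be carried out by an independent FEniCS solver on a subdomain mesh, with the overlap data exchanged between processes rather than through a shared array as in this sketch.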