Current Projects

Pericles (2013-2017): Promoting and Enhancing Reuse of Information throughout the Content Lifecycle taking account of Evolving Semantics (Pericles) is an integrated project in which academic and industrial partners have come together to investigate the challenge of preserving complex digital information in dynamically evolving environments, to ensure that it remains accessible and useful for future generations. Within the project we address contextuality and scalability. Contextuality refers to a probabilistic framework that considers the broader and narrower context of the data within a quantum-like formulation, whereas scalability refers to executing the algorithms on massive data sets using heterogeneous accelerator architectures. Funded by Framework Programme 7.
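
As a rough sketch of what a quantum-like contextual probability can look like, the example below treats a context as a projector onto a subspace and scores a data item by the Born rule. The vectors, the projector construction, and the helper names are illustrative assumptions, not the project's actual framework.

```python
import numpy as np

def context_projector(context_vectors):
    """Projector onto the subspace spanned by the given context vectors."""
    # Orthonormalize the span, then form the projector q q^T.
    q, _ = np.linalg.qr(np.column_stack(context_vectors))
    return q @ q.T

def contextual_probability(item, projector):
    """Born-rule probability that `item` is compatible with the context."""
    item = item / np.linalg.norm(item)
    return float(item @ projector @ item)

# Toy example: a broader context (two basis directions) vs. a narrower one.
broad = context_projector([np.array([1.0, 0, 0]), np.array([0, 1.0, 0])])
narrow = context_projector([np.array([1.0, 0, 0])])
item = np.array([1.0, 1.0, 0.0])
print(contextual_probability(item, broad))   # 1.0: fully within the broad context
print(contextual_probability(item, narrow))  # 0.5: partially within the narrow one
```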

CHiP-SL (2013-2014): Big data calls for scalable algorithms, but scalability is only one aspect of the problem. Many applications also require the speedy processing of large volumes of data. Examples include supporting financial decision making, advanced services in digital libraries, mining medical data from magnetic resonance imaging, and analyzing social media graphs. The velocity of machine learning is often boosted by deploying GPUs or distributed algorithms, but rarely both. We are developing high-performance supervised and unsupervised statistical learning algorithms that are accelerated on GPU clusters. Since the cost of a GPU cluster is high and deployment is far from trivial, the project Cloud for High-Performance Statistical Learning (CHiP-SL) enables the verification, rapid dissemination, and quick adaptation of the algorithms being developed. Funded by Amazon Web Services.
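
The data-parallel pattern behind such algorithms can be sketched as follows, with NumPy standing in for GPU kernels and a plain loop standing in for the cluster; the shard layout, learning rate, and helper names are illustrative assumptions rather than the project's implementation.

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on one worker's shard (GPU kernel stand-in)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.1, steps=100):
    """Synchronous data-parallel training: average the per-shard gradients."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]  # parallel on a cluster
        w -= lr * np.mean(grads, axis=0)
    return w

# Toy setup: four shards of a synthetic regression problem.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):
    X = rng.normal(size=(256, 3))
    shards.append((X, X @ true_w + 0.01 * rng.normal(size=256)))
print(distributed_gd(shards, dim=3))  # approaches [1.0, -2.0, 0.5]
```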

Approximating the Ground State of a Many-Particle Quantum System with Semi-Definite Relaxations (2013): Identifying the ground state of a many-particle system whose interactions are described by a Hamiltonian is an important problem in quantum physics. During the last decade, different relaxations of this Hamiltonian minimization problem have been proposed. Interestingly, they provide lower bounds on the ground-state energy, complementing the upper bounds that are obtainable using variational methods. These algorithms can be understood as the lower levels of a general hierarchy of semi-definite programming (SDP) relaxations for non-commutative polynomial optimization. The main goal is to identify physically relevant situations in which SDP relaxations beat any of the existing numerical methods for establishing lower bounds on the ground-state energy, in particular exact diagonalization of the Hamiltonian. Sponsored by Red Española de Supercomputación.
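
Such relaxations can be generated with the ncpol2sdpa package associated with this project. The minimal sketch below loosely follows the package's documented interface (the call names and argument conventions are assumptions and may differ between versions) and computes a level-2 lower bound for a toy non-commutative polynomial.

```python
from ncpol2sdpa import generate_operators, SdpRelaxation

# Two Hermitian, non-commuting operator variables.
X = generate_operators('X', 2, hermitian=True)

# Toy "Hamiltonian": a non-commutative polynomial in the operators.
hamiltonian = X[0] * X[1] + X[1] * X[0]

# Level-2 SDP relaxation; its optimum lower-bounds the minimum of the polynomial.
sdp = SdpRelaxation(X)
sdp.get_relaxation(2, objective=hamiltonian,
                   inequalities=[-X[1] ** 2 + X[1] + 0.5],  # interpreted as >= 0
                   substitutions={X[0] ** 2: X[0]})         # X_0 is a projector
sdp.solve()  # requires an SDP solver, e.g. SDPA, to be installed
print(sdp.primal)  # lower bound on the ground-state energy
```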

Completed Projects

TSA (2012): The Trotter-Suzuki approximation leads to an efficient algorithm for solving the time-dependent Schrödinger equation. Building on existing highly optimized CPU and GPU kernels, we developed a distributed version of the algorithm that runs efficiently on a cluster. Our implementation also improves single-node performance and is able to use multiple GPUs within a node. The scaling is close to linear with the CPU kernels, whereas the efficiency of the GPU kernels improves with larger matrices. We also introduced a hybrid kernel that simultaneously uses multicore CPUs and GPUs in a distributed system. This project was a research visit to the Barcelona Supercomputing Center funded by HPC-EUROPA2.
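
A single-node, one-dimensional toy version of the second-order Trotter-Suzuki (split-operator) scheme is sketched below; the grid, potential, and initial state are illustrative assumptions, and the distributed kernels are not shown.

```python
import numpy as np

# Second-order Trotter-Suzuki step for i d/dt psi = (T + V) psi (hbar = m = 1):
# exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2), alternating position and momentum space.
n, L, dt = 512, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x ** 2                       # harmonic potential
half_V = np.exp(-0.5j * dt * V)        # half-step in position space
full_T = np.exp(-0.5j * dt * k ** 2)   # full kinetic step in momentum space

psi = np.exp(-(x - 2.0) ** 2)          # displaced Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))

for _ in range(1000):
    psi = half_V * np.fft.ifft(full_T * np.fft.fft(half_V * psi))

print(np.sum(np.abs(psi) ** 2) * (L / n))  # norm stays ~1: evolution is unitary
```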

SQUALAR (2011): High-performance computational resources and distributed systems are crucial for the success of real-world language technology applications. The novel paradigm of general-purpose computing on graphics processors offers a feasible and economical alternative: it has already become a common phenomenon in scientific computation, with many algorithms adapted to the new paradigm. Applications in language technology, however, do not readily adapt to this approach. Recent advances show the applicability of quantum metaphors in language representation, and many algorithms in quantum mechanics have already been adapted to GPU computing. Scalable Quantum Approaches in Language Representation (SQUALAR) aimed to match quantum-inspired algorithms with heterogeneous computing to develop new formalisms of information representation for natural language processing. Co-funded by Amazon Web Services.
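
One such quantum metaphor represents a word as a density matrix over its observed contexts, so that overlap between matrices acts as a similarity score. The sketch below is a toy illustration of that idea; the corpus, dimensions, and scoring function are assumptions, not SQUALAR's actual formalism.

```python
import numpy as np

def density_matrix(context_vectors):
    """Represent a word as a mixed state over its unit-normalized contexts."""
    rho = np.zeros((len(context_vectors[0]),) * 2)
    for v in context_vectors:
        v = v / np.linalg.norm(v)
        rho += np.outer(v, v)
    return rho / len(context_vectors)

def quantum_similarity(rho, sigma):
    """Trace inner product of two density matrices, a simple overlap score."""
    return float(np.trace(rho @ sigma))

# Toy contexts in a 3-dimensional semantic space.
bank_fin = density_matrix([np.array([1.0, 0.1, 0]), np.array([0.9, 0.2, 0])])
bank_river = density_matrix([np.array([0, 0.1, 1.0]), np.array([0, 0.3, 0.9])])
money = density_matrix([np.array([1.0, 0, 0])])
print(quantum_similarity(bank_fin, money))    # high overlap
print(quantum_similarity(bank_river, money))  # near zero
```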

SHAMAN (2010-2011) was an integrated project on large-scale digital preservation. As part of the preservation framework, advanced services aid the discovery of archived digital objects. These services are based on machine learning and data processing, which in turn call for scalable distributed computing models. Given the requirements for reliability, the project took a middleware approach based on MapReduce to perform computationally demanding tasks. Since the memory institutions involved in digital preservation may lack the necessary infrastructure, a high-performance cloud computing component was also developed. Funded by Framework Programme 7.
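
The MapReduce model the middleware relies on can be illustrated with a toy, single-process term count over archived objects; the documents and helper functions below are illustrative assumptions, not the project's services.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc_id, text):
    """Emit (term, 1) pairs, e.g. for indexing archived digital objects."""
    return [(word.lower(), 1) for word in text.split()]

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework would do."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Aggregate the per-term counts."""
    return key, sum(values)

docs = {"obj1": "digital preservation of digital objects",
        "obj2": "preservation services for archives"}
mapped = chain.from_iterable(map_phase(d, t) for d, t in docs.items())
print(dict(reduce_phase(k, v) for k, v in shuffle(mapped).items()))
```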