Communication

Parallel computing on heterogeneous clusters using MPI


Ghassan Fadlallah

Abstract

Rapid improvements in network and computer performance are revolutionizing high performance computing, transforming clusters of commodity workstations into the supercomputing solution of choice. Parallel computing on interconnected workstations, or workstation clusters, is becoming a viable and attractive proposition due to the high communication speeds of modern networks. Parallel computing is needed in both science and technology but, until recently, has been available only to those with supercomputer access. Recent publications suggest that contemporary, disparate personal computers can be interconnected to create a parallel computer with capabilities rivalling those of a supercomputer. Some recent research projects address the possibility of marshalling heterogeneous resources into virtual parallel computers. These projects are concerned with issues of efficient utilisation of computing resources, reliability, security, and network load adaptation. To efficiently commit more than one computer to the execution of a program, the computers must share data and co-ordinate access to and updating of the shared data. Within clusters, the most popular approach to this problem is to exchange data through messages from one computer to another. The Message Passing Interface (MPI) approach is currently one of the most mature methods used with parallel programs running on computer clusters. This is due to the relative simplicity of implementing the method as a set of library functions, or an API, callable from C/C++ or Fortran programs. MPI was designed for high performance on both massively parallel computers and workstation clusters. Today, MPI has become the de facto standard for message passing in the parallel-computing paradigm. MPI implementations for emerging cluster interconnects are an important requirement for useful parallel processing on cost-effective computer clusters.
The goal of our project is to investigate the possibility of achieving real-time parallel computing on a heterogeneous computer network using MPI. We strive to increase available computing power and obtain better performance by using numerous low-cost computers. This project aims to program heterogeneous computer clusters as a single computing resource. This paper presents the project, a description of the environment, a preliminary performance evaluation, an assessment of the network load, and a summary of future work.

Context

Field of the communication: Electrical and computer engineering
Host: Université de Sherbrooke
