Applications


The LHC (Large Hadron Collider) experiments at CERN collect enormous amounts of data, which need to be pre-processed, treated and then analysed to extract the scientific information that physicists look for. This makes the codes developed for the LHC prime examples of HPDA applications.

 

The Compact Muon Solenoid (CMS) is one of the four large experiments hosted by the Large Hadron Collider (LHC) that detect and record proton-proton collisions. To reconstruct the recorded collision events, the CMS experiment employs CMSSW, a custom data processing framework containing O(1000) different algorithms running concurrently. The end result of this process is a set of high-level physics objects familiar to physicists (e.g., electrons, photons), ready to be analysed and classified. CMS Event Classification is the process of identifying the main type of an event (e.g., ttbar). The motivation for this classification is that, given the large amounts of collected data, it is crucial to filter out events that are not of interest for the downstream analysis.
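The filtering step described above can be sketched as selecting events whose predicted class matches the analysis of interest. This is a minimal illustration in plain Python; the event records, field names and class labels are hypothetical, not the CMSSW data model:

```python
# Illustrative sketch: keep only events whose predicted type is wanted
# for the downstream analysis. Field names are hypothetical.

def filter_events(events, wanted_types):
    """Return the subset of events whose predicted type is in wanted_types."""
    return [e for e in events if e["predicted_type"] in wanted_types]

events = [
    {"id": 1, "predicted_type": "ttbar"},
    {"id": 2, "predicted_type": "QCD"},
    {"id": 3, "predicted_type": "ttbar"},
]

selected = filter_events(events, {"ttbar"})
print([e["id"] for e in selected])  # → [1, 3]
```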

 

The CMS Event Reconstruction workflow is a completely data-parallel workload: each event is independent, so distributing the processing across the Modular Supercomputing Architecture (MSA) is trivial and requires no communication. Each node processes a completely different set of events and produces its own output data products. Within the DEEP-EST project, several time-consuming parts of CMSSW (the Hadron and Electromagnetic calorimeter reconstruction) were identified and ported to NVIDIA GPUs. The idea is to use all available resources and, where possible, the most performant one: in the presence of both CPUs and NVIDIA GPUs, the ported parts run on the GPUs. It is important to note that adapting the whole of CMSSW to heterogeneous architectures is an ongoing activity.
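The "use the most performant available resource" policy can be sketched as a simple backend selection: a ported algorithm runs on the GPU when one is present, and falls back to its CPU implementation otherwise. The registry and names below are illustrative, not the actual CMSSW mechanism:

```python
# Sketch of backend selection: prefer the GPU implementation of an
# algorithm when a GPU is available and the algorithm has been ported;
# otherwise fall back to the CPU version. Names are illustrative.

def select_backend(algorithm, available_devices, ported_to_gpu):
    if "gpu" in available_devices and algorithm in ported_to_gpu:
        return "gpu"
    return "cpu"

ported = {"hcal_reco", "ecal_reco"}  # hypothetical ported modules

print(select_backend("hcal_reco", {"cpu", "gpu"}, ported))  # → gpu
print(select_backend("tracking", {"cpu", "gpu"}, ported))   # → cpu
print(select_backend("hcal_reco", {"cpu"}, ported))         # → cpu
```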


The CMS Event Classification workflow is a distributed deep-learning training workflow that uses PyTorch for the training itself. The distribution is implemented with the NNLO package, which uses MPI to communicate the weights. It also incorporates Horovod for distribution and communication, enhancing the more basic master-worker approach implemented in NNLO. The model tested on the DEEP-EST prototype is JEDI-net, which stands for Jet Identification algorithm based on interaction networks. Jets are typically thought of as collimated cascades of particles, which are abundant in hadron collisions such as the proton-proton collisions at the LHC. Within CMS Event Classification, the JEDI-net neural network is trained to identify different types of such jet clusters.
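The master-worker weight exchange that NNLO implements over MPI can be illustrated with a toy averaging step: each worker trains on its data shard and sends its weights to the master, which averages them and broadcasts the result. This is plain Python with no MPI or PyTorch; all names and numbers are illustrative:

```python
# Toy sketch of the master step in a master-worker scheme: average the
# weight vectors received from the workers. No MPI here; in NNLO this
# exchange happens over MPI messages.

def average_updates(worker_weights):
    """Element-wise average of the weight vectors sent by the workers."""
    n = len(worker_weights)
    return [sum(ws) / n for ws in zip(*worker_weights)]

updates = [
    [1.0, 2.0],   # weights from worker 0 (illustrative)
    [3.0, 4.0],   # weights from worker 1 (illustrative)
]
print(average_updates(updates))  # → [2.0, 3.0]
```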

 

CMS Event Reconstruction

 

CMS Event Reconstruction is a data-parallel workload that does not involve any kind of inter-process or remote-process communication (no MPI). The same set of algorithms (a single executable) is replicated across all of the available nodes/cores. Event reconstruction can run on all node types of the DEEP-EST prototype and can exploit NVIDIA GPUs where present on the node.
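Because every event is independent, distributing the work amounts to handing each node a disjoint slice of the event set; no node ever needs to talk to another. A minimal sketch of such a partition (node counts and event IDs are illustrative):

```python
# Sketch of the data-parallel distribution: each node receives a
# disjoint subset of events and runs the same executable over it,
# with no inter-node communication.

def partition_events(event_ids, n_nodes):
    """Round-robin assignment of events to nodes (disjoint sets)."""
    return [event_ids[rank::n_nodes] for rank in range(n_nodes)]

chunks = partition_events(list(range(10)), 3)
print(chunks)  # → [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```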

The focus was on porting the compute-intensive parts of the workload to GPUs (NVIDIA GPUs in particular). This was achieved during the project and resulted in a large performance boost for the CMS Hcal/Ecal reconstruction when comparing existing CPU-based systems against the NVIDIA V100 GPUs available in the DEEP-EST prototype.
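The gain for the whole job from porting only the calorimeter reconstruction is bounded by the fraction of runtime those parts represent, which is the usual Amdahl's-law argument. A back-of-the-envelope estimate (the fractions and speed-ups below are illustrative, not measured CMS numbers):

```python
# Amdahl's-law estimate of whole-job speed-up when only a fraction of
# the runtime is accelerated. All numbers here are illustrative.

def overall_speedup(ported_fraction, kernel_speedup):
    """Speed-up of the full job if `ported_fraction` of the runtime
    is accelerated by `kernel_speedup` and the rest is unchanged."""
    return 1.0 / ((1.0 - ported_fraction) + ported_fraction / kernel_speedup)

# E.g., if the ported parts take 40% of runtime and run 10x faster on GPU:
print(round(overall_speedup(0.4, 10.0), 2))  # → 1.56
```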


When these developments are integrated with other activities within the CMS experiment, the full CMS HLT Run 3 configuration shows a substantial speed-up on the various node types available on the DEEP-EST prototype when comparing CPU-only reconstruction against the CPU+GPU configuration.


CMS Event Classification

CMS Event Classification is a distributed DL training application. Training was performed on ESB nodes, and the most important outcome of these measurements is that the distributed training workflow shows good strong scaling behavior as more and more ESB nodes are employed. This is particularly relevant when comparing training on the ESB to other systems that feature dedicated NVIDIA inter-GPU links and other optimizations. Employing the modularity features of the ESB, the number of nodes available for training can be scaled up quite flexibly.
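Strong scaling means the total problem size stays fixed while the node count grows, so the figure of merit is the parallel efficiency T(1) / (N · T(N)). A short sketch of how such measurements are evaluated (the timings below are illustrative, not DEEP-EST results):

```python
# Strong-scaling efficiency: fixed total workload, increasing node
# count. Efficiency = T(1) / (N * T(N)). Timings are illustrative.

def strong_scaling_efficiency(t1, tn, n_nodes):
    return t1 / (n_nodes * tn)

t1 = 100.0  # training time on 1 node, arbitrary units (illustrative)
for n, tn in [(2, 52.0), (4, 27.0), (8, 14.5)]:
    print(n, "nodes:", round(strong_scaling_efficiency(t1, tn, n), 2))
```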