- Hosting Legal Entity
- Interuniversity Consortium of High Performance Systems
- Type Of RI
- Via Magnanelli 6, CINECA, Casalecchio di Reno, PO: 40033, Italy
- Coordinating Country
- Italy
- Life Cycle Status
- Operational since 1969
- Mission and objective
- CINECA is a non-profit consortium made up of 67 Italian universities, the National Research Council (CNR), the National Institute of Geophysics and Oceanography, and the Ministry of University and Research. CINECA is the largest Italian academic supercomputing centre and one of the most important worldwide, with an HPC environment equipped with cutting-edge technology, the most advanced hardware resources and highly qualified personnel who cooperate with researchers in the use of the technological infrastructure, in both the academic and industrial fields. The supercomputing facility for technical computing is based on an integrated infrastructure of more than 30,000 cores. The state-of-the-art HPC system for scientific computing, and the PRACE Tier-0 system, is FERMI, an IBM Blue Gene/Q supercomputer with 10,240 nodes, for a total of 163,840 cores. The interconnection network is based on IBM 5D Torus technology, and the attached storage amounts to about 8 petabytes. The aggregate peak performance of FERMI Blue Gene/Q is 2.1 petaflops.
- RI Keywords
- Supercomputing, Supercomputer, Big Data, Big computing, Cloud computing, E-Infrastructures, Computing time, Computing services
- RI Category
- Centralised Computing Facilities
- Scientific Domain
- Information Science and Technology
- Access WebPage
1) FERMI (June 2012), IBM BG/Q. Architecture: MPP; Processor: IBM PowerA2, 1.6 GHz; Cores: 163,840; Nodes: 10,240; Racks: 10; Total RAM: 163 terabytes; Interconnect: IBM 5D Torus; OS: RedHat; Total power: ~1,000 kW; Peak performance: ~2,100 teraflops. FERMI BG/Q is currently ranked in the top 100 of the TOP500 list.
2) PLX (June 2011), IBM iDataPlex. Architecture: Linux cluster; Processor: Intel Westmere 6c, 2.4 GHz; Cores: 3,288; Nodes: 274, plus 548 NVIDIA Fermi GPGPUs; Racks: 14; Total RAM: ~16 terabytes; Interconnect: QLogic QDR 4x; OS: RedHat; Total power: ~350 kW; Peak performance: ~350 teraflops.
3) EURORA (2013), Eurotech Aurora. Architecture: hybrid Linux cluster prototype; Processor: Intel Sandy Bridge 8c, 2.4 GHz; Cores: 1,024; Nodes: 64, plus 64 NVIDIA Kepler K20 and 64 Intel Phi accelerators; Racks: 1; Total RAM: ~5 terabytes; Interconnect: InfiniBand QDR + Torus FPGA; OS: RedHat; Total power: ~50 kW; Peak performance: ~150 teraflops. EURORA is currently ranked number 1 on the Green500 list.
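The FERMI figures above are internally consistent, which a quick arithmetic check makes visible (a small illustrative script, not part of the catalogue entry; the numbers are taken directly from the specification list):

```python
# Sanity-check of the FERMI Blue Gene/Q figures quoted above.
cores = 163_840       # total cores
nodes = 10_240        # total nodes
peak_tflops = 2_100   # peak performance in teraflops

cores_per_node = cores // nodes                 # 16-core PowerA2 nodes
gflops_per_core = peak_tflops * 1_000 / cores   # per-core peak in gigaflops

print(cores_per_node)              # 16
print(round(gflops_per_core, 1))   # 12.8
```

The 16 cores per node match the 16-core Blue Gene/Q compute chip, and ~12.8 gigaflops per core corresponds to the A2 core's 1.6 GHz clock with 8 flops per cycle.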
More than half of the production capacity is provided in kind to PRACE, and access to it is managed directly by PRACE (http://www.prace-ri.eu). The remaining production capacity is reserved for the national scientific community. The CINECA national programme is open to all scientific researchers affiliated with an Italian research organisation who need large allocations of computing time, supporting resources and data storage to pursue transformational advances in science. Projects' Principal Investigators are expected to be affiliated with an Italian institution, while no restriction applies to Co-PIs and collaborators. It is expected that the research will be performed at Italian institutions. The objective of the CINECA supercomputing programmes is to support large-scale, computationally intensive projects that would not be possible or productive without petascale computing. Services include: HPC for Science, HPC for Industry, HPC for Life Science, data management and a repository of registered data, and remote visualisation.