Kohinoor 2

Introduction

Kohinoor 2 is the second High Performance Computing (HPC) cluster in the Kohinoor tetralogy of clusters installed at TIFR-TCIS Hyderabad, with a real-time peak computing capacity of 9.95 TF. The cluster comprises 33 nodes: one head node and 32 CPU-only compute (execution) nodes, backed by 50 TB of Network Attached Storage (NAS). The nodes are connected through InfiniBand host channel adapters (HCAs) to a 36-port, fully non-blocking Intel True Scale QDR InfiniBand (IB) switch. Job scheduling and load balancing are handled by the open-source batch scheduler Open Grid Engine. The head node accepts user logins for job submission to the cluster. The 50 TB NAS volume is shared across all nodes over the IB switch and is used for computational runs.

Vendor

OEM – Fujitsu (Supplied and installed by M/s. Locuz Enterprises, Hyderabad)

Kohinoor 2 Overview

  1. Master node
    • Fujitsu PRIMERGY RX300 S7, 2U rack model
    • Dual 8-core Intel Xeon E5-2680 CPUs, 2.7 GHz, 20 MB cache
    • 64 GB ECC DDR3 1600 MHz RAM
    • 1 TB 2.5-inch 7.2K RPM 6 Gb/s SAS hard drives
    • 8 Gb FC dual-port HBA
    • Intel True Scale InfiniBand HCA
  2. Compute nodes (CPU-only) [32 Nos.]
    • Fujitsu PRIMERGY RX200 S7, 1U rack model
    • Dual 8-core Intel Xeon E5-2680 CPUs, 2.7 GHz, 20 MB cache
    • 32 GB ECC DDR3 1333 MHz RAM
    • 1 TB 7.2K RPM 6 Gb/s SAS hard drives
    • Intel True Scale InfiniBand HCA
  3. NAS Storage (Compute space)
    • Fujitsu ETERNUS DX60 S2, dual controller, 2U rack model
    • 3 TB 3.5-inch 7.2K RPM NL-SAS hard drives
    • 8 Gb FC 4-port daughter card
    • 50 TB aggregate storage capacity
    • NFS share exported over InfiniBand
  4. Networking & Interconnect
    • Primary compute interconnect: a fully non-blocking 36-port Intel True Scale QDR InfiniBand (IB) switch carrying inter-node communication
    • Secondary network for cluster management: a 24-port Gigabit Ethernet switch
  5. System Software
    • Operating System – CentOS 6.3
    • Clustering tool – Rocks Cluster Distribution 6.1
    • Job Scheduler – Open Grid Engine
  6. Libraries
    • GNU Compiler Collection (GCC)
    • Intel compilers (non-commercial edition)
    • Open MPI 1.6.5
    • MVAPICH2 2.0b (a minimal MPI example built on this stack is sketched after this list)
  7. Application software/Libraries
    • LAMMPS, NAMD, GROMACS, HOOMD, FFTW, CHARMM, etc.
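
The compiler and MPI stacks listed under items 5 and 6 are usually verified with a small test program before production runs. The following is a minimal sketch in C; it assumes the Open MPI (or MVAPICH2) wrapper compiler mpicc and a launch through mpirun or an Open Grid Engine job, while the exact module names, paths, and queue settings on Kohinoor 2 are not specified in this document and should be taken from the local setup.

    /*
     * mpi_hello.c -- minimal MPI sanity check (illustrative only).
     * Hypothetical build/run commands (adjust to the local setup):
     *   mpicc -O2 -o mpi_hello mpi_hello.c
     *   mpirun -np 16 ./mpi_hello   (or submit through an Open Grid Engine job)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                       /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* rank of this process      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);         /* total number of MPI ranks */
        MPI_Get_processor_name(node_name, &name_len); /* name of the hosting node  */

        printf("Rank %d of %d running on %s\n", rank, size, node_name);

        MPI_Finalize();                               /* shut the MPI runtime down */
        return 0;
    }

MPI_Get_processor_name() reports the compute node each rank lands on, which makes it easy to confirm that a job is actually spread across the InfiniBand-connected nodes rather than packed onto a single host.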
