Kohinoor 1

Kohinoor 1 is the first High Performance Computing (HPC) cluster installed at TIFR-TCIS Hyderabad. The cluster is composed of 17 nodes: one head node and 16 execution nodes. It is a heterogeneous cluster consisting of 12 CPU nodes and 4 GPU nodes, with 2 Nvidia Fermi M2090 GPUs per GPU node. The cluster nodes are connected using InfiniBand HBAs through a 36-port, completely non-blocking Mellanox QDR InfiniBand (IB) switch. The cluster is managed by the open-source batch scheduler “Open Grid Engine” for job scheduling and load balancing. The head node allows user logins for job submission to the cluster. A 40 TB Network Attached Storage (NAS) system is shared across the nodes through the IB switch and is used for computational runs, while a 36 TB NAS attached to the head node is used for archiving and post-processing of data.
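
For illustration, a minimal Grid Engine job script submitted from the head node might look like the sketch below. This is only an assumed example: the queue name (all.q), job name, and executable are placeholders, and the actual queues and resources configured on Kohinoor 1 may differ.

    #!/bin/bash
    #$ -N test_job          # job name
    #$ -cwd                 # run the job from the current working directory
    #$ -j y                 # merge stdout and stderr into a single file
    #$ -o test_job.log      # output file
    #$ -q all.q             # queue name is hypothetical; use the queue configured on the cluster

    ./my_program            # placeholder for the actual executable

The script would be submitted from the head node with "qsub job.sh" and monitored with "qstat".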

Kohinoor 2

Kohinoor 2 is the second High Performance Computing (HPC) cluster installed at TIFR-TCIS Hyderabad. The cluster is composed of 33 nodes: one head node and 32 CPU execution nodes. The cluster nodes are connected using InfiniBand HBAs through a 36-port, completely non-blocking Intel True Scale QDR InfiniBand (IB) switch. The cluster is managed by the open-source batch scheduler “Open Grid Engine” for job scheduling and load balancing. The head node allows user logins for job submission to the cluster. A 50 TB Network Attached Storage (NAS) system is shared across the nodes through the IB switch and is used for computational runs.
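
Since Kohinoor 2 is a CPU-only cluster intended for multi-node runs over the IB fabric, a multi-core MPI job under Grid Engine might be sketched as below. The parallel environment name (mpi), queue name, and slot count are assumptions for this example and depend on how the scheduler is configured on the cluster.

    #!/bin/bash
    #$ -N mpi_job           # job name
    #$ -cwd                 # run from the current working directory
    #$ -pe mpi 32           # request 32 slots; the parallel environment name "mpi" is site-specific
    #$ -q all.q             # queue name is hypothetical

    # $NSLOTS is set by Grid Engine to the number of slots actually granted
    mpirun -np $NSLOTS ./my_mpi_program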

Kohinoor 3

Kohinoor 3 is the third HPC cluster in the Kohinoor trilogy of clusters at TIFR-TCIS Hyderabad, installed and operational since September 2016. The cluster is composed of 69 nodes: one head node and 68 execution nodes. It is a heterogeneous cluster consisting of 64 CPU nodes and 4 GPU nodes, with 4 Nvidia Tesla K40 GPUs per GPU node. The cluster nodes are connected using InfiniBand HBAs through six 36-port, completely non-blocking Mellanox FDR InfiniBand (IB) switches. The cluster is managed by the open-source batch scheduler “SLURM” for job scheduling and load balancing. The head node allows user logins for job submission to the cluster. A locally attached 200 TB parallel file system (open-source Lustre) is shared across the nodes through the IB switches and is used for computational runs, while another 200 TB NAS attached to the head node is used for archiving and post-processing of data.
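
As a rough illustration, a SLURM job script requesting one of the K40 GPU nodes might look like the following sketch. The partition name and GRES (generic resource) specification are assumptions for this example and depend on how SLURM is actually configured on Kohinoor 3.

    #!/bin/bash
    #SBATCH --job-name=gpu_job
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --gres=gpu:4              # request all 4 K40 GPUs on the node; GRES name is assumed
    #SBATCH --partition=gpu           # partition name is hypothetical
    #SBATCH --time=24:00:00
    #SBATCH --output=gpu_job.%j.out   # %j expands to the SLURM job ID

    srun ./my_gpu_program             # placeholder for the actual executable

The script would be submitted from the head node with "sbatch job.slurm" and monitored with "squeue".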