Research in the School of Life Sciences generates a huge volume of data, particularly from advanced time-lapse microscopy and proteomics experiments. Our challenge is to run the high-performance infrastructure that can store and transfer these volumes of data securely and efficiently.
Our data centre is split into areas with different functions:
The storage system consists of disk arrays and a SAN (Storage Area Network), which connects the disks to the servers.
The compute cluster works like a hive: tasks are broken down into individual pieces that are then processed by the different workers. A manager node checks on progress and assigns new tasks from its queue as workers become available. Capacity can be increased simply by adding more workers.
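The manager-and-workers pattern can be illustrated with a minimal sketch in Python. A real cluster uses a dedicated job scheduler across many machines; here a process pool on one machine plays the same role, with the pool handing out pieces of work as workers become free. The function names and the toy task are illustrative only.

```python
from multiprocessing import Pool

def process_piece(piece_id):
    """One worker processes one piece of the dataset (placeholder computation)."""
    return piece_id * piece_id

def run_batch(pieces, workers=4):
    """The 'manager': hands pieces to a pool of workers and collects results.

    Adding capacity is just raising the `workers` count.
    """
    with Pool(processes=workers) as pool:
        return pool.map(process_piece, pieces)

if __name__ == "__main__":
    # A job split into eight pieces, processed by four workers.
    print(run_batch(list(range(8))))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

`pool.map` blocks until every piece is done and returns results in the original order, mirroring how the manager node tracks progress before the batch is considered complete.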
Backup and Archive
Once the data has been processed, it is important that we back it up and archive it. For that we use a large tape library: a high-capacity solution with an automated tape robot that provides not only backup but also allows data to flow seamlessly from disk to tape and back to disk again.