We’re aware of ongoing file system performance issues on Saga and are investigating the cause. This also affects logging in to Saga: the terminal will hang while waiting for a prompt.
Updates will be provided in this post as soon as we have more information to share.
Sorry for the inconvenience.
Update 2021-07-15, 16:33: The issue was identified as a faulty connection between the storage server and the cluster. Performance should be back to normal, but we will monitor the system a bit more before declaring it healthy.
Update 2021-07-14, 15:00: We’ve discovered some faulty drives that are currently being swapped. We hope that the performance will improve once these are in production again.
Update 2021-07-13, 10:03: The file system is a bit more stable now, but we’re still looking into the cause of the degraded performance.
[2021-06-25 08:45] The maintenance stop is now over, and Saga is back in full production. There is a new version of Slurm (20.11.7), and storage on /cluster has been reorganised. This should be largely invisible, except that we will simplify the dusage command output to only show one set of quotas (pool 1).
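If you want to verify your quotas after the reorganisation, the dusage tool mentioned above reports them; a minimal sketch (default invocation, exact output format varies):

```bash
# Print disk usage and quotas for your user and projects on Saga.
# After the reorganisation, only one set of quotas (pool 1) is shown.
dusage
```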
[2021-06-25 08:15] Part of the file system reorganisation took longer than anticipated, but we will start putting Saga back into production now.
[2021-06-23 12:00] The maintenance has now started.
[UPDATE: The correct dates are June 23–24, not July]
There will be a maintenance stop of Saga starting June 23 at 12:00. The stop is planned to last until late June 24.
During the stop, the queue system Slurm will be upgraded to the latest version, and the /cluster file system storage will be reorganised so all user files will be in one storage pool. This will simplify disk quotas.
All compute and login nodes will be shut down, and no jobs will run during this period. Submitted jobs whose time limits would run into the downtime reservation will be held in the queue.
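For those wondering whether a job will start before the stop: Slurm only starts jobs whose time limit fits before the maintenance reservation begins. A minimal sketch, assuming standard Slurm commands (the reservation name and job script are placeholders):

```bash
# List active reservations; the maintenance window appears here
# (reservation names are site-specific).
scontrol show reservation

# A job can still start if its time limit ends before the reservation
# begins, e.g. 26 hours before the stop, a 24-hour job may be scheduled:
sbatch --time=24:00:00 job.sh

# For pending jobs, show the expected start time and reason; jobs held
# for the maintenance typically show a reason like
# "ReqNodeNotAvail, Reserved for maintenance".
squeue -u $USER --start
```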
We’re experiencing a very slow file system on Saga at the moment and are working on identifying the cause.
Update 13:09: The file system is much more responsive now, but logins still hang for ~30 seconds before getting access to the file system. This is being investigated further.
Updates will be provided once we have more information.
Sorry about the inconvenience.
The 120 new nodes installed on Saga last week were unavailable between 03:15 and 08:30 this morning due to a configuration error. The configuration has been fixed and the nodes are back in production again.
60 jobs that were running on the nodes at the time of the incident were requeued and have since restarted.
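If you want to check whether one of your jobs was among those requeued, Slurm’s accounting can show it; a sketch using sacct (the job ID is a placeholder):

```bash
# Show state and timing for a given job; a requeued job shows up with a
# requeued state in its history and a later restart time.
sacct -j 1234567 --format=JobID,JobName,State,Start,End,NodeList
```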
We are sorry for the inconvenience!
Today, Saga has been extended with 120 new compute nodes, increasing the total number of CPUs on the cluster from 9824 to 16064.
The new nodes have been added to the normal partition. They are identical to the old compute nodes in the partition, except that they have 52 CPU cores instead of 40.
We hope this extension will reduce the wait time for normal jobs on Saga.
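If you are curious about the new node mix, sinfo can summarise it; a minimal sketch, assuming standard sinfo format options:

```bash
# Group the nodes in the normal partition by their characteristics and
# print node count (%D) and CPUs per node (%c); the old nodes report 40
# cores, the new ones 52.
sinfo -p normal -o "%P %D %c"
```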
Dear Saga and Fram users,
The VNC service is not working smoothly at the moment and we are investigating the issue.
We are sorry for any trouble this may cause.
Dear Saga user,
We need to schedule a short downtime of Saga from Thursday 18 February 2021 at 08:00 until Friday 19 February 2021 at 16:00. If the maintenance finishes earlier, we will open up the machine earlier.
During the maintenance, we will continue the work we started in early December: connecting the new storage expansion to the system and making it available for general use. This will give us several petabytes of extra capacity for project storage and other uses.
We apologize for the inconvenience.
As previously announced, Saga will be down in the coming week, from 7th December 08:00 until 11th December 16:00.
The downtime is allocated for expanding the storage. When we come back, we will have about 4 petabytes in addition to the existing 1 petabyte.
Update: Saga is back online and running jobs again. The new storage is not online yet, but all the hardware has been mounted.
We are going to expand the storage on Saga. This will happen during week 50, between 7th and 11th December. Hopefully, this will give us a few extra petabytes and enough storage for the lifetime of the system.
All services and file systems are now back in operation, including NIRD services and NIRD mounts on the HPC systems.
Please be aware that some projects on NIRD have changed home systems from NIRD-TOS to NIRD-TRD and vice versa.