Maintenance Stops on Saga, Fram and Betzy

[Update, 2022-04-30 11:10] The Fram and Saga maintenance is now over, and jobs are running again.

[Update, 2022-04-29 08:00] The Fram and Saga maintenances have now started.

[Update, 2022-04-28 12:56] The Betzy maintenance is now over, and jobs are starting again.

[Update, 2022-04-28 08:00] The Betzy maintenance has now started.

There will unfortunately be maintenance stops on all NRIS clusters next week, for an important security update. The maintenance stops will be:

  • Betzy: Thursday, April 28 at 08:00
  • Fram and Saga: Friday, April 29 at 08:00

We expect the stops will last a couple of hours. We have set up maintenance reservations on all nodes on the clusters, so jobs that would have run into the reservation will be left pending in the job queue until after the maintenance stop.
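One practical consequence: a job can still start before the stop if its requested walltime ends before the reservation begins. A minimal sketch of such a submission follows; the account, walltime and program are placeholders, not recommendations for your workload:

```bash
#!/bin/bash
# Minimal Slurm job header; account and resource values are placeholders.
#SBATCH --account=nnXXXXk        # replace with your project account
#SBATCH --job-name=short-job
#SBATCH --time=02:00:00          # walltime short enough to finish before the maintenance starts
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2G

srun ./my_program                # placeholder executable
```

Pending jobs and the reason they are waiting can be inspected with squeue -u $USER.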

We are sorry for the inconvenience this creates. We had hoped to be able to apply the security update with jobs running, but that turned out not to be possible.

[Resolved] NIRD mount unavailable on Saga and Betzy

We have identified that the NIRD mount is unavailable on Saga and Betzy, and we are working on finding the cause and putting a fix in place.

[Update, 2022-03-28 13:20] Mounts should be back now; the problem was caused by Friday’s maintenance on network gear …
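If you want to verify from a login node that the mount is back, a quick check along these lines should do; the mount point /nird/projects is an assumption here, so use the path of your own NIRD project area:

```bash
# Run on a Saga or Betzy login node; adjust the path to your NIRD project area.
df -h /nird/projects && ls /nird/projects | head -5
```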

We hope this has not caused too much frustration, and we wish everyone a very nice day!

NRIS HPC staff

Downtime on Saga and Betzy, Thursday February 3.

There will be a short maintenance stop of Saga and Betzy on Thursday, February 3 at 15:00 CET, due to work on the cooling system in the data hall. The downtime is planned to last for three hours.

During the downtime, no jobs will run, but the login nodes and the /cluster file system will remain up. Jobs that cannot finish before 15:00 on February 3 will be left pending in the queue until after the stop.

--gpus-per-task not working correctly on Saga

We have recently discovered that using '--gpus-per-task' on Saga leads to wrong accounting within the Slurm system. This has two effects: first, the job will not be scheduled as quickly as it should, because Slurm thinks the job requires more resources than it actually asks for; second, the job will be deducted more project hours than it should.

This is a bug in the Slurm batch system which we are trying to fix as quickly as possible.

For now, we recommend that all GPU users switch to '--gpus' or '--gpus-per-node', which we have verified behave as they should.
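As an illustration, a GPU job script that previously used '--gpus-per-task' can request the same total number of GPUs with '--gpus' instead. The account name, partition and program below are placeholders, so adjust them to your own setup:

```bash
#!/bin/bash
# Example GPU job on Saga requesting GPUs with --gpus instead of --gpus-per-task.
#SBATCH --account=nnXXXXk        # placeholder project account
#SBATCH --partition=accel        # Saga GPU partition; adjust if needed
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gpus=1                 # instead of: --gpus-per-task=1
#SBATCH --mem-per-cpu=4G

srun ./my_gpu_program            # placeholder executable
```

The total GPU request stays the same; only how it is expressed to Slurm changes, which avoids the accounting problem described above.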

[FINISHED] Saga downtime 17th November 12:00 – 19th November 15:00

[Update: 2021-11-19 09:50] The maintenance work is now done, and Saga is back in full production and running jobs as normal.

[Update, 2021-11-18 12:40] The login nodes are ready, and users can access and work with their data. The compute nodes are still under maintenance, so running jobs is not yet possible.

[Update, 2021-11-17 12:00] The maintenance has now started.

We will conduct a firmware update and maintenance on all of Saga next week, starting on Wednesday, November 17 at 12:00.

The downtime will last until 15:00 on Friday, November 19, but we will bring back access to the login nodes and the file system as soon as the upgrade is done on vital parts of the system. Compute nodes will be brought back into production sequentially as they are updated.

[Solved] Saga file system performance issue

We’re aware of ongoing issues with the file system performance on Saga and are investigating the cause. This also affects logging in to Saga, where the terminal will hang waiting for a prompt.

Updates will be provided in this post as soon as we have more information to share.

Sorry for the inconvenience.

Update 2021-07-15, 16:33: The issue was identified as a faulty connection between the storage server and the cluster. Performance should be back to normal, but we will monitor the system a bit more before declaring it healthy.
Update 2021-07-14, 15:00: We’ve discovered some faulty drives that are currently being swapped. We hope that the performance will improve once these are in production again.
Update 2021-07-13, 10:03: The file system is a bit more stable now, but we’re still looking into the cause for the degraded performance.

[DONE] Saga Maintenance Stop 23–24 June

[2021-06-25 08:45] The maintenance stop is now over, and Saga is back in full production. There is a new version of Slurm (20.11.7), and storage on /cluster has been reorganised. This should be largely invisible, except that we will simplify the dusage command output to only show one set of quotas (pool 1).

[2021-06-25 08:15] Part of the file system reorganisation took longer than anticipated, but we will start putting Saga back into production now.

[2021-06-23 12:00] The maintenance has now started.

[UPDATE: The correct dates are June 23–24, not July]

There will be a maintenance stop of Saga starting June 23 at 12:00. The stop is planned to last until late June 24.

During the stop, the queue system Slurm will be upgraded to the latest version, and the /cluster file system storage will be reorganised so all user files will be in one storage pool. This will simplify disk quotas.

All compute nodes and login nodes will be shut down, and no jobs will run during this period. Submitted jobs that would run into the downtime reservation will be held in the queue until after the stop.