Fram downtime 23rd – 24th February

[Update, 2022-02-24 22:30]: The maintenance is over and Fram is in production again. Thank you for your patience!

[Update, 2022-02-24 20:30]: The maintenance is taking a little longer than planned. We plan to get back into production at 22:00.

[Update, 2022-02-23 12:00]: The maintenance stop has now begun.

The Fram supercomputer will be unavailable due to maintenance on the cooling system from February 23rd at 12:00 until February 24th at 20:00.

If time allows, we will also upgrade all or parts of the storage system, including the file system clients on the compute nodes.

Downtime Betzy, 6th December 08:00 until 9th December 20:00

There will be a scheduled downtime for Betzy starting on Monday 6th December at 08:00 and lasting until Thursday 9th December at 20:00.

During the downtime we will conduct:

  • Full upgrade of the Lustre file system (both servers and clients)
  • Full upgrade of the InfiniBand firmware
  • Full upgrade of the Mellanox InfiniBand drivers
  • Minor updates to other parts of the system (Slurm, configs, etc.)

Please be aware that this also affects the storage services recently moved from NIRD to Betzy.

We apologize for the inconvenience.

[Update, 2021-12-08 18:00]: The Betzy downtime is over, and the system is open for users again. All planned updates have been performed.

[FINISHED] Short 1-hour downtime on Fram, 24th November 12:00

[Update, 2021-11-24 14:15] Now the NIRD mounts are working again.

[Update, 2021-11-24 13:30] We are back in production and jobs are running as normal again. The NIRD mounts are still missing on two of the login nodes, but we are working on fixing that.

[Update, 2021-11-24 12:00] The maintenance has started now.

There will be a short 1-hour downtime for Fram on 24th November, starting at 12:00.

During the downtime we will update the firmware on the InfiniBand interconnect switches.

[DONE] Fram Maintenance October 6–8

Update, 2021-10-11 08:15: The maintenance is now finished, and the compute nodes are in production again. (There are still some nodes down; they will be fixed and returned to production. Also, the VNC service is not up yet. We are looking into it.)

Update, 2021-10-08 15:40: We have now opened the login nodes for users again. The work on the cooling system is taking longer than we hoped, so the compute nodes will not be available until Monday morning.

Update: The maintenance stop has now started.

Update, October 4th: Login and file system services will be available by Friday or earlier, but running jobs will not be possible until Monday morning.

There will be a maintenance stop on Fram starting Wednesday October 6 at 12:00 and ending Friday October 8 in the afternoon. All of Fram will be down and unavailable during that time. Jobs that would not finish before the maintenance starts will be left pending until after the maintenance.

The main reason for the maintenance is the replacement of some parts of the cooling system. During the stop, the OS of compute and login nodes will be updated from CentOS 7.7 to 7.9, and Slurm will be upgraded to 20.11.8 (the same version as on Saga).
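
After the stop, you can verify the new versions yourself from a login node. A quick check with standard CentOS and Slurm commands (the expected outputs are inferred from the versions announced above):

    $ cat /etc/centos-release   # should now report a CentOS Linux release 7.9
    $ sinfo --version           # should now report slurm 20.11.8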

[DONE] Saga Maintenance Stop 23–24 June

[2021-06-25 08:45] The maintenance stop is now over, and Saga is back in full production. There is a new version of Slurm (20.11.7), and storage on /cluster has been reorganised. This should be largely invisible, except that we will simplify the dusage command output to only show one set of quotas (pool 1).
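
To check your usage against the single remaining quota, you can run dusage on a login node; the underlying Lustre tool reports the same numbers. A minimal sketch (the exact output layout of both commands may vary):

    $ dusage                           # NRIS disk usage/quota overview
    $ lfs quota -h -u $USER /cluster   # raw Lustre quota for your user, human-readable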

[2021-06-25 08:15] Part of the file system reorganisation took longer than anticipated, but we will start putting Saga back into production now.

[2021-06-23 12:00] The maintenance has now started.

[UPDATE: The correct dates are June 23–24, not July]

There will be a maintenance stop of Saga starting June 23 at 12:00. The stop is planned to last until late June 24.

During the stop, the queue system Slurm will be upgraded to the latest version, and the /cluster file system storage will be reorganised so all user files will be in one storage pool. This will simplify disk quotas.

All compute and login nodes will be shut down, and no jobs will run during this period. Submitted jobs whose estimated runtime extends into the downtime reservation will be held in the queue.
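
If you would rather have a job start before the stop than wait behind it, give it a wall-time limit that ends before June 23 at 12:00. A small sketch with standard Slurm commands (the script name and time limit are examples only):

    $ sbatch --time=06:00:00 myjob.sh        # job must fit before the downtime begins
    $ squeue -u $USER -o "%.10i %.8T %.20R"  # job id, state, and reason; jobs blocked by
                                             # the maintenance show a reservation-related reason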

New compute nodes on Saga

Today, Saga has been extended with 120 new compute nodes, increasing the total number of CPUs on the cluster from 9824 to 16064.

The new nodes have been added to the normal partition. They are identical to the old compute nodes in the partition, except that they have 52 CPU cores instead of 40.
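
To see the resulting mix of node sizes in the normal partition, you can ask Slurm directly. A minimal example (the output format string is just one possible choice, and myjob.sh is a hypothetical script):

    $ sinfo -p normal -o "%.10P %.6D %.4c"   # partition, node count, CPUs per node
    $ sbatch --ntasks-per-node=52 myjob.sh   # requesting 52 tasks per node matches only the new nodes

A job asking for 40 or fewer tasks per node can still run on either generation of nodes.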

We hope this extension will reduce the wait time for normal jobs on Saga.