Betzy down

[UPDATE, 2022-05-11:00]: Yesterday’s loss of power was due to a major power outage in the city of Trondheim.

[UPDATE, 2022-05-12 10:25] Most nodes are now up and running as normal.

[UPDATE, 2022-05-12 08:50]: There was a power outage on Betzy at around 23:30 last night, which caused all compute nodes to go down. We are now working on getting the nodes back into production.

It appears that most or all of Betzy is down right now. We are investigating.

Maintenance Stops on Saga, Fram and Betzy

[Update, 2022-04-30 11:10] The Fram and Saga maintenances are now over, and jobs are running again.

[Update, 2022-04-29 08:00] The Fram and Saga maintenances have now started.

[Update, 2022-04-28 12:56] The Betzy maintenance is now over, and jobs are starting again.

[Update, 2022-04-28 08:00] The Betzy maintenance has now started.

There will unfortunately be maintenance stops on all NRIS clusters next week, for an important security update. The maintenance stops will be:

  • Betzy: Thursday, April 28 at 08:00
  • Fram and Saga: Friday, April 29 at 08:00

We expect the stops will last a couple of hours. We have set up maintenance reservations on all nodes on the clusters, so jobs that would have run into the reservation will be left pending in the job queue until after the maintenance stop.
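If you want to check whether one of your jobs is waiting because of the maintenance reservation, the standard Slurm commands below will show it (a minimal sketch; the reservation name and the exact reason text may differ):

    # list current and upcoming reservations on the cluster
    scontrol show reservation

    # list your own jobs; jobs blocked by a maintenance reservation
    # typically show a reason such as "ReqNodeNotAvail, Reserved for maintenance"
    squeue -u $USER -o "%.10i %.9P %.20j %.8T %.10l %R"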

We are sorry for the inconvenience this creates. We had hoped to be able to apply the security update with jobs running, but that turned out not to be possible.

Betzy: Corrected GPU node config

The queue system configuration of the GPU nodes on Betzy had an error: the number of CPUs was set to 128 instead of 64. Most jobs were probably not affected by this, but it is possible that some jobs got sub-optimal CPU pinning.

This has now been fixed, and the documentation has been updated. Users do not need to change their job scripts (unless they asked for more than 64 CPUs per node).
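For illustration, a minimal GPU job script that stays within the corrected 64-CPU limit could look like the sketch below (the account, partition name, GPU count and program name are placeholders; see the Betzy documentation for the exact values to use):

    #!/bin/bash
    #SBATCH --account=nnXXXXk        # placeholder project account
    #SBATCH --partition=accel        # assumed name of the GPU partition
    #SBATCH --gres=gpu:1             # request one GPU
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16       # keep the total per node at or below 64 CPUs
    #SBATCH --time=01:00:00

    srun ./my_gpu_program            # placeholder executable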

Downtime on Saga and Betzy, Thursday, February 3

There will be a short maintenance stop of Saga and Betzy on Thursday, February 3, at 15:00 CET, due to work on the cooling system in the data hall. The downtime is planned to last for three hours.

During the downtime, no jobs will run, but the login nodes and the /cluster file system will be up. Jobs that cannot finish before 15:00 on February 3 will be left pending in the queue until after the stop.
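If you want a job to start and finish before the stop, you can request a time limit short enough to fit in the remaining window, for example (a sketch; adjust the limit and script name to your own job):

    # ask for at most two hours, so the scheduler can fit the job in before 15:00
    sbatch --time=02:00:00 my_job.sh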

[DONE] Fram Maintenance October 6–8

Update, 2021-10-11 08:15: The maintenance is now finished, and the compute nodes are in production again. (There are still some nodes down; they will be fixed and returned to production. Also, the VNC service is not up yet. We are looking into it.)

Update, 2021-10-08 15:40: We have now opened the login nodes for users again. The work on the cooling system is taking longer than we hoped, so the compute nodes will not be available until Monday morning.

Update: The maintenance stop has now started.

Update, October 4:

Login and file system services will be available on Friday or earlier, but running jobs will not be possible until Monday morning.

There will be a maintenance stop on Fram starting Wednesday, October 6, at 12:00 and ending Friday, October 8, in the afternoon. All of Fram will be down and unavailable during that time. Jobs that would not finish before the maintenance starts will be left pending until after the maintenance.

The main reason for the maintenance is the replacement of some parts of the cooling system. During the stop, the OS on the compute and login nodes will be updated from CentOS 7.7 to 7.9, and Slurm will be upgraded to 20.11.8 (the same version as on Saga).
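After the maintenance you can verify the new versions yourself from a login node with standard commands, for example:

    # show the installed OS release
    cat /etc/centos-release

    # show the installed Slurm version
    sinfo --version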

Betzy: Network problems

[2021-06-23 14:26] The issue is now solved, and jobs have started running again. Please report any further issues to support@metacenter.no.

[2021-06-23 09:20] We are again experiencing problems on Betzy. We will update here when we’ve solved the issue.

[2021-06-22 11:15] The problem has been located and fixed, and Betzy should work as normal again.

[2021-06-22 09:30] We are currently experiencing network problems on Betzy. We don’t know the full extent of it, but it is at least affecting the queue system, so all Slurm-related commands are hanging.

We are investigating, and will update when we know more.