Downtime Betzy, 6th December 08:00 until 9th December 08:00

There will be a scheduled downtime for Betzy lasting three days, starting on Monday 6th December at 08:00 and ending on Thursday 9th December at 08:00.

During the downtime we will conduct:

  • Full upgrade of the Lustre filesystem (both servers and clients)
  • Full upgrade of the InfiniBand firmware
  • Full upgrade of the Mellanox InfiniBand drivers
  • Minor updates to other parts of the system (Slurm, configs, etc.)

Please be aware that this also affects the storage services recently moved from NIRD to Betzy.

We apologize for the inconvenience.

[SOLVED] Betzy Downtime 7th June 15:00-20:00

[UPDATE, 2021-06-08 08:00] Betzy is now up and in production again.

[UPDATE] Unfortunately, the downtime is taking longer than anticipated, and will not be finished tonight. We plan on getting Betzy up again at around 08:00 tomorrow morning.

Campusservice at NTNU will conduct maintenance on the high-voltage circuits for non-redundant power on 7th June 2021, between 15:00 and 20:00. All compute nodes and login nodes will be shut down during this time, and no jobs will run during this period. Submitted jobs whose requested run time would overlap the downtime reservation will be held in the queue.
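If you want to check whether a job can still complete before the maintenance window, you can inspect the Slurm reservation and size your time limit accordingly. A minimal sketch, assuming a standard Slurm setup (the actual reservation name on Betzy may differ, and job.sh is a hypothetical batch script):

    # Show the maintenance reservations currently defined in Slurm,
    # including their start time and duration
    scontrol show reservations

    # A job whose time limit ends before the reservation starts can still run,
    # e.g. a 4-hour job submitted in the morning of 7th June:
    sbatch --time=04:00:00 job.sh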

Change to the “optimist” jobs

The requirements for specifying optimist jobs have changed. It is now required to also specify --time. (Previously, this was neither needed nor allowed.) The documentation will be updated shortly.

(The reason for the change is that we discovered that optimist jobs would often not start properly without the --time specification. This had not been discovered earlier because so few projects were using optimist jobs.)
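As an illustration, a minimal optimist job script could now look like the sketch below, assuming optimist jobs are selected with --qos=optimist as in the Sigma2 documentation; the account name and executable are placeholders:

    #!/bin/bash
    #SBATCH --account=nnXXXXk      # hypothetical project account
    #SBATCH --qos=optimist         # request the optimist QOS
    #SBATCH --nodes=1
    #SBATCH --time=01:00:00        # now required: upper bound on the run time

    srun ./my_program              # hypothetical executable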

UPDATED: Problem with Access to Projects

Quite a few users lost access to their project(s) on NIRD and all clusters during the weekend. This was due to a bug in the user administration software. The bug has been identified, and we are working on rolling back the changes.

We will update this page when access has been restored.

Update 12:30: Problem resolved. Project access has now been restored. If you still have problems, please contact support at sigma@uninett.no.

Update: This applies to all systems, not only Fram and Saga.
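To verify that your access is back, you can check that you are again a member of your project's Unix group. A quick sketch, assuming the usual layout where project directories are owned by a group named after the project (nnXXXXk and the path are placeholders):

    # List the Unix groups of your current session
    id -Gn

    # Try listing a project directory (hypothetical path)
    ls /cluster/projects/nnXXXXk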

[UPDATE] Saga: /cluster filesystem problem

Dear Saga cluster users:
We have discovered a /cluster filesystem issue on Saga which can lead to possible data corruption. To be able to examine the problem, we have decided to suspend all running jobs on Saga and reserve the entire cluster. No new jobs will be accepted until the problem is resolved. Users can still log in to the Saga login nodes.
We are sorry for any inconvenience this may have caused.
We will keep you updated as we progress.

Update: We are trying to repair the file system without killing all jobs. It might not work, at least not for all jobs. In the meantime, we have closed access to the login nodes to avoid further damage to the file system.

Update 14:15: Problem resolved, Saga is open again. Please check your jobs; some of them may have crashed.
The source of the problem is related to the underlying filesystem (XFS) and the kernel we are currently running. We scanned the underlying filesystem on our OSS servers to eliminate possible data corruption on the /cluster filesystem, and we also updated the kernel on the OSSes.
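To check the state of your jobs after the incident, the standard Slurm commands are sufficient; for example:

    # List your jobs that are still queued or running
    squeue -u $USER

    # Review today's jobs and their exit states; jobs marked FAILED or
    # NODE_FAIL may need to be resubmitted
    sacct -u $USER -S today -o JobID,JobName,State,ExitCode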

Please don’t hesitate to contact us if you have any questions.

Fram development queue

Dear Fram User,

As of today we have adjusted the queue system policies to facilitate code development and testing on Fram, while limiting possible misuse of the devel queue.

The devel queue is now adjusted to allow:

  • max 4 nodes per job
  • max 30 minutes wall time
  • max 1 job per user

We have additionally introduced a short queue with the following settings:

  • max 10 nodes per job
  • max 120 minutes wall time
  • max 2 jobs per user
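As an illustration, a job that fits within the devel limits could be submitted with a script like the sketch below, assuming the queues are selected via --qos as on the other Sigma2 systems; the account and executable are placeholders:

    #!/bin/bash
    #SBATCH --account=nnXXXXk    # hypothetical project account
    #SBATCH --qos=devel          # or --qos=short for up to 10 nodes / 120 minutes
    #SBATCH --nodes=4            # devel allows at most 4 nodes
    #SBATCH --time=00:30:00      # devel allows at most 30 minutes

    srun ./my_test               # hypothetical test executable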

We will continue to monitor and improve the queue system. Please stay tuned.
You may find more information here.
