Fram Filesystem maintenance

Dear Fram users,
We have to perform emergency maintenance on the Fram storage system: one of the controllers has to be rebooted to eliminate errors. During the maintenance, the speed of the /cluster filesystem will be degraded. We will post updates here.

11:50 Maintenance is over and the controller has been rebooted. Filesystem performance is back to normal.

Best Regards

Security alert: Please update your SSH keys

There is an ongoing attack against academic HPC centers in Europe right now, and several clusters and storage systems have been compromised. The attackers have used stolen credentials (passwords and/or SSH keys) to get into systems. We are investigating whether any of our systems are affected. In the meantime, we encourage everyone to create new SSH keys. See https://documentation.sigma2.no/getting_started/create_ssh_keys.html for a description of how to create SSH keys. Please do set a passphrase on the keys, so that they are worthless if they are stolen. Please also remember to remove old SSH keys from your ~/.ssh/authorized_keys file on each system (NIRD, Fram, Saga, Stallo, Vilje), so that no one can use the old keys any more.
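As a rough sketch of the rotation procedure (the key type, file name, and login host below are examples; adapt them to your own account and system):

    # Generate a new key pair; choose a strong passphrase when prompted.
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_new

    # Install the new public key on each system (example host shown).
    ssh-copy-id -i ~/.ssh/id_ed25519_new.pub username@fram.sigma2.no

    # Then, on each system, edit ~/.ssh/authorized_keys and delete the
    # lines containing the old public keys, so they can no longer be used.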

Fram: Lustre quota problem

Dear Fram users,
We still have a Lustre quota problem on the Fram cluster: the “dusage” command may report inaccurate numbers.
Fixing this issue requires a downtime, which will take about 4 hours.

The date for the downtime has not been decided yet; we will give you an update as soon as we have more information.

In the meantime, if you have any problems related to quotas on Fram, please contact us.
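For reference, a typical way to check your usage and quota is the dusage tool mentioned above; something like the following (the project name is a placeholder, and the options may differ between versions, so consult dusage --help or the documentation):

    # Show disk usage and quota for your own user
    dusage

    # Show usage for a specific project (placeholder project name)
    dusage -p nn1234k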

Fram: Interconnect network manager crashed

Dear Fram users:

The Fram interconnect network manager crashed yesterday at 15:34, which left all compute nodes with degraded routing information. This can cause Slurm jobs to crash with a communication error.
The interconnect network manager is running again, all compute nodes have the latest routing information, and communication between the compute nodes is restored.
We apologize for the inconvenience. If you have any questions, please don’t hesitate to contact support.

[UPDATE] Saga: /cluster filesystem problem

Dear Saga cluster users:
We have discovered a /cluster filesystem issue on Saga which can lead to possible data corruption. To be able to examine the problem, we have decided to suspend all running jobs on Saga and reserve the entire cluster. No new jobs will be accepted until the problem is resolved. Users can still log in to the Saga login nodes.
We are sorry for any inconvenience this may have caused.
We will keep you updated as we progress.

Update: We are trying to repair the file system without killing all jobs. It might not work, at least not for all jobs. In the meantime, we have closed access to the login nodes to avoid more damage to the file system.

Update 14:15: The problem is resolved and Saga is open again. Please check on your jobs; some of them may have crashed.
The source of the problem is related to the underlying filesystem (XFS) in combination with the kernel we are currently running. We scanned the underlying filesystem on our OSS (object storage server) nodes to eliminate possible data corruption on the /cluster filesystem, and we also updated the kernel on the OSSes.
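To check whether your jobs survived, the standard Slurm commands can be used; for example (the start date below is only a placeholder):

    # List your jobs that are still running or pending
    squeue -u $USER

    # List your jobs since a given date, including their final state
    sacct -u $USER -S 2020-05-01 --format=JobID,JobName,State,ExitCode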

Please don’t hesitate to contact us if you have any questions.

NIRD crash

The NIRD storage system crashed and was unavailable for a short period of time.
Due to this crash, users logged in to NIRD and Fram experienced problems.
The problem is resolved and the NIRD storage system is back online.

Please contact us if you still encounter problems.

Note: The export of NIRD to Fram is currently not working.
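To verify from a Fram login node whether the NIRD areas are visible again, you can check the mounts; something like the following should work (the /nird/projects path is the usual mount point, but this is an assumption, so adjust to your project):

    # Check whether the NIRD project area is mounted
    df -h /nird/projects

    # Or list mounted filesystems and filter for nird
    mount | grep -i nird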