Please clean up data on Saga!

Dear users on Saga,

Currently, usage on Saga’s parallel file system (everything under /cluster) is at about 93%, and some of the file system servers are already not accepting new data. If usage increases further, the performance of the parallel file system may drop significantly, some users may experience data loss, and eventually the whole cluster may come to a complete halt.

Therefore, we kindly ask all users with large usage (check with the command dusage) to clean up unneeded data. Please check all storage locations where you store data, that is, $HOME, $USERWORK, project folders (/cluster/projects/...) and shared folders (/cluster/shared/...). In particular, we ask users whose $HOME quota is not (yet) enforced (see the line with $HOME in the example below) to reduce their usage as soon as possible. The quota for $HOME, if set, is 20 GiB.

[saerda@login-3.SAGA ~]$ dusage -u saerda
Block quota usage on: SAGA
File system   User/Group    Usage     SoftLimit   HardLimit
saerda_g      $HOME         6.9 TiB   0 Bytes     0 Bytes
saerda        saerda (u)    2.8 GiB   0 Bytes     0 Bytes
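To see which directories within an area hold the most data, a sketch along these lines may help ($HOME here is just an example starting point; repeat for $USERWORK, project and shared folders as relevant):

```shell
# List the first-level subdirectories of a storage area, largest first,
# so the biggest candidates for cleanup show up at the top.
du -h --max-depth=1 "$HOME" 2>/dev/null | sort -rh | head -n 20
```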

In parallel, we are trying to help users to reduce their usage and to increase the capacity of the file system, but these measures usually take time.

Many thanks in advance!

Security alert: Please update your SSH keys

There is an ongoing attack against academic HPC centers in Europe, and several clusters and storage systems have been compromised. The attackers have used stolen credentials (passwords and/or SSH keys) to get into systems. We are investigating whether any of our systems are affected. In the meantime, we encourage everyone to create new SSH keys; see the documentation for a description of how to do this. Please do set a passphrase on the keys, so they will be worthless if they should be stolen. Please also remember to remove old SSH keys from your ~/.ssh/authorized_keys file on each system (NIRD, Fram, Saga, Stallo, Vilje), so no one can use the old keys any more.
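Rotating a key can be sketched roughly as follows; the key location, passphrase, and hostname below are placeholders for illustration, not our actual settings:

```shell
#!/bin/sh
# Sketch: rotating SSH keys. All names below are placeholders.
set -e
KEYDIR=$(mktemp -d)   # in real use: ~/.ssh

# 1) Create a new Ed25519 key pair protected by a passphrase.
#    (Interactively you would omit -N and type the passphrase when prompted.)
ssh-keygen -t ed25519 -a 100 -N 'use-a-real-passphrase' -f "$KEYDIR/id_ed25519" -q

# 2) Install the new public key on each system, e.g.:
#    ssh-copy-id -i "$KEYDIR/id_ed25519.pub" username@login.example.org

# 3) On each system, edit ~/.ssh/authorized_keys and delete the lines
#    belonging to the old keys, keeping only the new one.
ls "$KEYDIR"
```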

Fram: Lustre quota problem.

Dear Fram users,
We still have a Lustre quota problem on the Fram cluster, where the “dusage” command may give you inaccurate numbers.
To eliminate this issue we need a downtime, which will take about 4 hours.

The date for the downtime has not been decided yet; we will give you an update as soon as we have more information.

Meanwhile, if you have any problems related to quota on Fram, please contact us.

Fram: Interconnect network manager crashed

Dear Fram users:

The Fram interconnect network manager crashed yesterday at 15:34, which left all compute nodes with degraded routing information. This could cause Slurm jobs to crash with a communication error.
The interconnect network manager is running again, all compute nodes have the latest routing information, and communication between the compute nodes is restored.
We apologize for the inconvenience. If you have any questions, please don’t hesitate to contact support.

[UPDATE] Saga: /cluster filesystem problem

Dear Saga cluster users:
We have discovered a /cluster filesystem issue on Saga which can lead to possible data corruption. To be able to examine the problem, we have decided to suspend all running jobs on Saga and reserve the entire cluster. No new jobs will be accepted until the problem is resolved. Users can still log in to the Saga login nodes.
We are sorry for any inconvenience this may have caused.
We will keep you updated as we progress.

Update: We are trying to repair the file system without killing all jobs. This might not work, at least not for all jobs. In the meantime, we have closed access to the login nodes to avoid more damage to the file system.

Update 14:15: The problem is resolved, and Saga is open again. Please check your running jobs; some of them may have crashed.
The source of the problem is related to the underlying filesystem (XFS) and the kernel we were running. We scanned the underlying filesystem on our OSS servers to eliminate possible data corruption on the /cluster filesystem, and we also updated the kernel on the OSSes.

Please don’t hesitate to contact us if you have any questions.