Dear Fram users:
The Fram interconnect network manager crashed yesterday at 15:34, which left all compute nodes with degraded routing information. This can cause Slurm jobs to crash with a communication error.
The interconnect network manager is running again, all compute nodes have the latest routing information, and communication between the compute nodes is restored.
We apologize for the inconvenience. If you have any questions, please don't hesitate to contact support.
Dear Saga User,
The usage of the /cluster file system on Saga has now passed 60%. To keep the file system as responsive as possible, we have to periodically reduce the number of files, free up space, and enforce automatic deletion of temporary files.
Starting on Wednesday, 19 February, we are going to activate the automatic cleanup of the $USERWORK (/cluster/work) area as documented here.
The retention period is:
- 42 days below 70% file system usage
- 21 days when file system usage reaches 70%.
Files older than the active retention period will be automatically deleted.
You can read more information about the storage areas on HPC clusters here and here.
Please copy all your important data from $USERWORK to your project area to avoid data loss.
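As a minimal sketch, assuming the cleanup is based on file modification time, you can list which files in $USERWORK are already past the stricter 21-day retention period (the project path in the comment is hypothetical):

```shell
# List files in $USERWORK whose modification time is older than 21 days;
# these would be the first candidates for automatic deletion.
find "$USERWORK" -type f -mtime +21

# Then copy anything you need to keep to your project area, e.g.
# (nnXXXXk is a placeholder project name):
#   rsync -av "$USERWORK/results/" /cluster/projects/nnXXXXk/results/
```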
Thank you for your understanding!
Dear Saga User,
We are pleased to announce that we have now met all the technical requirements and mounted the NIRD project file systems on the Saga login nodes.
You may find your projects in the
Please note that transferring a large number of files is sluggish and has a big impact on I/O performance. It is always better to transfer one large file than many small files.
As an example, transferring a folder with 70k entries totalling about 872 MB took 18 minutes, while the same files archived into a single 904 MB file took 3 seconds.
You can read more about the tar archiving command by reading the manual pages. Type

    man tar

in your Saga terminal.
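For example, a directory of many small files can be packed into a single archive before transfer (the directory name below is illustrative):

```shell
# Pack the directory 'mydata' into one compressed archive;
# a single large file transfers much faster than thousands of small ones.
tar -czf mydata.tar.gz mydata/

# On the destination, unpack it again with:
#   tar -xzf mydata.tar.gz
```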
The /cluster file system on Saga has crashed; we are working on it. Users should expect their Slurm jobs to crash.
Update, 09:45: The file system is back online now. Only parts of /cluster were unavailable, but we recommend that you check your jobs, as some of them may have crashed.
Quite a few users lost access to their project(s) on NIRD and all clusters during the weekend. This was due to a bug in the user administration software. The bug has been identified, and we are working on rolling back the changes.
We will update this page when access has been restored.
Update 12:30: Problem resolved. Project access has now been restored. If you still have problems, please contact support at email@example.com
Update: This applies to all systems, not only Fram and Saga.
Dear NIRD User,
During the last maintenance we have reorganized the NIRD storage.
Projects now have a so-called primary site, which is either Tromsø or Trondheim. Previously we had a single primary site, Tromsø. This change had to be introduced to prepare for coupling the NIRD storage with the Saga and upcoming Betzy HPC clusters.
While we are working on a final, seamless access solution regardless of the primary site for your data, please use the following temporary solution:
To work closest to your data, you have to connect to the login nodes located at the primary site of your project:
- for Tromsø the address is unchanged and is login.nird.sigma2.no
- for Trondheim the address is login-trd.nird.sigma2.no
To find out the primary site of your project, log in on a login node and type:
It will print out a path starting either with /tos-project or /trd-project.
If it starts with “tos” then use login.nird.sigma2.no.
If it starts with “trd” then use login-trd.nird.sigma2.no.
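The decision above can be sketched as follows; this is only an illustration, assuming your project (the hypothetical NS1234K below) is reachable through a symlink that readlink can resolve:

```shell
# Resolve the real path of the (hypothetical) project NS1234K.
path=$(readlink -f /projects/NS1234K)

# Pick the login node from the path prefix.
case "$path" in
  /tos-project/*) echo "use login.nird.sigma2.no" ;;
  /trd-project/*) echo "use login-trd.nird.sigma2.no" ;;
esac
```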
Dear Saga Users,
We have discovered a /cluster file system issue on Saga which can lead to possible data corruption. To be able to examine the problem, we have decided to suspend all running jobs on Saga and reserve the entire cluster. No new jobs will be accepted until the problem is resolved.
Users can still log in to the Saga login nodes.
We are sorry for any inconvenience this may have caused.
We will keep you updated as we progress.
Update: We are trying to repair the file system without killing all jobs. It might not work, at least not for all jobs. In the meantime, we have closed access to the login nodes to avoid further damage to the file system.
Update 14:15: Problem resolved; Saga is open again. Please check your running jobs, as some of them may have crashed.
The source of the problem is related to the underlying file system (XFS) and the kernel we are currently running. We scanned the underlying file system on our OSS servers to eliminate possible data corruption on the /cluster file system, and we also updated the kernel on the OSSes.
Please don't hesitate to contact us if you have any questions.
Dear Fram Users,
We have scheduled regular maintenance of Fram's cooling system. The cluster will run at 70% of its maximum load during the day mentioned above.
Please accept our apologies for the inconvenience.
- 2020-01-13 14:54: Problems have been sorted out now and network is functional again.
- 2020-01-13 14:40: Problems are unfortunately back again. Uninett’s network specialists are working on solving the problem as soon as possible.
- 2020-01-13 14:22: Network is functional again. Apologies for the inconvenience it has caused.
We are currently experiencing a network outage on Saga and some parts of NIRD. The problem is under investigation.
Please check back here for an update on this matter.
- 2020-01-23 17:30: Services are now progressively restarted.
- 2020-01-22 21:49: We have detected file-system-level corruption, and to avoid further data corruption we had to unmount and rescan all the file systems (about 18 PB) on NIRD.
We are currently working on restarting the services on the NIRD Toolkit.
- 2020-01-22 11:11: Software and firmware are now upgraded on the NIRD Toolkit.
Most of the fileset changes have also been carried out. We are currently working on the last bits and will keep you updated.
- 2020-01-20 08:58: Maintenance has started. NIRD file systems are unmounted from Fram until maintenance is finished.
Dear NIRD and NIRD Toolkit User,
We will have a three-day scheduled maintenance on NIRD and the NIRD Toolkit starting on the 20th of January at 09:00.
During the maintenance we will:
- carry out software and firmware updates,
- change geo-locality for some of the projects,
- replace synchronization mechanisms,
- depending on part delivery times from the disk vendor, expand the storage and quotas.
Files stored on NIRD will be unavailable for the duration of the maintenance, and therefore so will the services. This will of course also affect the NIRD file systems available on Fram.
Please note that backups taken from the Fram and Saga HPC clusters will also be affected and will be unavailable during this period.
Please accept our apologies for the inconvenience this downtime is causing.