On Thursday, 2021-02-11, a user submitted an array job with an email address specified. Our cluster then sent the user an email when each of the many hundreds of jobs started and again when it finished. The user's email server only accepted 500 emails per day, so once that limit was reached, an “Undelivered Mail” message was sent to our support address, twice for each of the hundreds of jobs. Each of these “Undelivered Mail” emails created a new case in our support system. The user did, in principle, nothing wrong, but as a result our support system was completely overwhelmed. Until we find a permanent solution to this problem, we have disabled the email notification service as a temporary fix.
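For reference, the notifications described above come from the mail options in the batch script. The sketch below is illustrative only (job name, array size, and addresses are made up, and the exact mail behaviour for array jobs depends on the Slurm version and site configuration): on some setups, requesting BEGIN and END mail for an array job can generate two emails per array task, which is exactly the flood pattern we saw.

```shell
#!/bin/bash
# Illustrative Slurm array-job header; names and sizes are hypothetical.
#SBATCH --job-name=myarray
#SBATCH --array=1-500                  # hundreds of tasks, as in the incident
#SBATCH --mail-user=user@example.org   # placeholder address
# Depending on the Slurm version, --mail-type=BEGIN,END on an array job may
# send two emails per task. Restricting it (e.g. a single END mail for the
# whole array, or omitting --mail-user entirely) avoids flooding a mail
# server with hundreds of messages.
#SBATCH --mail-type=END

srun ./my_program "${SLURM_ARRAY_TASK_ID}"
```

Newer Slurm versions send array-job mail once for the whole array unless `--mail-type=ARRAY_TASKS` is explicitly requested, so checking which behaviour your site's Slurm exhibits before submitting a large array is worthwhile.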
11:30 15-09-2020 [Update 7]: Quick heads-up: we are trying to put one of the storage servers back into production. This could result in some users/jobs experiencing short hangs. If you are in doubt about the behaviour of your jobs, please do not hesitate to contact us at email@example.com.
14:30 14-09-2020 [Update 6]: Most compute nodes are now running the old Lustre client, so, as far as the most recent issues are concerned, it should be safe to submit jobs. Unfortunately, this also means that the “hung io-wait issue” may happen again. Please contact us via firstname.lastname@example.org if you continue to have file system issues.
12:15 14-09-2020 [Update 5]: We found the reason for the behaviour many users have reported (problems with the module system, crashes, etc.): it seems to be caused by the new file system client. The only immediate “solution” is therefore to go back to the old version of the client. This may cause other issues; however, they are less severe than what we see now. We will post here when it is safe to submit jobs.
10:30 14-09-2020 [Update 4]: Over the weekend, the Lustre client for the parallel file system was updated on the majority of compute nodes. However, users are still reporting issues, particularly when loading modules; it seems that the module system is not configured correctly on the updated nodes. We are looking into fixing the issue and will keep you up to date here.
Sorry for the inconvenience!
15:00 11-09-2020 [Update 3]: We are currently upgrading the Lustre file system clients to mitigate a “hung io-wait issue”. We are also at reduced performance capacity, as one of the eight io-servers is down. Full production is expected from Monday morning. A short hang is expected when the io-server is phased in. We expect the hung io-wait issue to go away over the next two weeks as clients are upgraded.
20:50 10-09-2020 [Update 2]: We are sorry to inform you that we are still having some issues; the vendor has been contacted.
13:15 10-09-2020 [Update 1]: The file system is partially back in operation. This means you may use Fram, but performance will be sub-optimal. Some jobs may be affected when we try to bring an object storage server back later today.
08:15 10-09-2020: We are experiencing some issues with the Fram file system and are working on a fix. Sorry for the inconvenience.
July 30, 12:52: Issue resolved
July 30, 12:18: We are experiencing issues accessing the NIRD storage from SAGA. This is due to a mounting issue, and we do not have an estimate of when it will be resolved, as most of the staff are still on holiday. NIRD is still accessible from FRAM if you have access there as well. Sorry for the inconvenience.
Some users may experience login issues and issues when loading modules.
- 09:50: login-1-2 has network issues, and we unfortunately have to reboot it to resolve them.
Update 14:39 26-07-18: The issue with the Fram file system is now fixed, and jobs should run as normal.
We are experiencing some problems at the moment, most likely a file system issue. We are trying our best to bring the services back to normal; however, as most of the experts are on holiday, this may take longer than usual. Please check back here for updates.