Betzy: Ongoing problems with srun and InfiniBand

Betzy is experiencing issues with srun and InfiniBand that intermittently affect users. The srun problem first appeared last week; the InfiniBand problem has been present longer.

Symptoms of the srun problem are error messages related to send/recv operations. If you see such error messages, you are probably affected. A workaround is to resubmit the srun job until it succeeds; a minimal retry sketch follows below.
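Since the failures are transient, a simple retry wrapper can serve as a stopgap until the underlying problem is fixed. Below is a minimal sketch in Python; the script name, retry limit, and pause length are illustrative choices, not official tooling:

    # retry_srun.py -- resubmit an srun command until it succeeds (illustrative sketch)
    import subprocess
    import sys
    import time

    MAX_ATTEMPTS = 5                   # illustrative limit; adjust as needed
    cmd = ["srun"] + sys.argv[1:]      # e.g. python retry_srun.py -N 2 -n 8 ./my_app

    for attempt in range(1, MAX_ATTEMPTS + 1):
        if subprocess.run(cmd).returncode == 0:
            break                      # the job ran to completion
        print(f"srun attempt {attempt} failed; retrying in 30 s...")
        time.sleep(30)                 # brief pause before resubmitting
    else:
        sys.exit("all srun attempts failed")

Note that this retries on any non-zero exit code, so a genuine application failure will also be resubmitted; cap the attempt count accordingly.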

Symptoms of the InfiniBand problem are messages of the type "Transport retry count exceeded". At the moment there is no workaround for this problem.

The expert team is working with the hardware vendors to solve both problems. We are very sorry for the inconvenience.

[Updated] Fram Downtime on Wednesday 20th January 2021 from 12:00-15:00

Update: The file system servers have now been fixed, and we are back online. Thank you for your patience.

We have an ongoing performance issue with the Fram file system. We need to shut down the file servers to fix it, and therefore need three hours of downtime:

On Wednesday 20th January between 12:00 and 15:00, Fram will be unavailable.

Fram: compute nodes are down

Dear Fram users,

We have a problem with the Fram compute nodes: about 870 nodes are down for an unknown reason. We are working on the issue and will keep you updated.

Update 2020-12-24, 12:10: Most of the compute nodes on Fram are back online.

Update 2020-12-24, 10:30: The compute nodes were shut down again due to electrical problems in the machine room. According to the machine-room service department the problem has been resolved, and we are working on bringing all nodes back up.

Update 2020-12-22, 20:05: Most of the compute nodes have now been brought back online. There are still a few nodes that need more checking before being made available for jobs.

Update 2020-12-22, 18:04: The cooling system has been stable for the last hour after we made some adjustments together with the vendor. We are slowly bringing up the nodes.

Update 2020-12-22, 16:01: In order to keep the cooling as stable as possible, we have decided to take down all high-memory nodes. This way we can keep some of the normal compute nodes up for the time being. We are also working with the vendor to adjust the cooling system to ensure continued stability.

Update 2020-12-22, 13:41: We have identified the cause as the cooling system and are working on mitigating the issue. Unfortunately, most of the compute nodes must remain down while we do so.

We are very sorry about the inconvenience.

[RESOLVED] Saga downtime. 7th December-11th December. Adding 4PB storage.

As previously announced, Saga will be down in the coming week, from 7th December 08:00 until 11th December 16:00.

The downtime is allocated for expanding the storage. When we come back, we will have approximately 4 PB in addition to the existing 1 PB.

Update: Saga is back online and running jobs again. The new storage is not online yet, but all the hardware has been physically installed.

Stallo – file system problem

Dear Stallo Users,

UPDATE – 27.11/16:20 – We have reopened the machine for users, but there might be some instability on the global file system, as we have also lost one object storage server. The issue is being investigated and we are waiting for spare parts.

We have major problems with the Lustre file system at the moment. One of the main storage coolers is down. We are disconnecting all users now and hope to get the machine back into an operational state as soon as possible.

Thank you for your patience.

HPC staff (UiT)