Reminder: Auto cleanup of Stallo

Dear Stallo users,

From today (25.05.2020) we will enforce the auto cleanup of /global/work. All files with an access date older than 21 days will in a first step be set to read-only, and at a later point be moved to a trash folder.

Please move all files you want to keep to your home folder or to other storage solutions like NIRD.
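To see which of your files are affected, you can check access times yourself. A minimal sketch (the temporary directory is only for illustration; on Stallo you would point `find` at your own area under /global/work):

```shell
# Demonstrate how `find -atime +21` selects files whose access time
# is more than 21 days old -- the criterion used by the auto cleanup.
tmp=$(mktemp -d)
touch "$tmp/recent.dat"                   # accessed just now
touch -a -d "30 days ago" "$tmp/old.dat"  # access time set 30 days back
find "$tmp" -type f -atime +21            # lists only old.dat
rm -r "$tmp"
```

On Stallo itself, `find /global/work/$USER -type f -atime +21` would list your files at risk, and `rsync -av` is one way to copy anything you want to keep into your home folder.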

If you have questions or need help, please contact us at

Thank you for your understanding.

Metacenter Operation

Stallo Shutdown

Dear Stallo Users,

Stallo is getting old and will be shut down this year. Due to its age, hardware failures are causing more and more nodes to fail. The system will stay in production and continue service until at least 1 October 2020, the end of the current billing period (2020.1).
We will help you find alternatives for your computational and storage needs, and help you move your workflows and data to one of our other machines, such as Betzy, Saga and Fram. News, updated information and howtos will be published in the Stallo documentation as we move closer to the shutdown.
If you have questions, special needs or problems, please contact us at

Thank you for your understanding

UiT HPC staff

Stallo problems / urgent maintenance

Dear Stallo Users,

Due to yesterday's time travel on the Slurm / Stallo master node, extensive machine maintenance is needed today. Work is ongoing to fix the reported issues on Stallo. Please pay attention to our info channels and hold new support request emails until we are on top of the manual work on site.

Please accept our apologies for the inconvenience these troubles are causing.

UiT HPC staff

Stallo – RAM upgrade

Dear Stallo users,

The Stallo Slurm master node will have a short downtime for memory upgrades today, Monday 27.4, from 13:00 to 15:00. No Slurm jobs will be able to start during that time. This is done in order to avoid future Slurm problems.
We apologize for the short notice. Have a nice day.

UiT HPC staff

Stallo – slurm problem

UPDATE 16.04.2020, 14:55: The system should be stable, up and running again.

Dear Stallo users,

Stallo is experiencing some problems with the Slurm daemon. It is therefore currently not possible to start new jobs on Stallo. Running jobs should not be affected.
We are currently working on fixing the situation.

Thank you for your patience and understanding

HPC staff

Stallo downtime January 10th 10:00 – January 11th 16:00

2019-11-01 14:37 Update: Stallo is back online and in production again!

Due to work on the electrical power infrastructure in the building housing Stallo, we need to power off the machine during the given period. All jobs with a walltime extending beyond the start time of the power-off will be held pending in the queue until the system is up and running again.