Stallo is now up and running again. Unfortunately, the old lad lost two racks during hibernation. We are looking into it and will report back once everything is fully operational again. Please check any of your jobs that were restarted and report back if they produce no output.
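To spot which of your jobs Slurm restarted, something like the sketch below may help. It parses `scontrol show job` output, which includes a `Restarts=` counter; `restarted_jobs` itself is an illustrative helper, not a Slurm command.

```shell
# Sketch: print the JobId of every job whose Slurm restart counter is
# nonzero, so their output files can be checked as asked above.
# Reads `scontrol show job` output on stdin; restarted_jobs is a helper
# defined here, not part of Slurm.
restarted_jobs() {
    awk '/^JobId=/ { split($1, a, "="); id = a[2] }
         /Restarts=/ {
             for (i = 1; i <= NF; i++)
                 if ($i ~ /^Restarts=/ && $i != "Restarts=0")
                     print id
         }'
}
# Typical use on the login node:
#   scontrol show job | restarted_jobs
```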
Downtime on Stallo
We need to take down Stallo for work on building infrastructure. Downtime will be from Tue June 2nd 12:00 until no later than Thu June 4th 12:00. We apologize for the inconvenience.
Reminder: Auto cleanup of Stallo
Dear Stallo users,
From today (25.05.2020) we will enforce the automatic cleanup of /global/work. All files with an access date older than 21 days will first be set to read-only and at a later point moved to a trash folder.
Please move all files you want to keep to your home folder or to other storage solutions like NIRD.
See also https://hpc-uit.readthedocs.io/en/latest/storage/storage.html#work-scratch-areas
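A quick way to see which of your files the 21-day rule would hit is sketched below. The `-atime +21` test mirrors the access-date policy described above; `list_stale` is a hypothetical helper, not an existing cluster tool.

```shell
# Sketch: list files under a directory whose access time is older than
# 21 days, i.e. the files the cleanup policy above would catch.
# list_stale is a hypothetical helper, not part of any cluster tooling.
list_stale() {
    # $1: directory to scan
    find "$1" -type f -atime +21 -print
}
# On Stallo you would scan your work area and copy keepers home, e.g.:
#   list_stale "/global/work/$USER"
#   rsync -a "/global/work/$USER/my_results" "$HOME/keep/"  # my_results is a placeholder
```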
If you have questions or need help, please contact us at migration@metacenter.no
Thank you for your understanding.
Metacenter Operation
Stallo Shutdown
Dear Stallo Users,
Stallo is getting old and will be shut down this year. Due to its age, hardware failures are causing more and more nodes to fail. The system will stay in production and continue service until at least October 1st, 2020, the end of the current billing period (2020.1).
We will help you find alternatives for your computational and storage needs and assist with moving your workflows and data to one of our other machines, such as Betzy, Saga, and Fram. News, updated information, and how-tos will be published in the Stallo documentation as we move closer to the shutdown.
If you have questions, special needs or problems, please contact us at migration@metacenter.no
Thank you for your understanding
UiT HPC staff
Stallo problems / urgent maintenance
Dear Stallo Users,
Due to yesterday's time travel on the Slurm / Stallo master node, extensive machine maintenance is needed today. Work is ongoing to fix the reported issues on Stallo. Please pay attention to our info channels and hold new support request emails until we are on top of the manual work on site.
Please accept our apologies for the inconvenience these troubles are causing.
UiT HPC staff
Stallo – RAM upgrade
Dear Stallo users,
The Stallo Slurm master node will have a short downtime for a memory upgrade today, Monday 27.4., from 13:00 till 15:00. No Slurm jobs will be able to start during that time. This is done to avoid future Slurm problems.
We apologize for the short notice. Have a nice day.
UiT HPC staff
Stallo – slurm problem
UPDATE 16-04-2020 14:55: the system should be stable, up and running again
Dear Stallo user,
Stallo is experiencing problems with the Slurm daemon. It is therefore currently not possible to start new jobs on Stallo. Running jobs should not be affected.
We are currently working on fixing the situation.
Thank you for your patience and understanding
HPC staff
Stallo downtime January 10th 10:00 – January 11th 16:00
2019-11-01 14:37 Update: Stallo is back online and in production again!
Due to work on the electrical power infrastructure in the building housing Stallo, we need to power off the machine during the given period. All jobs with a walltime extending beyond the start time of the poweroff will be held pending in the queue until the system is up and running again.
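Whether a job submitted now would finish before a planned poweroff is simple date arithmetic, sketched below. It assumes GNU `date`; `fits_before` is an illustrative helper, not a Slurm command.

```shell
# Sketch: does a job with the given walltime, started now, end before the
# downtime begins? Pure date arithmetic; fits_before is a hypothetical
# helper and requires GNU date (-d).
fits_before() {
    # $1: walltime in hours, $2: downtime start (any format GNU date -d accepts)
    now=$(date +%s)
    deadline=$(date -d "$2" +%s)
    [ $(( now + $1 * 3600 )) -le "$deadline" ]
}
# e.g. fits_before 24 "2020-01-10 10:00" && echo "should finish in time"
```

Jobs that do not fit are not lost; as noted above, they simply stay pending until the system returns.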