Scheduled downtime on the 12th of February

Update:

  • 2019-02-15 11:18: We are still experiencing problems with the storage system on Fram. Disks began to fail en masse once again after the system had seemed stable during the night. We are depending on the vendor to resolve these issues and are working closely with them.
    Because of this new instability we cannot give an estimate for when the system will be ready for general use again. This is an unfortunate situation and we understand the impact on you; we are therefore trying all possible solutions to keep your data safe and to bring the system back up as soon as possible.

    The OpsLog will be updated with new information when the status of the situation changes.

  • 2019-02-14 13:17: Due to missing parts and the size of the storage, disk recovery is progressing slowly, at approximately 50% reduced performance. The current ETAs are:
    • Fram: 15.02.2019
    • NIRD: 19.02.2019
    • Service Platform: 19.02.2019
  • 2019-02-13 19:07: Communication with the missing storage enclosures has been re-established and the disk pools are rebuilding at this time. Unfortunately we cannot reopen the machines until the disk pools have stabilized. We will have a new round of checks and risk analysis tomorrow morning and will keep you updated here.
  • 2019-02-13 11:33: Some of the parts have arrived at the data center and we are working with the vendor on replacing them and patching the firmware on Fram. More details to follow as we know more.
  • 2019-02-12 15:38: The NIRD Tromsø and Fram storage systems each have one failed disk enclosure. We are waiting for replacement parts to arrive. After the replacement we will have to rebuild the disk pools before re-opening the machines for production. The current estimate is tomorrow evening. We will keep you updated.
  • 2019-02-12 12:36: The firmware upgrade on NIRD is finished. We are now starting the NIRD services back up. We will keep you posted.
  • 2019-02-12 08:17: Maintenance has started.
  • 2019-02-11 13:20: Because the disk problems accelerated during the weekend, we have changed the maintenance stop reservation so that no new jobs will start until the maintenance is done. Already running jobs will not be affected. This has been done to reduce the risk of data loss.

We need to have a scheduled downtime on relatively short notice in order to upgrade the firmware on both the Fram and NIRD (including NIRD Toolkit) storage systems.
This is a critical and mandatory update which will improve the stability, performance and reliability of our systems.

The downtime is expected to last no more than a working day.

Fram jobs which cannot finish before the 12th of February are held in the queue and will not start until the maintenance is finished.
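To check how your own jobs are affected by the maintenance reservation, here is a minimal sketch using the Slurm client commands available on Fram (the time limit value is only an example and job.sh is a placeholder script name):

    # Show the scheduler's estimated start time for your pending jobs:
    squeue -u $USER --start

    # Jobs submitted with a time limit that ends before the maintenance window
    # can normally still be backfilled ahead of it (note, however, the
    # 2019-02-11 update above for this particular downtime):
    sbatch --time=06:00:00 job.sh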

Thank you for your understanding!
Metacenter Operations

HW problems on Fram and NIRD storages

Updates:

  • 2019-01-11 08:30: We are starting to rebuild the remaining degraded storage pools. The storage vendor is analyzing further logs and working on new firmware for our systems.
  • 2019-01-07 11:30: The disk system on Fram will hopefully be back to normal soon, but further disks in the NIRD file system failed during the weekend and need to be replaced.
  • 2019-01-04 15:29: The RAID sets are now rebuilding and we expect them to be finished within 24 hours.

We are experiencing hardware failures on both the Fram and NIRD storage systems. Due to the disk losses, performance is also slightly degraded at the moment.

To mitigate these issues we will have to reseat I/O modules on the controllers, which might cause I/O hangs. We will keep you updated.

HW failures on Fram storage

Update 2018-12-21: The HW has been replaced and the /cluster file system should be 100% functional again.

Some hardware on the Fram storage system is failing and needs urgent replacement. For this we have to fail over the disks served by two Lustre servers to other Lustre server nodes.
Some slowdowns and short I/O hangs might be encountered on the /cluster file system during the maintenance.
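If you want to check from a login node whether /cluster is currently affected, here is a minimal sketch using the standard Lustre client tools (assuming lfs is in your path):

    # List the Lustre targets backing /cluster and their usage; targets that are
    # failed over or unreachable at the moment will show up as unavailable:
    lfs df -h /cluster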

We apologize for the inconvenience.

Queue system on Fram

Dear Fram User,

We are working on improving the queue system on Fram for better resource usage and a better user experience.
Work is ongoing to test new features in the latest versions of our queue system and to apply them in production as soon as we are sure they will not have a negative impact on jobs.

To give all users a more even chance to get their jobs started, we have now limited the number of jobs per user that the system will try to backfill.
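For those interested in the details: per-user backfill limits of this kind are typically set through the scheduler configuration (in Slurm, for example, via a SchedulerParameters option such as bf_max_job_user; the exact parameter and value we use may still be tuned). A minimal sketch of how to inspect the active settings from a login node, assuming Slurm:

    # Print the scheduler parameters currently in effect:
    scontrol show config | grep -i SchedulerParameters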

We will keep you updated with new features as we implement them.

Metacenter Operations

Fram $HOME migrated to /cluster file system

Dear Fram User,

Some of you might have experienced sporadic I/O hangs on Fram recently.
In many cases these hangs were caused by overloading the RPC queue on the NFS-mounted /nird/home file system. This had a negative performance impact on the compute nodes and in some cases led to job crashes.

Therefore we have decided to migrate all Fram users’ $HOME directories from /nird/home/$USER to /cluster/home/$USER, starting with the next scheduled maintenance. Preparations have been made and some accounts have already been synchronized over during the past few weeks.

Because we suddenly lost a large number of disks on NIRD today, we have decided, in order to avoid data loss, to stop all user I/O on NIRD and to migrate the remaining user accounts over to Fram.

Starting from today, 2018-11-07, /nird/home is unmounted from Fram, but it is still available on NIRD. Until the next scheduled maintenance, a symbolic link points from /nird/home to /cluster/home so that any scripts using the old path can be adjusted.
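Until then, scripts that hard-code the old path will keep working through the symbolic link, but we recommend updating them now. A minimal sketch (the ~/scripts directory is only a placeholder for wherever your job scripts live):

    # Find files that still reference the old home path:
    grep -rl '/nird/home' ~/scripts

    # Rewrite the path in place, keeping a .bak copy of each changed file:
    sed -i.bak 's|/nird/home|/cluster/home|g' ~/scripts/*.sh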

As soon as NIRD disk issues are remediated, nightly backups will be taken from Fram to /nird/home/$USER/backup/fram.

This step makes Fram less dependent on NIRD; from this point on we will be able to schedule maintenance on NIRD without impacting running jobs.

Thank you for your understanding!
Metacenter Operations

Cooling issues in Fram server room

Update:

  • 11:15 Cooling distribution units are functional again and the compute nodes have been started up once more.
  • 10:12 The cooling units failed once again and the compute nodes were automatically switched off. We are looking into the problem.
  • 09:22 Cooling is functional again, the Fram compute nodes are being started up, and the machine should shortly be fully operational.

——————————————————————————————-

We had trouble with one of the cooling units in the Fram server room today around 06:30.
Safety mechanisms switched off most of the Fram compute nodes.

Thank you for your understanding!
Metacenter Operations

Fram: scheduled downtime on the 28th of August

UPDATE

2018-08-28 18:02: Fram is up and jobs are running again.

We will have a one-day scheduled downtime on Fram on the 28th of August, starting at 08:00.

Jobs that cannot finish before the maintenance window will be left pending in the queue with the reason “ReqNodeNotAvail” and will be started when the maintenance is over.
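To see which of your jobs are being held back, you can list your pending jobs together with the reason field; a minimal sketch using the standard Slurm client commands:

    # Pending jobs blocked by the maintenance show ReqNodeNotAvail in the
    # NODELIST(REASON) column:
    squeue -u $USER

    # Or print job id, state and reason explicitly:
    squeue -u $USER -t PENDING -o "%.18i %.10T %.30r"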

We will keep you updated via OpsLog/Twitter.

Thank you for your consideration!
Metacenter Operations

Compute nodes down – Fixed

Update 2018-08-02 09:35: Most of the compute nodes are up and we are working to fix the remaining few. Jobs are running again.

Compute nodes went down due to a power spike on the 1st of August around 7 PM. We are bringing the system back up and will update this post as soon as it is fully functional again.

/cluster file system hanging

Some of the Lustre object storage servers crashed during the night, making parts of the /cluster file system inaccessible. We are working on the problem and will keep you updated.

Metacenter Operations