34 Minutes of Challenge: How Our Drillship Overcame Gyro Failure!

A comprehensive review has been carried out to assess incidents that jeopardize DP operations, derive valuable lessons, and prevent future dangerous occurrences. These case studies are sourced from the IMCA DP Event Bulletin.


The Unexpected Freeze of Your Critical Navigation System

Imagine being in the middle of a crucial drilling operation when suddenly all three of your state-of-the-art DP system gyros fail. That is exactly what happened during a recent drilling campaign: within just five minutes, the gyros essential for maintaining the vessel’s heading froze, leaving the DP system to fall back on its mathematical model of the vessel. Calm weather allowed some degree of control, but the drillship still experienced a maximum excursion of 20 meters over 34 nerve-wracking minutes. This unexpected failure could have led to a disastrous outcome; thankfully, an emergency disconnection was not necessary. Yet the question remains: what went wrong?
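To get a feel for why even a calm-weather gyro outage matters, here is a minimal back-of-the-envelope sketch. The numbers (a 2-degree heading error, 0.3 m/s of unresolved drift) are purely illustrative assumptions, not figures from the incident report; the point is that a small, uncorrected heading error integrated over 34 minutes lands in the same order of magnitude as the 20-meter excursion reported:

```python
import math

# Illustrative only: when all gyro input freezes, the DP system runs on
# its mathematical model (dead reckoning). A small uncorrected heading
# error turns environmental forces into steadily growing position drift.

def dead_reckoning_excursion(heading_error_deg, drift_speed_mps, minutes):
    """Crude cross-track excursion estimate from a constant heading error.

    heading_error_deg : uncorrected heading offset (assumed constant)
    drift_speed_mps   : net speed the model mis-resolves (assumed)
    minutes           : duration of the gyro outage
    """
    cross_track_rate = drift_speed_mps * math.sin(math.radians(heading_error_deg))
    return cross_track_rate * minutes * 60.0  # metres

# Assumed 2 deg heading error, 0.3 m/s unresolved drift, 34 minutes:
excursion = dead_reckoning_excursion(2.0, 0.3, 34)
print(f"Estimated excursion: {excursion:.1f} m")  # roughly 21 m
```

The model is deliberately crude (no thruster feedback, no position-reference correction), but it shows why a DP model alone cannot hold station indefinitely once heading feedback is lost.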

The Stakes Are High

When your sophisticated navigation equipment fails, the stakes couldn’t be higher. A drilling operation is not just about precision; it’s about safety, efficiency, and protecting investments. The ongoing investigation and analysis of the gyros by the original manufacturer leave you in the dark, unsure of how to prevent such incidents in the future. With the drillship being less than a year in service, this incident raises serious concerns about reliability and performance. Can you afford to gamble with your operations and risk costly downtime, or worse, safety hazards?

Outdated Firmware & Unstable Power: The Perfect Storm for Gyros!

Have you ever faced a situation where critical equipment fails at the most unexpected moment, leading to potential safety hazards and costly downtime? Such incidents often leave us scratching our heads, wondering what went wrong. In our latest case, we encountered an incident involving three identical gyros that failed almost simultaneously. The proximate cause is still shrouded in mystery, but a potent mix of outdated firmware, unstable power supplies, and possibly the age of the equipment may have played a role.

Imagine the chaos that ensues when your navigation systems falter. The uncertainty, the anxiety, and the looming risk of accidents can be overwhelming. As the clocks in the gyros tick away in perfect synchrony, a single glitch can send shockwaves through your operations. What if outdated firmware compromised their performance, or if a shared power source caused a moment of voltage instability? The potential for disaster is real, and it’s a ticking time bomb that no operator should overlook. The safety of your crew and equipment hangs in the balance, and the stakes couldn’t be higher.
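The "frozen" failure mode described above has a telltale signature: a healthy gyro at sea always shows small heading jitter, so an output that stops changing entirely is suspicious. A simple staleness watchdog can flag it. This is a sketch under assumed values, not any vendor's actual monitoring logic; the window size and tolerance are hypothetical:

```python
from collections import deque

class GyroFreezeWatchdog:
    """Flags a gyro whose heading output has stopped changing.

    A reading that is identical for many consecutive samples is more
    likely a frozen sensor than a perfectly motionless vessel. The
    window and tolerance here are illustrative assumptions.
    """

    def __init__(self, window=5, tolerance_deg=0.01):
        self.window = window
        self.tolerance = tolerance_deg
        self.samples = deque(maxlen=window)

    def update(self, heading_deg):
        """Feed one heading sample; return True if the gyro looks frozen."""
        self.samples.append(heading_deg)
        if len(self.samples) < self.window:
            return False  # not enough history yet
        spread = max(self.samples) - min(self.samples)
        return spread <= self.tolerance

# Example: a gyro that jitters normally, then freezes at 123.40 degrees.
wd = GyroFreezeWatchdog(window=5, tolerance_deg=0.01)
readings = [123.41, 123.39, 123.42, 123.40, 123.40,
            123.40, 123.40, 123.40, 123.40]
alarms = [wd.update(r) for r in readings]
print(alarms)  # the last two samples trigger the freeze alarm
```

A check like this only helps, of course, if the alarm is independent of the component being monitored; when all three units freeze together, cross-comparison between them tells you nothing, which is exactly the common-mode trap.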

Navigating Challenges: Lessons Learned from a DP3 Drillship Incident

Actions

A deep root cause analysis is to be conducted, and a step change to the installed equipment is ongoing. This analysis will involve a thorough investigation into the factors that contributed to the incident, with the aim of identifying any systemic issues that need to be addressed.

It’s great to see that we’re taking proactive steps to learn and grow from our experiences. Here are some additional actions we’re considering to enhance our safety and efficiency:

  1. Regular Training and Drills: To keep our team sharp and ready, we’ll implement regular training sessions and emergency drills. This will help reinforce our Well Specific Operating Guideline (WSOG) and ensure everyone feels confident in their roles.
  2. Enhanced Communication Protocols: We plan to develop even clearer communication protocols. This includes regular briefings and debriefings, so everyone is on the same page and can share insights or concerns freely.
  3. Diverse Equipment Strategies: In line with the insights from our FMEA, we’ll prioritize using a variety of brands and models for our equipment. This will help mitigate risks associated with common mode faults and ensure we have backup options readily available.
  4. Management Follow-Up: We’re committed to making sure that any concerns raised during FMEA testing are addressed promptly. Our management will follow up on these points, ensuring that we continually assess risks and implement necessary changes.
  5. Collaborative Learning: We aim to foster a culture of collaboration where lessons learned from one situation can be shared across teams. This will help us all benefit from each other’s experiences and improve our overall operational practices.
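Point 3 above is worth a concrete illustration. Redundant sensors are typically combined with a median (2-out-of-3) vote, which handles one faulty unit gracefully, but a common-mode fault in three identical units defeats the vote entirely. This is a simplified sketch, not the actual voting logic of any DP system:

```python
import statistics

def voted_heading(readings):
    """Median vote across redundant gyro headings: tolerates one outlier."""
    return statistics.median(readings)

# Single failure: the median ignores the one frozen or drifting unit.
print(voted_heading([120.1, 120.2, 95.0]))  # -> 120.1

# Common-mode failure: three identical units freezing at the same wrong
# value leave the voter nothing to compare against -- the fault wins 3-0.
print(voted_heading([95.0, 95.0, 95.0]))    # -> 95.0
```

Mixing brands, models, or firmware versions makes it far less likely that all three inputs fail the same way at the same time, which is precisely the risk the FMEA insight targets.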

Finally, we will document all lessons learned and actions taken, ensuring that this information is integrated into our operational frameworks and shared across relevant teams to promote a collective understanding of safety protocols and best practices. This commitment to learning and adaptation will help us enhance our safety performance and operational resilience moving forward.




