Digitalization & Security - [Ask the Experts] Questions Answered

Q1) Where do you see the main possibilities of AI for pipelines in the future?

Ove Heitmann Hansen: I see immense potential for AI, Machine Learning, and hybrid models in the pipeline industry’s future. The power lies in the interoperability and utilization of complex systems, but you need to get the foundation right with structured and qualified data. The main possibilities include:

  1. Predictive Maintenance: AI can analyse data from sensors to predict when parts of the pipeline might fail, allowing for proactive maintenance, faster replacements, and reduced downtime.
  2. New infrastructure: Optimal route selection of new pipelines utilizing geospatial analysis of local and global datasets.
  3. Data analysis: Process vast amounts of data to provide insights and trends that can inform decision-making and strategic planning. AI can assess complex datasets faster and more effectively, enhance data quality and accuracy, and fill in data gaps.
  4. Leak detection: AI and advanced algorithms can detect anomalies in pressure and flow data, identifying leaks more quickly and accurately than traditional methods (a simple sketch of this idea follows the list).
  5. Optimization of operations: AI can optimize the flow of materials through pipelines, adjusting parameters in real time to maximize efficiency and reduce costs.
  6. Safety enhancements: Simulating “perfect storm” failure scenarios across a pipeline network and connected datasets by identifying pre-markers that will lead to those scenarios. AI can monitor pipelines for potential safety hazards, such as corrosion or external damage, and alert operators to take preventive measures.
  7. Environmental monitoring: AI can help monitor environmental impacts, ensure compliance with regulations, and minimize ecological damage.
  8. Automation: AI can automate routine tasks, freeing human operators to focus on more complex issues, and can even generate training material for new engineers.
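
As a rough illustration of point 4 (not an operational method), here is a minimal Python sketch of anomaly detection on pressure data. The hourly sampling, window size, and z-score threshold are all illustrative assumptions:

```python
import numpy as np

def flag_pressure_anomalies(pressure, window=24, z_threshold=3.0):
    """Flag readings that deviate strongly from a rolling baseline.

    pressure: 1-D array of sensor readings (e.g., hourly, in bar).
    Returns the indices whose z-score against the trailing window
    exceeds the threshold.
    """
    anomalies = []
    for i in range(window, len(pressure)):
        baseline = pressure[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(pressure[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Example: a steady ~60 bar signal with a sudden drop from index 100
rng = np.random.default_rng(0)
readings = 60.0 + rng.normal(0.0, 0.2, 200)
readings[100:110] -= 5.0  # simulated leak-driven pressure drop
print(flag_pressure_anomalies(readings))
```

Real leak-detection systems add flow balancing and transient hydraulic models, but the core idea of scoring deviations from an expected baseline is the same.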
     

Q2) How can AI algorithms be utilized to predict and prevent potential pipeline failures or disruptions, and what are the best practices for implementing these solutions?

Ove Heitmann Hansen: Digitalization and AI capabilities depend on the trustworthiness and quality of the data. Data quality assurance and certification can range from trivial rules to complex evaluations and verifications based on reasoning mechanisms, Machine Learning, and AI, emphasizing data quality, confidentiality, integrity, and availability. To predict and prevent potential pipeline failures or disruptions, we need to establish the following capability levels:

  1. Descriptive Level: The current state is described and visualised by cleaning the data and making it accessible to understand what is happening. This could, for example, be a 3D model.
  2. Diagnostic Level: By adding sensor information to the descriptive model, decisions about the current state or condition can be made by identifying correlations in data to understand why it is happening.
  3. Predictive Level: By adding intelligence and analytics to the diagnostic model, one can start to predict future states, like remaining useful life, to understand what will happen.
  4. Prescriptive Level: Adding more advanced intelligence, AI, and hybrid models can recommend actions based on the actual condition, such as replacing a specific part within a certain timeframe.
  5. Automated Level: By connecting AI algorithms and the complex system to other systems, it could generate actions automatically, i.e., closing the control loop. 

Establish capability levels 1-4 at a minimum to predict and prevent potential pipeline failures or disruptions; a toy illustration of the Predictive Level follows.
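
As a toy illustration of the Predictive Level, here is a minimal Python sketch that estimates remaining useful life by fitting a linear corrosion trend to wall-thickness inspections. The linear-trend assumption and all values are illustrative:

```python
import numpy as np

def remaining_useful_life(years, thickness_mm, t_min_mm):
    """Extrapolate a linear wall-thinning trend to the minimum
    allowable thickness; returns years left after the last
    inspection, or None if no thinning trend is detected."""
    rate, intercept = np.polyfit(years, thickness_mm, 1)  # mm per year
    if rate >= 0:
        return None  # no measurable degradation
    year_at_t_min = (t_min_mm - intercept) / rate
    return year_at_t_min - years[-1]

# Example: three wall-thickness readings on one joint (mm)
years = np.array([2015.0, 2019.0, 2023.0])
thickness = np.array([9.5, 9.1, 8.6])
print(remaining_useful_life(years, thickness, t_min_mm=7.0))  # ~14 years
```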

Together with the industry, we at DNV have developed a Digital Trust framework of standards covering best practices:

  • DNV-RP-0497 Assurance of data quality management
  • DNV-RP-0317 Assurance of data collection and transmission in sensor systems
  • DNV-RP-A203 Technology qualification
  • DNV-RP-0510 Assurance of data-driven applications
  • DNV-RP-0665 Assurance of machine learning applications
  • DNV-RP-0513 Assurance of simulation models
  • DNV-RP-A204 Assurance of digital twins
  • DNV-RP-0575 Cyber Security
  • DNV-RP-0670 Asset information modelling framework
  • DNV-RP-0671 Assurance of AI-enabled systems
     

Q3) With increasing cyber threats, especially in the context of geopolitical tensions, what are the key elements of a robust cybersecurity strategy for pipeline operations, and how can operators stay ahead of evolving threats?

Jon Hopkins: Pipeline operators should develop and maintain a comprehensive threat intelligence plan to stay ahead of evolving cybersecurity threats. This plan should involve continuous monitoring for new and emerging threats, analysing their potential impacts, and proactively addressing any identified vulnerabilities. By staying informed about the latest threat landscape, operators can better anticipate and mitigate risks before they materialize.

Regularly running tabletop exercises is another crucial strategy. These exercises simulate potential security incidents, allowing the organization to test and refine its response plan. Tabletop exercises should not be a one-and-done affair; rather, they should be revisited multiple times a year to ensure incident response plans run as smoothly as possible. Through these simulations, operators can identify weaknesses and areas for improvement. Iterating on these findings helps build a more resilient security posture that is ready to handle real-world threats.

Furthermore, vigilance against phishing attempts is essential. Phishing remains one of the most common and effective methods for cyber attackers to gain unauthorized access. Now, with generative AI usage on the rise, we are seeing more sophisticated phishing tactics, making detection that much harder. Educating employees about how to recognize and report suspicious emails and other forms of communication is vital. This awareness can significantly reduce the risk of data breaches and unauthorized access, protecting the organization’s sensitive information.

By implementing these measures, pipeline operators can enhance their cybersecurity posture and better protect their critical infrastructure from evolving threats.
 

Q4) What are the key considerations and challenges when using simulation models to transition existing pipeline systems to accommodate hydrogen?

Tony Alfano: Transitioning existing pipeline systems to accommodate hydrogen presents several unique challenges that must be addressed through careful simulation and modelling. The key considerations include:

  1. Material Compatibility: Hydrogen can cause embrittlement in certain metals, potentially leading to cracks and failures. Simulation models must accurately predict how different materials in the existing infrastructure will interact with hydrogen over time.
  2. Flow Dynamics: Hydrogen has different flow characteristics compared to natural gas. Models need to account for these differences to ensure proper pressure management and flow control.
  3. Leak Detection: Hydrogen molecules are smaller and more prone to leakage. Simulation models must be adapted to detect and predict potential leak scenarios specific to hydrogen.
  4. Blending Scenarios: Many transitions will involve blending hydrogen with natural gas. Models need to simulate various blending ratios and their impacts on the system (a simple illustration follows this list).
  5. Safety and Risk Assessment: Given hydrogen's high flammability, safety simulations are crucial for risk assessment and mitigation planning.
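
To make point 4 concrete, here is a minimal Python sketch of the volumetric energy arithmetic behind blending studies. The heating values are approximate reference figures at standard conditions; real analyses use an equation of state and the measured gas composition:

```python
# Approximate lower heating values at standard conditions (MJ/m3)
LHV_H2 = 10.8   # hydrogen
LHV_CH4 = 35.8  # methane (stand-in for natural gas)

def blend_energy_density(h2_fraction):
    """Volumetric energy density of a hydrogen/methane blend.
    h2_fraction: hydrogen share by volume, 0.0 to 1.0."""
    return h2_fraction * LHV_H2 + (1.0 - h2_fraction) * LHV_CH4

for pct in (0, 10, 20, 50):
    e = blend_energy_density(pct / 100)
    print(f"{pct:>3}% H2: {e:.1f} MJ/m3 ({e / LHV_CH4:.0%} of pure methane)")
```

A 20% hydrogen blend carries roughly 14% less energy per cubic metre than pure methane, which is why flow rates and pressure management must be re-simulated rather than carried over from natural gas operation.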

The main challenge lies in the accuracy of these models, given the limited real-world data on large-scale hydrogen pipeline operations. At DNV, we're developing and validating these models through a combination of laboratory testing and pilot projects, leveraging our extensive experience in natural gas and hydrogen systems. Overcoming these challenges requires a multidisciplinary approach, combining materials science, fluid dynamics, and risk assessment expertise.
 

Q5) How are digital twins being used to enhance predictive analytics capabilities in pipeline monitoring, and what are the main technological and data challenges to be addressed?

Ove Heitmann Hansen: Network demand, especially for natural gas, can fluctuate greatly not only day-to-day but hour-by-hour. Internal pressure associated with high demand and increased flow places higher stress on the pipe, bringing it closer to its failure state. Overlaying and aligning network analysis/hydraulic modelling, maintenance records, operational history, integrity assessments, corrosion control data (e.g., CP performance, CIS and DCVG surveys, coupon sampling), and other complementary datasets, including externally available data, is already helping operators pinpoint potential risk along a pipeline. The next step is giving engineers and analysts the ability to leverage these aligned datasets (i.e., “Digital Twins”) daily, empowering them to work smarter and faster; however, this relies on implementing a robust enterprise data platform with a well-planned architecture. The future will be an interface/application that allows those engineers to “drag and drop” data from various datasets to build their own AI/ML models, enabling critical, thoughtful, and tactical decisions while removing the tedious data-cleansing tasks that many are burdened with today. The two biggest challenges are:

  1. Digital Architecture and Data Platform: Implementing a Digital Data Platform that integrates various information models, ensures interoperability, and provides shared services across the company is fundamental, as well as hosting, validating, contextualizing, securing, and managing structural data.
  2. Data Alignment and Relationships: Understanding how data is aligned is a critical first step to ensuring that a digital twin is represented correctly. The relationship between data and the asset it describes can be complex and may need to be modelled case by case (e.g., time-series SCADA data, where one measurement point is associated with many segments of pipe within a pressure network; a small sketch of this relationship follows).
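
A minimal Python sketch of that one-to-many relationship, with hypothetical tag and segment IDs:

```python
from dataclasses import dataclass, field

@dataclass
class PipeSegment:
    segment_id: str
    length_m: float

@dataclass
class PressureZone:
    """Associates one SCADA measurement point with the pipe
    segments whose internal pressure it represents."""
    scada_tag: str
    segments: list = field(default_factory=list)

    def apply_reading(self, pressure_bar, timestamp):
        # One time-series point fans out to every segment in the zone.
        return {s.segment_id: (timestamp, pressure_bar) for s in self.segments}

zone = PressureZone("PT-4712", [PipeSegment("SEG-001", 812.0),
                                PipeSegment("SEG-002", 640.5)])
print(zone.apply_reading(58.3, "2024-06-01T08:00Z"))
```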

Hence, an assurance process for digital twins used in data-driven verification and predictive analytics needs to comply with technology qualification, capability and data quality assessments, and verification of all simulation models and AI-enabled systems.
 

Q6) What strategies can pipeline operators employ to ensure the interoperability of various digital systems and platforms across their operations?

Ove Heitmann Hansen: Ensuring interoperability across diverse digital systems is crucial for maximizing the benefits of digitalization in pipeline operations. Steps that can be taken to ensure interoperability between digital systems and the data they own and manage are as follows:

  1. Unique identification across digital systems: Assets have one unique identifier that is used in all digital systems (i.e., one asset does not have different names/IDs in different systems). For example, a mainline valve carries the same identifier in the Work Management System, the SCADA system, and the GIS.
  2. Data Standardization and Protocols: Deploying Data Standards that establish common data formats and standards across the organization ensures that different systems can easily share and understand data. Minimizing free-form text fields and utilizing domains whenever possible will alleviate future misinterpretations, both by humans and computers (e.g., machine learning models).
  3. Data Relationships and Connectivity: Create data relationships that can be programmatically deployed when combining data sets across digital systems. Most often, companies will utilize Middleware to develop rule-based bridges between different systems that enable them to exchange data and function together.
  4. One System of Record: There should be one system of record that is identified as the “truth” and cascades updates to other applicable digital systems automatically (i.e., an update to the system of record gets pushed to all other applicable digital systems). This minimizes the effort to update an asset and its data and ensures that other systems utilizing this data are quickly kept up to date (a minimal sketch combining points 1 and 4 follows this list).
  5. Benchmarking: Benchmark current work processes and data flows to ensure they are running optimally. When gaps or opportunities are identified, a thoughtful approach to streamlining these processes will reduce the time it takes for data to move from capture to storage. This can be critical to making informed decisions based on all available data.
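
A minimal Python sketch combining points 1 and 4: one unique asset ID shared across systems, with updates cascading from a single system of record. The system names, IDs, and attributes are illustrative, not a real integration API:

```python
class AssetRegistry:
    """System of record: holds the 'truth' and pushes updates out."""
    def __init__(self):
        self.records = {}      # keyed by the unique asset ID
        self.subscribers = []  # downstream systems (SCADA, GIS, WMS, ...)

    def subscribe(self, system):
        self.subscribers.append(system)

    def update(self, asset_id, **attributes):
        record = self.records.setdefault(asset_id, {})
        record.update(attributes)
        for system in self.subscribers:  # cascade to every system
            system.sync(asset_id, dict(record))

class DownstreamSystem:
    def __init__(self, name):
        self.name, self.assets = name, {}

    def sync(self, asset_id, record):
        self.assets[asset_id] = record
        print(f"{self.name} updated {asset_id}: {record}")

registry = AssetRegistry()
for name in ("SCADA", "GIS", "WorkManagement"):
    registry.subscribe(DownstreamSystem(name))
registry.update("MLV-0142", status="closed", last_inspection="2024-05-17")
```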
     

Q7) What are the key challenges in implementing a comprehensive asset performance management system for pipelines, and how can these be overcome?

Tony Alfano: Implementing a comprehensive asset performance management (APM) system for pipelines involves several challenges:

  • Data Integration: Pipelines often have disparate data sources and legacy systems that need to be integrated.
  • Data Quality: Ensuring consistent, accurate, and complete data across all assets can be challenging.
  • Complexity: Pipeline systems are complex, with many interdependent components and variables to consider.
  • Cultural Resistance: There may be resistance to adopting new technologies and changing established processes.
  • Skills Gap: Many organizations lack the in-house expertise to implement and manage advanced APM systems.
  • ROI Justification: It can be challenging to quantify the long-term benefits of APM systems to justify the initial investment.

To overcome these challenges:

  • Implement a robust data governance framework to ensure data quality and consistency.
  • Use advanced integration platforms to connect disparate systems and data sources.
  • Start with pilot projects to demonstrate value and gain organizational buy-in.
  • Invest in training programs to upskill existing staff and recruit necessary expertise.
  • Develop clear KPIs to measure the impact of the APM system and demonstrate ROI.
  • Partner with experienced providers like DNV to leverage industry best practices and proven implementation strategies.
     

Q8) How can risk-based inspection methodologies be enhanced through the use of advanced data analytics and machine learning?

Tony Alfano: Risk-based inspection (RBI) methodologies can be significantly enhanced through advanced data analytics and machine learning in several ways:

  • Improved Risk Assessment: Machine learning algorithms can analyze vast amounts of historical and real-time data to provide more accurate and dynamic risk assessments.
  • Predictive Modeling: Advanced analytics can predict future degradation rates and failure probabilities, allowing for more proactive inspection planning.
  • Pattern Recognition: Machine learning can identify subtle patterns in data that humans might miss, potentially revealing new risk factors or correlations.
  • Optimization of Inspection Intervals: By continuously analyzing data, these technologies can help optimize inspection intervals, focusing resources on high-risk areas while potentially extending intervals for low-risk areas (a toy illustration follows this list).
  • Automated Anomaly Detection: AI can automatically flag anomalies in inspection data, helping to prioritize areas for human review.
  • Integration of Multiple Data Sources: Advanced analytics can integrate data from various sources (e.g., ILI, external corrosion direct assessment, cathodic protection surveys) to provide a more comprehensive risk picture.
  • Continuous Learning: Machine learning models can continuously improve their accuracy as they are fed more data over time.
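
As a toy illustration (not a published RBI rule), here is a minimal Python sketch of interval optimization: a baseline re-inspection interval is scaled by a model-estimated probability of failure (PoF). All numbers are illustrative assumptions:

```python
def inspection_interval_years(pof, base_interval=5.0,
                              min_interval=1.0, max_interval=10.0):
    """Shorter intervals for high-risk segments, longer for low-risk.
    pof: model-estimated probability of failure in [0, 1]."""
    pof = min(max(pof, 0.0), 1.0)
    multiplier = 2.0 - 1.8 * pof  # 2.0x at zero risk down to 0.2x at pof=1
    return min(max(base_interval * multiplier, min_interval), max_interval)

for pof in (0.02, 0.2, 0.8):
    print(f"PoF {pof:.2f} -> inspect every "
          f"{inspection_interval_years(pof):.1f} years")
```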

At DNV, we're developing and implementing these advanced RBI methodologies, helping pipeline operators move towards a more dynamic, data-driven approach to integrity management. This approach enhances safety and reliability, optimizes inspection resources, and reduces overall lifecycle costs.
 

Q9) How can pipeline operators leverage big data analytics to optimize maintenance schedules and reduce downtime?

Ove Heitmann Hansen: Pipeline operators can leverage big data analytics by establishing a digital infrastructure that allows various data sets to be analyzed in real time. Inspection data sets (e.g., ILI, CIS, DCVG) coupled with historical maintenance records can then be used to create, calibrate, and refresh an interconnected optimization algorithm whose main objective is to target the highest-priority maintenance locations. These optimization algorithms would harness probabilistic risk model outputs to more accurately distinguish assets that necessitate urgent maintenance from sections where maintenance is more opportunistic. Establishing a graphical network as a digital twin, representing the network’s junctions and pipe segments as nodes and edges, can be highly effective: it provides a down-sampled representation of the network that still preserves the most important features, and it supports big data computations with exceptional efficiency.
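
A minimal Python sketch of such a graph representation, with junctions as nodes and pipe segments as attribute-carrying edges (IDs and risk scores are illustrative):

```python
from collections import defaultdict

class PipelineGraph:
    def __init__(self):
        self.edges = {}                    # (node_a, node_b) -> attributes
        self.adjacency = defaultdict(set)  # node -> connected nodes

    def add_segment(self, a, b, **attrs):
        self.edges[(a, b)] = attrs
        self.adjacency[a].add(b)
        self.adjacency[b].add(a)

    def highest_priority(self, n=1):
        """Rank segments by a precomputed risk-score attribute."""
        ranked = sorted(self.edges.items(),
                        key=lambda kv: kv[1].get("risk", 0.0), reverse=True)
        return ranked[:n]

net = PipelineGraph()
net.add_segment("J1", "J2", length_m=1200, risk=0.12)
net.add_segment("J2", "J3", length_m=860, risk=0.67)  # e.g., ILI anomaly
net.add_segment("J3", "J4", length_m=2100, risk=0.05)
print(net.highest_priority())
```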

 

Got more questions about Digitalization and Security? Want to speak to an expert at DNV or know more about our solutions? Submit your queries here or write to us at digital@dnv.com. 

 

The Experts

Tony Alfano, Pipeline Product Line Director, DNV

Tony Alfano leads the Pipeline Product Line at DNV Digital Solutions, bringing decades of experience in helping clients navigate complex safety and reliability challenges. As a trusted advisor, he combines deep industry knowledge with innovative approaches to optimise risk strategies for pipeline networks and facilities worldwide. A recognized thought leader, Tony has authored international publications and frequently speaks at industry workshops. His passion for integrating science and technology drives DNV's development of cutting-edge solutions, significantly enhancing pipeline safety for clients across the globe.


Ove Heitmann Hansen, Senior Principal Digital Trust, DNV

Ove Heitmann Hansen leads the Digital Trust initiatives and services in North America, enabling our customers and their stakeholders to manage digitalization risk and complexity with confidence. Ove is a recognised Digital Trust advisor and drives DNV’s development of qualification and assurance processes for digital solutions, where quality and trustworthiness need to be continuously assessed. He has authored international publications and frequently speaks at industry conferences. His passion for Digital Trust significantly enhances the digital transformation for clients across the globe. As Ove puts it: “If you cannot trust your digital representation, technology or data, you cannot use it in the real world”.


Jon Hopkins, Synergi Pipeline Security Lead, DNV

Jon Hopkins is the Security Lead of Synergi Pipeline in DNV Digital Solutions. As a new member of the Pipeline Integrity team, he’s dedicated to fostering a culture of security awareness and continuous improvement within DNV’s Software Engineering unit.