
  • By Andy Chandler
  • In Blog
  • Posted 24/11/2016

Over the last few years, big data analytics and BI have moved from a world of hype to much more of a general business norm, with big-name vendors like Microsoft, Oracle, Yahoo & SAP all entering the market. Each of these, together with a host of new players, is addressing business issues in ways that were not previously possible.

In the LNG world, however, big data analytics in isolation is perhaps not the answer, and a simulation of the intended process may be of more use.

Where might the big data pitfalls lie?

In order to run a successful big data project, two factors need to be in play. Firstly, sufficiently complete and accurate data, containing evidence of best practice, must be available; secondly, the right tools and suitably trained data scientists, capable of unearthing that evidence from the data, must be in place. Discrepancies or omissions in data derived from multiple sources will adversely affect the quality of any conclusions that can be drawn, as the sketch below illustrates.
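A minimal sketch of the kind of completeness and consistency checks this implies, using entirely hypothetical sources, column names and values:

```python
import pandas as pd

# Two feeds describing the same voyages, e.g. a scheduling system
# and a terminal management system (both invented for illustration).
schedule = pd.DataFrame({
    "voyage_id": ["V001", "V002", "V003"],
    "laden_departure": ["2016-01-05", "2016-02-11", None],  # an omission
    "cargo_m3": [140_000, 145_000, 138_000],
})
terminal = pd.DataFrame({
    "voyage_id": ["V001", "V002", "V003"],
    "cargo_m3": [140_000, 143_500, 138_000],                # a discrepancy
})

# Omissions: fields missing within a single source.
print(schedule.isna().sum())

# Discrepancies: the same fact reported differently by two sources.
merged = schedule.merge(terminal, on="voyage_id", suffixes=("_sched", "_term"))
mismatch = merged[merged["cargo_m3_sched"] != merged["cargo_m3_term"]]
print(mismatch[["voyage_id", "cargo_m3_sched", "cargo_m3_term"]])
```

A real project would run such checks across far more fields and feeds, but even this much exposes the gaps and disagreements that undermine any conclusions drawn downstream.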

The LNG world is constantly evolving, through technological advances in liquefaction and regasification processes and ever-increasing ship sizes. Alongside this technical evolution, the commercial world is becoming ever more complex, with a growing shift from long-term contracts to short-term trading, FOB contracts and contractual flexibility.

So let’s assume that a big data project has been executed on an organisation’s historical data. Value will only be delivered if the findings can be applied to future operations in a way that provides benefit. Knowing how past operations could have been improved is of little help in dealing with the changing LNG environment described above: big data’s strength in relatively static environments is lost when change is ever present.

Real improvements can only come from understanding the causes of observed behaviour.

It is far too easy for the untrained to confuse correlation with causation. There are numerous classic pseudo-correlations, with no causal relationship whatsoever, linking such disparate behaviours as ice cream consumption and murder rates, or US spending on science, space & technology and the number of suicides by asphyxiation.

It is dangerous to assume a link between two observed behaviours without being certain that the relationship between them is causal, and that certainty is hard to come by if data might be missing or the behaviours are simply coincidental. Worse still, the cause might be separated in time from the effect: a weather delay to a ship voyage a month ago, leading to a missed canal transit convoy, can result in a tank top now.
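A tiny numerical illustration, using synthetic data throughout: two independent series that merely share an upward time trend will show a near-perfect Pearson correlation, and detrending them exposes the absence of any real relationship.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2000, 2016)

# Two independent quantities that both happen to drift upward over time.
ice_cream_sales = 100 + 5 * (years - 2000) + rng.normal(0, 3, years.size)
unrelated_metric = 40 + 2 * (years - 2000) + rng.normal(0, 2, years.size)

r = np.corrcoef(ice_cream_sales, unrelated_metric)[0, 1]
print(f"Pearson r = {r:.3f}")  # typically above 0.95, yet no causal link exists

# Removing the shared linear trend from each series leaves only noise,
# and the apparent relationship largely disappears.
resid_a = ice_cream_sales - np.polyval(np.polyfit(years, ice_cream_sales, 1), years)
resid_b = unrelated_metric - np.polyval(np.polyfit(years, unrelated_metric, 1), years)
print(f"Pearson r after detrending = {np.corrcoef(resid_a, resid_b)[0, 1]:.3f}")
```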

Improvements come when the right questions are being asked of a system capable of providing the answer.

“Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” John W. Tukey

So how do simulation and a virtual business model take big data analysis one step further?

Candidate improvements identified by big data analysis can quickly be tested in a simulation to see whether they have merit in the expected future world.

The individual nature of each component of the process is incorporated and played out many times in a virtual world. If coincident events between two or more independent areas result in undesired behaviour, this becomes apparent and the likelihood of any recurrence can be measured. Thanks to the fast virtual execution of a simulated future, the right questions can be asked and confidence in the likely behaviour of the entire process can be established.
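A minimal Monte Carlo sketch of the idea, not any particular commercial model: every number below (rates, tank size, delay probabilities) is invented for illustration. Random weather delays to ship arrivals are played out many times to estimate how often a coincidence of steady production and a late ship produces a tank top.

```python
import random

def one_run(days=90, production_rate=25_000, tank_capacity=180_000,
            cargo_size=150_000, ship_interval=6, delay_prob=0.15,
            max_delay=4):
    """Simulate one possible future; return True if the tank ever tops out."""
    level = 0.0
    next_arrival = ship_interval
    for day in range(1, days + 1):
        level += production_rate                  # liquefaction fills the tank
        if day >= next_arrival:
            level = max(0.0, level - cargo_size)  # a ship lifts its cargo
            delay = random.randint(1, max_delay) if random.random() < delay_prob else 0
            next_arrival = day + ship_interval + delay
        if level > tank_capacity:
            return True                           # tank top: production must stop
    return False

runs = 10_000
tank_tops = sum(one_run() for _ in range(runs))
print(f"Estimated tank-top likelihood over 90 days: {tank_tops / runs:.1%}")
```

Note how a delay pushes every subsequent arrival back and leaves surplus inventory in the tank, so one bad week can surface as a tank top a month later: exactly the separation of cause and effect described earlier.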

By making minor variations in input parameters such as breakdown profiles, transfer rates or expected weather events, sensitivities to particular inputs can be established, and the greatest risks to performance can be identified.
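Continuing the same hypothetical sketch, a one-at-a-time parameter sweep over the one_run() model above shows how such sensitivities might be established:

```python
def estimate_risk(runs=5_000, **overrides):
    """Estimated tank-top likelihood with some inputs overridden."""
    return sum(one_run(**overrides) for _ in range(runs)) / runs

print(f"Base tank-top risk: {estimate_risk():.1%}")

# Vary one input at a time, holding the rest at their base values,
# to see which input moves the answer most.
sweeps = {
    "delay_prob": [0.05, 0.15, 0.30],             # likelihood of a weather delay
    "max_delay": [2, 4, 8],                       # worst-case delay in days
    "production_rate": [22_500, 25_000, 27_500],  # liquefaction throughput, m3/day
}
for name, values in sweeps.items():
    for value in values:
        print(f"  {name}={value}: risk {estimate_risk(**{name: value}):.1%}")
```

The input whose sweep produces the largest swing in estimated risk is the one that deserves the closest operational attention.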

Analysing history is not always the best way to predict future success if future conditions are likely to be different. To borrow from the athletics world: if it were possible to freeze a 100m race after 50m, you could perform a large amount of analysis on that first 50m to understand why the best performers at that point were doing so well, and apply that learning to the remainder of the race. Use that same process after 50m of a 200m race, however, and you would end up close to the landing point of the javelins before you had crossed the finish line. Learning from the past and feeding that learning into a simulated future gives the best chance of success.

While big data undoubtedly has its strengths and its place in the business world in general, understanding and improving performance in the LNG world is better served by the addition of predictive simulation.

