  • By Jeremy Horgan
  • In Blog
  • Posted 22/02/2017

'Speed to answer' is a phrase we hear a lot around our office. In our world of predictive simulation, we take it to mean the speed at which a decision maker can obtain valuable answers from a simulation model, within a timeframe that matters. This can involve running a set of pre-determined what-if scenarios or some form of goal seeking (optimisation). In either case, you want the answer the minute you hit the run button. For simple models, simulation delivers these answers almost instantly. However, for more complex models, where variability exists and many replications are required for statistical confidence, speed to answer is of a different magnitude.

Look beyond a ‘traditional’ approach

When you find yourself in the large-model, many-replications situation, you need to look beyond traditional on-premise solutions. If you have access to an on-premise High Performance Computing (HPC) cluster, great; however, in my experience, few organisations have that kind of horsepower, and most end up making do with what they have.

There are other ways that no longer require a big investment in hardware and maintenance. Azure Batch is a managed service from Microsoft that lets you run large-scale parallel and HPC applications in the cloud, and it is ideally suited to running simulations.

Batch processing works well for the simulation workloads described above. What-if scenarios are intrinsically parallel, allowing them to be split into independent workloads that can run simultaneously on many computers.

Using a batch service to scale up the deployment of your models

With the Azure Batch service, you define a pool of compute nodes (virtual machines) and schedule jobs and tasks to run on those nodes. As the compute nodes are created, a start-up task lets you automatically deploy the software applications you want to run on each node. Batch jobs and tasks are then created to run the workload on the pool of compute nodes, and results can be collected and persisted in a storage container. Once complete, the pool of compute nodes can be destroyed.
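The pool/job/task sequence can be sketched with the Azure CLI. This is an outline only, not a ready-to-run script: the pool and job IDs, VM size, image and agent SKU values, and the `install-sim-engine.sh` and `run-model.sh` commands are all placeholders you would replace with values from your own subscription and model.

```shell
# Create a pool of compute nodes (VMs). The start task runs on each node
# as it joins the pool - this is where the simulation engine is installed.
az batch pool create \
    --id simpool \
    --vm-size Standard_D2s_v3 \
    --target-dedicated-nodes 4 \
    --image canonical:0001-com-ubuntu-server-jammy:22_04-lts \
    --node-agent-sku-id "batch.node.ubuntu 22.04" \
    --start-task-command-line "/bin/bash -c './install-sim-engine.sh'"

# A job groups the tasks and binds them to the pool.
az batch job create --id whatif-job --pool-id simpool

# One task per what-if scenario; each task writes its results to storage.
az batch task create --job-id whatif-job --task-id scenario-1 \
    --command-line "/bin/bash -c './run-model.sh scenario-1.cfg'"

# Once all tasks have completed, tear down the pool to stop paying for it.
az batch pool delete --pool-id simpool --yes
```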

Tapping this power with WITNESS

The simulation software we have designed at Lanner hooks nicely into this approach. In our case, we use the start-up tasks to deploy the simulation engine that runs the workloads. Once these compute nodes are up and running, they are ready to start running what-if scenarios (jobs). Each scenario can be run many times (tasks), using different random control, to gain statistical confidence in the answers provided. Simulation results are pushed back to storage containers for later download by a client application. Our cloud licensing model also makes deploying software in this environment a breeze.
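The "many replications under different random control" idea can be illustrated with a small, self-contained sketch. The `run_replication` function below is a toy model standing in for a real simulation engine run (the Gaussian process times and all numbers are invented); each replication gets its own random seed, and the replication results are combined into an approximate confidence interval for the mean:

```python
import random
import statistics

def run_replication(seed):
    # Toy stand-in for one simulation run: total time of 50 process
    # steps whose durations vary randomly around a 10-minute mean.
    rng = random.Random(seed)
    return sum(rng.gauss(10.0, 2.0) for _ in range(50))

def confidence_interval(values, z=1.96):
    # Approximate 95% interval for the mean, using a normal critical value.
    mean = statistics.mean(values)
    half = z * statistics.stdev(values) / len(values) ** 0.5
    return mean - half, mean + half

# Each replication (a Batch task, in the cloud setting) gets a distinct
# seed, i.e. different random control over the model's variability.
results = [run_replication(seed) for seed in range(30)]
low, high = confidence_interval(results)
print(f"mean total time in [{low:.1f}, {high:.1f}] (~95% confidence)")
```

More replications narrow the interval, which is exactly why being able to fan them out across many compute nodes shortens speed to answer.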

If your organisation uses simulation to help predict operational loads, or frequently runs large numbers of what-if scenarios, then this kind of platform-as-a-service approach makes things much easier for you. For more information, or to talk any of the above through in more detail, feel free to drop me an email –
