The Journey
My journey started in 1995, when leaders across F1 began to see a route to differentiation through data aggregation and simulation.
It’s no secret that F1 is one of the world’s most expensive sports. But that expense actually fuelled the drive toward technical innovation. By around 2005, cost-saving became the prime directive of many F1 teams. One of the most efficient ways to save on costs when you have mountains of data to work with is to move your experimentation into the virtual realm rather than run it in the (increasingly expensive) real world. From 2010 onwards, the costs of the sport have only continued to rise, reinforcing and strengthening a virtual approach built on simulation and experimentation. In other words, the past 30 years of F1’s journey have been marked by a move from an expensive and inefficient world of testing in wind tunnels and at race tracks to a faster, cheaper, and more holistic world of expansive virtual experimentation, calibrated and validated by minimal real-world testing.
But this is obviously a vast oversimplification of the journey. In reality, I witnessed about six important stages of change within F1’s development and adoption of simulation-based technologies during my tenure. Each of these phases was marked by a variety of use cases as the maturity and fidelity of the technologies grew:
Starting at the beginning…
Data Aggregation
This is foundational. We had to make sure that all the relevant data that potential users could access across the business was flexible, but consistent. This included making sure that contextual datasets were just as available as primary datasets, so that trustworthy conclusions could be drawn.
Additionally, ensuring that users draw on a single, shared data source when performing their analysis is crucial. The ambiguity trap, and with it disagreement, is very easy to fall into if alternative data sources, or alternative means of accessing the same fundamental data, are available.
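To make the "single, shared source" idea concrete, here is a minimal Python sketch of one sanctioned accessor that every analysis must go through. All names (the session, the channel, the values) are invented for illustration; the point is simply that two analysts can never disagree because they read different copies of the same data.

```python
# Hypothetical sketch of a single-source-of-truth accessor. Every consumer
# reads through load_channel(), so all analyses see identical values.
from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    name: str
    unit: str
    samples: tuple  # immutable, so downstream users cannot mutate shared data

_CANONICAL_STORE = {
    # (session, channel) -> Channel; names and numbers are illustrative only
    ("race_01", "brake_temp_fl"): Channel("brake_temp_fl", "degC", (412.0, 418.5, 425.1)),
}

def load_channel(session: str, channel: str) -> Channel:
    """The ONLY sanctioned way to read telemetry in this sketch."""
    try:
        return _CANONICAL_STORE[(session, channel)]
    except KeyError:
        raise KeyError(f"Unknown channel {channel!r} for session {session!r}") from None

fl = load_channel("race_01", "brake_temp_fl")
print(fl.unit, max(fl.samples))  # every consumer sees the same unit and values
```

The frozen dataclass and tuple are deliberate: a shared source of truth only stays trustworthy if consumers cannot quietly modify what they read.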
Situational Awareness
Access to data on its own is obviously not sufficient to start deriving valuable insights. The first step is being able to clearly understand the situation within which you are operating, whether that’s understanding the performance of the car in order to improve it, or understanding how a race is evolving in order to win it. Simple, intuitive, and communal visualisation and analysis tools, including all the relevant data, are key. Getting this right delivered huge value across the team.
Focused Experimentation
Understanding the situation you are in is great, but there is far more value to be gained from experimentation. Historically, this is where experienced experts would use their knowledge to predict how outcomes could be improved by making changes to the status quo. But in such a complex sport, it’s too much for experienced individuals to hold, process, and interpret all of the information required to make the best decisions. This is where experimentation is used to support, NOT replace, the experts.
Through focused experimentation, we poured our team’s communal knowledge into very high-fidelity models and simulations of how individual aspects of the car would perform: CFD models of aerodynamic flow over the front wing, dynamic suspension models of how the springs and dampers move, thermal models of how hot the brakes would get. Perturbations and design changes would be simulated and compared in order to supply the experts with the detailed information they needed to make changes and set direction.
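The real suspension models were far richer than anything that fits here, but the perturb-and-compare loop can be sketched with a toy quarter-car spring-damper model. All parameter values below are invented round numbers, not real F1 data; the shape of the workflow is the point: simulate two candidate damper settings, compare an outcome metric, hand the numbers to the expert.

```python
# Toy sketch of focused experimentation: a single-mass spring-damper model
# stepped with semi-implicit Euler, run with two damper coefficients so the
# set-ups can be compared. All numbers are illustrative, not real F1 data.

def simulate_bump(c_damper, k_spring=80_000.0, mass=180.0,
                  bump=0.02, dt=1e-4, steps=5_000):
    """Return peak overshoot past equilibrium after a step bump (metres)."""
    x, v = bump, 0.0              # displaced by the bump, released at rest
    overshoot = 0.0
    for _ in range(steps):
        a = (-k_spring * x - c_damper * v) / mass   # F = -kx - cv
        v += a * dt                                  # semi-implicit Euler
        x += v * dt
        overshoot = max(overshoot, -x)               # excursion past rest
    return overshoot

# Perturb the design: soft vs stiff damper, same bump.
for c in (2_000.0, 8_000.0):
    print(f"damper c={c:>7.0f} N.s/m -> overshoot = {simulate_bump(c) * 1000:.2f} mm")
```

The softer damper rings past equilibrium; the stiffer one settles without overshooting. Neither result "decides" anything on its own: it is exactly the kind of comparison a suspension engineer would weigh against ride, grip, and tyre considerations.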
Broad Experimentation
Here, we achieved similar outcomes to focused experimentation, again supporting experts as they made decisions. This phase of “broad experimentation” refers to use cases that required broad, contextually diverse, and integrated models to be able to simulate the systems of interest. Modularity of the modelling and simulation was paramount: it allowed us to integrate component models and keep the systems we were looking at evergreen.
For example, consider strategic decisions made across multiple F1 seasons. How should you split your finite investment across all the areas affecting team performance? Aerodynamics, engine power, suspension, tyres, pit stops, reliability, wind tunnel investment, and more are all vital variables to consider.
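The investment-split question above can be sketched as a broad experiment over modular component models. Each area below gets a crude diminishing-returns curve mapping spend to lap-time gain; the sweep tries every coarse split of the budget and keeps the best. The gain curves and budget units are entirely invented for illustration; a real exercise would plug in calibrated models per area, which is exactly what modularity buys you.

```python
# Toy sketch of broad experimentation over an investment split. Each area has
# a stand-in diminishing-returns model (invented coefficients, not F1 data).
import itertools
import math

BUDGET = 10  # abstract units of investment to allocate

# Modular component models: spend (units) -> lap-time gain (seconds).
GAIN_MODELS = {
    "aero":       lambda s: 0.60 * math.log1p(s),
    "engine":     lambda s: 0.45 * math.log1p(s),
    "suspension": lambda s: 0.30 * math.log1p(s),
    "pit_stops":  lambda s: 0.20 * math.log1p(s),
}

def total_gain(split):
    return sum(model(spend) for model, spend in zip(GAIN_MODELS.values(), split))

# Exhaustive sweep of integer splits that spend the whole budget.
best = max(
    (split for split in itertools.product(range(BUDGET + 1), repeat=len(GAIN_MODELS))
     if sum(split) == BUDGET),
    key=total_gain,
)
print(dict(zip(GAIN_MODELS, best)), f"-> {total_gain(best):.3f} s gained")
```

Because each area is a self-contained function, swapping in a better aero model (or adding a new area) never touches the experiment loop, which is the evergreen property the text describes.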
Recommendation
This next phase of innovation refers to the point when optimisation routines and recommendation engines became mature enough to support the expert decision-makers. Progressing to this point enabled a higher volume of experimentation to be performed with aggregated outcomes that were intelligently filtered and presented intuitively to the experts.
It is important to remember that within this phase, the expert remained very much in control of, and responsible for, the decisions being made. Notice a theme here? All information was transparent and open to interrogation; innovation supported the experts as they explored, analysed, and interpreted a much wider space.
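A minimal sketch of that pattern: score many candidate set-ups, then present a short ranked list to the engineer while keeping every result available for interrogation. The surrogate lap-time function and its parameters below are made up purely to give the sweep something to score.

```python
# Sketch of the recommendation phase: rank many experiments, surface the top
# few, keep all results interrogable. The scoring function is an invented
# stand-in for a real simulation, not any team's model.

def lap_time(wing_angle, ride_height):
    """Toy surrogate model: lap time in seconds (lower is better)."""
    drag_penalty = 0.004 * wing_angle ** 2
    downforce_gain = 0.25 * wing_angle
    instability = 0.08 * (ride_height - 30) ** 2
    return 90.0 + drag_penalty - downforce_gain + instability / 100

# Exhaustive sweep: every result is kept, so the expert can drill into any row.
results = [
    {"wing": w, "ride_height": h, "lap_time": lap_time(w, h)}
    for w in range(0, 41, 5)
    for h in range(20, 41, 5)
]

top3 = sorted(results, key=lambda r: r["lap_time"])[:3]  # recommend, don't decide
for r in top3:
    print(f"wing={r['wing']:>2}  ride_height={r['ride_height']}  t={r['lap_time']:.3f}s")
```

Note that `results` retains the losing candidates too: transparency means the expert can ask why a configuration was ranked where it was, not just see the winners.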
Automation
Finally, and most simply: when all five of the previous stages are executed intelligently and carefully, certain decisions can be automated, delivering value where uncertainty in the likely outcome is incredibly low.
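One way to picture "automate only when uncertainty is incredibly low" is a gate on the spread of an ensemble of simulation outcomes: act automatically when the runs agree, escalate to the expert when they don't. The threshold and the outcome numbers below are illustrative, not a real policy.

```python
# Minimal sketch of uncertainty-gated automation: execute automatically only
# when repeated simulations are near-unanimous; otherwise escalate to the
# expert. Threshold and example figures are invented for illustration.
from statistics import mean, pstdev

def decide(ensemble_outcomes, auto_threshold=0.02):
    """ensemble_outcomes: predicted gains (seconds) from repeated simulations."""
    spread = pstdev(ensemble_outcomes)
    expected = mean(ensemble_outcomes)
    if spread < auto_threshold and expected > 0:
        return ("automate", expected)        # uncertainty incredibly low: act
    return ("escalate_to_expert", expected)  # human stays in the loop

print(decide([0.31, 0.30, 0.32, 0.31]))   # tight agreement
print(decide([0.40, -0.10, 0.25, 0.05]))  # disagreement
```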
Results and Benefits
All of these stages together resulted in a range of benefits, but the key differentiating result is that F1 fundamentally improved human decision-making instead of automating decision-making. This led to a hugely efficient car performance improvement program, realised massive cost savings, and made strategic investment trade-offs far more analytical.
But outside of simply making the business better, it was that human element to technology adoption that helped build incredible institutional knowledge, retention, and growth. By building common ways of working, F1 teams spoke the language of the technologies they were using. For example, asking ‘Do the simulations know about “x”?’, ‘What does the simulation say?’ and ‘How will the simulation inform what we do next?’ simply became the expectation, leading to more collaboration and better experimentation in both the virtual world and the real one. Plus, having a common data, simulation, and analysis platform across the company delivered the coveted ‘single source of truth.’ One team, one dream.
So what can other industries take away from F1’s adoption of simulation technology? A fair few things, in my opinion:
First: It’s not about the destination, it’s the journey!
There is value at every stage of the development and adoption journey. Progress along the way can take many forms and should be valued as highly as the end destination, because there is enormous value at the early stages where complexity and fidelity are lower. Take, for example, the vision of near-perfect, high-fidelity simulations of F1 cars being driven around the ideal track. Sure, a representation using AI driver models capable of deriving the “perfect lap” is a great and valuable vision. But in practice, some of the early, simpler car models, driven in a straight line on a perfectly smooth track, have proved over time to be among the most heavily used and most valuable.
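That "simple straight-line model" really can be a few lines of code. Below is a point-mass sketch with power-limited thrust and quadratic aero drag, integrated until it covers a fixed distance. The figures are rough public-ballpark values, not any team's data, and the traction model is deliberately crude.

```python
# The kind of simplistic straight-line model the text describes: a point mass
# with power-limited thrust and quadratic aero drag on a perfectly smooth
# track. Rough, illustrative figures only.

def straight_line_time(distance=1000.0, mass=795.0, power=560_000.0,
                       cda=1.5, rho=1.225, dt=0.01):
    """Seconds for a standing start to cover `distance` metres."""
    v, x, t = 0.0, 0.0, 0.0
    while x < distance:
        thrust = power / max(v, 5.0)       # crude power-limited traction
        drag = 0.5 * rho * cda * v * v     # quadratic aero drag
        v += (thrust - drag) / mass * dt
        x += v * dt
        t += dt
    return t

print(f"0 -> 1000 m in {straight_line_time():.1f} s")
```

Crude as it is, a model like this answers real questions (gearing, drag trade-offs, straight-line deltas between configurations) in milliseconds, which is exactly why the simple models earned so much use.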
Second: A blinkered focus on complexity can detract from usefulness.
No synthetic environment journey starts with the technology; it must focus on maximising user value. And on a similar note, a synthetic environment should only be as complex as it needs to be to derive the value that it has been designed for. Forgive the shameless plug, but at Skyral, for example, our focus on users and use cases determines where complexity and higher fidelity are required.
For example, Skyral has successfully modelled vast communication networks. It became clear from detailed customer and user interviews that many use cases were strategic in their nature, while others were distinctly tactical. The strategic use cases were better served by lower fidelity network models that integrated easily with much wider contextual models. This enabled simulations that could be executed quickly, facilitating broad experimentation.
High-fidelity network models, simulating where every packet of information flows, would provide higher accuracy, but they would also require long execution times and eliminate the possibility of broad scenario testing, one of the key user requirements.
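A hypothetical illustration of the low-fidelity end of that trade-off: treat each route as a chain of link capacities, so end-to-end throughput is simply the bottleneck link, instead of simulating every packet. The topology and capacities below are invented; the point is that a model this cheap can sweep many degradation scenarios quickly, which was the strategic requirement.

```python
# Low-fidelity network sketch: a route delivers its bottleneck link's
# capacity. Topology and capacities are invented for illustration.

LINKS = {  # (node_a, node_b) -> capacity in Mbit/s
    ("hq", "relay"): 100.0,
    ("relay", "field"): 20.0,
    ("field", "unit"): 50.0,
}

def route_throughput(route):
    """End-to-end throughput = minimum capacity along the route."""
    return min(LINKS[(a, b)] for a, b in zip(route, route[1:]))

# Broad experimentation: degrade each link in turn and see what survives.
route = ("hq", "relay", "field", "unit")
for link in LINKS:
    saved = LINKS[link]
    LINKS[link] = saved * 0.5              # scenario: link at half capacity
    print(link, "->", route_throughput(route), "Mbit/s end-to-end")
    LINKS[link] = saved                    # restore for the next scenario
```

A per-packet simulator would answer the same question far more accurately for one scenario; the aggregate model answers it well enough for hundreds of scenarios in the same wall-clock time.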
Third: Never lose the human touch.
Tooling is there to aid, not replace, operational experience, judgement, and intuition. At every stage of investment, learning, and development, users need to be bought in and brought along the journey. They also need to maintain a thorough, up-to-date understanding of the models. Developing a shared understanding and consensus to make decisions faster, easier, and more accurate across an organisation is crucial.
Fourth and finally: Pay attention to modularity and extensibility.
One simulation doesn’t fit all problems, and it certainly won’t last forever! Happily, many new use cases will emerge along the journey, and so will academic understanding, the modelling technologies themselves, and the available validation data and processes.
Evolution is a good thing, and, you’ll be shocked to learn, only happens over time. But that means using modular, extendable, reusable, reconfigurable models that can be tailored to the correct depth and breadth required for the use case is essential to getting the big picture right.