Small Satellites – How do we predict trends?

Nick has been our Space Technology Business Analyst Intern for the past year and has been an excellent asset to the Small Satellites and Future Constellations team. He has been driving our Small Satellite Market Intelligence Reports and working with the Business Strategy team, gaining experience in how to exploit new technology in industry. Below, Nick explores how the small satellite industry analyses and predicts trends, and why it should consider changing its techniques.

Predicting changes, trends, and events in the small satellite world suffers from the same problem as prediction in any domain: how should confidence be apportioned between the expectation that historical trends will continue and the announcements or beliefs that would make the future discontinuous from the past? The Catapult’s market intelligence effort has spent the last year increasing the breadth of data collected and recorded, to improve the methods we use to analyse and predict trends in the small satellite market. This work gives us increasing confidence in our understanding of historical trends in the small satellite market, e.g. on 1 January 2017, n satellites were announced for launch during the year, but by the end of the year the number actually launched was n + 10%.

Fact vs feeling

The temptation is to rely heavily on softer sources of information, for instance the general feeling of excitement in the space industry around small satellite constellations and communications mega-constellations, rather than on the harder historical data. Unfortunately, these two viewpoints point to different outcomes: will there be 400 small satellites launched in 2020, or will there be 4000? Determining a defensible figure for questions like these has been central to the market intelligence effort at the Catapult, because these predictions don’t exist in a vacuum; they are used to inform decision-making and to build business models around. The truth is likely to lie somewhere between these two figures, but narrowing it down at this point relies on an intuitive understanding of how the industry will respond to future events which are themselves uncertain. For instance, the total could be broken down by the companies that have announced satellite launches, with each company given its own confidence weighting; this allows smaller regions of possibility to be examined separately and then recombined (a minimal sketch of this decomposition follows below). It is possible to be more confident about the question “will company X manage to launch their satellites?” than about every company considered collectively.
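To make this concrete, here is a minimal sketch of that decomposition, not the Catapult’s actual model: the companies, announced figures, and confidence weightings below are invented for illustration. Each company’s plan is treated as an independent judgement, and a simple Monte Carlo run recombines them into a distribution for the annual total.

```python
import random

# Hypothetical announced launch plans: (satellites announced, probability the
# company actually flies them this year). All figures are illustrative only.
announced_plans = [
    (900, 0.15),  # mega-constellation operator, ambitious schedule
    (150, 0.60),  # established smallsat operator
    (80, 0.75),   # Earth-observation constellation replenishment
    (40, 0.90),   # assorted single-launch missions, near-certain
]

def simulate_year(plans):
    """One possible outcome for the year: each company either launches its
    announced batch or slips to the following year."""
    return sum(count for count, p in plans if random.random() < p)

# Recombine the per-company judgements into a distribution for the total.
totals = sorted(simulate_year(announced_plans) for _ in range(10_000))
print("median total:", totals[len(totals) // 2])
print("80% interval:", totals[int(0.10 * len(totals))], "-", totals[int(0.90 * len(totals))])
```

The point of working this way is that each per-company weighting is a small, defensible judgement, and the maths, rather than intuition, handles the job of combining them into a figure for the whole market.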

Too much information!

There is a universal tendency when exploring uncertainties to assume that more information is better, and that the more detailed a model you build, the more you can do with it. Unfortunately, this often isn’t true, and an excellent example is Philip Tetlock’s Good Judgement Project, originally an entry into a US intelligence competition to find which group of people made the most accurate predictions. The GJP found that an understanding of probability, plus the humility to update views upon receiving new information, even when that means abandoning previous positions, made the best group of forecasters – 30% better than subject experts and analysts with access to classified information, while some experts performed worse than random chance. The point is that trying to integrate and wield large amounts of information simultaneously can lead to weighing the strength of facts against each other incorrectly. This is the difference, in the example above, between trying to feel out the relative probabilities of 400 or 4000 satellites (and every point in between) and considering each company individually. The latter approach lets you work on simpler problems with fewer variables and lets the maths of probability run its course.

Tracking and predicting launch failures is an interesting and relevant example for the small satellite industry. It is resistant to intuitive analysis, and as launch companies are reticent about discussing possible issues with their vehicles, it is best approached using a data-driven base rate calculation (a cleanly executed example here). The Catapult’s approach is also data-driven: tracking announced launch dates over time to see how delays have propagated after historical failures, which informs the estimates we give for delays caused by future events.
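As an illustration of what a simple base rate calculation can look like (the launch counts below are invented, not the figures from the linked analysis or the Catapult’s dataset), a per-launch failure rate can be estimated directly from historical attempts, with a weak prior so that a vehicle with only a handful of flights is not treated as perfectly reliable:

```python
def failure_base_rate(failures, attempts, prior_failures=1, prior_attempts=10):
    """Laplace-style smoothed base rate: observed failures plus a weak prior.
    The prior stops a new vehicle with 3 successes in 3 flights from looking infallible."""
    return (failures + prior_failures) / (attempts + prior_attempts)

# Hypothetical flight history for an illustrative small launch vehicle.
print(failure_base_rate(failures=2, attempts=25))  # ~0.086, i.e. roughly a 9% chance of failure per launch
```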

In conclusion…

The message to take home from this is to keep prediction methods clearly defined and grounded, to accept uncertainty, and to be literate in probability. A specific, actionable suggestion for making well-calibrated predictions is to use confidence levels: if there is uncertainty about an outcome, express that uncertainty as a percentage. In addition, predictions should be recorded and scored; if predictions made at 80% confidence turn out to be wrong 30% of the time, future predictions should be adjusted accordingly (a sketch of this record-and-score loop follows below). Weighting predictions with confidence levels reduces the impact of the tension described above between historical data on one side and domain knowledge and intuition on the other. If SpaceX wants to launch 4000 satellites then a stubborn historical trend towards 400 satellites will not hold, but at the same time there is a historical failure rate for large constellations (see the 1990s), so confidence is updated accordingly. At the end of the process a statement like this is produced: “Fewer than 325 small satellites to be launched in 2018 – 72% confidence[2]” (perhaps the confidence has been lowered from an original 80% after noticing that more than 20% of predictions at that level were wrong).
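A minimal sketch of that record-and-score loop is below; the predictions in it are invented examples, not entries from the Catapult’s log. Each prediction is stored with its stated confidence, and once outcomes are known the observed hit rate at each confidence level shows whether future statements should be made more or less boldly.

```python
from collections import defaultdict

# Hypothetical prediction log: (statement, stated confidence, came true?)
predictions = [
    ("Fewer than 325 smallsats launched in 2018",     0.72, True),
    ("Company X launches its first batch by Q3",      0.80, False),
    ("No more than two launch failures this year",    0.80, True),
    ("Constellation Y raises its next funding round", 0.60, True),
]

# Group outcomes by stated confidence and compare against the observed hit rate.
by_confidence = defaultdict(list)
for _, confidence, outcome in predictions:
    by_confidence[confidence].append(outcome)

for confidence, outcomes in sorted(by_confidence.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> observed {hit_rate:.0%} over {len(outcomes)} predictions")
```

If statements made at 80% confidence only come true 70% of the time, the honest response is to lower the confidence attached to similar statements in future, exactly as in the 72% example above.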

A key improvement for the consumers of market intelligence would be to see more of this process in action. There are a lot of excellent market intelligence reports available: Euroconsult, Spaceworks, Bryce Space Technology, the IDA Science and Technology Policy Institute, and SIA produce informative and detailed breakdowns as well as often making predictions. It would be fascinating and probably quite valuable to see future reports address past predictions. Were they close to the truth? Was this due to events playing out as expected, or did an unexpected set of events lead to the same outcome? If they were wrong, what happened that was unexpected? And most importantly, the question at the heart of this piece:

What has the future taught us about how we thought in the past,

and how will we be less wrong next time?