Creating a data analytics roadmap for bidding organisations is both challenging and exciting. With new and emerging data sources linked to bidding activity, businesses can access faster insights to support strategic decisions and manage risk like never before.

Research in the area of competitive bid modelling has been in progress since the 1950s (Rothkopf 1969, 1994; Harstad 1994, 2008), and numerous models have been developed to predict the probability of a competitive bidder winning (Vergara 1977; Engelbrecht-Wiggans 1980) or being awarded a project (Ravanshadnia 2010).

Most models to date have been based on game theory and decision analysis using mathematical pricing data. Few models, however, have drawn on actual practice: the multiple financial and technical criteria used in bidder evaluation, and how these, together with other internal and market indicators such as bid/no bid decisions, win/loss ratios, lessons learned and competition, affect competitive position, performance and outcomes over time.

The concept of a closed loop system is especially relevant for bidding because all of the inputs across the end-to-end process, when properly structured, aggregated and analysed, can reveal important trends. Using this data, a model can predict what will happen next, or suggest actions to take for optimal outcomes. Organisations can then proactively plan, and continuously adapt, their competitive bid strategies to achieve repeatable success.

Types of data analysis

There are three main types of analysis that can benefit bidding companies and it is important to choose the one best suited to the business problem.

Predictive analytics helps us to identify, with a high degree of probability, what will happen in the future. The technique is different from descriptive analytics, which helps us to understand what has already occurred, and prescriptive analytics, which uses data to recommend a course of action or strategy based on the situation or scenario, available resources, and past or current performance.
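
To make the distinction concrete, the minimal sketch below contrasts a descriptive summary with a predictive model on a handful of hypothetical bid records. The field names, figures and choice of model are illustrative assumptions, not drawn from any real dataset or prescribed method.

```python
# A minimal sketch contrasting descriptive and predictive analytics on
# hypothetical historical bid records (all field names are illustrative).
import pandas as pd
from sklearn.linear_model import LogisticRegression

bids = pd.DataFrame({
    "bid_value_m": [1.2, 0.8, 2.5, 1.1, 3.0, 0.9],   # bid price, $m (assumed)
    "num_competitors": [3, 5, 2, 4, 6, 3],
    "won": [1, 0, 1, 1, 0, 1],                        # historical outcome
})

# Descriptive: what has already happened (overall win rate).
print("Historical win rate:", bids["won"].mean())

# Predictive: estimate the probability of winning a new opportunity.
model = LogisticRegression().fit(bids[["bid_value_m", "num_competitors"]], bids["won"])
new_opportunity = pd.DataFrame({"bid_value_m": [1.5], "num_competitors": [4]})
print("Predicted win probability:", model.predict_proba(new_opportunity)[0, 1])
```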

Heuristic versus data-driven approaches in bid decision making

Access to big data is at an inflection point where it has the power to transform business productivity, innovation and competitive advantage. Users can quickly test hypotheses, analyse results to guide decisions, and see the simulated effect of operational changes.

In competitive bidding, data analysis helps managers move from correlations towards root causes, such as the reasons why a business line in a particular market is consistently winning or losing bids, and reduces the variability of outcomes as they adapt their strategies with greater clarity.

Machine learning also makes it possible to process tremendous amounts of data much faster and more comprehensively than human capabilities could manage. Companies have large information flows of historical data from past tenders, where data about products and services, buyers and suppliers, customer pain points and team resourcing and capacity can be captured and analysed.

Despite the huge economic benefits of predictive modelling, however, data has limitations unless the right conditions are in place, including large training datasets, proper categorisation and specific use cases. Models are also highly reliant on the quality of the data used. Many organisations have access to data, but much of it is unstructured, so it needs to be cleaned and organised before it can be useful.
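
The sketch below illustrates the kind of cleaning step this implies: raw bid records arriving with inconsistent labels and free-text values are normalised into categories and numbers a model can use. The field names, values and parsing rules are hypothetical, included only to show the idea.

```python
# A hedged sketch of cleaning unstructured bid records (hypothetical fields).
from typing import Optional
import pandas as pd

raw = pd.DataFrame({
    "outcome": ["Won", "lost", "WON ", "No decision", "Lost"],
    "contract_value": ["£1.2m", "850,000", "2.5M", "N/A", "1,100,000"],
})

def parse_value(text: str) -> Optional[float]:
    """Convert a free-text contract value into a numeric figure."""
    text = text.strip().lstrip("£").replace(",", "")
    if text.upper() in {"N/A", ""}:
        return None
    if text.upper().endswith("M"):
        return float(text[:-1]) * 1_000_000
    return float(text)

clean = pd.DataFrame({
    "outcome": raw["outcome"].str.strip().str.lower(),          # consistent categories
    "contract_value": raw["contract_value"].map(parse_value),   # numeric values
})
print(clean)
```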

Data structure and defining use cases

Closed loop continuous improvement systems have been widely researched and applied extensively in quality management practices across industries such as manufacturing, banking, supply chain and logistics, and customer service.

Market leaders that implement closed-loop approaches do so with data that bidirectionally unites people, processes and information across the value chain, indicating that interconnected closed-loop processes can be taken a step further by opening collaboration portals between functional business units.

In bid management practice, much of the decision making around opportunity pursuit has traditionally relied on tacit knowledge developed over time through on-the-job experience. Bid/no bid decisions, for example, are made by analysing a range of factors relating to potential and capability (eligibility to participate, ability to deliver, alignment with company strategy, relationship with the customer). This approach relies on probability judgements based on past performance and current knowledge of the competitor landscape.
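
As a simple illustration of how those qualitative factors might be made explicit, the sketch below scores an opportunity with a weighted bid/no bid screen. The factors mirror those listed above, but the weights, scores and threshold are assumptions for illustration only, not a prescribed methodology.

```python
# A minimal weighted bid/no-bid screen (weights and threshold are assumed).
WEIGHTS = {
    "eligibility": 0.30,          # eligibility to participate
    "delivery_capability": 0.30,  # ability to deliver
    "strategic_fit": 0.20,        # alignment with company strategy
    "customer_relationship": 0.20,
}
BID_THRESHOLD = 0.65  # assumed cut-off for recommending a bid

def bid_no_bid(scores: dict) -> tuple:
    """Combine factor scores (0-1) into a weighted total and a recommendation."""
    total = sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)
    return total, ("BID" if total >= BID_THRESHOLD else "NO BID")

opportunity = {
    "eligibility": 1.0,
    "delivery_capability": 0.7,
    "strategic_fit": 0.6,
    "customer_relationship": 0.5,
}
print(bid_no_bid(opportunity))   # (0.73, 'BID')
```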

But bidding insights that combine hindsight and foresight have seldom been centralised, aggregated or widely shared, largely because of siloed systems. In its raw form, bidding data can be difficult to use in predictive and prescriptive models for several reasons. First, in isolation it contains very little information about the team who worked on the bid, the customer who chose the solution, or the other bidders in the competition. In addition, because patterns change rapidly over time, the data can be highly complex and unstructured.

To overcome these issues, new methods for capturing, aggregating and correlating historical bidding data into machine-readable formats are becoming available, solving long-standing problems and offering a transformational impact on win ratio, profit margin and revenue growth.

By accumulating data across key points in the end-to-end bid process, we can begin to see patterns that allow us to assess bid risk more efficiently and accurately, identify relational factors, and estimate the probability of success or failure.

Closed loop continuous improvement indicators in bid management

Insight, not hindsight, is the benefit of analytics in bid management. By identifying and mathematically representing underlying relationships in historical data, we can better explain the data and make more accurate predictions, forecasts or classifications about future events. This method minimises the time-consuming and resource-intensive demands of interrogating spreadsheets, gathering desktop competitive intelligence and making subjective judgement calls, especially for the bid/no bid decision.

Any algorithms deployed initially will not be perfect; however, running them in open loop provides no opportunity for improvement. In building a predictive or prescriptive model, critical factors to focus on include, but are not limited to, those listed below (a minimal sketch of how they might feed a model follows the list):

  • Pricing
  • Competitors
  • Buyer/Customer
  • Decision drivers
  • The buying process
  • Win/Loss decisions (reasons why)
  • External influences
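
The sketch below shows one way such factors might be encoded as model features and used to train a win/loss classifier on historical bids. The column names, categories, sample values and choice of classifier are assumptions made for illustration; a real model would be built from the organisation's own structured bid history.

```python
# A hedged sketch: encoding bid factors and training a win/loss classifier.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

history = pd.DataFrame({
    "price_ratio":     [0.95, 1.10, 0.88, 1.02, 1.20, 0.97],  # our price vs. estimate (pricing)
    "num_competitors": [3, 5, 2, 4, 6, 3],                     # competitors
    "buyer_sector":    ["gov", "energy", "gov", "health", "energy", "gov"],  # buyer/customer
    "incumbent":       [1, 0, 0, 1, 0, 1],                     # proxy for decision drivers
    "won":             [1, 0, 1, 1, 0, 1],                     # win/loss outcome
})

X = pd.get_dummies(history.drop(columns="won"), columns=["buyer_sector"])
y = history["won"]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new opportunity with the same encoding (reindex fills unseen columns).
new = pd.DataFrame({"price_ratio": [1.0], "num_competitors": [4],
                    "buyer_sector": ["gov"], "incumbent": [0]})
new_X = pd.get_dummies(new, columns=["buyer_sector"]).reindex(columns=X.columns, fill_value=0)
print("Estimated win probability:", model.predict_proba(new_X)[0, 1])
```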

For a closed-loop continuous improvement system to be truly effective, it must centralise, standardise and streamline end-to-end business processes and quality data. This is why one of the biggest challenges to effective modelling lies in the fragmentation and inconsistency of data. For example, the information available to bidders can be the most dominant factor in shaping their bid/no bid decision, yet not all markets disclose highest to lowest prices relative to the bidder.

In some countries bidders are given their scoring distance from the winning bid, whereas in others, bidders are given feedback on their scores against evaluation criteria but no benchmark as to where they scored against the competition. This disparity and paucity of data can make it difficult to model for continuous improvement.

The different models designed for use in data analysis can solve a range of business problems or provide some type of unique capability to deliver competitive advantage. These include:

  • Forecasting model: takes historical open contracting data and predicts how many tenders will be issued in a month, based on buyer spending patterns and market engagement learned from past data. This can be helpful for calculating the resources needed for an upcoming period.
  • Cluster and classification model: identifies different groupings within existing data. Clustering separates data into well-defined groups, while classification then uses a combination of characteristics and features to indicate whether an item of data belongs to a particular class. For example, clustering winning companies according to industry classification codes can provide actionable insights on competitor pricing in a specific market or business line.
  • Outliers model: identifies anomalous entries within a dataset, that is, data that deviates from the norm. Flagging unusual data, such as a low bid, no bids or unusually short tender periods, whether in isolation or in relation to other categories, can help organisations save millions of dollars by detecting mismanagement and fraud.
  • Time series model: evaluates a series of data points ordered in time. Although closed loop is a repeatable process, researchers have found the concept of time, or a time trend in the bidding process, to be inconsequential for informing improvements in outcomes (Kingsman & Mercer, 1988), because competitors in most industries change their bid strategies over time in direct response to their changing market position and constantly shifting internal and external factors. Time can be helpful, however, for proactive planning. For example, the number of bids submitted by a company could be used to predict how many project engineers it will need to recruit next quarter, next year or the year after (see the sketch after this list); or the number of contracts awarded by a government department could be used to predict how many jobs the government is expected to create next quarter or next year. A single metric measured and compared over time is therefore more meaningful than a simple average.
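
The sketch below works through the planning example from the time series bullet: fitting a simple trend to quarterly bid counts and translating the forecast into recruitment needs. The quarterly figures and the engineers-per-bid ratio are hypothetical assumptions.

```python
# A minimal trend-based forecast of resourcing needs (all figures assumed).
import numpy as np

quarters = np.arange(1, 9)                                 # last eight quarters
bids_submitted = np.array([12, 14, 13, 16, 18, 17, 20, 22])
ENGINEERS_PER_BID = 0.5                                    # assumed staffing ratio

# Fit a simple linear trend to the quarterly bid counts.
slope, intercept = np.polyfit(quarters, bids_submitted, deg=1)

next_quarter = 9
forecast_bids = slope * next_quarter + intercept
print(f"Forecast bids next quarter: {forecast_bids:.1f}")
print(f"Project engineers needed:   {forecast_bids * ENGINEERS_PER_BID:.0f}")
```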

Using data to outperform peers

The goal of any analytics model is to build variables that provide the decision makers and influencers involved in the process with insights into their own processes and cycles of working. Optimisation models or simulation can then be used to improve the accuracy of the predictions against the data.

In a bidding context, we can use predictions to produce scores that rank opportunities by likely future performance, e.g. the likelihood of a bid progressing to the next stage, its eventual success or failure, or the likely profit margin of the contract. These scores can ultimately inform a bid/no bid decision, flag important risk indicators and signal whether a contract is likely to fail if poorly managed.

Other important implications for data modelling in future will be recommendation engines that use machine learning to enable buyers to search for appropriate suppliers and vice versa. The technology would open up competition by allowing procurement to discover previously unknown companies suited to its tender and, conversely, connect bidders to new opportunities.
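
The matching idea behind such an engine can be sketched with simple text similarity: comparing a tender description against supplier capability statements and ranking the closest matches. The texts and the similarity approach below are illustrative assumptions, not a description of any particular product.

```python
# A hedged sketch of tender-to-supplier matching via text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tender = "Design and construction of a regional water treatment facility"
suppliers = {
    "Supplier A": "Civil engineering contractor specialising in water treatment plants",
    "Supplier B": "IT services provider for government payroll systems",
    "Supplier C": "Design and build of municipal infrastructure and treatment works",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([tender] + list(suppliers.values()))

# Similarity of each supplier profile to the tender text, ranked high to low.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for name, score in sorted(zip(suppliers, scores), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```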

Summary

For a mission critical function like bid management, modelling can drive decisions and actions in near real time. We need to use the technology to free up bid teams to make the most of their ability to strategise.

The combination of forward and reverse data flows in a closed-loop bid management system is more complex than a traditional linear process because it brings the uncertainties of forecast data together with historical data. To avoid making the wrong predictions, it is therefore imperative to strike the right balance between quality data, careful project specification and a model design that is fit for purpose and reflects the organisation's environment.

Those companies that are already planning their data roadmap, understand the business context and are building analytical capability into their bidding processes will be the ones that add the most value and achieve the greatest impact with the technology.

References

Engelbrecht-Wiggans, R. 1980. State of the art – auctions and bidding models: a survey, Management Science 26(2): 119–142. http://dx.doi.org/10.1287/mnsc.26.2.119

Engelbrecht-Wiggans, R. 1989. The effect of regret on optimal bidding in auctions, Management Science 35(6): 685–692. http://dx.doi.org/10.1287/mnsc.35.6.685

Harstad, R. M.; Saša Pekec, A. 2008. Relevance to practice and auction theory: a memorial essay for Michael Rothkopf, Interfaces 38(5): 367–380. http://dx.doi.org/10.1287/inte.1080.0396

Ravanshadnia, M.; Rajaie, H.; Abbasian, H. R. 2010. Hybrid fuzzy MADM project-selection model for diversified construction companies, Canadian Journal of Civil Engineering 37(8): 1082–1093. https://doi.org/10.1139/L10-048

Rothkopf, M. H.; Harstad, R. M. 1994. Modeling competitive bidding: a critical essay, Management Science 40(3): 364–384. http://dx.doi.org/10.1287/mnsc.40.3.364

Rothkopf, M. H. 1969. A model of rational competitive bidding, Management Science 15(7): 362–373. http://dx.doi.org/10.1287/mnsc.15.7.362

Vergara, A. J. 1977. Probabilistic estimating and applications of portfolio theory in construction: PhD Thesis. Department of Civil Engineering, University of Illinois at Urbana-Champaign, Urbana, IL.