In the rapidly evolving landscape of artificial intelligence (AI), AI-powered chatbots have become a critical tool for businesses across many sectors. In the bidding process, Generative AI has emerged as a game-changer for time-poor, resource-constrained teams. However, the effectiveness of these tools depends heavily on the quality of the prompts used, making the concept of ‘dumb AI prompting’ an important topic of discussion.
What is dumb AI prompting?
‘Dumb AI prompting’ refers to the practice of using generic or non-specific prompts to generate AI responses. While these may yield results in certain contexts, they often fall short in complex, highly specialised fields like bidding. The reason is simple: AI, no matter how advanced, lacks the deep domain expertise that humans possess.
In the context of bidding, domain expertise refers to a comprehensive understanding of the industry, the customer requirements, bid management processes, governance, compliance, and more. This specialised knowledge is crucial for creating effective bid responses. When AI is prompted without this deep domain expertise, the responses may lack precision, relevance, and the nuanced understanding that high-stakes bidding situations demand.
This is where foundational models trained on domain-specific data come into play. As a powerful AI tool, Generative AI can generate human-like text based on the prompts it receives. However, the key to getting the most out of Generative AI, and indeed any AI tool, lies in pairing the technology with a human’s deep domain expertise to create smarter prompts.
For instance, in the bidding process, teams can use their industry knowledge and bidding experience to create specific, targeted prompts that guide the AI to generate highly relevant and effective bid responses. This approach to AI prompting is far from ‘dumb’; it’s a strategic use of AI that capitalises on human expertise.
Integrating AI tools into bid management platforms (like Bidhive) allows for the development of a company-specific AI-powered research assistant. This assistant can help users analyse and/or curate tailored content from scratch using file uploads and their corporate knowledge corpus, as well as search for similar or closely matching bid responses that have been answered previously, saving time and improving efficiency.
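To make the retrieval side of this concrete, here is a minimal sketch of matching a new tender question against previously answered responses using embedding-based similarity search. It assumes an OpenAI-style embeddings API; the corpus, function names and model choice are illustrative assumptions, not Bidhive’s actual implementation.

```python
# Minimal sketch: matching a new bid question against previously
# answered responses via embedding similarity. Assumes the OpenAI
# Python client; a platform's internal implementation may differ.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Hypothetical corpus of previously answered bid responses.
past_responses = [
    "Our company has delivered 12 managed-service contracts ...",
    "We maintain ISO 9001 and ISO 27001 certification ...",
    "Our project governance framework includes ...",
]

corpus_vectors = embed(past_responses)

def find_similar(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k past responses most similar to the question."""
    q = embed([question])[0]
    # Cosine similarity between the question and each stored response.
    scores = corpus_vectors @ q / (
        np.linalg.norm(corpus_vectors, axis=1) * np.linalg.norm(q)
    )
    return [past_responses[i] for i in np.argsort(scores)[::-1][:top_k]]

print(find_similar("Describe your quality management certifications"))
```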
Prompt examples
In these examples we will upload and parse a single tender document, or a tender volume containing a suite of related documents.
Prompt: “Summarise the key objectives of this Request for Proposal (RFP)”
Rationale: This prompt will help users quickly understand the main goals of the RFP without having to read the entire document. The AI will scan the document and provide a brief summary of its key objectives.
Likely Output: A concise summary of the RFP’s main goals and objectives, which may include information about the project, its scope, desired outcomes, and any specific requirements.
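For readers curious how a prompt like this might be issued programmatically, the sketch below passes parsed tender text to a chat-completion API together with the summarisation prompt. It assumes an OpenAI-style API and an RFP already extracted to plain text; the file name and model choice are illustrative, and inside a platform like Bidhive this plumbing is handled for you.

```python
# Minimal sketch: summarising a parsed RFP with a chat-completion API.
# Assumes the RFP has already been extracted to plain text.
from openai import OpenAI

client = OpenAI()

rfp_text = open("rfp.txt").read()  # hypothetical parsed tender document

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a bid research assistant. Answer only from the document provided."},
        {"role": "user",
         "content": f"Summarise the key objectives of this Request for Proposal (RFP):\n\n{rfp_text}"},
    ],
)
print(response.choices[0].message.content)
```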
Prompt: “Generate a compliance matrix for the bid requirements outlined in this RFP”
Rationale: A compliance matrix is a tool that helps bidders ensure they meet all the requirements listed in the RFP. This prompt will help users create a compliance matrix quickly and accurately.
Likely Output: A table or matrix listing all the bid requirements from the RFP, along with a column for users to check off or fill in once they have addressed each requirement in their bid response.
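Because a compliance matrix is tabular, it can help to ask the model for structured output that your tooling can render as a table or spreadsheet. Here is a minimal sketch under the same OpenAI-style API assumption; the JSON column names are illustrative, not a Bidhive format.

```python
# Minimal sketch: asking for a compliance matrix as JSON so it can be
# rendered as a table. The column names are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()
rfp_text = open("rfp.txt").read()  # hypothetical parsed tender document

prompt = (
    "Generate a compliance matrix for the bid requirements outlined in this RFP. "
    "Return a JSON object with a 'rows' array, where each row has: "
    "'requirement', 'rfp_reference', 'mandatory' (true/false), and "
    "'status' (left blank for the bid team to fill in).\n\n" + rfp_text
)

resp = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # forces valid JSON output
    messages=[{"role": "user", "content": prompt}],
)
matrix = json.loads(resp.choices[0].message.content)["rows"]
for row in matrix:
    print(row["rfp_reference"], "-", row["requirement"])
```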
Prompt: “Identify the key evaluation criteria in this RFP”
Rationale: Understanding the evaluation criteria is crucial for creating a successful bid. This prompt will help users identify these criteria so they can tailor their response accordingly.
Likely Output: A list of the main evaluation criteria for the bid, which may include things like technical capability, price, quality of service, and past performance.
Prompt: “Generate a proposal outline based on the structure and requirements of this RFP”
Rationale: This prompt will help users create a structured proposal outline that aligns with the RFP’s requirements. This can save time and ensure that the proposal addresses all the necessary points.
Likely Output: A detailed proposal outline that follows the structure of the RFP and includes all its main requirements.
In these examples we will upload and parse previous responses (ideally winning and highly scored responses), as well as approved, up-to-date corporate knowledge and marketing assets (e.g. annual reports, policies and procedures, case studies, website copy, whitepapers and other collateral).
Prompt: “Provide a summary of our company’s past performance relevant to this RFP”
Rationale: Past performance is often a key evaluation criterion in RFPs. This prompt will help users generate a summary of their company’s relevant past performance, which they can use in their bid response.
Likely Output: A brief summary of the company’s past projects or contracts that are relevant to the RFP, highlighting the company’s successes and strengths.
Prompt: “Generate a draft response to the technical requirements section of this RFP based on our company’s capabilities”
Rationale: This advanced example will help users generate a draft response to the technical requirements section of the RFP, based on the capabilities of their company.
Likely Output: A detailed draft response to the technical requirements section, highlighting the company’s capabilities and how they align with the RFP’s requirements.
Prompt: “Outline your track record and experience in providing contracts of similar size and complexity”
Rationale: This question is commonly asked in RFPs across industries; its goal is to assess a bidder’s experience and ability to handle the project at hand. Labelling (also called annotation) is a crucial part of training AI to retrieve highly relevant responses to generic questions like this one.
Likely Output: A draft response assembled from previously answered questions of the same type. Labelling allows the AI to understand the structure of a question-answer pair and develop pattern recognition; trained on a large amount of labelled data, it can learn to accurately retrieve highly relevant responses to a wide range of generic questions.
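To make the labelling idea concrete, below is one plausible shape for annotated question-answer pairs, with tags that help the assistant match a new, generically worded tender question to the most relevant past responses. The field names and values are hypothetical; your platform may use its own schema.

```python
# Minimal sketch: labelled (annotated) question-answer pairs. Tags and
# metadata help retrieval match new questions to past responses.
# All field names and values here are hypothetical examples.
labelled_pairs = [
    {
        "question": "Outline your track record and experience in providing "
                    "contracts of similar size and complexity.",
        "answer": "Over the past five years we have delivered ...",
        "tags": ["past-performance", "track-record", "capability"],
        "industry": "health",
        "region": "AU",
        "outcome": "won",   # prefer winning, highly scored responses
        "score": 92,        # evaluator score, if known
    },
    # ... more labelled pairs build the patterns the AI learns from
]
```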
Tip: As you continue to converse with your AI Assistant, you can refine your prompts if the output is too high-level or generic. In the example above, asking for your track record and experience might be too broad. With further refinement, you can ask the AI to narrow in on your requirements, such as work you have done for companies in a similar industry, or with a similar project scope and requirements, business line, geographic region(s), or even a certain number of employees. If the responses still aren’t hitting the mark, either keep refining your prompts or check that you have provided enough knowledge assets to train the assistant. If your AI consistently returns incorrect or outdated information, you may also need to review the currency and accuracy of your training data.
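To illustrate that refinement loop concretely, the small sketch below narrows the broad track-record prompt with the kinds of filters mentioned above. The filter values are placeholders to be replaced with your own context.

```python
# Minimal sketch: tightening a broad prompt with concrete filters.
# The filter values below are placeholders, not real requirements.
industry = "local government"
scope = "ICT managed services, 500+ seats"
region = "Queensland, Australia"

broad = ("Outline our track record and experience in providing "
         "contracts of similar size and complexity.")

refined = (
    f"{broad} Limit the answer to work delivered for {industry} clients "
    f"in {region}, with a project scope similar to: {scope}. "
    "Cite the contract name and year for each example."
)
print(refined)
```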
Key takeaways
These prompts are designed to help users leverage the AI’s capabilities to streamline bid management and proposal writing, saving time and improving the quality of responses. In many instances, AI has been found to save 70% of the time spent in the pre-bid research and information-gathering phase, leaving far more time for the bid team to focus on strategy.
While AI has immense potential in bidding, its effectiveness really does hinge on the quality of the prompts and training data it receives. Dumb AI prompting can lead to subpar results, underscoring the importance of leveraging deep domain expertise for smarter prompting. By combining human expertise with AI capabilities, businesses can harness the full power of tools like Generative AI to optimise their bidding processes and outcomes.
Remember, in the world of AI-powered bidding, the adage ‘garbage in, garbage out’ holds true. The smarter your prompts and the higher the quality of your training data, the better the AI’s performance. So invest time in understanding your domain deeply and crafting intelligent prompts: your bidding success depends on it.
Want to test drive Bidhive’s AI Assist Module? Book a demo or Contact Us!