The Early-Stage B2B Startup Go-to-Market Bible

Last Updated: March 2026

Sales & Customer Success

Pipeline Management

With a playbook in hand and territories defined, you will (hopefully) see deals start flowing through your pipeline. Phase 5 also means putting in place the management disciplines to track and improve that pipeline. This chapter covers how to align marketing, sales, and customer success with service-level agreements (SLAs), how to forecast using a weighted pipeline, and how to run win/loss analysis to continuously learn from deals.
Establishing SLAs Between Marketing, Sales, and Customer Success
One common friction point as startups scale is the blame game between marketing and sales (and later, customer success) when growth numbers fall short. Marketing might say “we gave you lots of leads, sales just can’t close them,” while Sales fires back “the leads are junk!” Meanwhile, Customer Success might complain that Sales is selling promises that CS has to somehow deliver on. The way to break this cycle is through internal Service-Level Agreements (SLAs) that explicitly document what each team commits to deliver to the others. Think of an SLA as a contract: Marketing agrees to a certain volume and quality of leads, Sales agrees to follow up promptly and diligently, and CS agrees to certain standards in onboarding and retention efforts. By hashing this out, everyone gets aligned on shared goals instead of finger-pointing.
Marketing → Sales SLA: Define what constitutes a Marketing Qualified Lead (MQL) and how many MQLs Marketing will deliver per period. For example, Marketing might commit to provide 500 leads per quarter that meet the ICP criteria. The SLA should also specify the expected lead context (e.g. marketing will capture key info like prospect title, use case, etc. in the CRM). In return, Sales commits to act on those leads in a timely manner – e.g. 100% of MQLs get a first follow-up within 24 hours. This kind of two-way promise (Marketing delivers X leads, Sales gives each a fair try) builds trust. Make sure to define lead quality standards clearly (for instance, an MQL is a VP-level contact at a target account who engaged with a specific high-intent asset) – ambiguity here will fuel conflict, as each side will claim the agreement was or wasn’t met.
Sales → Customer Success SLA: Sales should commit that when they close a deal and hand it to Customer Success, certain information is provided and certain customer expectations are set. For example, the SLA could require “Sales will document the customer’s key use cases, success criteria, and any promised deliverables in the CRM at close”. They might also agree to schedule a hand-off call with the customer that includes the CS manager for high-value accounts. In turn, Customer Success might commit to “engage the new customer within 5 business days of contract signing to kick off onboarding”. This prevents new customers from languishing after the sale. Additionally, CS can agree to provide a feedback loop to Sales – e.g. updates on implementation or usage that could signal upsell opportunities or confirm whether the deal was a good fit.
Customer Success → Marketing/Sales SLA: Yes, even CS has obligations upstream. For instance, Customer Success should share churn risks and upsell signals in a timely fashion so marketing and sales can act. An SLA might be: “CS will flag any at-risk account (e.g. declining usage or low NPS) in the CRM at least 90 days before renewal” and “CS will pass any customer referral requests or case study opportunities to marketing promptly”. A CS-to-Marketing SLA could be “deliver quarterly health reports or testimonials from X customers for use in marketing”. These commitments ensure that successes are amplified and issues are addressed collaboratively.
To make SLAs effective, follow a few best practices. First, anchor them in high-level goals that everyone cares about – e.g. pipeline coverage, win rates, Net Revenue Retention – so it’s clear why the SLA matters. Keep the commitments measurable and realistic (if Marketing currently generates 200 leads, don’t demand 1,000 next month). Implement regular review checkpoints – say, monthly meetings – where the teams review metrics like “MQLs delivered vs. SLA” and “lead follow-up time vs. SLA”. Use a shared dashboard so the data is transparent to all. If someone falls short, treat it as a process problem to solve together, not an occasion for blame – perhaps the targeting needs adjusting, or maybe Sales is understaffed to handle the lead volume. And importantly, have an agreed escalation path for repeated SLA violations. For example, if Sales repeatedly doesn’t follow up within 24 hours, the Head of Sales and Head of Marketing get involved to troubleshoot. This ensures accountability. As your startup grows, consider a RevOps function to own these SLAs – because RevOps sits across marketing, sales, and CS, it can neutrally monitor the data and mediate conflicts, acting as “connective tissue” that enforces the agreements. Ultimately, well-crafted SLAs create a culture of shared responsibility for revenue: Marketing, Sales, and CS all succeed or fail together, and everyone knows what “success” means in concrete terms.
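The follow-up-time commitment above can be monitored programmatically rather than argued about. Below is a minimal sketch, assuming a hypothetical CRM export where each lead carries an `mql_at` timestamp and a `first_touch_at` timestamp (None if Sales hasn’t touched it yet); the field names and sample data are illustrative, not from any particular CRM.

```python
from datetime import datetime, timedelta

SLA_WINDOW = timedelta(hours=24)  # Sales commitment: first follow-up within 24 hours

# Hypothetical CRM export: when each lead became an MQL, and when Sales
# first touched it (None = no follow-up logged yet).
leads = [
    {"id": "L-001", "mql_at": datetime(2026, 3, 2, 9, 0),
     "first_touch_at": datetime(2026, 3, 2, 15, 30)},   # followed up in 6.5h
    {"id": "L-002", "mql_at": datetime(2026, 3, 2, 11, 0),
     "first_touch_at": datetime(2026, 3, 4, 10, 0)},    # followed up in 47h
    {"id": "L-003", "mql_at": datetime(2026, 3, 3, 14, 0),
     "first_touch_at": None},                           # still untouched
]

def sla_report(leads, now):
    """Split leads into SLA-met and SLA-breached buckets."""
    met, breached = [], []
    for lead in leads:
        touch = lead["first_touch_at"]
        if touch is not None and touch - lead["mql_at"] <= SLA_WINDOW:
            met.append(lead["id"])
        elif touch is not None or now - lead["mql_at"] > SLA_WINDOW:
            # either followed up late, or the 24h window has already expired
            breached.append(lead["id"])
    return met, breached

met, breached = sla_report(leads, now=datetime(2026, 3, 5, 9, 0))
print(met, breached)  # ['L-001'] ['L-002', 'L-003']
```

Reviewing this split in the monthly SLA checkpoint keeps the conversation anchored in data rather than anecdotes.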

Forecasting with a Weighted Pipeline Model

Accurate sales forecasting is a hallmark of a maturing go-to-market. In the early days, your “forecast” might have been a wild guess or simply the CEO’s gut feeling on a few big deals. By Phase 5, you need a more data-driven approach. One widely used technique is the weighted pipeline model, which uses deal stage probabilities to predict revenue more realistically than a straight pipeline sum.
What is a weighted pipeline? In a nutshell, you assign a win-probability to each stage of your sales process and then multiply each deal’s value by that probability to get “expected value.” Summing those gives a forecast that accounts for the fact that not every deal will close. For example, suppose your stages and probabilities are: Prospecting 10%, Qualified 25%, Demo 50%, Proposal 70%, Verbal Commit 90%. A \$50k deal at proposal stage would contribute \$35k to the forecast (50k * 70%). Unlike an unweighted pipeline that naively assumes every open deal is worth its full value, the weighted approach adjusts expectations based on likelihood. This yields a more sober view of upcoming sales – no more 10x forecast misses from assuming the entire top of the funnel will magically convert.
To implement weighted pipeline forecasting, follow these steps:

  • Define your Sales Stages and Probabilities: Leverage historical data if you have it. For instance, if historically 1 in 5 deals in “Initial Demo” stage eventually close, that stage gets 20%. In the absence of data, start with reasonable benchmarks (many SaaS teams use something like 10–20–50–80–100% increasing by stage as a starting point) and refine over time. The key is that the probabilities reflect evidence-based confidence. If you know that reaching the “trial completed” stage strongly indicates a deal will close, that stage might have a high weight like 80%. Early stages where most prospects fall off have low weights (e.g. 10%). Document the exit criteria for each stage clearly (tying back to the buyer journey as discussed) so that when a deal is marked 50%, everyone knows what that means in terms of buyer commitment.
  • Calculate Weighted Values: In your CRM or forecasting spreadsheet, multiply each deal’s dollar amount by the probability of its current stage. If you have \$500k of pipeline in stage 2 (25% probability), the weighted value is \$125k. Do this for all open deals in the time period (e.g. deals expected to close this quarter) and sum them up. This sum is your expected revenue for that period. You can further break it down by rep, region, or product to see which segments contribute what.
  • Review and Refine: A weighted model is only as good as its assumptions. Track the accuracy of your forecasts over time. If you consistently over-forecast (actuals come in lower), perhaps your stage probabilities are too high or reps are advancing deals further than they should. For example, you might discover that Proposal stage was set at 70% but only half of proposals actually close – so maybe that stage should be 50%. Conversely, if you under-forecast (actuals higher), maybe you’re not giving enough credit to late-stage deals. Adjust the weights as you gather more win-rate data. Also consider other factors: a generic stage probability is a blunt tool. You might refine by deal type (new business vs expansion might have different win rates) or by lead source (e.g. partner-referred deals close at higher rates). Some companies incorporate a “confidence” override for each deal, but be cautious – that reintroduces subjective bias. The beauty of weighted pipeline is in removing pure gut feeling and anchoring to historical conversion rates.
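The steps above can be sketched in a few lines. The stage names and probabilities below are the illustrative ones from this chapter, and the deals are made up; in practice the weights would come from your CRM’s historical stage-to-close conversion rates.

```python
# Illustrative stage probabilities (matching the example in this chapter);
# in production, derive these from historical win-rate data.
STAGE_WEIGHTS = {
    "Prospecting": 0.10,
    "Qualified": 0.25,
    "Demo": 0.50,
    "Proposal": 0.70,
    "Verbal Commit": 0.90,
}

# Made-up open deals expected to close this quarter.
deals = [
    {"name": "Deal A", "value": 50_000,  "stage": "Proposal"},
    {"name": "Deal B", "value": 120_000, "stage": "Demo"},
    {"name": "Deal C", "value": 30_000,  "stage": "Prospecting"},
]

def weighted_forecast(deals, weights):
    """Expected revenue = sum of (deal value x stage win probability)."""
    return sum(d["value"] * weights[d["stage"]] for d in deals)

unweighted = sum(d["value"] for d in deals)         # raw pipeline: 200,000
weighted = weighted_forecast(deals, STAGE_WEIGHTS)  # 35k + 60k + 3k, about 98,000
```

The gap between the \$200k raw pipeline and the roughly \$98k expected value is exactly the over-optimism the weighted model is designed to remove.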

A quick example to illustrate the benefit: Say you have an unweighted pipeline of \$1M this quarter across 20 deals. History and stage weighting suggest only about \$500k is likely to close. If you plan hiring or expenses on the \$1M figure, you’re likely to be in trouble. Weighted forecasting gives you that realistic \$500k number to plan against, avoiding overly optimistic projections that lead to overspending. It also helps sales leadership focus reps on the deals that matter – a tiny early-stage lead weighted near \$0 won’t skew the forecast much, whereas a late-stage big deal will. This naturally draws attention to high-probability, high-value opportunities that may need extra executive focus to win. In summary, using a weighted pipeline model makes your revenue forecasts more credible and actionable, which is crucial when communicating with your board or making hiring and budgeting decisions. It’s a move from art toward science in sales management – though never forget, forecasting will always have some art!

Win/Loss Analysis for Continuous Improvement

Every closed deal – whether a win or a loss – contains a lesson for your team. Win/Loss Analysis is the practice of systematically capturing those lessons to improve your go-to-market strategy. This isn’t just checking a box in the CRM that says “Closed Lost – Reason: Price.” It’s a deeper dive into why the outcome happened, often involving direct feedback from the buyers themselves. As Dave Kellogg might point out, sales teams are notoriously bad at diagnosing their own losses – one study found 60% of sellers are partially or completely wrong about why they lost a deal. Buyers don’t always tell sales the full truth in the moment (it’s easier to say “budget issue” than admit “your competitor’s product had a feature we needed”). Thus, a more formal win/loss program is needed to get unbiased insights.
Setting up a Win/Loss Program: Start by deciding who will gather feedback. It could be someone in Product Marketing, an external consultant, or a member of the RevOps/enablement team. The key is to have a neutral party if possible – buyers will open up more to someone who isn’t the rep that sold to them. Establish a process where for every significant deal (or at least a sample each month), the buyer is contacted for a brief win/loss interview or survey. In a win interview, you might ask what factors influenced their decision, what they liked about your approach, and where you could improve. In a loss interview, you’ll ask what didn’t work for them, who they chose instead (if anyone), and why. Often you’ll uncover subtle things: maybe your demo was too technical for a non-IT buyer, or a competitor offered a more flexible contract. These insights are gold for refining your sales playbook, marketing messages, even your product roadmap.
Make sure to analyze both wins and losses. It’s tempting to only dissect losses (to fix mistakes) but understanding wins is just as important – it tells you what you’re doing right that you should double down on. For example, win analysis might reveal that customers consistently cite your ease of implementation as a deciding factor. That’s a message to amplify in future sales cycles and marketing materials. Loss analysis might reveal patterns too – e.g. many losses in a certain vertical because a competitor has a feature you lack, or deals lost when no executive sponsor was developed (signaling a sales process gap). By looking at both sides of the coin, you get a full 360° view of the market’s response to your offering.
Continuous Improvement Loop: Make win/loss a monthly habit. Perhaps you review the findings in a monthly sales meeting or a cross-functional go-to-market sync. Each cycle, identify 1-2 actionable takeaways. For instance: “We learned this month that in lost deals, prospects said our pricing model was confusing. Action: product marketing will simplify the pricing slide and retrain reps on how to present it.” Or “Many won deals mentioned our onboarding process as a strength – let’s get a customer success story out of that as a testimonial.” Win/loss insights can inform training topics, marketing campaigns, product enhancements, and competitive strategy. Over time, a good win/loss program makes your company market-sensing – you’re not flying blind on why deals are won, lost, or stuck as “no decision.” As one guide puts it, it turns deal data into strategic action by uncovering patterns and root causes behind sales outcomes.
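A lightweight way to surface these patterns is to tag each win/loss interview with the factors the buyer cited and tally them by outcome. A minimal sketch, with hypothetical interview records and factor labels:

```python
from collections import Counter

# Hypothetical interview records: outcome plus the factors the buyer cited.
records = [
    {"outcome": "loss", "factors": ["pricing confusing", "missing feature"]},
    {"outcome": "loss", "factors": ["pricing confusing"]},
    {"outcome": "win",  "factors": ["ease of implementation"]},
    {"outcome": "win",  "factors": ["ease of implementation", "helpful content"]},
]

def top_factors(records, outcome, n=3):
    """Most frequently cited factors for a given outcome."""
    tally = Counter(f for r in records
                    if r["outcome"] == outcome
                    for f in r["factors"])
    return tally.most_common(n)

print(top_factors(records, "loss"))  # [('pricing confusing', 2), ('missing feature', 1)]
print(top_factors(records, "win"))   # [('ease of implementation', 2), ('helpful content', 1)]
```

Even a tally this simple makes the monthly review concrete: the top loss factor becomes the month’s action item, and the top win factor becomes the message to amplify.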
One pro tip: Share win/loss insights beyond the sales team. Product managers should hear if a missing feature is losing deals. Customer success should hear if expectations set during sales aren’t aligning with reality, causing churn later. Marketing should hear if your messaging isn’t resonating or if buyers consistently didn’t understand a certain value prop. By feeding this data back into all parts of the GTM engine, you create a closed loop of learning. For example, if buyers in win interviews say your content helped them make the case internally, give kudos to marketing – and do more of that. If buyers in loss interviews say they couldn’t get a ROI estimate, perhaps sales needs a better toolkit or ROI calculator from marketing. Done right, win/loss analysis is a powerful continuous improvement engine that sharpens your sales effectiveness each quarter. It’s the kind of practical, introspective exercise that seasoned leaders like Kellogg advocate to troubleshoot a faltering GTM: systematically gather evidence, then derive solutions from first principles.