The standard advice for SaaS idea validation is to build an MVP — the smallest possible version of the product that can be put in front of users to test demand. This advice is better than building a full product before validating, but it still puts the build before the test. The fastest way to validate a SaaS idea is not to build an MVP. It is to do the job the product would automate by hand and to find out, before you write a line of code, whether anyone will pay to have that job done.
Manual delivery as a validation method — often called a concierge MVP or a manual service test — produces evidence that an MVP cannot. When you deliver the output of your would-be product by hand, using whatever tools and effort it takes, you discover whether the customer values the result, what they would pay for it, how they actually want it delivered, and what edge cases your assumptions missed. You discover this before any technical architecture has been decided, before any interface has been designed, and before any engineering hours have been spent. The build is the expensive part of a software product. SaaS idea validation works best when the build comes last.
Why the MVP is the wrong first step
The logic behind the MVP is sound: reduce the investment required before testing demand. The problem is that even a minimal viable product requires engineering time, infrastructure decisions, authentication, deployment, and enough of a working interface for a user to interact with. In the current AI-assisted development environment, that cost has come down significantly — but it has not reached zero, and more importantly, the MVP’s architecture encodes assumptions that become expensive to undo.
When a founder builds an MVP, they make decisions about data models, interaction patterns, scope, and workflow that are difficult to reverse after the build. A user who tests the MVP is testing the founder’s interpretation of the job to be done, not the underlying job itself. If the interpretation is wrong — if the workflow the product automates is not actually the workflow the user performs, or if the output the product generates is not the output the user needs — the feedback from the MVP reflects the implementation’s failure, not the idea’s potential. The founder iterates on the implementation rather than reassessing the assumption.
Manual delivery separates the job from the implementation. When a founder does the job by hand — researching and writing the report that the product would generate, setting up the workflow that the software would manage, extracting and organizing the data that the tool would process — they test whether the output is valuable before committing to how to produce it. The customer who pays for manually delivered output has revealed something the MVP survey or waitlist cannot: that the result is worth money. That evidence anchors the build in a way that no amount of expressed interest can.
What manual delivery reveals that an MVP cannot
Manual delivery produces three types of evidence that are either unavailable from an MVP or significantly more expensive to acquire through one. The first is price signal. The only way to know what someone will pay for a job to be done is to ask them to pay. A customer who receives manually delivered output and pays for it has revealed a floor for willingness to pay with no ambiguity about the result they received. A customer who signs up for an MVP waitlist or completes a usage survey has revealed interest, which is not the same thing.
The second is workflow fidelity. When a founder does the job manually, they discover the actual workflow: the inputs the customer provides, the format in which they expect output, the edge cases that arise from real data, and the questions that cannot be answered without clarification. These discoveries reshape the product design in ways that are cheapest to incorporate before any design has been committed to. A founder who builds first and learns the workflow through user testing is paying engineering costs to reach information that manual delivery would have surfaced in the first week.
The third is customer qualification. A customer who pays for manually delivered output is a different customer from one who tries a free MVP. They have a real problem that is worth real money to them, a willingness to engage with a solution that is not yet polished, and the tolerance for a high-touch process that reveals how they actually use the output. These are the customers a SaaS product needs in its first cohort — not because they will necessarily match the long-term customer profile, but because they will teach the founder what the product needs to be.
How to run a manual validation before building
Manual validation is not a substitute for building eventually. It is a gate that determines whether the build is worth starting. These steps make the method concrete.
- Define the output your product would deliver and find one person who needs it. Not a category of person — one specific person with a real, current need for the output. If you cannot find one person willing to receive the output from you manually, you cannot find a thousand people willing to pay a subscription for software that produces it automatically.
- Deliver the output by hand using whatever tools are available. Do not build anything. Use spreadsheets, scripts, manual research, existing APIs, or your own time. The constraint is that no engineering work specific to the product should be done. The goal is to produce the output the customer needs using the resources you have. Document every step, every input you needed, and every decision you made in producing the result.
- Charge for the delivery before you do it. Ask the customer to pay for the output before you produce it. The amount is less important than the act — you are testing whether they will commit money, not how much. A customer who pays in advance for a manually produced output has validated the job. A customer who says they would pay once the software is built has not.
- Deliver to five customers before drawing conclusions. One customer paying reveals a person with the problem. Five customers paying reveals a pattern — common inputs, common edge cases, common output preferences, common objections. Five is enough to identify the core workflow and to distinguish the job the product should do from the job you assumed it should do.
- Use the manual delivery to write the product specification, not to confirm the one you already have. After delivering manually to five customers, write down what the product would need to do to replace your manual work. This specification will be more accurate than any pre-build requirements document because it is derived from observed workflow rather than from assumptions about it. Build from this specification, not from the original idea.
- Set a price before you build. Customers who paid for manual delivery have given you price anchor data. Before writing the first line of code, decide what the software product will cost and validate that price with the customers who already paid for the manual version. If the price you need to charge to build a sustainable business is significantly higher than what manual delivery customers paid, the economics of automation need to be examined before the automation is built.
Why founders resist manual validation
The most common objection to manual validation is that it does not scale. A manual process cannot be sold to thousands of customers, and the founder’s time is better spent building the software that can. This objection misunderstands the purpose of the method. Manual validation is not a business model — it is a test designed to answer whether the software business is worth building. The inability to scale manual delivery is irrelevant if manual delivery reveals that no one will pay for the output at all.
The deeper resistance is that manual delivery is slower than an MVP launch in one sense: it requires direct customer engagement before there is anything polished to show. Founders who are more comfortable building than selling find the MVP more natural — it produces something tangible and impressive faster than manual service delivery, and it defers the moment when a real customer has to decide whether the result is worth paying for. That deferral is exactly why the MVP produces weaker validation signal. The moment of payment is the moment of validation, and manual delivery reaches it faster than any build.
The founders who adopt manual validation as a first step are not slower — they are making fewer expensive wrong turns. Every founder who built a product that no one paid for could have learned that lesson by doing the job manually first. The build was the expensive part. The lesson was available before the build began.

