From Zero to Launch: Building an Intelligence Platform for Field Operations

September 12, 2021 · 12 min read · Case Study

Building a product from zero to one means compressing years of learning into months. I joined a field operations intelligence startup when it was an idea on a whiteboard. Within 18 months, we had a product recognized by TIME as one of the Best Innovations, with enterprise customers including one of the world's largest heavy equipment manufacturers. Here is the playbook.

What does 0-to-1 product management actually look like?

When I walked into a field operations intelligence startup in early 2020, there was no product, no codebase. There was a thesis: field operations teams in heavy industry were making million-dollar decisions using paper checklists and gut instinct. According to McKinsey, the construction and heavy equipment industry loses approximately $1.6 trillion annually due to productivity gaps, much of it from poor field coordination.

But a thesis is not a product. About 90% of startups fail, and CB Insights data shows that 35% fail because there is no market need. Our job was to make sure we were not in that 35%.

How do you go from whiteboard to working product in 18 months?

Our journey followed four phases. Not because we planned it that way from day one, but because each phase taught us something that reshaped the next.

Phase 1: Discovery (Months 1-3)

We spent the first three months not building software. We spent them in the field. Literally on construction sites, in equipment yards, riding along with field supervisors. I conducted 47 interviews in 90 days across 8 companies ranging from mid-size contractors to global heavy equipment operators.

The most important thing we learned: the problem was not a lack of data. These operations generated enormous amounts of data from IoT sensors, GPS trackers, equipment telematics, weather stations, and crew logs. The problem was that the data lived in 12+ disconnected systems and nobody had a unified view during the moments that mattered.

Our discovery playbook came down to five steps:

  1. Map the existing workflow end to end, not the idealized version
  2. Identify the 3-5 decisions where better information would change the outcome
  3. Quantify the cost of those decisions being wrong
  4. Validate that people would actually change behavior with better data
  5. Find the one user persona who feels the pain most acutely
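Step 3 is simple expected-cost arithmetic. The figures below are illustrative, not from an actual customer, but they show the shape of the calculation we ran for each candidate decision:

```python
# Illustrative expected-cost model for one recurring field decision:
# dispatching a crane to the wrong site loses a day of utilization.
# All three inputs are hypothetical numbers for the sake of the example.
decisions_per_month = 40        # how often the decision occurs
error_rate = 0.15               # share made wrong with current information
cost_per_error = 12_000         # dollars: idle crane plus crew for a day

monthly_cost = decisions_per_month * error_rate * cost_per_error
# 40 * 0.15 * 12,000 = $72,000/month from this one decision alone
```

A decision only made the shortlist if this number was large enough that better information plausibly paid for the product.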

Step 5 was critical. We initially assumed the operations director was our buyer and user. Wrong. The field supervisor, the person standing on-site coordinating 20 people and 15 machines, was the one whose daily reality we needed to transform. That insight reshaped everything.

Phase 2: Prototype (Months 4-8)

With discovery complete, we built the first prototype. We made a decision I would repeat every time: we cut the scope ruthlessly. Our initial vision document had 34 features. We shipped v0.1 with five.

| Feature Category | Initial Vision (34 features) | V0.1 Shipped (5 features) | Why Cut or Kept |
|---|---|---|---|
| Real-time dashboard | 12 widget types | 3 widget types | Kept the 3 tied to daily decisions |
| Equipment tracking | Full telematics suite | Location + utilization only | Utilization drove 80% of scheduling decisions |
| Predictive maintenance | ML failure prediction | Cut entirely | Required 6+ months of training data we did not have |
| Crew management | Full scheduling + skills | Headcount + location only | Supervisors needed "who is where" not "who knows what" |
| Reporting | Custom report builder | Cut entirely | Premature; needed to learn what reports mattered first |
| Integrations | 14 data source connectors | 3 data source connectors | Started with the 3 systems every pilot customer used |

The decision to cut predictive maintenance was the hardest. It was the "wow" feature, the thing that made investors excited. But it required historical data we did not have, and building it would have consumed 40% of our engineering bandwidth for something we could not validate for six months. According to Standish Group research, roughly 64% of software features are rarely or never used, which is what feature bloat buys you. We decided we would rather ship five features that worked than thirty-four that were half-baked.

Phase 3: Pilot (Months 9-14)

We designed an enterprise pilot framework that I have since used at every company. The key insight: a pilot is not a free trial. It is a structured experiment with success criteria defined before it starts.

  1. Define success metrics before deployment. We agreed with the pilot customer on three metrics: a 15% reduction in equipment idle time, supervisor time-to-decision under 5 minutes (from an average of 23 minutes), and zero missed safety briefings.
  2. Set a fixed duration. 90 days. Not "let's see how it goes." A fixed end date created urgency for both sides.
  3. Assign a champion inside the customer. We had a field operations VP who staked his reputation on our success. Without him, we would have been blocked by IT security review for months.
  4. Instrument everything. We tracked not just whether people used the product but how. Screen time per feature, workflow completion rates, and most importantly, what they did after looking at our dashboard.
  5. Run weekly retrospectives. Not monthly. Not quarterly. Every Friday, 30 minutes with the on-site team. These sessions generated more product insights than all our pre-launch discovery combined.
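Point 4, instrument everything, is easier to show than to describe. A minimal sketch: log timestamped usage events per supervisor and derive time-to-decision (dashboard open to decision logged) from the raw stream. The event names and sample data here are hypothetical simplifications, not our actual schema:

```python
from datetime import datetime

# Hypothetical event log: (timestamp, user, event_name)
events = [
    ("2020-11-06 06:02", "supervisor_a", "dashboard_open"),
    ("2020-11-06 06:06", "supervisor_a", "decision_logged"),
    ("2020-11-06 06:30", "supervisor_b", "dashboard_open"),
    ("2020-11-06 06:33", "supervisor_b", "decision_logged"),
]

def time_to_decision_minutes(events):
    """Minutes from dashboard open to the next logged decision, per user."""
    opened = {}
    durations = []
    for ts, user, name in events:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        if name == "dashboard_open":
            opened[user] = t
        elif name == "decision_logged" and user in opened:
            durations.append((t - opened.pop(user)).total_seconds() / 60)
    return durations

durations = time_to_decision_minutes(events)
avg = sum(durations) / len(durations)
# avg is 3.5 minutes for the sample events above
```

The point is the derivation, not the code: because we logged events rather than aggregates, we could answer questions we had not thought to ask before the pilot started.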

The pilot results exceeded our targets. Equipment idle time dropped 22% (target was 15%). Supervisor decision time averaged 4.1 minutes (target was under 5). Safety briefing compliance hit 100%. That last metric was the one that got the executive team's attention most, because a missed safety briefing had cost the company $2.3 million in a single incident the previous year.

Phase 4: Scale (Months 15-18)

Pilot success is not product-market fit. It is product-pilot fit. Scaling from one site to many revealed problems we never anticipated. Network connectivity on remote sites was unreliable. Different sites used different equipment manufacturers with different telematics protocols. The field supervisor who loved our product at the pilot site was not the same persona as supervisors at other sites.

We invested heavily in offline-first architecture, so the product remained useful even when connectivity dropped. According to Cisco's Global Networking Trends report, 57% of industrial IoT deployments cite connectivity as their primary reliability challenge. Building for intermittent connectivity was not a nice-to-have; it was table stakes.
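The core of the offline-first approach can be sketched as a local write-ahead queue: every field event is persisted on the device first, then drained to the server whenever a link is available. This is an illustrative sketch under assumed names (`OfflineQueue`, `send_to_server`, and the event shape are hypothetical), not our production code:

```python
import json
import sqlite3
import time

class OfflineQueue:
    """Persist field events locally; flush to the server when a link exists."""

    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox "
            "(id INTEGER PRIMARY KEY, payload TEXT, created REAL)"
        )

    def record(self, event: dict) -> None:
        # The local write always succeeds, even with zero connectivity.
        self.db.execute(
            "INSERT INTO outbox (payload, created) VALUES (?, ?)",
            (json.dumps(event), time.time()),
        )
        self.db.commit()

    def flush(self, send_to_server) -> int:
        # Drain the outbox in insertion order; stop at the first failure
        # so ordering is preserved and the remainder retries next time.
        sent = 0
        for row_id, payload in self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id"
        ).fetchall():
            if not send_to_server(json.loads(payload)):
                break
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            sent += 1
        self.db.commit()
        return sent
```

With this shape, a dropped connection costs nothing: `record` keeps accepting events, and the next successful `flush` delivers the backlog in order.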

What should you cut from v1 of a 0-to-1 product?

The pattern I have seen across multiple 0-to-1 builds is that teams cut the wrong things. They cut quality (shipping bugs), simplicity (adding complexity to cover edge cases), or instrumentation (flying blind on usage data). Here is what you should actually cut:

  • Features that require data you do not have yet. Predictive anything needs historical data. Ship the data collection first.
  • Configurability. In v1, make opinionated decisions. You can add settings later when you know which decisions should be flexible.
  • Multi-persona support. Serve one persona brilliantly. You can expand once you have revenue.
  • Integrations beyond the core 2-3. Every integration is a maintenance burden. Start with the ones your pilot customer actually uses.
  • Reporting. You do not know what reports matter yet. Ship a dashboard and learn.

As we later discovered when selling to Fortune 500 enterprises, what you keep in v1 matters far more than what you cut. The five features we kept were the five that a field supervisor would use every single morning before the crew arrived.

How do you design an enterprise pilot framework?

The pilot framework we developed became our repeatable go-to-market engine. A well-structured pilot converts at roughly 3x the rate of a free trial, based on our experience across 11 pilot engagements. The reason is simple: a pilot has organizational commitment, defined success criteria, and an executive sponsor who has made a public bet.

| Pilot Element | Free Trial Approach | Structured Pilot Approach |
|---|---|---|
| Duration | Open-ended, "try it and see" | Fixed 60-90 days with hard end date |
| Success criteria | Vague: "see if it works" | 3 measurable KPIs agreed upfront |
| Customer commitment | Low; can abandon anytime | Executive sponsor, dedicated team |
| Vendor investment | Self-serve, minimal support | Dedicated CSM, weekly retros |
| Data collected | Signups and logins | Usage depth, workflow changes, ROI metrics |
| Conversion rate | ~15-25% | ~60-75% |

The recognition from TIME as one of the year's Best Innovations validated not just the technology but the approach. We were not the only company trying to bring intelligence to field operations. We were the one that started with the user's daily reality instead of the technology's capabilities.

What are the biggest mistakes in 0-to-1 product management?

Looking back, I would change three things. First, I would have started the pilot two months earlier, even with a rougher product. We over-polished features that the pilot customer never used. Second, I would have hired a customer success person before the pilot, not after. Having a dedicated person on-site full-time during the pilot would have accelerated our learning cycle. Third, I would have built the offline-first architecture from day one instead of retrofitting it. That retrofit cost us six weeks during the scale phase, and those six weeks delayed our second enterprise deal.

Building zero to one requires the discipline to stay in discovery longer than feels comfortable, the courage to cut features that feel essential, and the humility to let pilot data override your intuition. According to First Round Capital's data, the median time from founding to product-market fit is approximately 18 months, which aligns almost exactly with our timeline. That is roughly how long it takes to complete the discovery-prototype-pilot-scale loop once with rigor.

For more on navigating the metrics that matter at scale, including why small improvements can have outsized business impact, see the next post in this series.

Frequently Asked Questions

How long should the discovery phase last in a 0-to-1 product?

Plan for 8-12 weeks minimum. Every week of discovery you skip adds 2-3 weeks of rework later. Our 47 interviews saved us from building for the wrong persona.

How many features should a v1 product have?

We shipped 5 out of 34 and our pilot still exceeded targets. The heuristic is "minimum lovable": the smallest set that makes one person's daily workflow dramatically better.

How do you convince leadership to invest in a pilot instead of launching broadly?

Show the economics. Our structured pilots converted at 3x the rate of unstructured trials. Frame it as de-risking, not delaying.

What is the single most important thing in 0-to-1 PM?

Proximity to the user. Not surveys, not analytics. Sitting next to a field supervisor for a full shift taught me more than any user research ever could.

Last updated: September 12, 2021