How to gain conviction to work on a startup idea for 10+ years

In December 2020, Josh Ma and I started our startup, Airplane. We had spent the several months prior exploring ideas before settling on the one that eventually became Airplane. While it's too early to say whether Airplane will be a long-term success, we're very happy with the idea we chose and our progress to date. Here's the process we used and, with the benefit of hindsight, what worked and what didn't.

Solving for personal pain points

Before starting Airplane, I co-founded an analytics startup called Heap, and Josh was CTO of a life sciences SaaS company called Benchling. We each spent 7+ years at those companies. During my time at Heap, I kept a running list of big pain points I'd run into that could form the basis of another startup. Some were problems we'd faced at Heap, and others were ones I'd faced in my personal life. Josh had a similar list.

So when we both left and decided to start something together, we started with those lists. Some of the ideas I had been most excited about had already turned into very successful startups in the years since I'd experienced the pain point. For example, Heap had lots of international employees and there was no Gusto-like payroll experience for them back in 2015. Now, companies like Deel, Oyster, and Remote exist to solve this problem. Another idea I had, a Steam-like app store for music production software and samples, now exists in the form of Splice.

There were, however, several pain points and ideas that still seemed unsolved.

The first ideas we looked at

We were initially most excited by solving problems related to internal tools. However, we felt that the space was already too crowded–tons of new low-code and no-code tools were popping up all the time trying to tackle problems there. We figured one or more of these tools would end up solving the problems we had faced. So we chose not to dive deep, and dismissed ideas in the space without much thought.

We spent about a month each investigating two areas:

  • Financial planning for startups: Every year at Heap we went through a long financial planning process for the upcoming year. There were lots of aspects of it that we felt could have benefited from better software, e.g. allowing non-analysts the ability to change variables and run scenarios themselves.
  • Life sciences software: There were many problems that Josh had seen during his time at Benchling—customers often had their own laundry lists of adjacent systems or processes they wished Benchling would solve. Some of these things were pretty distant from Benchling's focus, and we looked into solving problems in related parts of the organization and later in the scientific pipeline.

In both cases, we spent a lot of time brainstorming ideas, investigating existing solutions, and talking to potential users. Both times, after some research, we iterated to an idea that we felt could be a strong business.

However, we also spent those months increasingly feeling like we didn't want to be in either space for the next 10 years. So we didn't pursue either idea. The decision to abandon these ideas wasn't driven by the merits of the ideas themselves–it was driven by our lack of a personal, emotional connection to those problems.

Idea filters for founder-market fit

We realized we'd been approaching the process like a business school case study–purely thinking in terms of market opportunity–while ignoring the side of founder-market fit. So we went back to the drawing board. This time, we decided to be more deliberate with our process. We didn't want to spend another month or two investigating an idea, only to feel like it was something we didn't want to do for the next 10 years, regardless of the merits of the idea.

So we came up with several "idea filters" after spending time introspecting about what we really wanted to spend time on. These were parameters that we thought would correlate strongly to ideas we could be passionate about for 10+ years:

  • B2B SaaS: This is implicitly what we were looking at already, but there were some consumer and prosumer ideas we'd debated but not gone deep into. It was useful to make it explicit.
  • A problem that both Josh and I had experienced: We didn't want to spend years becoming domain experts in a new field. We wanted to go in with a strong perspective on day one.
  • Great software matters: Not every SaaS company needs world-class engineering to succeed. I'd say most don't. But we felt that this was one of our strengths and we wanted to work on something where that would make a difference.
  • Fast iteration cycles: We wanted to build something people could adopt quickly and give feedback on right away, rather than something with long sales cycles. We wanted to be in a market that lent itself to a fast rate of learning from customers.
  • Something outside of analytics or life sciences: We had just spent several years in those spaces and wanted to do something new.

To be clear, these filters don't mean "this is what all good ideas look like." Rather, they were filters we could put potential ideas through that would tell us whether Josh and I, specifically, could see ourselves working on an idea for a long time. But anyone brainstorming startup ideas has hard or soft requirements, often subconscious ones, and it's worth introspecting about what those are for you before you waste time.

The idea we eventually settled on

When we went back to the drawing board this time, we decided to take another look at internal tools. Unlike the previous ideas, it didn't look like a great space "on paper" due to how crowded it was, but it was something we were both highly interested in. Building a platform that helped engineers create powerful internal tools fit all of our filters really well.

This time, we spent a lot longer thinking through why our previous companies had struggled so much with internal tools. There were some great tools that had come out in recent years, but lots of problems remained largely unsolved. Internal tooling platforms–and business software in general–still have a long way to go.

Specifically, both Heap and Benchling were enterprise SaaS products that dealt with large amounts of customer data. There were frequent write-heavy, compute-heavy customer-facing operations that support and services teams had to deal with. For example, at Heap, a customer might have implemented a data collection API incorrectly and we'd need to delete some historical data on their behalf. There were dozens of other frequent operations just like this.

In all these situations, customer-facing people often had to escalate to solutions or infrastructure engineers to solve these problems. These were complex, sensitive operations that we couldn't just model as a REST endpoint and shove behind a button in an internal admin panel. So they remained as eng-only operations even as they became a huge bottleneck for the company. None of the low-code/no-code/internal tools platforms on the market would have solved these issues.
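
To make the shape of these operations concrete, here's a minimal, hypothetical sketch in Python. The names and the in-memory "event store" are invented for illustration, not Heap's actual internals. The point is the parameterization: the customer, the time window, and a dry-run safety check are exactly the kinds of sensitive knobs that make an operation like this hard to reduce to a single button in an admin panel.

```python
# Hypothetical eng-only maintenance script: delete a slice of a customer's
# mis-collected historical events. An in-memory list stands in for the
# real data store.

FAKE_EVENT_STORE = [
    {"customer": "acme", "ts": 100},
    {"customer": "acme", "ts": 200},
    {"customer": "globex", "ts": 150},
]

def delete_bad_events(customer_id: str, start_ts: int, end_ts: int,
                      dry_run: bool = True) -> dict:
    """Find and (optionally) delete a customer's events in a time window.

    Defaults to a dry run so an engineer can verify the blast radius
    before actually deleting anything.
    """
    matched = [e for e in FAKE_EVENT_STORE
               if e["customer"] == customer_id and start_ts <= e["ts"] < end_ts]
    if not dry_run:
        for e in matched:
            FAKE_EVENT_STORE.remove(e)
    return {"matched": len(matched), "deleted": 0 if dry_run else len(matched)}
```

A script like this is trivial for an engineer to run, but turning it into a safe, self-serve tool for a support team (with validation, audit trails, and permissions around those parameters) is the part that stayed eng-only.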

We then set out to investigate whether other companies had the same pattern: engineering-only tasks piling up that were hard to turn into usable internal tools.

We started by having conversations with engineers and internal tools users (ops, support, etc) at a ton of companies. These were very open-ended at first. Some of the questions we used as jumping-off points:

  • What internal tools have you built at [company]?
  • How have these changed over time?
  • When you ship a new feature, what's the process for making sure the tooling is in place to support it?

From here we'd ask tons of follow-ups and sometimes screenshare if their internal tools weren't too sensitive to show.

This time, we developed more excitement and conviction throughout the process, rather than feeling bogged down. We had to resist the urge to come up with solutions too quickly and stay in "learn" mode.

After ~20 conversations, we came up with an idea to solve the problems we'd heard. We wrote a 2,000+ word Notion doc describing how it would work and sent it to some of the folks we'd already spoken with to get their feedback. We continued having conversations, revising the doc as we learned more. Having a written doc, rather than a slide deck or mocks, made it really easy to iterate, but it was less capable of conveying the idea than something visual would have been.

After 20-30 more conversations, we put together a slide deck with mockups of how the solution would work. We sent this back around to the people we'd spoken with and showed it to many more. Eventually, after 100+ total conversations, we felt confident enough to dive in and start building our product.

An early mock of how Airplane would work

What worked and didn't in retrospect

Some of the people from those early conversations who loved our mocks ended up becoming the first users of Airplane. But many didn't. You can only get so far with mocks–when people actually try the product, you learn so much more. There were tons of issues that got glossed over in mocks but turned out to be crucial to building a usable product.

Because Airplane is an interaction-heavy developer tool intended to be adopted bottom-up, rather than something sold top-down, the developer ergonomics and UI/UX really matter.

In retrospect, we've learned several things over the last 10 months that we didn't learn when pitching mockups:

  • Lots of use cases emerged that weren't part of our initial validation. We initially conceived of Airplane as a way to quickly turn Python or JS scripts into internal tools. But when people started using it, we noticed that some of these scripts were simply hitting REST endpoints or making simple database queries. We built REST and SQL task creation flows to make these use cases even easier, and these are now what people tend to onboard with before they get into more complex scripts.
  • One of the most common early feature requests was the ability to run a task on a schedule. We built this out and now tons of companies start out by using Airplane as a lightweight substitute for cron or Airflow.
  • Internal tools need to be able to read and write production data. Even if Airplane's core functionality is valuable, there are tons of security requirements that even small startups have before they'll adopt something like Airplane. We ended up getting a SOC 2 audit and taking other security measures that most SaaS startups put off until much later.
  • Interoperability matters a lot for larger companies. Every company over a certain size already has some internal tools built, and they don't generally want to introduce another "pane of glass" by bringing on Airplane. We built a Slack integration to make it possible to do almost all Airplane functionality directly within Slack.

Perhaps we could have asked better questions and validated some of these learnings without building anything. However, I think building an MVP and getting it into people's hands was a better approach. For example, when you discuss internal tools with people, scheduled tasks don't typically come up. It was only when people started using Airplane that it clicked for them that they wanted a substitute for cron, if only we had a scheduler.

Overall, I'm happy with the process we ran. There are a few things we would have done differently in retrospect:

  • Started thinking through our idea filters earlier. We could have saved a couple months of looking at ideas that ultimately never would have worked for us. Though perhaps we had to do that in order to realize what we really wanted.
  • Spent less time worrying about other startups. We initially resisted exploring internal tools because it "felt crowded." However, what we ended up building isn't really that competitive with other internal tools startups, because we're solving pain points that they're not (and vice versa). The focus should be purely on learning what your users are doing and discovering whether there's something that can be improved.
  • Launched earlier. We started writing code in December 2020 and launched publicly in July 2021. We felt really confident after getting validation on our idea mocks so we didn't push for getting actual product feedback as quickly as we could have. We could probably have built and launched a narrower MVP in just a couple months after starting.

If you're interested in seeing what we ended up building almost a year after going through this process, you can watch our demo video and sign up for free here.