Growth Hacking is declining in relevance. Will it disappear entirely? I don't think so. Nor do I think it should. But the craze that once drove every startup (even enterprise!) to look for a Growth Hacker is on a steep decline. And I believe that's a good thing.
In 2010 a very sharp technology marketer named Sean Ellis coined the term "Growth Hacking". Andrew Chen, another very skilled technologist and current Partner at A16Z, followed on with a post describing the role of a Growth Hacker as the "new VP of marketing". In parallel, companies like Facebook and LinkedIn, which had two of the earliest and most successful growth teams, dating back to 2007/2008, received mainstream media attention for the efficacy of these little-known yet powerful Growth teams. In a very short period of time something was created out of nothing, and the Growth Hacker phenomenon spread, as if by design, like wildfire throughout the technology industry. There are now several growth hacking bootcamps and large conferences. I've happily participated in some of them.
However, I observed an unhealthy interpretation and adoption of the growth mentality taking seed not long after growth hacking became a thing. People spoke of growth hacking and hired for a growth hacker as if it were a panacea. Many months later they found themselves optimizing a product that consumers didn't care for, dangerously short on cash, and with zero interested investors or buyers. There are a few examples of this happening in recent years. Viddy, once touted as the Instagram of video, amassed tens of millions of Monthly Active Users thanks to temporary prominence in Facebook's news feed. They raised at a valuation in the hundreds of millions but shut down shortly after that flurry of growth and fundraising: it turned out they had lots of cheap, temporary distribution but very little user engagement, because the product wasn't useful.
By now I'm sure that hundreds or thousands of other startups have similarly discovered that a growth hacking mentality hasn't led to the breakout moment they hoped for. Given that, I would assume that interest in growth hacking has started to subside. It certainly feels like it. Fortunately, there's data we can look at.
The search query data in Google Trends provides a simple proxy for the level of interest in Growth Hacking. Below is the global search query interest for the keyword "Growth Hacking" over the last several years. Worldwide query data reveals a healthy up and to the right trend, though with some observable deceleration over the last year in particular.
The story becomes a bit more interesting when you slice the query data by country. Here is the search query data in the United States. It's down and to the right.
Interestingly, the next large market to adopt the term was India, beginning in August of 2012. Strangely, it looks like the term is rebounding a bit in the last few months. I'm not sure why that's the case, but it's worth pointing out, since the term is showing a consistent flattening or decline in most other large markets I looked at. Either way, this isn't a healthy trend.
In the United Kingdom, lift off in searches for Growth Hacking didn't begin until December of 2012. The query interest is steadily declining just as it is in the US.
Following that, it spread to Germany by August of 2013. Germany hasn't started its decline yet; however, search interest may be flattening as of the last 2-3 months.
Shortly after that, the term took hold in Brazil in October of 2013. The "market" is still young relative to the US (the term took hold a little over a year later in Brazil), so I would expect flattening and a decline to kick in within the next 12-18 months.
Here's the search query interest for all of these large markets in aggregate. This doesn't look promising. Given that these large markets are down and to the right as a cohort, yet worldwide search interest is still on the rise, I'm assuming that worldwide interest is currently being driven by laggard, long-tail markets.
Nonetheless, I think it's fair to say that we have data that validates the opening line of this blog post: growth hacking is declining in relevance.
When I was writing this post I originally planned on solely looking at the search query data for "Growth Hacking". Then it dawned on me (thank you Philz coffee?) to look at the search query data for the term "Product Market Fit" and to compare the two. I think this is a fascinating comparison because the assertion of Growth Hacking is that you can optimize your way towards scale while the assertion of Product Market Fit is that you must innovate your way towards scale.
My hypothesis prior to looking at the data was that Product Market Fit as a search term would be showing linear-to-exponential organic growth, which, wonderfully enough, is the proper definition (or at least the proper measure) of Product Market Fit.
First, let's compare the relative search query interest worldwide between "growth hacking" and "product market fit" from 2008 to present. I was shocked (mostly saddened) to see that query interest in growth hacking (the blue line) was many times greater than product market fit (the red line).
My gut reaction is that we've lost our damn minds if we think growth hacking is more important or compelling to research than product market fit. The generous interpretation would be that startup operators and founders understand product market fit much better than they understand growth hacking, which leads to lower search interest in product market fit. That seems like a stretch, so I'm not inclined to believe that interpretation.
Let's slice it by country to see what else is going on.
Here's data in the United States.
Now we're talkin! It's like Myspace meets Facebook circa 2009.
Here's India. Pretty uneventful other than to note the nearly non-existent growth for the term product market fit.
And in the United Kingdom the story is about the same.
Germany data tells a similar story to the UK, though with a smaller and more recent lift.
Lastly, here is Brazil. This makes me sad.
The data makes me think of a recent interview with Warren Buffett where he had a gem of a quote regarding why most investors don't buy and hold a broadly diversified portfolio and instead choose to trade individual stocks in an attempt to outperform the market. When asked why most investors don't follow his advice, he remarked, "Nobody wants to get rich slowly!" This behavioral phenomenon appears to be playing out in the startup world as well, and I think this data provides reasonable support for that hypothesis. After all, who wants to build a billion dollar startup slowly?
But it's not all doom and gloom. What if we looked at the term "product market fit" in isolation? The worldwide search interest data looks promising.
And here it is in the United States.
All other major markets I looked at show little or no growth in searches for "product market fit". The UK is showing a bit of lift off in the last 1-2 years, but again, it's minor relative to the search interest for growth hacking.
My hope is that the term Product Market Fit has Product Market Fit and that Growth Hacking continues its decline down to a more reasonable level. What that implies is that technologists, as a collective, will get back to our roots of building innovative products that people love, within great markets where better alternatives are needed, as opposed to optimizing our way towards vanity metrics.
As I said at the beginning of this post, I don't think Growth as a skill and a function should go away entirely. The concept of looking at data, running experiments, and optimizing products is valuable and necessary to consumer technology companies when applied in the right way, at the right time, and at the right types of companies. And in moderation! It is no replacement for innovation.
Yet I am glad that it's going through its boom and bust cycle, because that means we'll move on from the hype and get down to the white hot center of what's actually useful and relevant when it comes to building large, sustainable technology companies. The end result is that the best-in-breed growth practitioners and bootcamps will remain. Reforge is an example of that. It is a world-class program put on by growth leaders who deeply understand the discipline. Importantly, they also understand that having a Growth team/focus is the side dish and not the main course.
Building a product and company that can grow sustainably takes much more than a few clever hacks. In future posts I'm going to spend a significant amount of time talking about the other elements of growth that receive very little attention and deserve the spotlight.
My hope is that the collective attention of the consumer technology world will continue to shift away from growth hacking and more towards subjects that deserve a greater proportion of our attention.
“Our industry does not respect tradition— it only respects innovation.”
That’s what Satya Nadella wrote in his opening email to the company shortly after becoming Microsoft’s new CEO. It was a clear call to arms that Microsoft needed to reignite innovation in order to scale the company after roughly 15 years of stagnation. The price of Microsoft’s stock has increased ~3x since he took over because the market seems pleased with Microsoft’s sharpened focus, progress made in the cloud business, and willingness to change how it used to do things in order to compete in the future. Some of this could be window dressing or marketing speak, but the changes happening at Microsoft seem genuine.
Satya said nothing about doubling down on what’s already working in order to get more juice out of the squeeze. Rather, he ended the email by emphasizing the need for clarity of focus on new innovations and on changing a culture which, for the most part, had been focused on preserving the status quo for over a decade. It’s not unheard of for a large company to forget how to innovate.
I haven’t spent enough time at companies with 1,000+ employees to speak deeply about the dynamics of large company stagnation, but I can speak to it happening at early-stage startups. In particular, I find it interesting that the same two problems Satya outlined for Microsoft often appear within early stage startups as well: i.e. the culture becomes comfortable with the status quo and the company loses its ability to innovate.
How does it happen? When a startup becomes obsessed with and designed around data and optimization. Today, every 50 - 100+ person startup has multiple business intelligence tools, off-the-shelf A/B testing tools, a data science team, and product managers who know much more about writing SQL than they do about interviewing customers.
In fact, I kept score while interviewing PM candidates in 2017. I spoke with 67 product managers. About 50 of them were reasonably proficient in SQL and could write a few queries on the spot. Guess how many knew how to conduct customer development? Three. That’s it. Only three product managers could proficiently describe the purpose, process, and outcomes from customer development. 75% could write SQL, but only 4% knew how to properly interview a customer. It’s a small sample size, but the gap is large.
Here’s why that’s bad: Most startups, just like large companies, need to go through continuous phases of innovation in order to create 2x+ step changes in the potential for their business. The process of going from 0 to 1 with their first product is an innovation. It’s what allows the company to get off the ground. Sometimes, that original innovation is enough to carry them from seed to IPO. But that is incredibly rare. What’s more common is that startups need to innovate several times over in order to create step changes that help them scale from early stage to growth stage and from growth stage to a publicly traded company.
Over the last 10 years, the broad availability of data has driven a massive overcorrection in the direction of optimization. As a result, I find that most PMs are incapable of effectively deriving insights from customer conversations, and most startups are incapable of producing new product innovations beyond the initial product they take to market. They’re great at A/B testing, but not great at creating new features based on customer insights and a leap of faith.
To put it plainly, growing through data analysis and A/B testing isn’t the only path to future growth. While that seems obvious, I see very few startups designed for innovation, which may be the biggest driver of new growth for your business. Do you think Facebook would be at its current scale without innovations like News Feed? Community-driven translations to expand globally? Or the developer platform? The answer is obviously “no”. Take a look at MAU acceleration beginning in 2007/2008. That coincides with the launch of the international translations app, which allowed Facebook users to crowdsource the translation of the product. It took several months to build and a few years of ongoing maintenance and development to mature the product. That innovation led to a boom in active user growth.
The point I’m making is that today’s startups very quickly fall into the optimization trap where they think future growth will largely come from optimizing their existing product. The better approach is finding the right balance between optimization and innovation since both methods can produce future growth.
By the time you’re done with this series of blog posts, you’ll have the knowledge and tools you need to strike the right balance between optimization and innovation at your own company.
We should first start with a more detailed explanation of the difference between optimization and innovation. Optimization is when a startup iterates on its existing products or services to squeeze more juice out of the orange. Typically, the results of optimization are incremental in nature.
If they are incremental in nature, then why do them? Well, because many small optimizations can accrue into large long-term results when you allow those optimizations to compound.
Here’s a simple example. In the below graph, I compare the 12-month growth in monthly active users (MAUs) in 4 hypothetical cases. The blue line is the base case, where the monthly growth rate is slowly declining, leading to flattening growth. The red line is for sustained 10% month-over-month (MoM) growth, yellow is sustained 12% MoM, and green is sustained 14% MoM. If a startup can optimize its way towards a slightly higher and sustained rate of growth, the compounded outcome is very different relative to the base case. In fact, this is what we did in 2009 at Facebook. Our growth team focused on optimizing our way towards a sustained 2% week-over-week growth rate because we knew that we would grow from ~100 million MAUs to ~300 million MAUs in 12 months if we did so. This happened to be the company-wide goal for that year.
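The compounding arithmetic behind that comparison is easy to sketch. Below is a minimal Python illustration; the 1M MAU starting base is a hypothetical figure chosen for readability, not a number from the post:

```python
# Compare 12-month MAU outcomes under sustained month-over-month (MoM)
# growth rates. The 1M starting base is purely illustrative.
START_MAUS = 1_000_000

def maus_after(months: int, mom_rate: float, start: float = START_MAUS) -> float:
    """MAUs after compounding a flat MoM growth rate for `months` periods."""
    return start * (1 + mom_rate) ** months

for rate in (0.10, 0.12, 0.14):
    print(f"{rate:.0%} MoM -> {maus_after(12, rate):,.0f} MAUs after 12 months")

# The Facebook goal described above, expressed the same way: a sustained
# 2% week-over-week growth rate compounds to ~2.8x over 52 weeks, which is
# the same order as growing from ~100M towards ~300M MAUs in a year.
print(f"2% WoW for 52 weeks: {maus_after(52, 0.02, start=100e6):,.0f} MAUs")
```

A few percentage points of sustained growth rate produce wildly different endpoints once compounding kicks in, which is the whole argument for pursuing small optimizations that stick.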
Innovation is when a company embarks on building entirely new products or services for existing customers or for a new segment of customers. Innovation can also involve expanding into an entirely new business line. However, this happens so rarely (hello, Amazon!) that I won’t focus on that definition for the time being. Additionally, innovation can create step change improvements in the trajectory of the company, although these are much more difficult to discover and successfully execute on.
I’ve taken the same scenario above, but added a 5th option, labeled “with innovation” in the below graph. This takes the base growth rate scenario and applies a 2x multiplier to growth midway through the year (e.g. you build a new feature, such as Facebook’s News Feed, and it leads to a step change in monthly active usage). This assumes no optimizations along the way.
The point isn’t that you should pick one approach to growth over the other. Rather, the ideal outcome (and most realistic) is a healthy combination of both optimization and innovation. In the below scenario, I assumed that a segment of the company is working on optimizing the existing products and services to sustain 10% MoM growth and another segment is working on new product innovation that leads to a 50% bump in MAUs midway through the year. This scenario is plotted as a black dashed line on the graph.
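The combined scenario above is straightforward to model. Here’s a hedged Python sketch; the starting base, the 10% MoM rate, the launch month, and the 50% bump are all illustrative assumptions rather than real figures:

```python
def combined_maus(start: float, months: int = 12, mom_rate: float = 0.10,
                  bump_month: int = 6, bump: float = 0.50) -> list:
    """Project MAUs month by month: steady optimization gains plus a
    one-time step change when the new product ships midyear."""
    maus = float(start)
    trajectory = [maus]
    for month in range(1, months + 1):
        maus *= 1 + mom_rate        # optimization team's sustained gains
        if month == bump_month:
            maus *= 1 + bump        # innovation team's step change
        trajectory.append(maus)
    return trajectory

path = combined_maus(1_000_000)
# The combined trajectory ends well above the optimization-only curve
# (10% MoM alone compounds to about 3.1x over 12 months).
print(f"Month 12: {path[-1]:,.0f} MAUs")
```

The design choice worth noticing: the bump multiplies whatever base the optimization work has already built, so the two approaches compound each other rather than merely adding.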
The appropriate question to ask is, “For my company, should I be innovating or optimizing?”
For Seed and Series A startups, the practical reality is that you are headcount-constrained into picking one over the other because you’ll have fewer than 20 employees. Prior to establishing product market fit, you’ll be entirely focused on innovation because you’ve yet to figure out the new technology that delivers something better, faster, cheaper, and more convenient relative to the alternatives in the market. Consequently, you’ll have very little growth or customers to optimize on top of, so don’t waste your time optimizing if you don’t already have exponential organic growth.
As a company matures to the point of Series B and beyond (sometimes with a large Series A), it can hire enough people to contemplate doing more than one thing at a time. In my experience, that’s the point at which a consumer software company has 30 or more employees. On average, about half of the employees will be engineers, so that means you’ll have 15 people who can do the building. With 15 people doing the building, you can divide them amongst 3-4 teams— e.g. 2 product teams, an infrastructure team, and a floating pool of engineers needed for miscellaneous tasks and on-call work.
When a company reaches 100 employees it can certainly multi-task. Its 50 engineers can be subdivided amongst 2-3 well-staffed product teams, 2-3 infrastructure teams, and still be able to manage on-call support and miscellaneous tasks.
Assuming a company is able to reach the scale of 30+ employees and is now capable of walking and chewing gum at the same time, the question becomes, “How do you allocate those people in terms of optimization versus innovation?” I like to use investing analogies when thinking through this decision.
Most investors should have an investment portfolio that maximizes their returns given the amount of risk that is appropriate for them to take (this concept is known as Modern Portfolio Theory). Put in simple terms, it stipulates that you’ll want a diversified portfolio comprised of a mix of higher risk, higher return investments (e.g. stocks) and lower risk, lower return investments (e.g. bonds). Depending on the level of risk you can afford to take, you’ll want to shift the allocation towards certain investments and away from others. For example, if I’m 70 and ready to retire, I should be taking very little risk and will want a portfolio weighted heavily towards low risk, low return investments (bonds). If I’m 30 and putting money into a retirement account that I’ll use 30 to 40 years from now, then I should be taking on more risk to generate more returns during that long time horizon (i.e. more stocks).
I hope you are starting to see how this investing analogy applies to your startup thinking. Innovation is your stocks and optimization is your bonds. The question to ask is, “What proportion of my company’s focus should be on optimization versus innovation?”
If you’re building a seed stage startup, then you’ll solely be focused on innovation (all stocks and no bonds) because you’re trying to build something new and innovative that finds product market fit. If you’re working on a series A or series B startup with clear indicators of product market fit (i.e. exponential organic growth), then you should be considering the trade-off between optimization and innovation.
Facebook is a good example of optimization and innovation at play. While I was at the company (2008-2010), we did a bit of both. The Growth Team was focused predominantly on optimization by improving sign up conversion rates, new user onboarding, reactivated user onboarding, getting people to add more friends, and a vast library of miscellaneous A/B tests for the sake of getting more users. Meanwhile, several of the core product teams were pushing out big innovations like the first smartphone app, various News Feed innovations, large enhancements to photos, and the developer’s platform.
There’s a powerful concept known as “shipping the org chart”. It was brilliantly outlined by Steven Sinofsky in his piece on Functional vs Unit Organizations. The TL;DR is that the design of your org makes its way into your product. In other words, your product is significantly influenced by the nature of the organization you’ve designed within your company.
Here’s an example from an org chart I recently reviewed with a Series A (soon to be Series B) startup currently scaling from 15 employees to about 45.
It’s a fairly straightforward org design. The ops team is focused on optimizing the field operations folks to scale their service at lower cost. The eng team is building out and scaling underlying services and products to support 10x growth in the number of customers. There are two product teams. The first is the LTV team, which is focused on increasing revenue per user. The second— the growth team— is focused on improving all important conversion rates, such as sign up rate, new user onboarding, and so on. Lastly, the marketing team is focused on acquiring more customers.
That all seems reasonable— but there’s one catch.
I asked the founders of this Series A company, “Who is focused on delivering more value to the customer?” To which I received a blank stare, followed by a bit of head scratching, and then a final, “Uhhhh … well…good question!”
The problem with an org chart like the one above is that it’s almost exclusively aligned with producing value for the business— so much so that very little attention is being given to satisfying the needs of the customer. Here’s where things get really tricky—it also pushes the company deep into optimization territory. To be specific, it’s the design of the product teams (those highlighted in green) that is most worrisome. I’ll elaborate more on this in the next section.
Imagine you have 1 junior/mid experience product manager, 1 junior/mid experience designer, and 2-3 engineers— each with a few years of experience. That’s a fairly common atomic unit of a product team within a startup. This small team now refers to themselves as the “LTV team” with an understanding that their primary metric is to improve revenue per customer. The next step for them is creating a roadmap, which they begin to do through the lens of increasing revenue per user to maximize LTV for the business.
The very first project that the team puts on their roadmap is to A/B test the pricing tiers for their subscription business. Another item on their roadmap is to A/B test variations of the subscription cancellation flow with alternative messaging and discount offers in an attempt to convince customers to not cancel their subscription. Following that, the team has fleshed out a portion of their roadmap for testing new email, in-product, and push notifications to encourage freemium users to upgrade to one of the paid tiers. Again, these are all reasonable projects to work on. The issue is that they are all focused on incremental optimizations for the benefit of the business and don’t add any additional value to the user. This is the slippery slope I alluded to a few paragraphs ago.
Fast forward 12 months and the LTV team is still busy running A/B tests, looking at funnel data, and squeezing out 5% - 10% wins via the occasionally successful experiment. Meanwhile, they haven’t shipped any new, innovative products or features that deliver substantial value to the customer (which can also increase LTV for the customer!). While exercising their data analysis and A/B testing muscles, their customer development and new product development skills have atrophied.
Jump ahead another 6-12 months and this team of highly skilled optimizers is scratching its head because the company is lagging its growth goals. They’ve continued to hire PMs whose strength is in running SQL queries and designing experiments. They’re finding the occasional 5% - 10% win, but they’re starting to get the sense that they’ve scraped the bottom of the barrel because it’s becoming increasingly hard to find a positive experiment. Meanwhile, one of their competitors is scaling more quickly, compelling them to run even more experiments because they’re questioning whether they just haven’t run the right A/B tests yet. Anecdotally, many employees at the company notice that the amount of customer love they receive on social media has slowed down. They observe a noticeable decline in feature requests and praise from their existing customers in Zendesk as well.
Meanwhile, the Growth team has been busy doing much of the same. They’ve been running experiments, building innumerable data dashboards, and commiserating with the startup’s lone data scientist as to why growth is below plan and becoming increasingly dire, despite having run dozens or hundreds of A/B tests over the last two years. Several of the tests were successful, but what gives? Why does growth suck relative to their expectations?
The product teams and company have entered what I like to call “optimizers purgatory.” They’re in a strange middle ground: they have plenty of data and A/B testing ability, but not a single meaningful innovation to the user experience in the last year or two. This sounds like an extreme hypothetical, but it’s incredibly common. I’ve personally been there and have worked with dozens of other startups that have encountered optimizers purgatory as well.
What can be done? The company could have considered an alternative to the org chart that struck a better balance between having some focus on optimizing for business value and some on innovation for customer satisfaction. This may in turn create business value far greater than the value that comes from solely optimizing for business metrics. Below is an example alternative that swaps the LTV Team for a Client Value Team. This new team’s primary metric is customer satisfaction score— e.g. the percent of customers “very satisfied” with their experience.
Take the same atomic unit of a team (1 PM, 1 designer, a few engineers) and you’ll find their roadmap is wildly different from the LTV Team’s roadmap. This difference is simply because their team name implies creating new value for the customer and their primary metric requires that they increase customer satisfaction. Recall that the LTV Team had a roadmap full of A/B tests focused on optimizing business metrics. The Client Value Team’s roadmap is more likely to contain a list of new, high value features that customers have been asking for and new, innovative value that customers weren’t expecting to receive, but will be delighted with.
In contrast to the LTV Team, the Client Value Team will develop their customer development and product development muscles. They’ll have well-defined customer research and design research methods. They’ll likely also develop a closer relationship with the customer service employees within the company, leading to regular meetings with the head of customer service where they review the latest Zendesk customer requests. They’ll have fewer data dashboards and won’t be able to speak as eloquently about the parts of the product that are well optimized, but they will be able to speak about which customer complaints have tapered off and which new customer requests have bubbled to the surface.
The LTV Team and the Client Value Team have become two very distinct organisms, simply because of the name of the team and the type of metric chosen— i.e. a customer success metric versus a business success metric. This is the notion of “shipping the org chart” at play, and it’s an essential concept to understand when designing an organization with the intent to grow the business.
When working with founders on creating an org chart that adequately balances growth from optimization and innovation, I give them the following exercise:
Step 1: Concisely describe your mission and vision for the next 2-3 years
Step 2: List the 2-3 things that must be true for your customer to realize that vision
Step 3: Design an org with product teams that map to the 2-3 truths for your customer
Step 4: Revise and edit until satisfied with the results
Here’s a practical example from Wealthfront, where I was most recently the President:
Step 1: Wealthfront’s mission is to provide everyone access to sophisticated financial services with the vision that our customers would use Wealthfront to exclusively manage all of their finances.
Step 2: In order for that mission and vision to be true, our clients would need to (1) create a free financial plan that captures their needs and wants; (2) have a superior set of banking products relative to what they could get at large banks; (3) have world-class investment management that’s typically only available to the ultra wealthy.
Step 3: We set out to design the primitives of a product organization that reflected Steps 1 and 2 above. It looked something like this:
We came up with an Onboarding Team that would digitize many of the financial processes traditionally handled over the phone or via paperwork. By digitizing these experiences, we could ensure “everyone gets access”, per our mission statement. The Onboarding Team’s primary metric was customer satisfaction. For this metric, they measured the percent of users that were very satisfied with various parts of the onboarding experience. We made the leap of faith that if customers were more satisfied with the experience, they would trust us with more of their money (which our data science team later proved to be true). That ensured we took a very customer-centric approach to innovating on the onboarding experience.
Secondly, we created a Financial Planning team to build out a whole new suite of products, so that our clients could get more value out of Wealthfront beyond just investment management (the company began with this offering). Finally, we had a Financial Services Team that would build the next generation of investing and banking products, so that our clients could get access to financial products typically reserved for the rich.
Step 4: Once we had those teams in place with a clear charter for creating new innovative products (as opposed to simply optimizing the products we already provided), we put the rest of the company org in place.
And within the product organizations, we could then provide guidance on the proportion of each roadmap spent on creating new feature innovations versus optimizing for growth with the existing feature set. For example, one might ask each product team to construct roadmaps that are 70% focused on building new value for the customer and 30% focused on testing and optimizing the key business metrics related to their product line. With this approach to org design, a startup can be very explicit about its allocation towards growth through both optimization and innovation.
Another version of striking a balance between optimization and innovation is as follows: In this case there are 3 innovation-focused product teams (in blue) and 1 product team (the growth team in green) that is focused exclusively on optimizing the existing features and experiences in order to improve the business metrics. This would lend itself to a split of 75% innovation and 25% optimization.
As noted earlier, companies need to pick their balance of “stocks and bonds”— i.e. their mix of optimization and innovation. However, they shouldn’t pick their mix once and set it for perpetuity. The mix should change over time depending on the circumstances of the business.
For example, if your company launched a new product line a few months ago and is experiencing exponential organic adoption, then the product clearly has product-market fit within your customer base. It may make sense for that product team to then spend 3-6 months optimizing the existing features within that product line to maximize for adoption via some low hanging fruit experiments. This is especially true for network effects businesses since optimizing the drivers of the network effects can produce massive results. That was the case at Facebook where we spent a lot of time optimizing for sign up rate, new user onboarding, and getting people to add friends. By doing so we meaningfully accelerated the growth of the company due to it being a network effects business.
Conversely — and this is the more common scenario I’ve seen at early-stage startups — topline growth has stagnated as a result of not having shipped anything new and innovative in the last 1-2 years. That’s often the case since most businesses do not have a network effect and must therefore grow through new product innovation. The following example comes from my time at Wealthfront. At one point three out of four product teams were set up to focus mostly on new product innovation (Onboarding, Financial Planning, Financial Services) and one team was set up exclusively for optimization (Growth). Within the Onboarding, Financial Planning, and Financial Services roadmaps, the teams then had an explicit balance of how much of their effort was dedicated to building new innovative features versus optimizing the existing products.
In subsequent quarters, the mix would change based on new insights or overall changes to the business. The key point is to remain flexible and use this simple mental model of “stocks and bonds” to regularly communicate and decide the appropriate mix of optimization and innovation across the company and within each product team’s roadmap.
If you want to take a stab at designing your own org chart using a similar process, go ahead and copy this free template that I made available and create a version of your own. It provides guidelines for laying out your org chart, listing what you must accomplish for your customers in order to realize your mission and vision. It’s also a place for you to balance optimization and innovation within each roadmap, as well as list the customer success metrics for each innovation team.
Assuming you’ve determined the right balance of optimization and innovation from the above sections, we can now take a closer look at how to manage an optimization roadmap and pick the “right” experiments to run.
Like any good product team, you should begin with a roadmap. The roadmap should be organized in priority order, with priority determined by estimated impact and level of effort. For example, if you estimate that a certain set of tests can produce a large increase (a double-digit gain) in the metrics for a relatively small amount of effort (a few weeks or less of engineering and design support), then it’s likely a high-priority experiment. I’ve also created a template for creating your own experimentation roadmap, which you’re welcome to make a copy of and run with.
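To make that prioritization concrete, here’s a minimal sketch of ranking experiments by estimated impact per week of effort. The experiment names and numbers are hypothetical, not taken from the template:

```python
# Hypothetical scoring sketch: rank experiments by estimated metric lift
# per week of engineering/design effort. All entries are illustrative.
experiments = [
    {"name": "Rewrite homepage CTA",     "est_lift_pct": 3,  "effort_weeks": 1},
    {"name": "Redesign onboarding flow", "est_lift_pct": 15, "effort_weeks": 4},
    {"name": "Add social-proof section", "est_lift_pct": 8,  "effort_weeks": 2},
]

# Highest impact-per-effort score first.
for e in sorted(experiments,
                key=lambda e: e["est_lift_pct"] / e["effort_weeks"],
                reverse=True):
    score = e["est_lift_pct"] / e["effort_weeks"]
    print(f'{e["name"]}: score {score:.1f}')
```

A real roadmap will weigh more than two inputs (confidence, dependencies, sample size), but a simple ratio like this is enough to keep the queue honest.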
The roadmap has two segments to it: The first segment allows for estimating the impact of various experiments so that you can rank them in priority order. The second segment is intended to capture the results from the experiment. It’s essential to maintain a history of all experiment results so the team can conduct post mortems in order to refine their experiment selection and design.
Generally speaking, I recommend that optimization teams— such as a growth team—operate in 6-8 week sprints focused on improving one metric at a time. A common mistake I see is a small growth team trying to optimize multiple metrics in parallel. This lack of focus normally leads to subpar results. In contrast, significant results can be produced when the full weight of a growth team is poured into a single metric for at least a few months. The team will find that they improve their pattern recognition through focused effort, leading to better test results as time goes on. As an example, during my time at Quora, our growth team spent 16 months optimizing solely for sign up rate. During that time frame we increased the sign up rate from SEO traffic from 0.1% to north of 4%. Once we reached the bottom of the barrel on that particular metric, we moved onto the next metric and repeated the process. To encourage this type of focus, I broke the experimentation roadmap template into multiple tabs where each tab maps to a roadmap for a specific growth metric — e.g. churn vs. reactivation vs. signups and so on.
Picking the right experiment to run is part art, part science. By art I mean using judgment to craft a user experience worth testing. By science I’m referring to the practical constraints of testing new experiments on a relatively small population (i.e. sample size, in statistics speak) when you’re still an early-stage startup.
I often see startups try to run A/B tests in the same way that large companies like Google and Facebook do. They create a list of A/B test ideas that require a fairly limited level of effort and then start shipping dozens of small-change tests fairly quickly. A classic example would be changing the call-to-action on a landing page, such as the homepage, and perhaps testing the location of the call-to-action as well. The problem with this sort of test is that a startup often has a much smaller sample size (because it has less traffic or fewer users of the product), so running and resolving that A/B test at high statistical confidence takes much, much longer than running a similar test on a high-traffic product like Facebook. The relationship between experiment thoughtfulness and sample size is captured in the below diagram.
Here’s how to interpret it: Companies with a large sample size (a lot of traffic) don’t have to be as thoughtful with experiment selection and design. The reason is that the large company can make relatively small changes to the product, set up an A/B test to measure the effect, and then resolve the experiment in a matter of days at high statistical confidence because they have a wealth of data to lean on. On the other hand, a small startup with very little traffic (small sample size) needs to be much more thoughtful about experiment selection and design because an A/B test on a small sample size that produces a small change relative to the control will take weeks or months to harvest enough data to reach a statistically significant conclusion. I’ll demonstrate this effect in the below table.
Let’s imagine we have three different startups (A, B, and C — below). Each is going to run an A/B test on its homepage where the base conversion rate is 10% and the relative increase in conversion rate they are aiming for is 5%, leading to a new conversion rate of 10.5%. However, each startup has a different volume of daily traffic. Startup A receives 100 visits per day to the homepage, B receives 1,000 visits per day, and C receives 10,000 visits per day. Using the A/B testing calculator from AB Tasty to calculate the necessary test duration, we get the following results.
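If you’d rather compute this than use an online calculator, here’s a rough sketch of the underlying math — a standard normal-approximation sample-size formula for a two-proportion test at 80% power and 5% two-sided significance. It should land in the same ballpark as tools like AB Tasty, though calculators differ in their exact assumptions:

```python
import math

def sample_size_per_variant(p_base, relative_lift):
    """Approximate visitors needed per variant for a two-sided
    two-proportion z-test (normal approximation)."""
    p_test = p_base * (1 + relative_lift)
    z_alpha = 1.96    # two-sided significance level of 0.05
    z_beta = 0.8416   # statistical power of 0.80
    p_bar = (p_base + p_test) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_test * (1 - p_test))) ** 2
    return math.ceil(numerator / (p_test - p_base) ** 2)

# The three startups: 10% base rate, aiming for a 5% relative lift.
n = sample_size_per_variant(0.10, 0.05)
for name, daily_visits in [("A", 100), ("B", 1_000), ("C", 10_000)]:
    # Traffic is split 50/50 between control and variant.
    days = math.ceil(n / (daily_visits / 2))
    print(f"Startup {name}: needs ~{n:,} visitors per variant -> ~{days:,} days")
```

For Startup A the requirement works out to roughly 58,000 visitors per variant — years of traffic at 100 visits per day — while Startup C can resolve the very same test in under two weeks.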
You can see from the data that the test duration declines significantly as a result of having more samples (i.e. traffic) in the test funnel. Now, let’s take a look at what happens when you tweak the magnitude of the relative experiment effect. In other words, when you run a test that produces a small, medium, or large change to the baseline conversion rate.
By increasing the magnitude of the relative experiment effect, the test duration declines precipitously. The key takeaway here is to aim for large changes. That seems like an obvious observation, yet I see many startups testing relatively minor changes to their product in the hopes it will produce a double digit increase in the target metric.
Finally, let’s look at what happens if we manipulate the base conversion rate. By base conversion rate I’m referring to the starting conversion rate. For example, if you have 100 visitors/day to your homepage and 1 user signs up, and you’re running an A/B test on the homepage, then you have a base conversion rate of 1%. If instead you run an A/B test midway through the sign up flow where there are 10 visitors per day, and 1 visitor manages to sign up at the end of the flow, then you have a 10% base conversion rate. What you’ll notice in the below scenario is that test duration decreases as a result of having a higher base conversion rate. Practically speaking, that means you’re more likely to reach statistical significance quicker if you A/B test in the bottom half of a funnel versus the top half since the bottom half has a higher base conversion rate.
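Both effects — bigger relative lifts and higher base conversion rates shortening the test — can be sketched with the same style of normal-approximation formula. This is an illustrative calculation, not the exact method any particular calculator uses:

```python
import math

def required_n(p_base, relative_lift, z_alpha=1.96, z_beta=0.8416):
    # Per-variant sample size, normal-approximation two-proportion test
    # (defaults: 5% two-sided significance, 80% power).
    p_test = p_base * (1 + relative_lift)
    p_bar = (p_base + p_test) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p_base * (1 - p_base)
                                + p_test * (1 - p_test))) ** 2
    return math.ceil(num / (p_test - p_base) ** 2)

# Bigger relative lifts resolve far faster at the same 10% base rate...
for lift in (0.05, 0.20, 0.50):
    print(f"10% base, {lift:.0%} lift -> {required_n(0.10, lift):,} per variant")

# ...and a higher base rate (deeper in the funnel) also shrinks the test.
for base in (0.01, 0.10, 0.30):
    print(f"{base:.0%} base, 5% lift -> {required_n(base, 0.05):,} per variant")
```

Running this shows the required sample shrinking by orders of magnitude as either the lift or the base rate grows — which is the quantitative case for testing big changes low in the funnel.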
To recap, there are a few key lessons to take away from the above scenarios:
It’s essential that anyone working on an experimentation team or roadmap understands the above statistical concepts. If so, they are less likely to stack their roadmap with poorly chosen A/B tests that will take too long to run and produce results too small to change the trajectory of the company.
Modern software companies follow a variety of common conventions to scale quickly and efficiently. For example, most software companies have a defined and documented approach for engineers when it comes to writing, reviewing, editing, and deploying new code. It’s important to settle on some standards and procedures for software development because it means a company can write code quicker, reduce mistakes that are inherent in writing code, and provide a better working environment for software developers. The end result is more and better products delivered to the customer, which in turn is good for the business.
However, standardization of a product development process is uncommon within startups. Most companies lack a clear procedure for taking an idea and turning it into a high quality, shippable product. What typically happens is product teams form and are left on their own to figure out how they want to drive new product development. For example, who is responsible for conducting customer research, when, and how should it be conducted? How does a team come up with an initial prototype for a new product? How do you iterate on it over time? In what ways can you maintain clear internal communication with key stakeholders as the product is being built? When and how do you come up with the go-to-market plan for the product? A well-designed product development process will have an answer for each of these questions and will help you ship more and better products to your customers. Without such standards, each product team will build products through different methods, leading to inconsistent product delivery timelines and inconsistent product quality. The last thing a startup needs is more unpredictability.
I created the following content to prevent unnecessary churn when trying to create new innovative products. It describes a product development process I’ve refined over the years and use on a day-to-day basis when building compelling products customers love. The process is described in a way that will make it clear and easy to implement within your company. It is specifically designed for building large customer-facing features where “large” is defined as a product that requires 1 month or more of engineering time to complete.
First, it’s useful to point out the ways in which product development is typically broken or inefficient at young technology companies. Here are the common issues that I tend to see at startups:
The below process has been designed to explicitly solve or greatly mitigate each of the above issues when developing new products.
In addition to solving common product development pitfalls, this method of developing products is rooted in a set of guiding principles which further prevents the above issues and gives product teams a common language to use when describing how they build product:
First, I’ll describe the process. Following the description is a visual concept. The product development process follows these steps:
This is a conceptual diagram for the product development process from start to finish. It’s very useful for project leads (especially the product manager) to have this process memorized, so that they always know what should be coming next in the development process. If run well, it should only take 2-3 weeks to finish customer research, run the design sprint, and hold a kickoff meeting. Keep in mind that this is for new, innovative products/features, so getting to the point of alignment on a medium-fidelity prototype is impressive in such a short timeframe. From there, development starts to move quickly until the product is ready to launch.
Here’s the full list of templates that you can use in conjunction with the process laid out above. This will allow you to incorporate some or all aspects of this process into your own team or company.
Thanks to an abundance of data storage, analysis, and visualization tools, startups today have the ability to make rapid improvements to nearly every aspect of their business. However, this overabundance has led to a significant bias in that startups now lean on structured data too much. So much so, in fact, that some of the fundamentals of building innovative products, such as rigorous customer development, have fallen by the wayside. One of the byproducts of this data obsession is that many startups try to optimize their way towards success through relentless A/B testing. This typically pulls them further away from essential insights and truths that they might discover, if they spent less time analyzing structured data from a database and more time collating the unstructured data that can be discovered when talking to customers.
The good news is that data over-reliance can be easily corrected with a shift in mindset and some of the tools and guides I provided in this four-part series. In terms of next steps, I hope you take a few key actions from here. First, move forward with designing a company-wide org chart that creates an explicit balance between optimization efforts and innovation efforts. Second, make wise decisions about the types of experiments to run and avoid tests that will never meaningfully improve your business. And finally, adopt some version of the repeatable product development process I shared, so that you can innovate much more effectively for the betterment of your customers and your business.
Successful online communities seem to grow perpetually through organic growth, which is the holy grail in startup land. But how does one create such a platform driven by perpetual organic growth? This playbook breaks down the six ingredients of the content flywheels that drive organic growth, how they work, and what you can do about them in pursuit of building your own startup fueled by a flywheel.
Each year a batch of entrepreneurs set out to build the next great online community. Some attempt to build large horizontal platforms where users engage on topics ranging from immunotherapy to the Boston Celtics. Reddit would fit that description. Others seek to create vertical communities tailored to a particular subject and audience, such as Wheelwell for car enthusiasts.
There are many reasons to be on the prowl for the next great online community, either as an investor or operator. A leading reason is that the winners tend to be massive. Another reason is that successful online communities seem to grow perpetually through organic growth, which is the holy grail in startup land.
The unstoppable momentum of organic growth, driven by users creating new content on the platform, is what is often referred to as the “flywheel”. In technical terms, a flywheel is a device that stores energy. The more it’s revved up, the more energy it stores and the longer it can spin unaided. It sounds like magic, but there’s a simple explanation for it—at least as simple as physics goes.
What a flywheel does is it converts kinetic energy into potential energy. Kinetic energy is the energy that an object possesses due to its movement. Potential energy is the energy stored by the object due to its position. Archery provides a basic example. When you pull the bowstring back, you can say that the arrow has potential energy. And when it is released the arrow has kinetic energy.
Another key point about flywheels is that the bigger it is and the faster it spins, the more energy it stores and the longer it takes to slow down. Online communities that have established a “content flywheel” behave similarly.
Let’s take Reddit as an example. The stockpile of registered users is Reddit’s version of potential energy. When those users create content and the content is discovered in Google, shared via social media, or distributed online through other means, then the “arrow has been shot” so-to-speak. In this analogy, the user-generated content is kinetic energy. When new content is created it fetches new traffic and users into the platform, increasing the size of the flywheel and accelerating its rotational energy. It becomes self-propagating. And once that kicks in, good luck stopping it.
To put the power of a content flywheel in perspective, Reddit recently claimed 430M monthly active users. It’s 15 years old and still spreading its wings.
But how does one create such a platform driven by perpetual organic growth? Clearly, it can’t all be distilled down into a simple formula and bottled up in cans to be sold next to ketchup and mustard. It’s no commodity. There is no “secret sauce” that only Italian grandmothers and a few exceptional founders figure out. However, I do believe some of the ingredients are knowable and repeatable. This playbook will describe what those ingredients are, how they work, and what you can do about them in pursuit of building your own startup fueled by a flywheel.
I believe there are seven primary ingredients when it comes to building flywheels in software. Six of them are knowable and I’ll go into detail on each below. One ingredient is something only you, the founder, can figure out. It’s the “secret sauce” that makes your community stand out relative to the rest and is your unique innovation.
Here’s the full set of seven ingredients:
Let’s jump into each, how they work, and what you can do about them.
Let’s assume at this point that you, the founder, have already decided on the type of content community you want to build. It could be for scientists, sports enthusiasts, Chief Information Officers, or be the world’s next horizontal platform to compete with Reddit, Youtube, and so on. It doesn’t matter which option you’ve selected. What matters is you’ve vowed to create a dent in the content universe.
You begin toiling away in your preferred design tool with product prototypes, first starting with low-resolution concepts. After a bit of user testing, you’ve identified a variety of UX snafus, shuffled around the deck chairs a bit, and arrive at a prototype that’s ready for development.
The designs turn into an alpha. You test it with more users. The alpha becomes a beta. You test it with more users. Finally, you’re ready to launch it. You turn on the TV and play the iconic scene from Field of Dreams where the spirit of Shoeless Joe Jackson whispers, “If you build it, he will come.” And like ghosts emerging from a cornfield, users show up and engage with each other like long lost friends. Hours and hours of lively conversations are created and your community is flush with chatter.
Except, that’s not what happens. Conversations don’t spontaneously ignite and engagement is at a whisper. You’ve built it, but no one has come.
This is where your journey to creating the flywheel begins. Kevin Costner’s journey began with designing a field, but yours begins with designing your flywheel and hand-picking your early adopters.
Don’t get fancy. Start with 2,000-year-old and 500-year-old technology: paper and pencil. You don’t need modern software to design your first flywheel, so shut your laptop.
I believe there are four atomic units of a 1.0 flywheel:
Those elements represent the common building blocks of a content flywheel.
It begins with a visitor signing up to use the product. After a user has signed up, the user will gain access to the stockpile of content that exists in the application, which they may start consuming. Note that the stockpile of content won’t exist at first. I’ll get to that in the section about solving the cold start problem.
After consuming enough content, some users evolve into creators of content. The content the user creates leads to new traffic headed your way. A common example would be content indexed in a search engine or shared on social media, which fetches more traffic back to your community. Lastly, some of the newly harvested traffic will convert into newly acquired users that sign up to be a part of the community.
With a sketch similar to this, you can begin with a simple conceptual understanding of the content flywheel for your application. Assuming your product has launched and has at least a few hundred users, you can then measure the baseline conversion rates (CVR) at each step in the flywheel.
In the above example, the conversion rate (CVR) for the initial signup is 1.5%. Of the users that sign up, 20% of them go on to consume content in the application and 5% of consumers go on to become creators of content. The new content that is created then leads to new traffic generated for the application.
In this example, I chose the metric of visits per piece of content per month. A practical example would be a question answered on Quora or a thread posted on Reddit. In this case, each question on Quora or thread on Reddit would receive an average of 2 visits per month. Finally, the new traffic generated from the content created by new users leads to brand new users signing up for the product at a rate of 0.2%, which is a fairly common conversion rate for long-tail traffic coming from SEO.
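Putting those example rates together, one turn of the 1.0 flywheel can be sketched as a simple chain of multiplications. The assumption that each creator produces one piece of content per month is mine, added for illustration:

```python
# One turn of the 1.0 flywheel, using the example rates above.
# Assumes (for illustration) each creator produces one piece of content/month.
signup_cvr   = 0.015   # visitor  -> signup
consume_rate = 0.20    # signup   -> content consumer
create_rate  = 0.05    # consumer -> content creator
visits_per_content_per_month = 2
seo_signup_cvr = 0.002 # long-tail SEO visit -> signup

def one_turn(visitors):
    signups   = visitors * signup_cvr
    consumers = signups * consume_rate
    creators  = consumers * create_rate
    new_traffic = creators * visits_per_content_per_month
    new_signups = new_traffic * seo_signup_cvr
    return signups, creators, new_traffic, new_signups

signups, creators, traffic, new_signups = one_turn(1_000_000)
print(f"{signups:,.0f} signups -> {creators:,.0f} creators "
      f"-> {traffic:,.0f} new visits/month -> {new_signups:,.1f} flywheel signups")
```

Note how weak the loop is at the start: a million visitors yields roughly 150 creators and less than one flywheel-driven signup per month, which is exactly why the consumption and creation flywheels described below matter so much.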
The conversion rate to signup from this traffic is typically lower than traffic that goes directly to an application’s homepage, such as someone opting to go directly to Reddit.com and sign up. Visitors navigating directly to an app’s homepage have relatively high intent, likely because someone told them about the product, which is why the conversion rate is highest for direct homepage traffic.
And just like that, you have your first content flywheel designed and instrumented with empirical metrics. But your homework isn’t done yet. You’ve only completed the first of three assignments. And this one was the easiest since most online communities have a nearly identical 1.0 flywheel. In fact, you can just copy this flywheel and you’re off to a good start.
Now that you’ve warmed up a bit with the 1.0 flywheel, it’s time to tackle the next obstacle, which is to craft another flywheel that’s specific to the consumption piece of the community experience.
Users of your community won’t Kobayashi content without some provocation. You’ll most likely have to coax them into consuming a lot of content in your community through clever product design, a buffet of high-quality content, and other mechanisms that you, the founder, must figure out.
Online communities that have low flywheel momentum are those that begin with very poor information discovery. Take a look at eBay’s community product. I’m not sure it’s changed much over the last 10 years. It looks much like the original forums and message boards dating back to the golden age of Yahoo Groups.
You’ll notice that a modern artifact of today’s online communities is missing — a newsfeed. Every major online community or social network is now oriented around a feed. Why? Because it gets people to engage as consumers of information at a rate an order of magnitude greater than the alternatives. It makes us all Kobayashis of content. The momentum in eBay’s flywheel is minuscule compared to the momentum in TikTok’s flywheel because TikTok has a much more enticing consumption flywheel due to a few brilliant product design choices, such as allowing you to consume a feed of highly entertaining videos prior to registering for the product.
Yet there are nuanced decisions that need to be made as to how your product will drive a consumption loop where users come back to consume over and over again. It’s time to pick up your paper and pencil again.
Just like there are a few “atomic units” that make up a 1.0 flywheel, I believe there are a few building blocks to a consumption flywheel as well, which include:
In the below example, I’ve diagramed what a consumption loop might look like for a product like Quora. A newsfeed and weekly digest emails are the primary consumption drivers. The user is then given a selection of product verbs as the core content interaction paradigm, such as upvote, downvote, comment, or share. That data is used to enhance personalization back into the newsfeed, weekly digest emails, and other one-off email notifications.
What’s important is that you map out what the consumption flywheel might look like for your product and that you ponder the following questions while designing it.
Assuming you’ve done a quality job at designing and implementing the consumption flywheel, user engagement should increase. That may reveal itself in an uptick in weekly active users (WAUs) or daily active users (DAUs).
To go back to the flywheel physics, the potential energy within your community increases as a byproduct of enhancing the consumption loop. And with higher potential energy comes another wonderful side effect: high-frequency consumers become content creators. Don’t put the pencil and paper down yet as that’s the next flywheel to design.
The third flywheel is the most important. A thriving online community can’t be built without a healthy consumption flywheel. However, a stellar consumption flywheel can’t be built without a high rate of quality content being created. That’s why it’s the most important flywheel—yet, it is also the most difficult to create as it requires more secret sauce (i.e. innovative thinking) than a consumption flywheel.
Similar to the other flywheels, I believe that this flywheel has a few common building blocks worth understanding.
The below diagram captures what this content creation flywheel might look like. Just as you would with a consumption flywheel, you have to take a step back and ask yourself a few key questions when designing a creation flywheel:
Once the creation flywheel kicks in, you can expect to have momentum as potential energy (consuming users) is converted into kinetic energy (creating users). As lots of new content is generated, acquisition channels accelerate, such as SEO, social sharing, and so on.
With the flywheel designs in place, you can instrument each step in the flywheel to understand where your flywheel might not be performing. In the example below, I may label certain parts of the flywheel with conversion metrics to benchmark how it’s performing. This approach would allow me to diagnose where I perceive there to be weaknesses in my flywheel(s) and come up with a plan of attack for improving each sequence.
The green items would indicate which rates I feel are performing well, whereas the yellow and red items likely require some attention and could be slowing the entire system down.
In the above example, the product has only a 13% open rate for the digest email. I should revisit the content I’m putting in that email and the frequency at which I’m sending it. Something is clearly wrong since that’s a very low open rate. Consequently, the digest email isn’t contributing meaningfully to the consumption flywheel, so I may need to find alternative ways to drive consumption.
I would also note the very low conversion rate to becoming a content creator. If only 3% of users that read content also create content, there must be something catastrophically wrong with the user experience or the core product value. Or, maybe that’s okay? YouTube is powered mostly by super-creators. They don’t have a high proportion of users that create videos — most users are consumers. But if I’m Reddit and only 3% of users comment on threads or create new threads, that could be cause for concern. That low of a rate may lead me to believe that most new threads are starting off with a low-quality prompt.
Similarly, I would be concerned with the low rate of visits per piece of content per month. Maybe I haven’t optimized for SEO or social sharing? Maybe I have a huge long-tail of content that isn’t interesting enough to warrant any traffic? That’s certainly the case with Yahoo Answers.
Now that you have the flywheels designed and metrics implemented, you’ll want to convert this into a basic model that captures how your product grows. Translate each conversion rate into a variable in a growth equation. Here’s a very simple example based on the above flywheel and one that we tinkered with at Quora in 2011:
To keep it simple for now, let’s use three variables in the flywheel:
Work with your local friendly data scientist, and they’ll produce a growth equation for you. Here’s a basic example based on the flywheel model:
From a model like this, you can project a rate of growth. It may look something like the graph below, which projects the weekly growth rate of total users:
What’s great about using this flywheel design and measuring approach is that you can “pull levers” in the model and find where the model is most sensitive in the long run or at a given point in time.
For example, if you were to increase the average number of visits per month per piece of content from 2 to 2.5 via SEO improvements, you can project the impact on overall growth. And if you modeled that effect against increasing the conversion rate to signup from 0.2% to 1%, you may find that one lever implies a greater net effect on growth than the other. Or that optimization in one part of the flywheel may create a larger near-term bump, but have a smaller long-term effect.
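As a sketch of that lever-pulling exercise, here’s a toy weekly recurrence. The structure and all constants are illustrative assumptions of mine, not the actual Quora model:

```python
def project_users(weeks, users0, content_per_user_per_week,
                  visits_per_content_per_week, signup_cvr,
                  external_visits_per_week):
    # Toy recurrence: each week, existing users create content, content
    # fetches visits, and a fraction of all visits converts to new users.
    users = users0
    history = [users]
    for _ in range(weeks):
        content_visits = (users * content_per_user_per_week
                          * visits_per_content_per_week)
        users += (content_visits + external_visits_per_week) * signup_cvr
        history.append(users)
    return history

# Baseline: 2 visits/content/month (~0.5/week), 0.2% signup CVR.
base         = project_users(52, 1_000, 0.5, 2.0 / 4, 0.002, 10_000)
# Lever 1: SEO improvements lift visits per content from 2 to 2.5/month.
lever_visits = project_users(52, 1_000, 0.5, 2.5 / 4, 0.002, 10_000)
# Lever 2: signup CVR improves from 0.2% to 1%.
lever_cvr    = project_users(52, 1_000, 0.5, 2.0 / 4, 0.01, 10_000)

print(f"after 1 year: base {base[-1]:,.0f}, visits lever "
      f"{lever_visits[-1]:,.0f}, CVR lever {lever_cvr[-1]:,.0f}")
```

In this toy version the signup-CVR lever dominates by a wide margin; your own model will have its own sensitivities, which is precisely the point of building one.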
That’s how you go about designing a content flywheel, instrumenting measurement, and developing crude growth models to understand what the drivers in your flywheel may be. It’s not a perfect science, but it isn’t meant to be. However, it is a very effective approach to systematically architecting and manipulating your growth flywheel to give your online community the best chance to thrive.
Next, we’ll dive into the classic chicken-and-egg problem that online communities face: how do you get people to sign up for — and engage in — your community when it currently has little-to-no users and engagement? A flywheel doesn’t start on its own. It needs an initial thrust, which is what the next section is all about.
The hardest stage of a community and content-driven application is day one. How do you compel people to create content within the community when very few users and very little content exists?
It has to start with what I call the “white-hot coal” approach to establishing early adoption. You don't want to "get rich quick" with 1 million users because you pulled strings at TechCrunch and hacked together a waiting list, etc. Instead, aspire to the "white-hot coal" launch.
A big top-of-funnel doesn't mean you have product-market fit and people love your online community. You create your own false narrative with the big bang launch. Think Viddy with their Facebook open graph integration or Jelly when it launched to compete with Quora. Both were big bangs, but didn't have product-market fit. The "white-hot coal" approach advises the opposite: intentionally constrain growth until PMF is clear and the only thing holding it back is more oxygen, i.e. public launch. Quora and Instagram spent 1-2 years iterating on the product and hand-selecting the first few thousand users.
Taking this slow-but-steady approach gives you time to understand WHY it works, and for WHOM it works, before opening up adoption. It requires an uncomfortable level of patience. In return, you gain the insights necessary to make quality decisions when scaling it from 1 to N. Marketplaces commonly make this mistake when launching in more cities before they establish repeatable playbooks for supply and demand acquisition. Online communities make this mistake by opening up for broad adoption before understanding the engagement mechanisms and establishing an initial pattern of high-quality, repeated usage.
Once you've established the white-hot coal of a small but deeply engaged customer base, opening it up must also be done at the "right" pace. Too broad or too quick a launch can create a backdraft, i.e. a rapid inflow of oxygen that produces a superheated fire that burns out quickly. The startup equivalent of backdraft is rapid expansion followed by contraction. It's incredibly difficult to know what the "right" growth rate (i.e. "oxygen") is. It's case-dependent, but I do know that all else is futile without that white-hot coal.
Think small at first. Very, very small. In the words of @paulg: "Do things that don't scale" for as long as possible and only consider a big bang launch after the white-hot coal is established. Hand-pick your first users. Know all of them by name and listen to them daily.
To make things more concrete, here are the broad strokes to follow when solving the cold start problem:
Let’s take it from the top.
You won’t find high-quality early adopters for a new content-driven application by running Facebook ads. If your startup is already doing this, stop immediately. Buying early adopters is the path to burning money and learning very little about who the community is ideal for.
Another issue with paying to acquire early adopters is the lack of a personal relationship with them, which means you have no influence over how they use the product. You want your early adopters to be fully bought into your vision for how it should be used and you’ll want to guide them down that path. What you want from early adopters of your community is high-quality participation and you’re more likely to get that with hand-holding.
I commonly meet founders who want to take their product to market with paid marketing. I don’t know where this method came from, but it is disastrous for startups. When you first launch your product, you’re still in hypothesis mode. Do I have the right product and have I built it for the right person? Answering those questions requires proximity to your users. You need to be so close to them that you can tell what kind of deodorant they use. How else can you validate if you’ve built a product that people care about and if you’ve delivered it to the right type of customer? This is Product Market Fit 101. It can’t be done from a distance. It must be intimate.
Establishing early traction starts with manual labor. Get out an Excel spreadsheet. Write down the names of the people you know best and can lean on to be your earliest testers. Or, if you’re building a product for a user type that you don’t have direct access to via your personal network, hop on Reddit, Facebook Groups, and any other niche network you can find and start building relationships with the people who may eventually become your early adopters. This approach has worked for LinkedIn, as well as for WhatsApp. Don’t avoid doing this work simply because it’s tedious and doesn’t scale.
In the early days at Quora, each beta user of the product was a close friend, family member, or former colleague of the original employees. By appealing to them as a close connection, we could provide structured guidance (and subtle pressure) to ask them to act as role models in the app. For example, we did not want Quora to be like Yahoo Answers because the content was terrible. The information shared was very low quality. It was a mile wide and an inch deep, so-to-speak. We wanted Quora to be about finding the most interesting and relevant information that couldn’t be found elsewhere. If we were successful at that, then people would come to Quora because of its unique basket of human knowledge.
To that end, we set a very high content quality bar. If you were an early user of the product, we expected you to contribute unique questions and write thoughtful answers. Did Einstein’s descendants inherit his level of intelligence? Well, you can find a fascinating answer to that. For history buffs, you can find an incredible collection of WWII photos. I even wrote a detailed answer about what it’s like to go to Mount Everest. Nearly all early employees acted as prolific creators on the platform to demonstrate the expected behavior.
Quora was a block of clay and we all had our hands on it to ensure we shaped it in a particular direction. Several of the early employees, friends, family, and colleagues we brought into the alpha and beta versions remain as some of Quora’s all-time best contributors.
And, because of our personal connection with all of the earliest users, they felt a sense of responsibility to use our new application in the way that we were using it ourselves. For Quora, that meant exceptionally high-quality questions and answers based on someone’s experiential knowledge. In other words, we wanted them to write about what they knew best. That also explains why Quora became known in the early days as one of the best repositories of Silicon Valley knowledge—this was intentional.
Jason Lemkin has become a central figure in SaaS venture investing and company building at least in part because of his use of Quora. He has well over 3,000 answers and 45,000,000 views and growing. He was an early adopter and continues to share his expert insights.
Every startup wants to storm Paris. But the question is, what is your Normandy? You have to have a precise and almost comically constrained beachhead of early adopters and early content creators.
For us, it was our Silicon Valley connections who wrote excessively about Silicon Valley insider knowledge. What we did with Silicon Valley content on Quora is the software equivalent to what Tesla did when they came to market with their first car, the Roadster. It was intentionally designed for a small, but exceptionally engaged and enthusiastic audience.
Assuming that you’ve kept your focus narrow and managed to ignite a flame with a paltry, yet passionate base of early adopters, you’ll eventually want to expand adoption. But how should you do it? Honestly, there isn’t a perfect playbook for this. But one option to consider, especially if maintaining content and engagement quality is important (which it commonly is), is to allow your early adopters to invite and onboard other users into the community.
Superhuman has taken an extreme view of this approach and it has worked out incredibly well for them. An alternative is to enable invitations, but with frequency caps. For example, each early adopter can invite a maximum of 3-5 new users. The scarcity forces the user to think through who they believe would be a great addition to the content network. In my early days at Quora, I invited a few of the best company builders I knew because I wanted them to provide answers on topics I was interested in.
A seed-stage startup I invested in is taking this approach right now. Each early adopter will be able to invite a few people to join, but not more than that. It will allow them to grow by 3x - 5x organically based entirely off of invitations. You may ask: why would they intentionally limit their growth? The reason is they want to maintain a high engagement and quality bar, which I’ll discuss in more detail in the next section.
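Mechanically, a frequency-capped invitation system can be very simple. Here is a minimal sketch; the cap of three and the in-memory ledger are illustrative assumptions (a real system would persist state and handle concurrency), not any particular company's implementation.

```python
# Minimal sketch of frequency-capped invitations. The cap of 3 and the
# in-memory store are illustrative assumptions.

INVITE_CAP = 3

class InviteLedger:
    def __init__(self, cap=INVITE_CAP):
        self.cap = cap
        self.sent = {}  # inviter -> list of invitees

    def invite(self, inviter, invitee):
        """Record an invite, refusing once the inviter's cap is reached."""
        sent = self.sent.setdefault(inviter, [])
        if len(sent) >= self.cap:
            return False  # cap reached; scarcity forces careful selection
        sent.append(invitee)
        return True

ledger = InviteLedger()
results = [ledger.invite("alice", guest)
           for guest in ["ada", "grace", "alan", "edsger"]]
print(results)  # fourth invite is refused once the cap of 3 is hit
```

The design point is that the cap lives server-side per inviter, so the scarcity the text describes is enforced by the product rather than left to user discipline.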
Thanks to YouTube, Reddit, Twitter, Facebook and so on, most of us are familiar with the important and complex role that moderation plays in massive online communities. There is no such thing as “perfect” community moderation, so an ideal solution does not exist. But there are guideposts that can be followed when it comes to thinking through the role of moderation within your content-driven application.
The key thing to consider at the outset is why moderation is essential. To put it simply, it’s because people can be jerks and most people don’t want to be around a lot of jerks. Would you want to go to a town hall discussion about an election topic where any citizen could spout off at the mouth at another, saying vile and disruptive things, without recourse? Nope, me neither. In the analog world, we have policies and procedures to enforce decorum. If you shout down the judge, you’re held in contempt of court and carried away. Online communities require policies that enforce a code of conduct as well.
Here’s a user review left in the app store for the anonymous social app Whisper. It only took a few seconds to find it and is a great example of the downsides of an anonymous identity model and how difficult it is to moderate such platforms.
“A friend of mine recommended this app to me and it’s been great and all — minus all the incredibly narrow minded people on it and the men all older than 25 who really only use this app to send pictures of their body to other people. it’s a great way to spread peoples’ thoughts but without an identity. i think it’s a great idea, but really it’s not being used in the way it should...”
This commenter loves the idea of the app, but it falls apart in practice. Without a moderation model that requires people to use their real identity, it’s hard to hold people accountable for their actions. As a result, the quality of participation erodes and most people opt out of anonymous communities because they inevitably turn ugly. It’s the same reason that most major online publications have turned off comments on their articles.
As the creator of a content community, the question you must answer is “What does quality mean for my product, and how do I enforce it?” I can’t give you that answer since it’s unique to each online community. But I can talk about the various levers at your disposal when it comes to stitching moderation into your product from inception to scale.
From my perspective, there are three methods for moderating behavior within a content application:
Let’s talk about each.
An official company policy on moderation is a common approach. The company determines what is and is not okay to say or do based on their worldview and the vision for the company. These moderation policies, meant to enforce some minimum bar of quality participation by its users, are crafted by euphemistically named teams such as the “Trust and Safety” team.
I say it’s euphemistic because they are censorship teams. They determine what you can say and do based on their collective preferences and beliefs. Some of the restrictions are enforced by law—such as child pornography—which is a great thing. But many policies simply reflect the company’s own preferences.
For example, Twitter has a policy that doesn’t allow users to display pornographic or violent content in profile pictures or header images. However, a user can tweet pornographic content. It will be obscured with a “sensitive content” label, which puts control in the hands of the user, who can click a button to reveal the obfuscated material.
This approach is not governed explicitly by state or federal law. It is a moderation preference that reflects Twitter’s worldview and vision for the company. Similarly, if you choose to build an online community, you’ll have to start by designing the moderation policies to censor what can and can’t be said or done on your platform.
At Quora, our quality definition was aligned with the substance of questions and answers. We wanted a community that represented the best of human experiential knowledge. That meant that we were happy to remove questions that were antagonistic towards an individual, such as one user asking another user a very personal or accusatory question. Our policies also meant that we would remove answers with similar characteristics. We did not accept abusive answers such as people using f-bombs or attacking other users of our service. Civility was paramount, so we had company-created policies in place to preserve courtesy.
As your community begins to scale and evolve, you may need to enlist the help of others to help you identify and draft moderation policies to keep up with the changing nature of the community.
Several examples can be referenced. One version of community moderation that enforces quality participation is the Yelp Elite. Another version would be the now-defunct Quora moderators. An often-criticized example would be Wikipedia moderators.
Community moderation is tough. It’s like managing a growing classroom of students that all begin to think that they should be the teacher. Be careful when enlisting community moderation from people that aren’t official representatives of your company. If the community gets large enough and is left unchecked, they may come to believe that the content platform you’ve created is theirs, not yours. Wikipedia preferred this fully decentralized approach. It’s led to broad access to a lot of great content, but a long history of conflict and complaints as well. At one point, the moderators in charge of the Spanish version of the site went rogue in response to Wikipedia considering selling ads on the website. Proceed with caution when creating community moderation programs.
A superior alternative to community moderation is feature-based moderation. This approach is increasingly enabled by advancements in machine learning and produces better outcomes at scale than a team of human moderators.
You use products all the time that rely on UI features plus machine learning to enhance the experience and maintain quality controls. If an answer on Quora receives enough downvotes, the answer will be “collapsed”, i.e. hidden. The ratio of likes to dislikes, along with the volume and velocity of likes, helps YouTube determine which videos to highlight on its home screen versus eject into the ether.
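As a hedged sketch of what such a feature-based rule can look like: the thresholds below are invented for illustration (Quora’s actual collapse logic is not public), but the shape—require enough signal first, then act when negative votes dominate—is the general pattern.

```python
# Toy feature-based moderation rule: collapse an answer when downvotes
# dominate. The min_votes and ratio values are illustrative assumptions.

def should_collapse(upvotes, downvotes, min_votes=10, ratio=0.7):
    """Collapse only once there's enough voting signal and
    downvotes make up at least `ratio` of all votes."""
    total = upvotes + downvotes
    if total < min_votes:
        return False  # too little data for the mechanism to kick in
    return downvotes / total >= ratio

print(should_collapse(2, 5))   # too few votes, leave it visible
print(should_collapse(3, 12))  # 12/15 downvotes: collapse
print(should_collapse(10, 5))  # downvotes don't dominate: visible
```

Note the `min_votes` guard: it encodes the point made below that these mechanisms need a certain amount of data before they can act productively.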
Obviously, it takes a lot of data before these mechanisms can kick in and productively manage the quality of your user experience. When first getting started, you will have to rely mostly on human moderation. Thankfully, machine learning tools are becoming increasingly available, so more startups will have access to these scalable moderation tools than in the past. What this may mean for your content platform is that you can more quickly (or completely) bypass the community moderation step and move towards a machine learning-driven model compared to startups before you. Consider yourself lucky!
As you might have observed by now, building an online community is strenuous. If you want to make it even more difficult than it already is, you’ll try to boil the ocean by encouraging users to create content and engage in a wide range of topics. This is a mistake. As I mentioned above, you can’t try to sack Paris right out of the gates. You must first find your Normandy/your beachhead. That’s not only true with the specific type of early adopter you pursue, but it’s often true with the category of content you want the early adopter to create and engage with.
Many online communities need to grow like Amazon. Pick one product line, make it exceptional, and then use the momentum from that product line to expand into adjacencies. Amazon started with books and eventually moved into jewelry, DVDs, and so on. What content will you have your users focus on at the birth of your community?
At Quora, we started with content that was familiar to us: technology. After we grew to the low tens of thousands of users, we picked the next content verticals to go after. Thankfully, the playbook for expanding into new categories of content closely resembles the playbook for solving the cold start problem. It entails hand-picking your early adopters, building a personal relationship with them that allows you to exert pressure on them such that they engage in your community in a productive way, and then giving them distribution to encourage them to create more great content and help build out the new category of content.
The most common mistake I see made when attempting vertical expansion is skipping the part where you hand-select early adopters. Not all categories will require this approach. A bit of good luck and serendipity sometimes drops vertical expansion on your doorstep. That’s especially true once the platform is large and established.
How is it that YouTube’s content library continues to expand into an increasingly large long-tail of content categories? Well, it helps that YouTube is a household name and that it offers the potential for enormous distribution to any new creator that shows up with something special. For example, there are tens of millions of views for standup reaction videos. That’s right. People post videos of themselves reacting to standup comedians. YouTube certainly didn’t have this as part of their planned vertical expansion.
These anomalous behaviors happen without YouTube’s orchestration. But you’re not YouTube, so you can’t rely purely on serendipity. In the early days of an online community, vertical expansion may need to be driven through the good ol’ process of handpicking early adopters, rolling up your sleeves, and nurturing the first creators for a new content category until it’s clear that the seeds have started to sprout. That’s what we did in the early days of Quora and that’s what I see fascinating new startups like Golden attempting to do within various technical fields.
Creating an online community that thrives is brutally difficult. But, when you get it right, it’s a juggernaut.
Doing so requires the artful construction of a core flywheel, a consumption flywheel, and a creation flywheel. It also requires masterful selection and execution on an early-adopter effort to crack the chicken-and-egg problem, not only for the initial beachhead, but also for subsequent content categories you may want to expand into.
Along the way, you can’t sacrifice user experience. Content and engagement quality has to be maintained despite the community growing. It’s like stuffing your thumb into the dam wall only to find that each hole you plug reveals a new crack in the foundation.
To top it all off, you have to figure out what your “hook” is going to be. What is it that people will come to your community for that they can’t find at others? Is it innovations like AMAs (Ask Me Anything) that platforms like Reddit and Quora helped popularize? Is it an incredible database of content that you can’t find elsewhere? If so, how do you compel people to share with you what they haven’t shared with others? That requires innovation as well. In the early days of Quora, several employees built relationships with inmates at San Quentin Prison to give them a voice and megaphone. Beautiful prose came out of the cellblock and made its way onto Quora’s pages.
The above strategies are fruitless without a hook. That’s why most online communities never take flight. They simply don’t offer a 10x better experience relative to the alternatives. Of all of the questions I outlined in this essay, this question remains the most important: What unique value am I going to provide that other communities do not?
You must begin with a strong hook. Then you can follow the playbook outlined in this essay to help it grow. For inspiration, here are examples of product hooks that helped establish some of the world’s most successful online communities.
Without a hook, you won’t have a carrot you can dangle in front of early users to entice them away from a myriad of other online communities that they have at their disposal. This is the “secret sauce” that only you, the founder, can be responsible for.