April 22, 2024

Perplexity's product-market fit journey

Sandhya Hegde
Editor's note: 

Perplexity is an AI-powered search engine that answers user questions. Founded in 2022 and valued at over $1B, Perplexity recently crossed 10M monthly active users and is growing fast.

In this episode, Sandhya Hegde chats with Denis Yarats, co-founder and CTO of Perplexity.

Be sure to check out more Startup Field Guide Podcast episodes on Spotify, Apple, and YouTube. Hosted by Unusual Ventures General Partner Sandhya Hegde (former EVP at Amplitude), the SFG podcast uncovers how the top unicorn founders of today really found product-market fit.


Episode transcript

Sandhya Hegde
Welcome to the Startup Field Guide, where we learn from successful founders of unicorn startups how their companies truly found product-market fit. I'm your host, Sandhya Hegde, and today we'll be diving into the story of Perplexity. Perplexity is an AI-powered search engine that answers user questions. Founded in 2022 and recently valued, reportedly, at over a billion dollars, Perplexity recently crossed 10 million monthly active users and is growing very fast. Joining us today is Denis Yarats, the CTO and co-founder of Perplexity. So, Denis, you were a research scientist at Facebook a couple years ago. How did you meet Aravind, and how did the rest of your founding team come together?

Denis Yarats

Yeah, so there's a very interesting story. While I was a research scientist at Facebook AI Research, I was working on something called reinforcement learning, mostly for robotics, but that's one of the essential pieces of ChatGPT. And so Aravind and I happened to work on very similar problems. And one day, it was around the middle of 2020 when COVID was just going on, we independently published the same paper, the same research results. From that point, we started talking and collaborating. I spent some time at Berkeley working with him and his advisor, and after that we maintained our relationship. He went on to OpenAI, and in early 2022 he was about to graduate and looking at other opportunities. We'd been talking, and it was becoming clear that GPT was getting stronger and stronger, and that there was going to be an opportunity to create a company. So around June 2022, he left OpenAI, I left Meta, and we decided to do something.

Sandhya Hegde

Such a great story, especially because, instead of being mad at each other for publishing the same work (I think you mentioned you were two days before him, right?), you became friends instead. That is such a sweet story. And what about the rest of your team? Once you decided to get started, how did you think about, okay, who are the other co-founders you need to add to your team and why?

Denis Yarats

Yeah. Essentially, we were both research scientists, more like AI people. And we definitely knew from the beginning that we needed somebody who is very strong on product and, in general, engineering. And it so happened that one of my friends and former colleagues, Johnny Ho, whom I worked with at Quora back in 2013, also became available around that time. He was the smartest person I knew. He was, for example, an IOI world champion as a high schooler, so number one in the world, which is not easy to do. And so the three of us started working together, trying different prototypes. Once we got Johnny, I was very confident we could do something interesting.

Sandhya Hegde

What was the very original idea, like idea number one that you started working on in August 2022?

Denis Yarats

Yeah, so actually we started maybe around July. The idea was simple. We wanted to do search, but we couldn't do it because we wouldn't get funding. So we decided to do something simpler: text-to-SQL. Essentially, at that time there was a pretty decent model from OpenAI called DaVinci-2. So we decided to build a tool that could translate natural language into a SQL query and then execute it on a database. One of the first things we wanted to tackle was creating an interesting database of public data, and one of our first interests was Twitter. Back then it was much easier to do because there was an API. So we scraped, or I guess downloaded, a lot of Twitter data, stored it and organized it in a database, and started to create a natural language interface around that data.
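
To make the flow concrete, here is a minimal sketch of the kind of text-to-SQL pipeline described here, using a question like the follower query Denis mentions next. The table schema, the prompt, and the stand-in `generate_sql` function are illustrative assumptions, not Perplexity's actual implementation; in the real tool the prompt would be sent to a DaVinci-style completion model.

```python
import sqlite3

# Illustrative schema for the scraped Twitter data (an assumption, not the real schema).
SCHEMA = """
CREATE TABLE users (id INTEGER PRIMARY KEY, handle TEXT, follower_count INTEGER);
CREATE TABLE follows (follower_id INTEGER, followee_id INTEGER);
"""

PROMPT_TEMPLATE = """You are a text-to-SQL assistant.
Database schema:
{schema}
Translate the question into a single SQLite query. Return only SQL.
Question: {question}
SQL:"""

def generate_sql(question: str) -> str:
    """Stand-in for the LLM call; returns a canned query so the sketch runs offline."""
    _ = PROMPT_TEMPLATE.format(schema=SCHEMA, question=question)  # what would be sent to the model
    return (
        "SELECT COUNT(*) FROM follows f "
        "JOIN users u ON u.id = f.follower_id "
        "WHERE f.followee_id = (SELECT id FROM users WHERE handle = 'elonmusk') "
        "AND u.follower_count > 1000000;"
    )

def answer(question: str, db: sqlite3.Connection):
    sql = generate_sql(question)       # natural language -> SQL
    return db.execute(sql).fetchall()  # execute against the database

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    db.executemany("INSERT INTO users VALUES (?, ?, ?)",
                   [(1, "elonmusk", 180000000), (2, "bigaccount", 2000000), (3, "small", 10)])
    db.executemany("INSERT INTO follows VALUES (?, ?)", [(2, 1), (3, 1)])
    print(answer("How many followers of Elon Musk have more than 1M followers themselves?", db))
```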

So you could ask questions like, how many followers does Elon Musk have that have more than one million followers themselves? So it's doing these join operations. We also got some rendering, so it was a very cool demo. In fact, this is the demo we used to get Yann LeCun as our seed investor, as an angel investor, because he spends a lot of time on Twitter. We went to his office at NYU and showed him this demo, and he was very excited about it. That's how we started, but we wanted to do search. You can think of this as search over a narrower kind of structured data. But we still spent some time prototyping more general search. In fact, around October 2022 we had an internal Slackbot that was essentially the very first prototype of Perplexity. We would use it to ask questions about medical insurance for our employees and things like that, things we didn't know a lot about, and it was very useful to see the first glimpses of this technology.

Sandhya Hegde

You're only a few months into building Perplexity, you are focused on text-to-SQL enterprise customers, and in November, ChatGPT launches, and it's one of the most successful product launches in the history of product launches. What was the conversation like within your team? How were you assessing this?

Denis Yarats
Yeah, I very clearly remember that day. I was just waking up and I saw a lot going on on Twitter. I think we very quickly recognized that this was groundbreaking, not something that's just going to come and go. And we had this prototype, right? It just so happened it was also addressing the very early feedback that ChatGPT was getting, where people would complain about hallucinations, and about not knowing where the information was coming from because there were no citations or anything. And this was exactly what our prototype was doing. So we put the two things together, saying, okay, there is a lot of attention on ChatGPT and we have something that can enhance it. And so, literally in a span of two days, we prototyped a very simple website and put it out as a joke on Twitter. We never thought it was going to receive any attention, and to our surprise, it did. We started receiving a lot of buzz on Twitter; a bunch of people started retweeting us and praising it, even though it was a very horrible implementation: it was very slow, it didn't work well.
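
As a rough illustration of the pattern that prototype represents, here is a minimal retrieve-then-cite sketch: fetch sources, number them, and ask a model to answer only from those sources with inline citations. The `web_search` and `llm_complete` functions are hypothetical stand-ins, not real APIs or Perplexity's code.

```python
def web_search(query: str) -> list[dict]:
    """Stand-in for a search API; returns title/url/snippet records."""
    return [{"title": "Example source", "url": "https://example.com", "snippet": "..."}]

def llm_complete(prompt: str) -> str:
    """Stand-in for an LLM completion call."""
    return "Answer grounded in the sources, with citations like [1]."

def answer_with_citations(question: str) -> str:
    results = web_search(question)
    # Number the sources so the model can cite them inline.
    sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources inline as [n]. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)

print(answer_with_citations("What were early complaints about ChatGPT?"))
```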

But that was a very interesting sign for us, because we knew we could do it much better, and even in its current form there was something about it that people liked. At that time we were still not sure whether we should proceed with it, because we were thinking, okay, maybe it's going to last a week or two and then fade away. And we were entering the holiday season, Christmas and New Year. We decided, okay, let's just see how it goes. And in early January, when we started looking at usage, to our surprise the traffic did not drop. In fact, it increased. So it was like, okay, there's something here, this is not normal. And so we made the decision to completely stop working on text-to-SQL, disregard the months of work and all the infrastructure we'd built, and fully focus on general search.

Sandhya Hegde

Yeah. What a fascinating pivot. I'm curious, obviously you had a lot of confidence that you were solving problems that matter to people, right? You could see that from early feedback, whether that's the hallucinations, the citations, using RAG, all of those things. However, you must have thought about: what's our long-term competitive advantage, since we don't own the core foundation model here, OpenAI does. What was that conversation like? How did you talk about, okay, if this works, how do we win, and how do we maintain a superior product over the providers of the LLMs that you might be using? I'm sure there must have been some skepticism internally from your team as well.

Denis Yarats

This is definitely a very important question, and something we still think about. It's not only being dependent on OpenAI as an LLM provider, like a wrapper; right after that, I think around January or early February, Bing released a very similar product. They have everything: they have distribution, they have search, they have an LLM. So on paper there was no good reason for why we should exist, right? It seemed just impossible. But it turns out that, for whatever reason, our product was better, and people preferred us to all the other alternatives out there.

And to answer your first question, I think it's very interesting. The way I look at it is that being a wrapper was a very essential and very important position to be in early in those days, just because this is something that only became available when OpenAI rolled out the API. Before that, imagine three years ago you wanted to build Perplexity or something like that. At that time, even before launching the product, you had to collect data, train a model, launch the product, and only then figure out if it had market fit or not. It would have been stupid to do that when you have this API available. The OpenAI API essentially allows you to turn the problem around, flip it on its head: first verify whether there is market fit, and if so, then figure out what to do. And that's why that was the best decision we made.

The thesis here is just that the model, while it's very important, is not the moat, right? Now especially, we are fortunate that the open-source community is picking up steam, so there are very capable open-source models that you can just take and build on top of. And if there weren't, we would have gotten to a point where we have enough capital to pre-train our own model. Pre-training is one part of it, but I feel like the more complicated part is fine-tuning and post-training, optimizing the model for your specific product. And to do that, the only necessary ingredient is user data. You need to establish the data play. If you don't have that, even the best pre-trained model is useless. So our thesis always was: the model is not the moat. Data is the moat, along with the product and the brand. And now we basically have a lot of data. We know exactly how people use Perplexity, we know exactly what they're asking, and we know exactly how to optimize the model to improve metrics, because we can literally A/B test everything. So we just take this data and take whatever model is out there available and post-train on top of it. You can think of it as bootstrapping: you bootstrap on something, get the data, and then you can replace all of those pieces that you don't have over time.

It was the same idea for all the pieces of infrastructure. We started with some search provider, but then, as we saw what needed to be done, we bootstrapped on it, built our own infrastructure, and just kept improving it.

Sandhya Hegde

And it sounds like your background in RL especially, and being able to take advantage of user preference data to deliver a superior product experience, is really key.

Denis Yarats

This is important, right? Basically, each product is unique, so there are different qualities of the model, different properties you're optimizing for, compared to, let's say, ChatGPT. For example, we don't want our model to hallucinate, so we design a reward function in a way that the model would refuse to answer if there is no support, and we just train for it. Maybe for some other products it's okay to hallucinate, because maybe it's going to make the product more engaging or something.
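
As a toy illustration of the kind of reward shaping Denis describes, the sketch below rewards grounded answers, rewards refusal when the evidence is weak, and penalizes unsupported claims. The scoring values and threshold are assumptions made up for the example, not Perplexity's actual reward function.

```python
def reward(answer: str, support_score: float, refused: bool,
           supported_threshold: float = 0.7) -> float:
    """support_score: how well retrieved sources back the answer (0..1),
    e.g. from an entailment scorer; refused: whether the model declined to answer."""
    if refused:
        # Refusing is the right move only when evidence is weak.
        return 1.0 if support_score < supported_threshold else -0.5
    if support_score >= supported_threshold:
        return 1.0   # grounded answer
    return -1.0      # confident but unsupported claim is strongly penalized

# Examples:
print(reward("Paris is the capital of France.", support_score=0.95, refused=False))  # 1.0
print(reward("I can't find support for that.", support_score=0.2, refused=True))     # 1.0
print(reward("Made-up claim.", support_score=0.1, refused=False))                    # -1.0
```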

Sandhya Hegde

Right, yeah, there are places where the hallucination is a feature, not a bug. And I am curious: you're obviously building a very horizontal mass-consumer product, but you still have a small enough audience, obviously, compared to the full search market, that you probably have some use cases that are much more common than others. Did you have an internal early-adopter persona that you were optimizing for, and what did that look like? Who was the early adopter, and what were the use cases you wanted to make really good?

Denis Yarats

Even at the current stage, we are not going after the whole web search space. It's just enormous. But something that we've seen early on, and are still seeing right now, is that a lot of the people who use us are a knowledge-worker type of persona, right? People who do all kinds of research: academics, some financial people, and so on. Basically, people who search the internet to solve some task, not just recreational queries like the weather, or navigational queries. Knowledge workers' queries are going to lead to some decision, right? So decision-making, that's very important. In fact, Google is a huge company with all kinds of users, but there is a very skewed distribution of how much money they make from whom: a very small percentage of users makes the majority of the revenue for Google. And we have a small overlap with that portion of users, not the users who create free traffic, but people who can pay for it. That's why I think it's unlikely we're ever going to get as big as Google, and that's honestly not our goal. But if we can provide a tool that's useful and saves time for that smaller portion of users, the professionals, knowledge workers, people who can pay for those services, we can create a successful business.

Sandhya Hegde

And were there any surprises for you in the early customer feedback? Anything that stood out that really crystallized the product direction for Perplexity?

Denis Yarats

There were a few interesting things that I did not expect to see. One of them was people searching for other people or for themselves, like vanity searches. That was a very common use case, and it still is. Imagine a salesperson, or whoever: you have to meet somebody and you want a very quick understanding of the person you're going to talk to soon, right? So you just go to Perplexity for a quick write-up. A lot of people were also interested in academic research. They were asking if we could add PDFs, not just web documents but more specialized literature and specialized indices, where it's very hard to use Google.

Sandhya Hegde

Makes sense. When I think about what Google's core technical competency was, and what helped them stand out, because they were certainly not the first or only search company, what helped them really beat Yahoo or Bing was that they were the best at content indexing. They scaled the hardware infrastructure really well. And they also came up with a pricing model that really helped them deliver a good user experience and improve user adoption as well. I'm curious, when you think about the core technical competencies that Perplexity always needs to be the best at, compared to whatever chatbot other model companies might launch, or whatever Google might launch tomorrow: what are the one or two things that you, as the CTO of Perplexity, think you need to make sure you're always the best at?

Denis Yarats

I'm personally a big fan of Google. I feel Google Search is probably the most complex and sophisticated system that humanity has ever built. And two core concepts that I really like about Google are speed and accuracy. I think they're by far the best at both, and those principles are something I care deeply about; I try to make sure Perplexity is also very fast and very accurate. In this new era, where you have to combine a very expensive, hard-to-run LLM with search, you have to figure out how to do this efficiently and fast, without sacrificing quality. So I feel like our core competency is the orchestration part: given a query, how do you make sure that you can answer it perfectly, answer it very fast, and also do it cost-efficiently? It's basically a multi-dimensional optimization problem, and doing that is difficult and something that we focus a lot on. And then, once you start solving this problem, you can start deviating a little bit in the direction of the LLM, or in the direction of search.

The main thing to understand is that the search index, even though I say Google is the most complicated thing, doesn't have to be as complicated in this new LLM world as it used to be. We don't have to spend so much time manually crafting the ranking signals and so on, because the LLM is going to take care of a lot of that. You basically already know some of the answers to all the trade-offs: precision versus recall, freshness or frequency of updates, and so on.

And so, given that it's going to work together with the LLM, certain decisions become easier. Exactly the same applies on the LLM side. You have a specific product problem you want to solve: do you need to run the most capable or largest LLM? Probably not. It depends on the query. Maybe some queries do require that, and some don't. So how do you route a query to the appropriate system? How do you maybe have a smaller model that can do decently well on certain queries? That's basically it: controlling the whole orchestration and then optimizing the individual components, the LLMs and search, so that everything works together perfectly. That, I feel, is our core competency.
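
A toy sketch of that routing idea is below: cheap heuristics decide whether a query goes to a small, fast model or a larger, more capable one. The heuristics and model names are illustrative assumptions, not Perplexity's actual orchestration logic.

```python
# Simple keyword/length heuristics as a stand-in for a real query classifier.
HARD_HINTS = ("compare", "analyze", "why", "explain", "step by step", "pros and cons")

def classify_query(query: str) -> str:
    q = query.lower()
    if len(q.split()) > 20 or any(hint in q for hint in HARD_HINTS):
        return "complex"
    return "simple"

def route(query: str) -> str:
    """Return which (hypothetical) model tier should serve this query."""
    return {"simple": "small-fast-model", "complex": "large-capable-model"}[classify_query(query)]

print(route("weather in SF"))                                                 # small-fast-model
print(route("Compare the pros and cons of RAG vs fine-tuning for search"))    # large-capable-model
```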

Sandhya Hegde

Yeah, fascinating. You're right, there are so many jobs that you can specifically choose to use smaller models for, and you're constantly having to decide what is the smallest model you can use that will still give close to the best possible experience for your user. That's really fascinating. How do you think about this, especially given your business model, which is subscription? I think that's the power of your model, and the biggest reason why the average consumer would want to try Perplexity: you don't want the 10 links and five ads, you really just want to save time and get a well-researched answer. That means subscription pricing. How do you think about gross margins for that business model? Of course, you're still early and in hyper-growth mode, but how do you think about long-term gross margins and the implications of the pricing model you have chosen?

How do you think it works over time?

Denis Yarats

So I think subscription is the main model right now. I'm sure there's going to be something else in the future, but even now it's very interesting to see that the margin is actually pretty good. We observed over the last year that it keeps getting cheaper to run those models: hardware becomes cheaper, models get smaller and better. Even the OpenAI API price dropped, I think, four or five times over the last year. And then we also built certain things in house, and now we don't have to rely on the OpenAI API as much, so we are seeing the margins increase over time, which is good. Obviously we keep adding new features to make our product even better, but I have full confidence that we will be able to continue doing this, and eventually, at some bigger scale, we're going to have very good margins. Still, there are going to be other opportunities to monetize. I won't rule out ads. Ads in their current form, as Google does them, are probably not something we're going to do, but I think there are ways to do ads in a way that is helpful for users, right? People don't really mind ads if they're helpful. If you're searching for something and you want to buy something, and you get exactly the perfect product for you, that's good. People hate it when they see a lot of irrelevant ads, and a lot of them. So I feel like somebody is definitely going to reimagine ads in the LLM world.

Sandhya Hegde

Makes sense. You mentioned hardware, so maybe this is a good segue point. I have been super impressed with how much Perplexity has leaned into hardware partnerships already. I'm curious what the motivation was behind that, whether it's Rabbit or these other new partnerships you recently announced. And what are you learning from these early phone and glasses partnerships?

Denis Yarats

Yeah, the motivation is actually pretty simple. We're still a very small company, and we've never invested a single dollar in advertising or anything like that, so it was all natural, organic growth. But when people compare us to Google, as a Google challenger or whatever, it's pretty funny, because they have a completely different level of distribution, right? If you ever want to get even an inch closer to them, you have to have distribution. And to that point, we decided, okay, if there are other companies at a similar stage to ours who are also innovating in different directions, like Rabbit, which you mentioned, or the glasses and phones, then there's an opportunity: by working together, we can create an opportunity for all the parties and take on the big guys. Because otherwise it's just going to be impossible to compete with them ourselves. So that was the primary motivation, but it was also good to see a lot of our users and their users cheering for us. They really like it when those different new products work together.

Sandhya Hegde 

Right. And hopefully, really good learnings from experiments in user interfaces, right? What are the different ways people are asking questions and want to consume information and navigate that kind of information space?

Denis Yarats

The main learning is that everybody wants it to be very fast; nobody wants to wait for an answer. They want instant answers. And that's a big challenge for us, so we are spending a lot of time optimizing our infrastructure.

Sandhya Hegde

And, keeping with that thread, obviously OpenAI and Google are thinking about hardware and custom chips. How are you thinking about, maybe not just for Perplexity, what the chip ecosystem will look like? Are there going to be really custom chips for each model that will give you the best performance for that particular model?

What's your take on how this pans out? And what would be ideal for Perplexity?

Denis Yarats

I think so far, honestly, we have GPUs from NVIDIA and we have TPUs from Google, maybe at a lesser scale, and the chips are not the hardest part to build; it's actually the software around them. To me it feels like CUDA is the main moat for NVIDIA rather than the chips, because so much software, like PyTorch and all the other stuff, is built on top of CUDA, and it's very hard to replace. So that's yet to be seen. Obviously we would love to see competition in that space as well. I feel like competition is in general best for everybody, because it ultimately creates a better product and creates better opportunities. And then, as you said, different models can utilize different hardware. We don't know yet if transformers are the ultimate architecture that's going to stay, right? Transformers are good partly because there is perfect hardware for them in terms of GPUs. What if somebody comes up with different chips enabling different architectures, maybe ones with sparse components? That remains to be seen, but I definitely expect to see fierce competition in that direction, and I definitely think there are going to be multiple players in that space, and ultimately that's going to be best for us.

Sandhya Hegde

And could you chat a little bit about how you're thinking about Perplexity's future product vision in such a rapidly evolving ecosystem? How do you think about what the company and tech stack need to look like in two years and four years? And who are you trying to hire to future-proof the company?

Denis Yarats

Yeah, this is a very interesting question, because from one point of view it's very hard to plan that far in advance. We've been trying to do this, but all the time we've had to scrap our plans and do something else. We just want to excel in search, and in specific verticals of search, as I mentioned: build the best possible product for knowledge workers, or at least some portion of them. And that means improving the product around being able to answer very complex questions, something that might require maybe half an hour of Googling right now. Can you answer those questions very fast and reliably?

So that's something we're going to be building in general. I think we also want to adopt more classical things in our search. For example, some people want to see sports results, so maybe we should also support that, those types of things. And integrating different APIs and providers: recently, for example, we added local search, like maps, which is obviously very useful. Maybe shopping is going to be something that we add at some point.

Yeah, but the main goal is just to build the best possible product, and we're going to keep attacking speed and quality. Apart from that, it's very hard to predict, because we also depend a lot on what the big guys are going to be doing, like what Google is going to release.

Sandhya Hegde
Yeah, I'm curious, what are some AI products you're using to build Perplexity, or maybe even in your day-to-day, that you're a big fan of and excited about?

Denis Yarats

I'm personally a big fan of ChatGPT. Apart from that, surprisingly, I don't really use coding assistants yet. I don't know, I still feel like I'm better than AIs in that aspect, but we'll see. Yeah, that's probably the main one. I'm also a big fan of voice generation; we've been using it extensively in our product. Things like ElevenLabs are very impressive, so it's good to see.

Sandhya Hegde 

Awesome. Any advice for the next generation of founders building startups in the time of AI?

Denis Yarats

I feel like it's basically: be comfortable when everything's uncomfortable. I think that's the main one. Every day is basically going to be a battle, and you have to be mentally prepared for that. Also, be stable, in the sense that if things are good, they're never as good as people say, and if things are bad, they're also not as bad as people say. Try to stay grounded and just optimize for the fundamental work.

Ultimately, what's going to matter is that you don't overreact to certain things. Try to stick to your mission, try to stick to your vision. Obviously take into account whatever happens outside, but don't just fully jump on it, because if you give up on your original idea, that likely means your idea was not great.

And maybe the other big one is that hiring is the most important thing. Without hiring great people, nothing is possible.
