May 6, 2024

Monte Carlo's product-market fit journey

Sandhya Hegde
Editor's note: 

SFG 46: Barr Moses on data reliability

Monte Carlo is an end-to-end data observability platform that monitors pipelines for missing or inaccurate data. Last valued at $1.6B, Monte Carlo has over 150 customers, including data teams at companies such as CNN, JetBlue, HubSpot, PepsiCo, and Toast.

In this episode, Sandhya Hegde chats with Barr Moses, co-founder and CEO of Monte Carlo.

Be sure to check out more Startup Field Guide Podcast episodes on Spotify, Apple, and YouTube. Hosted by Unusual Ventures General Partner Sandhya Hegde (former EVP at Amplitude), the SFG podcast uncovers how the top unicorn founders of today really found product-market fit.

Episode transcript

Sandhya Hegde
Welcome to the Startup Field Guide, where we learn from successful founders of unicorn startups how their companies truly found product-market fit. I'm your host Sandhya Hegde, and today we'll be diving into the story of Monte Carlo. Monte Carlo is a data observability platform that monitors pipelines for missing or inaccurate data. Last valued at $1.6 billion, Monte Carlo has over 150 customers, including data teams at companies like CNN, JetBlue, HubSpot, and Toast. Joining us today is Barr Moses, CEO and co-founder of Monte Carlo. Welcome to the Field Guide, Barr.

You started the company in 2019, so five years ago already. How did you and your co-founder, Lior, arrive at this problem of data observability and decide to focus on it?

Barr Moses
Yeah, for sure. Great question. I'll start by describing the problem that we're solving and then how we got to focus on it. The problem that we're solving might be familiar to folks listening: you look at some data product, maybe it's a dashboard, maybe it's a website with a price, maybe it's a generative AI application, and something looks wrong. The numbers look off. Something tells you that the data just doesn't make sense, and you're like, why are the numbers wrong here? That's exactly the problem that we solve. We help data teams, data engineers, data analysts, data scientists, be the first to know about data problems, and not only detect the problems but also make sure that they can triage, resolve, and improve on those over time. And yes, we're very honored to work with some of the best data teams in the world, including companies like Fox, Cisco, JetBlue, and many others. All of those rely on data to drive their operations. So, in the case of JetBlue, making sure that your suitcase arrives on time.

Other companies use data for financial reporting, or they're new organizations that are now building generative AI products. In all of those use cases, making sure that the data is trusted and reliable is paramount to being able to use the data. Going back to five years ago now, it's crazy how time flies, I was responsible for data at a company called Gainsight. For folks who don't know, Gainsight created the customer success category, which basically ushered in this new era based on subscription businesses and recurring revenue models, in which making sure that your customers are using your product, expanding, and renewing is paramount to your business. That wasn't the case before the subscription era. I joined the company and was responsible for a team called GONG, Gainsight on Gainsight, where we were using our own product internally. I always thought that the acronym GONG was very clever. It wasn't mine. Credit goes to Nick Mehta, the CEO.

And as part of that, we were using data to share with our board, with our internal executives, with our external customers. And in all of those instances, it happened way too often that the data was wrong. I would hear about it from Nick, our CEO. I would hear from our customers that they were frustrated that the data was wrong. And maybe the thing that was most frustrating is that when I looked at our engineering counterparts, they all had great solutions to make sure that their applications are reliable. They had solutions like New Relic, Datadog, and AppDynamics. But data teams literally had nothing. We were like, oh, let's ship this data and hope the data's accurate, right? Which ironically is still how many data teams operate today. And so we were frustrated by our own personal experience. Again, this was five years ago, so the world of data was very different, but I was looking forward and thinking to myself: I believe in a world where data is going to become even more important. And if data is going to become even more important, then the trust and reliability of that data must become more important. And I think the way that we've addressed data quality to date is insufficient. It's definitely a topic that has been looked at and thought about for the last couple of decades. But the driver to starting Monte Carlo was the realization that there's a lot that has been developed in engineering, whether that's DevOps or SecOps, best practices that have been really helpful for engineering teams and security teams to make sure that applications and businesses are reliable, secure, and trusted, and none of that exists in data.

And so we started Monte Carlo with that goal and that hypothesis: can we bring those best practices and empower data teams so that they have the same level of rigor, visibility, operational excellence, and the right technology to support that? So that's how the idea got started, from our own pain and experience.

Sandhya Hegde
Now obviously, pretty much any company in the world could be your customer. And there are so many ways in which data quality degrades. How did you think about where to start, both in terms of what is the right early functionality to build, as well as what's the right early adopter to focus on? Would love to hear more about maybe the first few months of the company as you and Lior were navigating that question.

Barr Moses
Yeah, for sure. Such a great question. And as you pointed out, any company can have a data team and can run into this problem. And so the first thing was to step out of my experience, right? Say, okay, I've had this, but let's talk to people out there, my counterparts, data leaders everywhere, and see if they're experiencing the same level of pain. And let's also start mapping who's experiencing it and in which industries: is it more in B2B or retail or e-commerce? And what types of companies? Are smaller companies experiencing this pain, or rather larger ones? Basically, start to think of different variables so that you can create some sort of market map to understand where the pain is.

I think one of the misconceptions in creating a category is that when you create a new category, there's a need to bring the message and teach people about this new problem. But the thing that's really hard is, if you create a category and there's no pain, you might be pushing a rock uphill for a really long time until that pain exists. And so for us, in the early days, and still today, it's not so much about creating the category. Your customers don't care that you're creating the category. They care to the degree that they're excited for you and want to be part of the journey, and we take great pride in that. But really, at the end of the day, a customer wants their problem to be solved and their pain point to be addressed. And so the key was finding out who, if anyone, is experiencing this pain, what are they doing about it, and what are they willing to do about it? And I think, as a founder in the very early days, there are a couple of things. One, the cards are stacked against you. The odds are very low for success, just because that's the nature of startups; like 99 percent of them fail. You're also running against a clock, right? Because there's a right time for the market and the opportunity. You also don't have unlimited funding and unlimited time to figure things out. And you have no resources. You're basically one, two, maybe three people. So basically everything is stacked against you. Which I think was a lot of fun. And within that, you're like, wow, I have a list of a hundred problems and I don't know which to tackle first: some of the things that you mentioned, like who's the right company, who's the right persona, what's the right product, what's the right price, what's the right service model.

There are so many questions that you need to focus on as a founder. It can be really overwhelming in the early days, but also later on. And so the approach that we've always taken is to be really focused, first, on what are the things that are going to kill us next. What are the things that basically we can't survive without? And then, what is our hypothesis about them? So, a very clear articulation of what we need to believe for this pain point to be material enough for a customer to care about and for us to be able to help solve this problem. In the early days, the first hypothesis was: there is a customer out there that cares enough about this problem that we can help them. So the hypothesis was, can we make one customer happy? That's all we needed to prove. And so I remember calling a lot of my founder friends and saying, hey, what's the trick to getting your first customer? What should I do? How did you do this? I'm sure there's some, I don't know, strategy or hack or something. How do you get started? And there was something really liberating about that, because everybody that I spoke with was like, look, there's no hack. You just have to talk to enough customers, understand their pain, and see how to help them. And obviously have a lot of luck along the way. And that was something that I loved hearing, because I was like, look, there are no shortcuts for hard work. There are no shortcuts for being customer-obsessed. There are no shortcuts for solving a real, meaningful problem. And we learned that lesson early on and carried it forward.

So my hypothesis was there's one customer that we can make happy. And so we narrowed in and spoke to, I think, maybe a couple hundred data leaders, all of them folks that we didn't know, and mapped out, okay, these are the types of companies that have the pain. The thing is, literally every single company in the world either has the pain today or will later, but there were some early clues. So, for example, companies that were more data-driven had more imminent and urgent problems. And four or five years ago, those were industries that I would consider more data-rich or more data-intensive. So, for example, e-commerce, FinTech, retail: those were companies that were faster to adopt data in the early days, as opposed to companies in B2B that were slower to adopt data. And so we focused on those early industries and cohorts of companies where the problem was more acute. That's one example. The second example was, we were debating what type of people, what type of title, or what type of scope the people we work with have. So, for example, are these data engineers, data analysts, or data scientists? Who cares about this problem? And then we learned that everyone is involved in delivering data trust and data reliability. And so, how do we bring all of those people together into one platform to talk about data issues and data problems? I would say designing the early stages of the company, the focus, and the product was really based on trying to find one customer and make them happy, and then five customers and make them happy, and then 10, and grow from there. But it was very much: what's a here-and-now customer that we can help?

Sandhya Hegde
And looking back, how would you describe the pattern in your early adopters? What were the characteristics that made them want to lean in and work with a startup on this?

Barr Moses
The Mom Test. It's not a great name, but it's a very good book. And the idea there is that there are some people in the world, maybe your mom or your dad or your grandparents, that if you tell them, hey, I'm starting a startup,

and I have this idea, most of them will be like, wow, Sandhya, that's the best idea on the planet. How did you come up with it? You're so smart, right? Because they love you and they care about you, and they have a relationship with you. The problem is those people are very bad testers for your product, because they will say yes to whatever you want. And there are a lot of people out there in the world who like you and appreciate you and want you to be successful, and they might be engaging with you just for that purpose. And so the idea is that in the early days, to really find product-market fit, you need to focus on those who pass the mom test: people who do not have that affinity or affiliation with you and who will give you honest feedback.

And I remember I cold-called the CTO of a large company at the time, through some cold email, and I was like, hey, do you have this problem? And he immediately said, yes, I do, I'd like to speak immediately if you're solving it. And he didn't owe me anything. He didn't have to take the time of day to speak with me. And then when I met with him, I shared a couple of slides describing the problem and how I was going to solve it. And I remember him telling me, your slides are the ugliest slides I've ever seen. These slides are terrible. But how you're thinking about the product is amazing, and I'd like to implement it tomorrow if you had this. And that was exactly the kind of reaction that we were looking for: someone who doesn't think about all the extras, but focuses on, does this solve a core problem for me and for my business, and can it add value? That's what you want. And so we gave that a term: we called those hell yeah moments, when our very early customers would basically be so excited about something.

And so we would look for that. We would have a conversation with someone, and if they just jumped out of their chair with excitement about something, we were like, yes, that's the kind of reaction we're looking for.

Sandhya Hegde
Yeah, you want a clear no or a hell yes. You don't really want the maybe, the let's-do-five-more-meetings that never goes anywhere, right? That's death for a startup.

Barr Moses
Yeah, exactly. And another good example of that is when you show something to someone and you're like, here are five features that we're thinking of building, which of them is most interesting to you? The worst answer is, oh, all of this looks great, I want it all. That's terrible, because it tells you that none of it matters. What you're looking for is: I want number three, and I wanted it yesterday, and if I don't have number three, it's going to be a disaster for me. That is the kind of reaction you're looking for. When people give that general sentiment of, oh, all of the above, this sounds amazing, I'm so excited about this, it's very hard to move forward from that.

Sandhya Hegde
Right, and is there a particular industry or business model or something you focused on in your early days? Did you say, okay, we are going to go after big consumer tech companies? Is there a particular thing that made it higher value for them to implement data observability, maybe that they were using that data in production? Was there anything like that you could latch onto?

Barr Moses
Yeah, a hundred percent. So there were a number of different variables that we used to home in on what we call our ICP, our ideal customer profile. The first is the size of the company. One of the hardest things in an early-stage startup is to work with too big of a company or too small of a company, because if a company is very particular in some way, either too small or too big, they don't represent the broader market. And so the trick is finding a customer that represents the broader market. In the early days, we really focused on the mid-market segment, and we used that to build velocity and build the product. Today, we support the world's largest Databricks, Snowflake, and AWS customers, but in the early days it was really hard to start with that, and so we had to grow into that scale. So that's one variable. The second variable I mentioned was industries. Back in the day, we found that industries like B2B, for example, were late to adopt data, as funny as that sounds. And so we focused on companies that have a lot of data and use it. The problem is that some companies had data but wouldn't use it. And if they're not really using it and the data is wrong, who cares? So we needed to find a company that had a real use case for data, where people are looking at data. An e-commerce company has a pricing algorithm that relies on data, or ad campaigns that are based on data, or discount codes on the website. If any of those were wrong, that's material impact to the business immediately. FinTech companies: if the ticker number is too high or too low, if a company is reporting the wrong numbers to Wall Street, that happens too, right? In all of those instances, that's a real business impact. So we were looking for companies who use data. The third variable that we looked at was, who has a budget for the data? And is there a senior person who's responsible for data? Several years ago, data was oftentimes an afterthought in organizations. It was maybe a very small team, a couple of layers down from the CFO or the IT department, and they basically had the entire quarter to make sure that the data was accurate before they, say, reported it to the Street, so they could use a lot of manual ways to make sure that the data was accurate. Fast forward to today: every data team is front and center for a company, even more so with generative AI. And so oftentimes that organization reports into a CTO or a chief data officer or a VP of engineering, and all of those people care deeply about the data infrastructure and the reliability of the data. But in the early days, we had to make sure that we were working with a data team that worked on something material to the company and had the budget to support that. And we used that to make sure it was a big enough problem for the company, because if it's not a big enough problem for the company, that also could be really difficult. And then the final variable that we used was where in the stack to start. So, a little bit of context: data observability is the only approach that looks at your stack end-to-end. What I mean by that is it includes all your data, all the way from source to consumption.

So that includes upstream data sources; it could be data warehouses, data lakes, ETL, orchestration, and BI and ML solutions, right? ML models. And when we work with a customer, we integrate end-to-end. And the question in the early days was, where the hell do you get started? Because it would take us years to build that coverage, right? We have dozens of integrations now, but as a startup, you don't have time to sit and build and hope that in five years the market will be there. And so you have to make a bet on where you start. What we did was, again, try to find an immediate solution to a real customer pain point. And what we found was that most of our customers were getting yelled at, if you will, by their downstream consumers: folks looking at reports, or using the output of models, or looking at a website. Those are closest to the end user, and those are the people who feel the strongest pain.

And so what we did was really focus on them. Our very first integrations were with the data warehouse, the data lakehouse, and the BI solution. Over time, we've developed this end-to-end stack, and we now support the full end-to-end, but we had to make a bet at the beginning on where to start, and it got a ton of traction. Just connecting to your data warehouse and your BI solution solved a ton of pain for data engineers and data analysts, but it wasn't obvious. I think if we had started somewhere else in the stack, it might have taken us a lot longer.

Sandhya Hegde
And were there a few forward-thinking early design partners who really helped you nail things like where to start and what the first few features should be? And were there any surprises for you in that process of working with customers and early prototypes for the first time? If you recall any moments that really helped you crystallize that MVP, would love to hear those.

Barr Moses
Yeah, for sure. The customers that were the most helpful to us were those who were the most blunt and direct with us in their feedback, who could really cut through the bullshit and say, this is helpful, this is not helpful to me. So I'll give you an example. In the early days, we talked about something called the five pillars of data observability, which give a comprehensive view of all the various reasons why data might go wrong. Those include schema changes, a common culprit for data going wrong; freshness, so making sure the data is up to date; volume; lineage; and data quality, so field-level accuracy, completeness, et cetera, the traditional data quality variables. And when you map that out, again, as a startup you can't build all of that from day one. That's a lot to build. And so I remember we were speaking with a very early customer. His name was Yoav. We basically said, can you describe to us the last three to four data incidents that you had, what was the root cause of them, and what happened? And through that, we uncovered that all of his pain really boiled down to schema changes. And he was like, if all you send me is a daily update, it can be an Excel sheet, it can be an email, I don't care, you can deliver it with a pigeon. All I need is a list of schema changes. Yoav was a data analyst, the lead of the data analytics team, and he was like, all these upstream teams are making changes, and I have no visibility into them.

I have no idea if they're literally deleting a table. Maybe they're changing the type or the name of a field. I have no visibility into that, and then I wake up in the morning and my reports are all broken. I don't know what happened, who did it, or why they did it. So if you just get me the list of schema changes every day, I will be forever grateful.
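What Yoav was asking for is simple enough to sketch in code. Below is a minimal, hypothetical version of that daily schema-change digest: snapshot the warehouse's standard information_schema, diff it against yesterday's snapshot, and report what changed. The schema name, snapshot file, and DB-API connection are illustrative assumptions, not Monte Carlo's actual implementation.

```python
# Minimal sketch of a daily schema-change digest (illustrative, not Monte Carlo's code).
import json
from pathlib import Path

SNAPSHOT_FILE = Path("schema_snapshot.json")  # hypothetical location of yesterday's snapshot

def take_snapshot(conn, schema="analytics"):
    """Map each table to a {column: data_type} dict via information_schema."""
    cur = conn.cursor()
    cur.execute(
        "SELECT table_name, column_name, data_type "
        "FROM information_schema.columns WHERE table_schema = %s",
        (schema,),  # '%s' paramstyle as in e.g. psycopg2; adjust for your driver
    )
    snapshot = {}
    for table, column, dtype in cur.fetchall():
        snapshot.setdefault(table, {})[column] = dtype
    return snapshot

def diff_schemas(old, new):
    """Yield human-readable schema changes between two snapshots."""
    for table in old.keys() - new.keys():
        yield f"table dropped: {table}"
    for table in new.keys() - old.keys():
        yield f"table added: {table}"
    for table in old.keys() & new.keys():
        for col in old[table].keys() - new[table].keys():
            yield f"{table}: column dropped: {col}"
        for col in new[table].keys() - old[table].keys():
            yield f"{table}: column added: {col}"
        for col in old[table].keys() & new[table].keys():
            if old[table][col] != new[table][col]:
                yield f"{table}.{col}: type changed {old[table][col]} -> {new[table][col]}"

def daily_digest(conn):
    """Diff today's schema against yesterday's and persist the new snapshot."""
    new = take_snapshot(conn)
    old = json.loads(SNAPSHOT_FILE.read_text()) if SNAPSHOT_FILE.exists() else {}
    changes = list(diff_schemas(old, new))
    SNAPSHOT_FILE.write_text(json.dumps(new))
    return changes  # email these out, or deliver them by pigeon
```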

And so that was super helpful, because it was like, okay, these pillars and the concept of data observability are very broad, but here's one very particular thing that could be really helpful. That, I remember, was really meaningful. By the way, for the record, in the early days I was skeptical of that. I was like, oh, how helpful would that be? And he was like, no, I need this. And I think that taught me an important lesson: even when I'm a user myself, the customer is always right, so always go in that direction. Another example has more to do with pricing and go-to-market. We didn't charge customers in the early days. We let design partners work with us, and we measured their engagement in terms of how much time they were spending with us. If they were spending time with us, that meant the problem was important enough for them to invest in this. And they were spending a fair amount of time with us. And I remember, after maybe six or eight weeks of working with this particular customer, his name is Rick, he offered to pay. And he said, Barr, we're using your product. This is a great product. It's a real SaaS solution, a real service. We should be paying for this. And I was like, Rick, what are you talking about? We're too early. There's so much more that we need to build. And he was like, no, we should be paying for this, y'all are crazy. And I remember that was a shift for me, because I was like, oh, okay, our product has reached a certain maturity level where our customers are telling us they should pay. That's a big deal. I remember telling Rick, never mind, we'll come back to it later, and he kept coming back to me again and again. After the third time, I was like, okay, we just look like idiots, we really need to move forward here. And so that really pushed us to go to market, if that makes sense. Again, I think it was about being really attuned to our customers along the way. We're so grateful for all of our amazing customers, and they still push us in similar ways today.

So I can tell you, when we were iterating on new pricing models or new go-to-market models, or building additional integrations (we just released an integration with Pinecone, a vector database used in generative AI), those were all thanks to our customers saying, hey, we need this, or pointing us in that direction.

Sandhya Hegde
Having happy customers is the best drug for a startup. Until you get there, you're still thinking about, oh, fundraising and valuation, and celebrating those moments. And then you discover there's a stronger drug, and it's customers saying, oh my God, we love your product. Maybe let's shift to go-to-market strategy. Obviously this is not something most people would consider, oh, self-serve PLG. How did you think about go-to-market strategy in the early days? Obviously, you must have done a ton of founder selling for a while. How are you thinking about go-to-market strategy now?

Barr Moses
Yeah, I would say that's something that, similar to building a product, we're continuously evolving and adapting. We ship new features and bug fixes on a daily and weekly basis at Monte Carlo, and we think about our go-to-market in a similar way. I think the worst thing is to be static, because your customer changes and how they're thinking about the market changes. That being said, you don't want to introduce changes just for the sake of change, and if something is working well, you want to double down on it. And so that is really what we did in the early days. I mentioned that I spoke to hundreds of data leaders and asked them, hey, what's your biggest pain? And data downtime, data being incorrect, inaccurate, or for any other reason wrong, came up as a top-three pain for almost all of the data leaders, consistently, again and again. And then, as we did more discovery to understand their pain, how they're solving it today, and what they're thinking about, we also learned about how they typically buy a solution.

They sometimes told us, hey, I just bought Databricks, or we just migrated from GCP to Snowflake, or we're moving to Redshift, or whatever it is. And throughout those conversations, we learned that they were the budget holders, that they need approval from their CFO and CIO, and that the evaluation team includes the data engineering manager and a bunch of people on their team. I'm simplifying here, but the idea is that as you spend more time with your customers, you're learning more about their pain and how a particular solution can solve it. You're also learning about how they prefer to buy and how they prefer to interact with you. We learned, for example, that particular pricing models work better or worse for them. We learned that there are particular ways to make it easy for them to work with us. One of the things that was critical to me as a former data leader is that you are bombarded with hundreds of data vendors, and you're like, oh, I have to manage all of this now. And so finding ways to add value to these people, our customers, and to be really easy to work with has always been critical to us. Being easy to work with, I think, in go-to-market means being easy from a marketing perspective: it should be easy to learn about the company, easy to work with the sales team, easy to work with legal. We don't want to spend months making your life hard if we can jump to the part where you're seeing value, right? Being easy to bill and operate. Every single team contributes to this strong customer experience. So when I think about improving go-to-market, it's improving every single team, if that makes sense. And meeting the customer where they are. For example, one of the things that's really important for customers is that it's just easy to get started. With our current pricing model, you pay for what you use. You sign up, and if you're not monitoring a particular table or asset, you don't pay for it. It's pretty transparent, it's pretty clear, and it meets customers where they are. They don't want opaque models or things that are really hard to get started with. And I think it's through a similar process of finding those hell yeah moments, where people are like, oh, I can get started tomorrow, onboarding takes 30 minutes, and I can see value within 24 hours. Hell yeah, sign me up. That's when you get a signal that you're on the right track with your go-to-market motion.

Sandhya Hegde
Makes sense. Maybe switching gears a little bit to the broader data infrastructure ecosystem. I feel this is one area that has seen such dramatic change in the last 10 years, which is often not visible to a lot of the end users of the data, right? What's happening behind the curtain, if you will.

For example, 10 years ago when I was at Amplitude and we were early, very few companies had embraced data warehouses, right? And that itself has changed so much now. And the way people want to use data warehouses is already going through another shift. So I'm curious what your perspective has been as someone who's literally observing everything from source to consumption, data pipelines and so on.

How do you think the rise of, or at least the current desire to build, proofs of concept with AI, trying to use AI on proprietary data within the business, how do you think all of these forces will change the ecosystem from Monte Carlo's perspective?

Barr Moses
Yeah, great question. And it's really fascinating to see, to your point, the evolution of the data space more broadly, going from, oh, what's a data warehouse, to what's a data lakehouse, to, oh, what's the new stack to support generative AI, and what am I supposed to do there? I think in each of those waves, there are two things that remain constant. The first is that data, and the data team, becomes more important, more central in the organization, and more critical to the business. The second thing is that in each of these waves, the importance of trusted and reliable data becomes even greater. So if 10 years ago it didn't matter at all, or maybe it mattered a little bit, today, in surveys that we're seeing, there are two barriers to shipping generative AI products to production.

The first is security, and the second is the trustworthiness of the data. Those are the top two barriers to building generative AI products. So I would say in each of those waves, our belief in our mission, our vision of accelerating the world's use of data by enabling trust in that data, has become even more important. We have even more conviction that data is going to play an even more critical role. And you can't cut corners. Everybody's going to know if the chatbot or whatever generative AI product you're building hallucinates or produces any sort of experience that erodes customer trust, and it's very clear when it's based off of wrong data. In fact, I would say that in generative AI in particular, the number one competitive advantage and moat is the first-party data that companies are using, whether it's with RAG or fine-tuning. And that means that if companies are relying on their data as their moat for generative AI products, the quality of that data is paramount.

And so we're seeing a lot of companies now, in preparation for generative AI products, making sure that the data they have, regardless of whether it sits in a data warehouse or a data lakehouse, is accurate, because they know that as they start to build those solutions, it's going to be very visible whether the data is accurate or not. So that's what I would say more broadly about the ecosystem. And I think data observability has also come a long way, even in the short time that it's been alive. We started out really just looking at the data itself, believing that changes in data are mostly what contribute to problems in data.

What we've realized since is that that's just not sufficient. Data goes wrong for three reasons: things can go wrong in the data, in the system, or in the code. And you need to have a solution that encompasses all three of those. That is a very big shift. The second shift is that most solutions and most approaches have really focused on detection, and I think to a certain degree that's becoming commoditized. Anomaly detection solutions are everywhere. But the power of a strong observability approach is in making sure that you go beyond detection to triage of the problem, resolution of the problem, and measurement of the problem.

And strong data teams have capabilities across all four of those. That, again, becomes even more important as data becomes more front and center in generative AI. So when you think about the generative AI stack that's emerging: are your pipelines supporting that? Do you have the right operational capabilities and technological solutions to support a highly reliable generative AI pipeline?

That's what I'm excited about.
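To make "detection is becoming commoditized" concrete, here is a minimal sketch of the kind of volume check Barr describes: flag a table whose latest daily row count deviates sharply from its recent history. The trailing window and 3-sigma threshold are illustrative choices, not Monte Carlo's actual algorithm.

```python
# Minimal sketch of a volume anomaly check (illustrative; not Monte Carlo's algorithm).
from statistics import mean, stdev

def volume_anomaly(daily_row_counts: list[int], threshold: float = 3.0) -> bool:
    """Return True if the latest count sits > threshold sigmas from the trailing mean."""
    *history, today = daily_row_counts
    if len(history) < 7:   # too little history to judge
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:         # perfectly stable history: any change is anomalous
        return today != mu
    return abs(today - mu) / sigma > threshold

# A table that normally lands ~1M rows a day suddenly lands 100k:
counts = [1_000_000, 990_000, 1_010_000, 1_005_000, 995_000, 1_002_000, 998_000, 100_000]
print(volume_anomaly(counts))  # True: the drop is flagged for triage
```

Detection like this is the easy part; the triage, resolution, and measurement layers Barr describes are where the operational work actually lives.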

Sandhya Hegde
I see a lot of startups that are working on new approaches to analytics, new approaches to things like data curation for training these models, essentially a lot of new energy and enthusiasm in the data stack. I'm curious, what would your advice be to founders starting to build new startups in this data ecosystem in 2024?

Like, how do you think about building for the future and having something that stands the test of time in five years? Because I think the next five years are going to be fairly chaotic in terms of just how much change is happening.

Barr Moses
Oh, for sure. I'm stoked for the next five years. I think if there's one thing that's for sure, it's that they're going to be really exciting. I don't know what's going to happen. I like to think that the data space is the best party to be at, and to a certain degree, we're a little bit like Taylor Swift: you've got to reinvent yourself every few years. I don't know why, but in the data space we're just reinventing ourselves all the time. So first and foremost, my advice is to simply not listen to advice. I think, by and large, most advice is crap, and it's not a good idea to listen to it. Advice is heavily weighted by the person giving it, the scars on their back, their experiences, their agenda, not even maliciously, just because as human beings, that's the kind of advice we give. And so my primary advice is: do not listen to advice. The only source of information that you should listen to is your customers. I think that is what will give you guidance for the next five years. And by the way, I will just clarify: customers can be wrong. Sometimes they will tell you they want something, but they mean something else.

And as founders, your job is to decipher that and figure out: is the customer trying to propose a solution right now, or are they talking about the pain? They're never wrong about their pain. And so understanding deeply what your customer's pain is, what they're trying to solve, and how to help them get there is the only thing that matters. Nothing else matters. Being ruthlessly focused and fast, and striving to have a strong hypothesis that gets you to a real win with a real customer, is the only thing that matters. The entire world can crumble around you, the entire market can change five times in the next few years, but if you have those foundations, continue to stay true to them. That would be my first piece of advice.

And then my second piece of advice: being a founder is a turbulent, long-term journey if you're lucky, and so I highly recommend trying to have fun if you can. I think, for all the listeners out there, you could probably be doing a hundred different things, and yet you choose to wake up and be a founder, or do whatever it is that you're doing right now.

And so focusing on having fun while you're doing that, I think, is paramount. Working with people that you love working with, and customers that you love working with, and building something that you're really proud of, so that at the end of the day you're proud of the journey: those are all things in your control.

And so regardless of what your founder journey looks like, I think remembering to have fun is something that makes the journey extra special.

All posts

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.

All posts
May 6, 2024
Portfolio
Unusual

Monte Carlo's product-market fit journey

Sandhya Hegde
No items found.
Monte Carlo's product-market fit journeyMonte Carlo's product-market fit journey
Editor's note: 

SFG 46: Barr Moses on data reliability

Monte Carlo is an end-to-end data observability platform that monitors pipelines for missing or inaccurate data. Last valued at $1.6B, Monte Carlo has over 150 customers, including data teams at companies such as CNN, JetBlue, Hubspot, PepsiCo, and Toast

In this episode, Sandhya Hegde chats with Barr Moses, co-founder and CEO of Monte Carlo.

‍Be sure to check out more Startup Field Guide Podcast episodes on Spotify, Apple, and Youtube. Hosted by Unusual Ventures General Partner Sandhya Hegde (former EVP at Amplitude), the SFG podcast uncovers how the top unicorn founders of today really found product-market fit.

Episode transcript

Sandhya Hegde
Welcome to the Startup Field Guide, where we learn from successful founders of unicorn startups how their companies truly found product market fit. I'm your host Sandhya Hegde, and today we'll be diving into the story of Monte Carlo. Monte Carlo is a data observability platform and it monitors pipelines for missing or inaccurate data. Last valued at 1. 6 billion dollars, Monte Carlo has over 150 customers including data teams at companies like CNN, JetBlue, HubSpot, and Toast. Joining us today is Barr Moses, CEO and co-founder of Monte Carlo. Welcome to the Field Guide, Barr.

You started the company in 2019, so five years ago already. How did you and your co-founder, Lior, arrive at‌ this problem of data observability and decide to focus on it?

Barr Moses
Yeah, for sure. Great question. I'll start by describing‌ the problem that we're solving and then how we got to focus on it. The problem that we're solving might be familiar to folks listening, which is basically when you look at some data product, maybe it's a dashboard maybe it's a website with a price, maybe it's a generative AI application. And you're looking at it and something looks wrong, like the numbers look off. Something tells you that the data just doesn't make sense, and you're like, like, why are the numbers wrong here? That's exactly the problem that we solve. So we help data teams, data engineers, data analysts, data scientists, be the first to know about data problems, but not only the tech, the problems also make sure that they can triage, resolve, and improve on those over time. And yes, we're very honored to work with some of the best data teams in the world, including companies like Fox, Cisco, JetBlue, and many others. All of those rely on data to drive their operations. So make sure that, in the case of JetBlue, your suitcase arrives on time.

What other companies that use it for, they use data for financial reporting or, new companies that are now, or new organizations that are now building generative AI products. And in all of those use cases, making sure that the data is trusted and reliable is paramount to being able to use the data. And, going back to five years ago now, it's crazy how time flies. I was responsible for data at a company called Gainsight. So for folks who don't know, Gainsight created the customer success category, which basically ushered in this new era based on subscription businesses and recurring revenue models in which the importance of making sure that your customers are‌ using your product, expanding, and renewing is paramount to your business. That‌ wasn't the case before the subscription era. And at the time, I joined the company and I was responsible for a team called GONG, Gainsight on Gainsight, where we were using our own product internally. I always thought that the acronym GONG was very clever. It wasn't mine. It was credited to Nick Mehta, the CEO.

And as part of that, we were using data to share with our board, with our internal executives, with our external customers. We were sharing data with them. And in all of those instances, it occurred way too often, the data was wrong. And I would hear about it from Nick, our CEO. I would hear about it from our customers that they were frustrated that the data is wrong. And maybe the thing that was most frustrating is that when I was looking at our engineering counterparts, they all had great solutions to make sure that their applications are reliable. They had solutions like New Relic, Datadog, and AppDynamics. But data teams literally had nothing. We were like, oh, let's ship this data and hope the data's accurate, right? Which ironically is still how many data teams operate today. And so just frustrated by our own personal experience, again, this was five years ago, and so the world of data was very different, but I was looking forward and thinking to myself, there's no way, or I believe in a world where data is going to become even more important. And if data is going to become even more important, then the trust and reliability of that data must be more important. And I don't think the way that we've addressed data quality to date is insufficient. It's definitely a topic that has been, looked at and thought about for the last couple of decades. But the realization or sort of the driver to starting Monte Carlo was the realization that there's a lot of things that have been developed in engineering, whether that's like DevOps or SecOps, best practices that have been really helpful for engineering teams and security teams to make sure that applications and businesses are reliable, secure, trusted, and none of that exists in data.

And so we started Monte Carlo with that goal and that sort of hypothesis. Can we bring those best practices and empower data teams so that they have the same level of rigor, visibility, operational experience, operational excellence, and the right technology to support that? So that's how the idea got started from our sort of own pain ‌and experience.

Sandhya Hegde
Now obviously, pretty much any company in the world could be your customer. And there are so many ways in which data quality degrades. How did you think about, like, where to start? Both in terms of what is the right early functionality to build, as well as, what's the right early adopter to focus on? Would love to hear more about, maybe the first few months of the company as you and Lior were navigating that question.

Barr Moses
Yeah, for sure. Such a great question. And as you pointed out, any company can have a data team and can run into this problem. And so the first thing was to step out of my experience, right? So say, okay, I've had this, but let's talk to like people out there, my counterparts, data leaders everywhere, and see if they're experiencing the same level of pain. And let's also start mapping, like who's experiencing in which industries, is it more in sort of B2B or retail or e-commerce? And what types of companies? Are these smaller companies experiencing this pain or rather larger? Basically start to think of different variables that you can create some sort of like market map to understand where's the pain.

I think one of the misconceptions in creating a category is that when you create a new category, there's a need to bring the message and teach people on this new problem. But the thing that's really hard is if you create a category and there's no pain, you might be pushing a rock uphill for a really long time until that pain exists. And so for us, in the early days, and still today, it's not so much about creating the category. Because your customers don't care about creating, they don't care that you're creating the category. They care to the degree that they're excited for you and they want to be part of the journey. And we take great sort of pride in that. But really, at the end of the day, a customer wants their problem to be solved and their pain point to be addressed. And so the key was finding out who, if anyone is experiencing this pain and what are they doing about it and what are they willing to do about it? And so I think, as a founder in the very early days, there's a couple of things like one, the odds are like the cards are stacked against you. Like, the odds are very low for success. Just because of that's the nature of startups, like 99 percent of them fail. You're also running against a clock, like running against the time, right? Because there's always there's a right time for the market and sort of the opportunity. You also don't have unlimited funding and unlimited time to figure things out. And you have no resources. So you're basically like one, two, maybe three people. So basically everything is stacked against you. Which I‌ think was a lot of fun. And within that, you're like, wow, I have a list of a hundred problems. It's I don't know what's the, some of the things that you mentioned, like who's the right company, who's the right persona, what's the right product, what's the right price, what's the right service model.

There's so many questions that you need to focus on as a founder. It can be really overwhelming, in, in the early days, but also later on. And so the approach that we've always took is be really, first, very focused on what are the things that are going to kill us next. What are the things that basically we can't survive without? And then, what is our hypothesis about them? So very clear articulation of what are the things we need to believe for this pain point to be material enough for a customer to care about and for us to be able to help solve this problem. In the early days, the first hypothesis was There is a customer out there that cares enough about this problem that we can help them. So the hypothesis was can we make one customer happy? That's all we needed to prove. And so I remember calling a lot of my founder friends and saying, Hey what's the trick to getting your first customer? Like, how, what should I do? And, like, how did you do this? I'm sure there's some I don't know, strategy or hack or something. Like, how do you get started? And there was something really liberating about that because everybody that I spoke with was like, look, there's no hack. You just have to talk to enough customers, understand their pain, and see how to help them. And obviously have a lot of luck along the way. And I think that was something that I loved hearing because I was like, look, there's no shortcuts for hard work. There's no shortcuts for being customer-obsessed. There's no shortcuts for solving a real meaningful problem. And learning that lesson early on and then carrying it forward.

So what I did was my hypothesis was there's one customer that we can make happy. And so we narrowed and we spoke to, I think maybe a couple of hundred of data leaders, all of them are folks that we didn't‌ know, and‌ mapped out, okay, these are the types of companies that have a pain. The thing with literally every single company in the world either has a pain today or later, but there were some early clues. So, for example, companies that were more data-driven had more imminent and urgent problems. And four or five years ago, those were‌ industries that I would consider more data-rich or more data-intensive. So, for example, e-commerce. FinTech, retail, those were companies that were faster to adopt data in the early days as opposed to companies in B2B that were‌ slower to adopt data. And so we focused on those early sort of industries and cohorts of companies where the problem was more acute. That's one example. The second example was, we were debating what type of people, what type of title, or what type of scope the people have that we work with. So, for example, are these data engineers, data analysts, or data scientists? Who cares about this problem? And then we‌ learned that everyone is involved in delivering data trust and delivering data reliability. And so how do we‌ bring all of those people together into one platform to talk about data issues and data problems? I would say in designing the early stages of the company and the focus and the product, it was really based on trying to find one customer to make them happy and then five customers and make them happy and then 10 and grow from there. But it was very like, what's a here-and-now customer that we can help with.

Sandhya Hegde
And looking back, like, how would you describe the pattern in your early adopters? Looking back, what were the characteristics that ended up one, making them want to lean in and work with a startup on this?

Barr Moses
The MOM test. It's‌ not a great name, but it's a very good book. And the idea there is that there are some people in the world that if you tell them, Hey, I have this great idea, maybe it's your mom or your dad or your grandparents, you're like, Hey, I'm starting a startup.

And I have this idea. Most of them will be like, wow, Sandhya, that's the best idea on the planet. How did you come up with it? You're so smart, right? Because they love you and they care about you, and they have a relationship with you. The problem is those people are very bad testers for your product because they will say yes to whatever you want. And so there are a lot of people out there in the world who like Sandhya and appreciate you, and want you to be successful, and so they might be engaging with you just for that purpose And so the idea is that in the early days to really find product market fit, you need to focus on those that pass the mom test and that they do not‌ have that affinity or affiliation with you and they will give you honest feedback.

And I remember, I cold called a CTO of a large company at the time, and there's some cold email and I was like, Hey, do you have this problem? And he immediately said, Yes, I do. I'd like to speak immediately if you're solving it. And he didn't owe me anything. He didn't have to take the time of day to speak with me. And then when I met with him, I‌ shared a couple of slides, ‌describing the problem and then how I'm going to solve it. And I remember him telling me, Your slides are the ugliest slides I've ever seen. These slides are terrible. But, how you're thinking about the product is amazing, and I'd like to implement it tomorrow if you had this. And that was exactly the kind of reaction that we were looking for. Someone who ‌doesn't think about all the sort of extra and, the, but more focuses on, does this solve a core problem for me and for my business and can it add value? And I was like, that's what you want. And so we‌ termed, we gave that a term, we called that hell yeah moments when our very early customers would basically be so excited about something.

And so we would look for that. We would have a conversation with someone and if they just jumped off the chair with excitement about something, we're like, yes, that's the kind of reaction that we're looking for.

Sandhya Hegde
Yeah, you want like clear no or hell yes. You don't really want the maybe, let's do five more meetings and no it doesn't go anywhere right, that's like death for a startup.

Barr Moses
Yeah, like another, exactly. And another good example with that is when you show something to someone and you're like, here are five features that we're thinking of building. Which of them is most interesting to you? And then the answer, the worst answer is, Oh, all of this looks great. I want it. That's terrible because that tells you that none of it‌ matters. What you're looking for is‌, I want number three. And I wanted that yesterday. Okay. And if I don't have number three, it's going to be a disaster for me. That is a kind of reaction that you're looking for when people give oh, all of the above, and this sounds amazing, and I'm so excited about this like general sentiment. It's‌ very hard to move forward from that.

Sandhya Hegde

Right, and is there ‌a particular industry or business model or something you focused on for your early days? Did you say, okay, we are going to go after ‌big consumer tech companies. So is there a particular thing that like made it higher value for them to implement data observability? Maybe they were like using that data in production. Was there anything like that you could latch onto?

Barr Moses
Yeah, a hundred percent. So there were a number of different variables that we used to sort of hone in on our sort of what we call our ICP, our ideal customer profile. The first is the size of the company. And one of the hardest things in an early-stage startup is to work with too big of a company or too small of a company. Because if it's a company that is very particular in some way, either too small or too big, They‌ don't represent the broader market. And so the trick is finding a customer that represents the broader market. And so in the early days, we really focused on the mid-market segment. And we used that to basically build velocity and build a product. Today, we support the world's largest Databricks, Snowflake AWS customers. But in the early days, it was really hard to start with that. And so we had to grow into that scale. So that's one sort of variable. The second variable I mentioned was industries. So back in the day, we found that industries like B2B, for example, were late to adopt data as, as funny as that sounds. And so we focused on companies that have‌ have a lot of data and use it. The problem is that some companies had data, but they wouldn't use it. And so if they're not really using it and the data is wrong, who cares? So we needed to find a company that had a real use case for data where people are‌ looking at data. And an e-commerce company has a pricing algorithm that relies on data or has. Ad campaigns that are based on data. Or has discount codes on the website. If any of those were wrong, that's material impact to the business immediately right now. You know, FinTech companies, if the ticker number is too high or too low, if a company is reporting the wrong numbers to Wall Street, that happens too, right? In all of those instances, that's a real business impact. So we were looking for companies who‌ use data. The third kind of variable that we looked at was who‌ has a budget for the data? And is there a senior person who's responsible for data? Several years ago, data oftentimes was an afterthought in organizations, so it was, maybe a very small team, couple of layers down from the CFO or the IT department, and they basically had the entire quarter to make sure that the data is accurate before they say reported it to the street, but they could use a lot of manual ways to make sure that the data is accurate. Fast forward to today. Every data team is ‌front and center for a company, especially more so in generative AI. And so oftentimes, that sort of organization reports into a CTO or a chief data officer or VP engineering. All of those people care deeply about the data infrastructure, and the reliability of the data. But in the early days, we‌ had to make sure that we're working with a data team that worked on something that was material to the company and that they had the budget to support that. And we‌ use that to make sure that it's a big enough problem for the company because it's not a big enough problem for the company. That also could be really difficult. And then the final variable that we use was ‌like we're in the stack to start with. So a little bit of context, data observability is the only sort of approach that looks at your stack end-to-end. What I mean by that is it includes all your data all the way from source to consumption.

So that includes upstream data sources, data warehouses, data lakes, ETL, orchestration, and BI and ML solutions, right? ML models. When we work with a customer, we integrate end-to-end. And the question in the early days was: where the hell do you get started? Because it would take years to build that coverage. Today we have dozens of integrations, but as a startup you don't have time to sit and build and hope that in five years the market will be there. You have to make a bet on where to start. So what we did, again, was try to find an immediate solution to a real customer pain point. And what we found was that most of our customers were getting yelled at, if you will, by their downstream consumers, the folks looking at reports, using the output of models, or looking at a website. Those are the people closest to the end user, and those are the people feeling the strongest pain.

And so we really focused on them. Our very first integrations were with the data warehouse, the data lakehouse, and the BI solution. Over time, we've developed this end-to-end stack, and we now support the full end-to-end, but we had to make a bet at the beginning on where to start, and it got a ton of traction: just connecting to your data warehouse and your BI solution solved a ton of pain for data engineers and data analysts. But it wasn't obvious. I think if we had started somewhere else in the stack, it might have taken us a lot longer.
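To make that "end-to-end" idea concrete, here is a minimal sketch in Python of modeling a stack from source to consumption as a lineage graph. The asset names and the traversal are invented for illustration, not Monte Carlo's implementation; the point is that once every hop is represented, an incident detected in the warehouse can be traced to every downstream dashboard or model it affects:

```python
# Hypothetical lineage graph from source to consumption (asset names
# invented for illustration; not Monte Carlo's implementation).
# Each edge points from an asset to its downstream dependents.

lineage = {
    "source.payments_api": ["warehouse.raw_payments"],
    "warehouse.raw_payments": ["warehouse.daily_revenue"],
    "warehouse.daily_revenue": ["bi.exec_dashboard", "ml.churn_model"],
}

def downstream_of(asset: str) -> set[str]:
    """Walk the graph forward to find every asset a failure would affect."""
    affected, stack = set(), [asset]
    while stack:
        for child in lineage.get(stack.pop(), []):
            if child not in affected:
                affected.add(child)
                stack.append(child)
    return affected

# An incident on the raw payments table reaches the dashboard and the model:
print(downstream_of("warehouse.raw_payments"))
# e.g. {'warehouse.daily_revenue', 'bi.exec_dashboard', 'ml.churn_model'}
```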

Sandhya Hegde
And were there a few forward-thinking, early design partners who really helped you nail things like where to start and what the first few features should be? Were there any surprises for you in that process of working with customers and early prototypes for the first time? If you recall any moments that really helped you crystallize that MVP, I'd love to hear them.

Barr Moses
Yeah, for sure. The customers that were the most helpful to us were those who were the most blunt and direct with us in their feedback. They could really cut through the bullshit and say, this is helpful, this is not helpful to me. So I'll give you an example. In the early days, we talked about something called the five pillars of data observability, which give a comprehensive view of the various reasons why data might go wrong. Those include schema changes, a common culprit for data going wrong; freshness, making sure the data is up to date; volume; lineage; and data quality, meaning field-level accuracy, completeness, et cetera, the traditional data quality variables. When you map that out, again, as a startup you can't build all of that from day one. That's a lot to build. And I remember we were speaking with a very early customer, his name was Yoav, and we were basically asking: can you describe the last three to four data incidents you had, what was the root cause, and what happened? Through that, we uncovered that all of his pain really boiled down to schema changes. He was like, if all you send me is a daily update, it can be an Excel sheet, it can be an email, I don't care, you can deliver it with a pigeon. All I need is a list of schema changes. Yoav was a data analyst, the lead of the data analytics team, and he was like, all these upstream teams are making changes, and I have no visibility into them.

I have no idea if they're literally deleting a table. Maybe they're changing a field's type or its name. I have no visibility into that, and then I wake up in the morning and my reports are all broken, and I don't know what happened, who did it, or why they did it. So if you just get me the list of schema changes every day, I will be forever grateful.
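A daily schema-change digest of the kind Yoav describes is conceptually simple, which is part of why it was shippable early. Here is a minimal sketch, assuming schema snapshots pulled each day from something like a warehouse's information_schema; the snapshot format and table names are hypothetical, not Monte Carlo's implementation:

```python
# Minimal sketch of a daily schema-change digest (hypothetical; not
# Monte Carlo's implementation). A "snapshot" maps table -> {column: type},
# the kind of data you could pull from information_schema.columns.

def diff_schemas(yesterday: dict, today: dict) -> list[str]:
    changes = []
    for table, old_cols in yesterday.items():
        if table not in today:
            changes.append(f"table dropped: {table}")
            continue
        new_cols = today[table]
        for col, old_type in old_cols.items():
            if col not in new_cols:
                changes.append(f"column dropped: {table}.{col}")
            elif new_cols[col] != old_type:
                changes.append(
                    f"type changed: {table}.{col} {old_type} -> {new_cols[col]}"
                )
        for col in new_cols.keys() - old_cols.keys():
            changes.append(f"column added: {table}.{col}")
    for table in today.keys() - yesterday.keys():
        changes.append(f"table added: {table}")
    return changes

# Example: an upstream team renamed a field and changed a type overnight.
yesterday = {"orders": {"id": "int", "amount_usd": "float"}}
today = {"orders": {"id": "varchar", "amount": "float"}}
for change in diff_schemas(yesterday, today):
    print(change)
```

Running this prints the type change on `orders.id`, the dropped `amount_usd` column, and the added `amount` column, exactly the kind of list an analyst could scan each morning before the reports break.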

And so that was super helpful, because it said: okay, these pillars and the concept of data observability are very broad, but here's one very particular thing that could be really helpful. That, I remember, was really meaningful. By the way, for the record, in the early days I was like, oh, how would that be helpful? I was so skeptical of it. And he was like, no, I need this. I think that taught me an important lesson: even when I, myself a former user, am skeptical, the customer is always right, and you should go in that direction.

Another example has more to do with pricing and go-to-market. We didn't charge customers in the early days. We let design partners work with us, and we measured their engagement in terms of how much time they were spending with us. If they were spending time with us, that meant the problem was important enough for them to invest in, and they were spending a fair amount of time with us. I remember after maybe six or eight weeks of working with one particular customer, his name is Rick, he offered to pay. He said, Barr, we're using your product. This is a great product, a real SaaS solution, a real service. We should be paying for this. And I was like, Rick, what are you talking about? We're too early. There's so much more we need to build. He was like, no, we should be paying for this, y'all are crazy. I remember that was a shift for me, because I thought: okay, our product has reached a certain maturity level where our customers are telling us they need to pay. That's a big deal. I remember telling Rick, never mind, we'll come back to it later, and he kept coming back to me again and again. After the third time, I was like, okay, we just look like idiots, we really need to move forward here. And so that pushed us to the market, if that makes sense. Again, it came from being really attuned to our customers along the way. We're so grateful for all of our amazing customers, and they still push us in similar ways today.

So I can tell you, whether we were iterating on new pricing models or new go-to-market models, or building additional integrations (we just released a Pinecone integration, a vector database used in generative AI), those were all thanks to our customers saying, hey, we need this, or pointing us in that direction.

Sandhya Hegde
Having happy customers is the best drug for a startup. Until you get there, you're still thinking about, oh, fundraising and valuation, and celebrating those moments, and then you discover there's a stronger drug: customers saying, oh my God, we love your product. Maybe let's shift to go-to-market strategy. Obviously, this is not something most people would consider, oh, self-serve PLG. How did you think about go-to-market strategy in the early days? Obviously, you must have done a ton of founder selling for a while. How are you thinking about go-to-market strategy now?

Barr Moses
Yeah, I would say that's something that, similar to building the product, we're continuously evolving and adapting. We ship new features and bug fixes on a daily and weekly basis at Monte Carlo, and we think about our go-to-market in a similar way. I think the worst thing is to be static, because your customer changes, and how they think about the market changes. That being said, you don't want to introduce changes just for the sake of change; if something is working well, you want to double down on it. And that is really what we did in the early days. I mentioned that I spoke to hundreds of data leaders and asked them, hey, what's your biggest pain? And data downtime, data being incorrect, inaccurate, or for any other reason wrong, consistently came up as a top-three pain for almost all of them, again and again. Then, as we did more discovery to understand their pain, how they're solving it today, and what they're thinking about, we also learned how they typically buy a solution.

They sometimes told us, hey, I just bought Databricks, or we just migrated from GCP to Snowflake, or we're moving to Redshift, or whatever it is. Throughout those conversations, we learned that they were the budget holders, but they needed approval from their CFO and CIO, and the evaluation team included the data engineering manager and a bunch of people on their team. I'm simplifying here, but the idea is that as you spend more time with your customers, you're learning more about their pain and how a particular solution can solve it, and you're also learning how they prefer to buy and how they prefer to interact with you. We learned, for example, that particular pricing models worked better or worse for them, and that there were particular ways to make it easy for them to work with us. One of the most important things for me, as a former data leader, was that you're bombarded with hundreds of data vendors and you're like, oh, I have to manage all of this now. So finding ways to add value to these people, our customers, and to be really easy to work with has always been critical to us. Being easy to work with in go-to-market means it should be easy to learn about the company, easy to work with the sales team, easy to work with legal. We don't want to spend months making your life hard if we can jump to the part where you're seeing value, right? Being easy to bill and operate, too. Every single team contributes to that strong customer experience, so when I think about improving go-to-market, it's improving every single team, if that makes sense. And meeting the customer where they are. For example, one of the things that's really important for customers is that it's easy to get started. With our current pricing model, you pay for what you use: you sign up, and if you're not monitoring a particular table or asset, you don't pay for it. It's pretty transparent, it's pretty clear, and it meets customers where they are. They don't want opaque models or things that are really hard to get started with. And it's through a similar process of finding those hell-yeah moments, where people are like, oh, I can get started tomorrow, onboarding takes 30 minutes, and I can see value within 24 hours. Hell yeah, sign me up. That's when you get a signal that you're on the right track with your go-to-market motion.
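As a toy illustration of that pay-for-what-you-use idea, the bill depends only on what you actually monitor. The rate and asset names below are entirely made up for the sketch and are not Monte Carlo's actual pricing:

```python
# Toy usage-based pricing sketch (rate and asset names invented for
# illustration; not Monte Carlo's actual pricing). Only assets that
# are actively monitored contribute to the bill.

PRICE_PER_MONITORED_ASSET = 2.00  # hypothetical monthly rate per asset

assets = {
    "warehouse.orders": True,         # monitored
    "warehouse.clickstream": False,   # present but not monitored: free
    "warehouse.daily_revenue": True,  # monitored
}

monitored = sum(assets.values())
print(f"${PRICE_PER_MONITORED_ASSET * monitored:.2f} for {monitored} monitored assets")
# $4.00 for 2 monitored assets
```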

Sandhya Hegde
Makes sense. Maybe switching gears a little bit to the broader data infrastructure ecosystem. I feel this is one area that has seen dramatic change in the last 10 years, change that is often not visible to a lot of the end users of the data, right? What's happening behind the curtain, if you will.

For example, 10 years ago, when I was at Amplitude and we were early, very few companies had embraced data warehouses, right? That itself has changed so much, and the way people want to use data warehouses is already going through another shift. So I'm curious what your perspective has been as someone who's literally observing everything, from source to consumption, data pipelines and so on.

And how do you think the rise of, or at least the current desire to build, proofs of concept with AI, trying to use AI on proprietary data within the business, how do you think all of these forces will change the ecosystem, from Monte Carlo's perspective?

Barr Moses
Yeah, great question. It's really fascinating to see, to your point, the evolution of the data space more broadly, going from, oh, what's a data warehouse, to what's a data lakehouse, to, oh, what's the new stack to support generative AI, right, and what am I supposed to do there? I think in each of those waves there are two things that remain constant. The first is that data, and the data team, becomes more important, more central in the organization, and more critical to the business. The second is that in each of these waves, the importance of trusted and reliable data grows. If 10 years ago it didn't matter at all, or maybe it mattered a little bit, today, in surveys that we're seeing, there are two barriers to shipping generative AI products to production.

The first is security, and the second is the trustworthiness of the data. Those are the top two barriers to building generative AI products. So I would say in each of those waves, our belief in our mission, our vision of accelerating the world's use of data by enabling trust in that data, has become even more important. We have even more conviction that data is going to play a more critical role, and you can't cut corners. Everybody's going to know if the chatbot or whatever generative AI product you're building hallucinates or produces any experience like that; it erodes customer trust, and it's very visible when it's based on wrong data. In fact, I would say that in generative AI in particular, the number one competitive advantage and moat is the first-party data that companies are using, whether with RAG or fine-tuning. And if companies are relying on their data as their moat for generative AI products, it means the quality of that data is paramount.

So we're seeing a lot of companies now, in preparation for generative AI products, making sure that the data they have, regardless of whether it sits in a data warehouse or a data lakehouse, is accurate, because they know that as they start to build those solutions, it's going to be highly visible whether the data is accurate or not. That's what I would say about the broader ecosystem. And I think data observability itself has come a long way, even in the short time it's been alive. We started out really just looking at the data, believing that changes in the data are mostly what contribute to problems in the data.

What we've learned since is that that's just not sufficient. Data goes wrong for three reasons: things can go wrong in the data, in the system, or in the code, and you need a solution that encompasses all three. That is a very big shift. The second shift is that most solutions and most approaches have really focused on detection, and I think to a certain degree that's becoming commoditized. Anomaly detection solutions are everywhere. The power of a strong observability approach is in making sure that you go beyond detection to triage of the problem, resolution of the problem, and measurement of the problem.
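To see why detection alone is becoming commoditized, consider how little code a bare-bones volume monitor takes. This is a hypothetical sketch, not Monte Carlo's algorithm: it flags a day's row count when it falls more than three standard deviations from the trailing mean, and the differentiating work of triage, resolution, and measurement all happens after a check like this fires.

```python
import statistics

# Bare-bones volume anomaly detector (hypothetical sketch; not Monte
# Carlo's algorithm). Flags today's row count if it deviates more than
# three standard deviations from the trailing window's mean.

def is_volume_anomaly(history: list[int], today: int,
                      z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean  # any change from a perfectly flat history
    return abs(today - mean) / stdev > z_threshold

# Daily row counts for a table over the past two weeks.
history = [10_120, 9_980, 10_305, 10_050, 9_870, 10_210, 10_090,
           10_150, 9_940, 10_280, 10_010, 10_180, 9_905, 10_060]
print(is_volume_anomaly(history, today=4_312))   # True: volume collapsed
print(is_volume_anomaly(history, today=10_095))  # False: within normal range
```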

Strong data teams have capabilities across all four of those, and that, again, becomes even more important as data becomes more front and center in generative AI. So when you think about the generative AI stack that's emerging: are your pipelines supporting it? Do you have the right operational capabilities and technological solutions to support a highly reliable generative AI pipeline?

That's what I'm excited about.

Sandhya Hegde
I see a lot of startups working on new approaches to analytics, new approaches to data curation for training these models, essentially a lot of new energy and enthusiasm in the data stack. I'm curious, what would your advice be to founders starting to build new startups in this data ecosystem in 2024?

How do you think about building for the future and having something that stands the test of time in five years? Because I think the next five years are going to be fairly chaotic in terms of just how much change is happening.

Barr Moses
Oh, for sure. I'm stoked for the next five years. I think if there's one thing that's for sure, it's that they're going to be really exciting. I don't know what's going to happen. I like to think that the data space is the best party to be at, and to a certain degree we're a little bit like Taylor Swift: you've got to reinvent yourself every few years. I don't know why, but in the data space we're just reinventing ourselves all the time. So first and foremost, my advice is to simply not listen to advice. I think, by and large, most advice is crap, and it's not a good idea to listen to it. Advice is heavily weighted by the person giving it, the scars on their back, their experiences, their agenda, not even maliciously, just because as human beings, that's the kind of advice we give. So my primary advice is: do not listen to advice. The only source of information you should listen to is your customers. I think that is what will give you guidance for the next five years. And by the way, I will just clarify: customers can be wrong. Sometimes they will tell you they want something, but they mean something else.

As founders, your job is to decipher that and figure out: is the customer trying to propose a solution right now, or are they talking about their pain? They're never wrong about their pain. So deeply understanding what your customer's pain is, what they're trying to solve, and helping them get there is the only thing that matters. Nothing else matters. Being ruthlessly focused and fast, and striving to have a strong hypothesis that gets you to a real win with a real customer, is the only thing that matters. The entire world can crumble around you, the entire market can change five times in the next few years, but if you have those foundations and you continue to stay true to them, you'll be fine. That would be my first piece of advice.

And my second piece of advice: being a founder is a turbulent, long-term journey if you're lucky, so I highly recommend trying to have fun if you can. I think for all the listeners out there, you could probably be doing a hundred different things, and yet you choose to wake up and be a founder, or do whatever it is you're doing right now.

So focusing on having fun while you're doing that, I think, is paramount. Working with people you love working with, and customers you love working with, and building something that you're really proud of, where at the end of the day you're proud of the journey, those are all things in your control.

So regardless of what your founder journey looks like, I think remembering to have fun is something that makes the journey extra special.
