April 23, 2024
Portfolio
Unusual

Selling AI to big enterprises: A Conversation with Joel Hron at Thomson Reuters

Editor's note: 

Enterprises are increasingly looking for new and exciting use cases for Large Language Models (LLMs) and more generalized Foundation Models (FMs). LLMs can reimagine existing workflows and create entirely new opportunities. However, this excitement must be balanced against concerns around model accuracy, security/privacy, and coexistence with traditional ML tooling. 

Some incumbent companies, like Thomson Reuters, are actively incorporating newer LLM tools into their workflows. Founded over 100 years ago, Thomson Reuters is no stranger to ML technologies. “We were one of the first companies to release probabilistic rank retrieval into our search algorithms very early… and have continued to evolve,” observes Joel Hron, the Head of Artificial Intelligence at Thomson Reuters. We spoke with Joel recently about his work and gained insight into how startups can navigate selling new LLM-based solutions to large, complex enterprises. Joel heads an applied research team, TR Labs, that was founded nearly 30 years ago! His work involves balancing existing AI infrastructure with the latest trends brought on by the rise of generative AI. 

Thomson Reuters acquired Joel’s company, ThoughtTrace, in 2022 and brought him on to build out the TR tech offering. Now two years into his time there, Joel leads a team of nearly 200 AI scientists and engineers whose goal is to help Thomson Reuters infuse AI into its legal, tax, risk, and fraud products, as well as its news organization. Joel understands why enterprises need to be selective about deploying LLMs, and he has the deep ML expertise to choose the right tool for each job. 

Startup founders building LLM solutions should understand the needs of established companies working in regulated industries. Otherwise, they will not be able to effectively partner with such firms to meet their evolving needs. In our conversation, Joel highlighted three considerations for startup founders building GenAI solutions for companies such as Thomson Reuters.

Build on top of existing infrastructure

As a 100+ year-old company, Thomson Reuters might seem like an incumbent slow to change amid an AI land grab. But in fact, Thomson Reuters was an early adopter of AI. The company has a vast trove of premium content and long expertise in search algorithms and machine learning. Joel explains that this uniquely positions the company to rapidly prototype and deploy generative AI. Rather than throw out decades of research, Thomson Reuters built on top of its existing retrieval models to power next-generation "search-generated experiences."

For early-stage startups, it is simply not feasible to ask established companies to throw away their existing tooling to implement a newer solution. Instead, any new product must either:

  1. Be additive, creating a new offering that is not currently available, or 
  2. Displace a small part of the existing infrastructure with a dramatically better solution.

Founders need to accept the constraints of their customers and work with them as true partners rather than just “buyers.” This requires understanding their existing infrastructure and working within those boundaries. 

Provide a high level of support to enterprise customers

Joel emphasized that while LLMs can synthesize creative and accurate answers, the quality is still beholden to the quality and relevance of the data given to the model. Put plainly: garbage in, garbage out. For high-stakes industries like law and finance, a mostly right response is not good enough. Evaluating answer quality through multiple lenses (completeness, harmfulness, misleadingness) remains extremely hard.  
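
To make the multi-lens point concrete, here is a minimal sketch of what an answer-quality gate might look like in practice. The three axes come from the discussion above; the 0-1 scoring scale, the thresholds, and the AnswerEvaluation class are illustrative assumptions, not anything Thomson Reuters has described.

```python
from dataclasses import dataclass

@dataclass
class AnswerEvaluation:
    completeness: float    # 0-1: does the answer address every part of the question?
    harmfulness: float     # 0-1: could acting on the answer cause damage?
    misleadingness: float  # 0-1: is the answer technically true but likely to mislead?

    def acceptable(self, max_risk: float = 0.2, min_completeness: float = 0.9) -> bool:
        """High-stakes domains gate on every lens separately rather than averaging."""
        return (
            self.completeness >= min_completeness
            and self.harmfulness <= max_risk
            and self.misleadingness <= max_risk
        )

# A fluent but incomplete answer still fails the gate.
print(AnswerEvaluation(completeness=0.7, harmfulness=0.0, misleadingness=0.1).acceptable())  # False
```

The hard part, of course, is producing those scores reliably in the first place, whether with human graders or model-based evaluators.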

As Joel puts it: “In traditional Retrieval Augmented Generation (RAG), retrieving the right content is the primary influencer on getting a correct or incorrect response from the model… as we look to avoid the catastrophic situations where hallucination might occur.” The key for enterprises then is rigorously optimizing their entire generative pipeline, not just expecting language models to magically output perfection.
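
As a rough illustration of that pipeline, the sketch below only generates an answer when retrieval looks confident and abstains otherwise. The toy retriever, the relevance threshold, and the call_llm placeholder are assumptions made for the example, not a description of Thomson Reuters' actual system.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float  # retrieval relevance in [0, 1]

def retrieve(query: str, corpus: list[str], top_k: int = 3) -> list[Passage]:
    """Toy lexical-overlap retriever; a production system would use BM25 or embeddings."""
    q_terms = set(query.lower().split())
    scored = [
        Passage(doc, len(q_terms & set(doc.lower().split())) / max(len(q_terms), 1))
        for doc in corpus
    ]
    return sorted(scored, key=lambda p: p.score, reverse=True)[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for whichever model API is actually in use."""
    return "[generated answer grounded in the retrieved context]"

def answer(query: str, corpus: list[str], min_score: float = 0.5) -> str:
    """Generate only when retrieval is confident; otherwise abstain."""
    passages = retrieve(query, corpus)
    if not passages or passages[0].score < min_score:
        # In a legal or tax workflow, abstaining is far cheaper than hallucinating.
        return "No sufficiently relevant source found; escalate to a human expert."
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer using ONLY the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

The abstention branch reflects the point above: getting retrieval right, and knowing when not to answer, matters more than hoping the model outputs perfection.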

For startups, this means having a deep understanding of your customer’s pain points as you build out an initial product. It also requires constant engineering iteration to build an enterprise-ready solution.

Established businesses working in heavily regulated industries typically require continuous guidance from their tooling providers, even more so in emerging categories such as GenAI. Enterprises are terrified of implementing a customer-facing LLM solution that will spit out the wrong information. They will likely expect a high level of support from their infrastructure providers to avoid catastrophe.

Adapt to changing customer needs as the market shifts

As the LLM market continues to shift, Joel knows just how important it is to stay on top of the latest trends. Although Thomson Reuters has invested in fine-tuning models to ensure they work for specific use cases, he recognizes that this may change over time. “As these context windows grow, the need for fine-tuning, I think in many use cases, kind of goes away a bit,” he explains. 
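
The sketch below illustrates the tradeoff Joel is pointing at, contrasting a fine-tuning path with a long-context, in-context path. The document names, training pairs, and prompt format are hypothetical, and this is not a recommendation of one approach over the other.

```python
def build_long_context_prompt(task: str, domain_docs: list[str], question: str) -> str:
    """In-context alternative to fine-tuning: ship the domain knowledge with every
    request instead of baking it into the model's weights."""
    context = "\n\n---\n\n".join(domain_docs)  # viable only if the context window is large enough
    return (
        f"You are assisting with {task}.\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the reference material."
    )

# Fine-tuning path (sketch): curate (prompt, completion) pairs and train a
# domain-specific checkpoint offline. Higher upfront cost, lower per-request cost.
training_pairs = [
    ("Summarize clause 4.2 of the lease.", "Clause 4.2 grants the tenant..."),
    # ...hundreds or thousands more curated examples
]

# Long-context path (sketch): no training step; you pay with tokens at inference time.
prompt = build_long_context_prompt(
    task="contract review",
    domain_docs=["Full lease agreement text...", "Relevant statute excerpts..."],
    question="Which party bears maintenance costs?",
)
```

As context windows keep growing, the second path covers more of the use cases that once required the first, which is exactly the shift Joel expects.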

Thomson Reuters has the budget and expertise to experiment with more advanced techniques, such as fine-tuning or pre-training. However, model capabilities continue to advance at a rapid clip. For founders looking to build long-lasting businesses, it is imperative to stay aware of just how quickly the market is moving. They should build for the future rather than focus on short-lived problems. Right now, most enterprise buyers are in a “wait-and-see” phase as they look to avoid serious infrastructure investment while the market is still playing out. 

Startups must recognize the position that larger enterprises are in today and continue to iterate on their product as the entire market moves. It is not enough to solve near-term optimization issues around price and performance; founders should go after larger, longer-term problems that will remain unsolved.

Both large and small enterprises need to adapt their current workflows to incorporate GenAI and large foundation models into their existing infrastructure. Even though these large models work out of the box, just “fine” or “good enough” does not cut it for many established companies working in regulated industries.  

While enterprises like Thomson Reuters are moving quickly to adopt new AI technologies, they will need to make sure their efforts meet customer expectations. At Unusual, we advise startups to remain nimble as the GenAI market continues to mature, and create deep partnerships with their enterprise customers.

If you'd like to read more from our AI Buyers series, subscribe to our newsletter for regular updates!
