Interview with the Founder of AI Healthcare Unicorn Abridge: There Is No Simple "Shell" for Enterprise-level AI Applications

Author: Robin

Display: Bright Company

Recently, Dr. Shiv Rao, founder and CEO of Abridge, a US AI healthcare unicorn, was interviewed by CNBC. Shiv talks about Abridge's position in healthcare enterprise services and how Abridge, an application once dismissed as an "AI shell", defines what a "shell" can be. Shiv Rao is also a practicing non-invasive cardiologist.

In the early days of large model development, "AI shell" companies were nicknamed "second-rate middlemen" because they "just put an interface on other people's technology instead of putting in the hard work to create their own models". At the time, it was believed that the only way to compete in AI was to raise huge sums of money, invest in computing power, and conduct large-scale pre-training, a conclusion that was quickly reflected in the valuations of large model vendors such as OpenAI and Anthropic.

But that has changed. The market quickly realized that the real value was moving up to the application layer, which is one reason investors became willing to pay a premium for it and to give so-called "shell" products higher valuation multiples than Anthropic and OpenAI.

According to The Information, AI search engine Perplexity is currently valued at 43 times annualized revenue, a higher multiple than either Anthropic or OpenAI commands. This suggests investors are more confident that Perplexity can reach profitability faster and convert growth into revenue.

Cursor's parent company, Anysphere, reached more than $100 million in ARR (annual recurring revenue) within 12 months. Cursor is an AI code editor developed by Anysphere. Anysphere's valuation soared 550% within a few months in 2024, and the company has become a "decacorn" in the vibe-coding space.

Similar to Perplexity and Cursor, Abridge is an "AI shell" company focused on medical conversations. Its core product is AI Scribe, which converts doctor-patient conversations into structured clinical notes in real-time and integrates seamlessly with electronic medical record systems. This product is designed to reduce the burden of paperwork on physicians and improve the efficiency and accuracy of medical records.

Abridge generates revenue by selling subscriptions to health systems, and the company expects its ARR to reach $50 million in 2024. Abridge has deployed its AI platform across more than 100 health systems in the United States.

The following is an excerpt from the interview, compiled by Bright Company:

How Abridge understands the healthcare industry

Moderator: Let's start with the basics. What does Abridge do?

Shiv: Abridge eases clinicians' paperwork and lets them focus on the people who matter most: the patients. We were founded in 2018, so we've been around for a while. Everything we've built stems from a fundamental assumption: healthcare is people-centric, and we don't think that's going to change.

There is always a dialogue between different people in the healthcare system: on one side is a professional such as a doctor or nurse, and on the other side is usually a patient or a family member. These conversations may happen in a clinic, a hospital examination room, or an emergency room, and they sit upstream of many different workflows in healthcare.

For example, if you're one of my patients and I see you in consultation, I'll jot down some scribbled notes, try to piece together what we actually discussed, and finally write up a series of documents in accordance with professional norms. This is true everywhere in the world.

Multiple stakeholders will look at that note and evaluate it: other doctors and nurses will read it to understand why I am prescribing you certain medications; business operations and finance people also read it, because it determines what I get paid and how doctors get credit for the medical services they provide.

Moderator: Abridge was founded in 2018; ChatGPT came out in 2022. Generative AI has now been around for a while. How has it impacted your business?

Shiv: I started the company in 2018, and part of that had to do with the Transformer, the machine learning architecture that underpins generative AI. But we started with models that predate LLMs, models like BERT, BioBERT, Longformer, or Pegasus. These models are typically pre-trained on internet data and can then be fine-tuned for the specific use case you want to solve.
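
(Editor's note: to make the pre-LLM approach Shiv describes concrete, here is a minimal illustrative sketch of fine-tuning a BERT-family checkpoint such as BioBERT on a narrow task. The checkpoint name, the toy "medication mention" task, and the example utterances are our assumptions for illustration, not Abridge's actual pipeline.)

```python
# Minimal sketch: take a biomedical BERT checkpoint and fine-tune it for a narrow
# classification task, the pre-LLM pattern Shiv mentions. All task details are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "dmis-lab/biobert-base-cased-v1.1"  # public BioBERT checkpoint on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Tiny illustrative "dataset": utterances labeled 1 if they mention a medication.
texts = [
    "Let's keep you on 10 mg of lisinopril once daily.",
    "How has your sleep been since the last visit?",
]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few toy gradient steps; a real run would iterate over a full dataset
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

print("toy training loss:", out.loss.item())
```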

We were thrilled when large language models came out and generative AI became a phenomenon. I remember attending a dinner at a big industry conference on generative AI in 2023. Afterward, many of the people who had been at that dinner called me and said, "I get it now, I'm going to try it too." So we turned all that latent energy into explosive momentum almost overnight.

Moderator: I'm sure doctors can feel it too. Large language model companies such as OpenAI, Google, and Anthropic keep improving their products and have their own speech transcription services. Why don't doctors, hospitals, and the healthcare industry just use those directly instead of Abridge?

Shiv: If you look at the full AI technology stack, at or near the bottom are the foundation model companies, which provide raw material that everyone can use. In the middle there is an infrastructure layer that helps companies orchestrate different models.

At the top is the application layer, where companies focus on solving a specific problem for a specific set of users or businesses. They are often deeply integrated into workflows and leverage proprietary datasets. Abridge's mission is to solve industry-specific problems, which means we often learn from datasets that are not available on the internet and coordinate a large number of different models to create the best possible user experience.

Moderator: Is it also because data privacy is extremely important in healthcare?

Shiv: Yes. Every industry has its own barriers to entry. In healthcare, privacy is critical and trustworthiness is a fundamental requirement; it is the ultimate currency of every transaction.

So the challenge, and the opportunity, we see is whether we can leverage the excellent technology near the bottom of the stack, build our own technical capabilities where they fit best, and converge all of it into a truly full-stack product and solution. That integration of core technology and AI comes from our exploration of how to embed data into workflows, collect input from doctors every day, and provide customer service.

Moderator: Satya Nadella has said that OpenAI is now a product company. What is stopping the big model companies from entering the space Abridge is in?

Shiv: I think it makes sense for a foundation model company to move toward top-level applications. In a way, they've always been headed in that direction. You and I both use ChatGPT or Claude; those are apps, products.

These companies are T-shaped. They have a vertical orientation, which may be the consumer-facing apps we all use, but they also have a more horizontal layer, the APIs, which give other companies access to those raw components so they can leverage them in a deep way. But they cannot cover every domain and go deep into all of them.

A 5% better experience: efficiently combining the strengths of different models

Moderator: What are the models behind Abridge?

Shiv: We coordinate a lot of different models. The work we do behind the scenes could be called a "scenario-based inference engine". After several years of exploration, we are able to handle the complex coordination between different models. Some of them are web-scale models, with very large parameter counts and complex computational structures, while others are fine-tuned open-source models.

You know, in our work even a small difference, even just 5%, makes a huge difference to the end user experience. On top of that there are the trust and safety issues in healthcare. So, in many ways, we have to be as thoughtful as possible about how we invoke large language models and leverage AI.

Part of the challenge is how to abstract many tasks out of user behavior under complex conditions. Behind the product, we might call a large language model 20 or more times just to generate one seemingly ordinary note. Obviously, we can't call a large off-the-shelf model at that frequency and still keep costs low. So we need more cost-effective models that we have optimized ourselves to ensure efficient and accurate output.
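
(Editor's note: a rough back-of-the-envelope sketch of the economics Shiv is alluding to. The call count, token counts, visit volume, and prices below are illustrative assumptions, not Abridge's figures.)

```python
# Back-of-the-envelope sketch of why ~20 LLM calls per note pushes an application
# toward cheaper self-optimized models. Every number here is an illustrative assumption.
CALLS_PER_NOTE = 20
TOKENS_PER_CALL = 3_000                  # prompt + completion, rough guess
NOTES_PER_CLINICIAN_PER_DAY = 20         # assumed visit volume

FRONTIER_USD_PER_1K_TOKENS = 0.01        # assumed blended price of an off-the-shelf frontier model
FINE_TUNED_USD_PER_1K_TOKENS = 0.0005    # assumed amortized cost of a self-hosted fine-tuned model

def daily_cost(usd_per_1k: float) -> float:
    tokens = CALLS_PER_NOTE * TOKENS_PER_CALL * NOTES_PER_CLINICIAN_PER_DAY
    return tokens / 1_000 * usd_per_1k

print(f"frontier model:   ${daily_cost(FRONTIER_USD_PER_1K_TOKENS):,.2f} per clinician per day")
print(f"fine-tuned model: ${daily_cost(FINE_TUNED_USD_PER_1K_TOKENS):,.2f} per clinician per day")
```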

Moderator: So you're also using the DeepSeek open source model?

Shiv: Absolutely, we're experimenting. I think that's part of the application-layer story: every time the underlying big models change and there's a new advance, it's good news for us, and definitely good news for companies that know how to shape these technologies around their specific use cases.

Moderator: Their products have become more sensitive.

Shiv: Absolutely.

Moderator: Do you think it's easier for an AI application company to monetize than for OpenAI? After all, OpenAI loses billions of dollars every year. What is your cost structure? It's okay if you don't want to give specific details, but can you give us a general picture?

Shiv: At the application level, you have to follow all the rules of your particular industry. In healthcare there are rules around how you promote and sell your product, and I believe other professions, such as law, have their own particularities. Once we've done all of that by the rules, pricing is just the next step.

Our value lies in the impact we can have on the industry. Surveys over the years have found that a significant share of doctors don't want to continue practicing medicine in the coming years; according to an article in JAMA, many nurses say they don't want to continue nursing in the months ahead. In practice this can mean that patients in rural areas have to drive five or six hours to a city hospital to see a rheumatologist. These are public health emergencies we have to do something about.

Moderator: With Abridge, some of the paperwork goes away and doctors' work becomes a little more focused, so they're willing to stay in the profession because of that?

Shiv: Yes. Through our compliant communication channels we get feedback from countless users who tell us they're ready to work in this field for another five years because we've taken so much of the burden off them. That feedback is also a source of dopamine for our engineers, who get clearer and clearer about why they're doing this work.

Moderator: It's also a very competitive space. Have you reached the stage where doctors and hospitals are willing to pay for your products? Or are you still in the growth phase of scaling up and gaining market share?

Shiv: They're willing to pay, and we're scaling all the time. We are live in more than 110 healthcare systems across the country. The moment we've been in over the last few years is genuinely historic, because health systems don't usually change this quickly. It shows that the problem we're solving is one of the top issues on the minds of C-level executives across the country, and they're looking for effective solutions. We were able to demonstrate that we bring positive change, so we were able to scale smoothly.

Open source allows us to differentiate our products at a lower cost

Moderator: I've been careful to call Abridge an "AI application company". Do you feel uncomfortable when someone else defines you as an AI shell?

Shiv: I don't know what to make of the word, because I'm not sure it still means what it did a few years ago. Maybe two or three years ago there was an argument that the only way to compete in AI was to raise hundreds of millions of dollars and pre-train web-scale models to solve all sorts of problems.

But people quickly realized that the real value was shifting up to the application layer. When you can solve human problems deeply, you can realize the highest value of this technology.

Our CSO (Chief Scientific Officer) is a professor at Carnegie Mellon University who is with us full-time and lives in San Francisco. He feels we are more like a foundation model company, albeit at a different scale. We are also T-shaped, developing in our own way: we have our own layers of technology, we use proprietary datasets, and we fine-tune and post-train our own models to deliver the best possible results and solutions.

Moderator: So "shell" is no longer an accurate description for you? The application of the casing implies "thin", right? But I think you're tackling more complex issues and going deeper and deeper.

Shiv: Absolutely. I think the so-called "shell" label may still apply to a certain type of product, mostly in the consumer space. But it doesn't apply to enterprise services, because in addition to the technology, there are all the factors you have to consider: compliance, privacy, security, infrastructure, the ability to scale, and so on.

Moderator: You're developing your own models, and we're already seeing this trend, especially with DeepSeek making models cheaper and more efficient. Not just you; many AI companies now have more open-source models to draw on, allowing them to produce their own in-house optimized models.

Shiv: Yes, absolutely. I think it's the trend, and that's the secret of our success. We are able to differentiate ourselves through products, user experience, and specific metrics.

We're not going to tell the doctor: hey, behind the product we might be calling and coordinating 20 different models. Some of those models may extract what the insurer wants to review for underwriting or reimbursement. Other models may extract what the patient wants to read. If the doctor uses a term like "transcatheter aortic valvuloplasty" in his notes, his patient may see it and think, "Wait, he never said this to me. I'm going to Google it," and what comes up sounds horrifying. Then the patient may call or email the doctor and ask what on earth is going on.

So all of the different models that we coordinate behind the scenes are designed to serve these different stakeholders, and that's where a lot of our differentiation comes from.
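
(Editor's note: a schematic sketch of the kind of multi-model coordination described above, with a separate model producing each stakeholder-facing output. The task names and stub functions are purely illustrative assumptions; Abridge's actual architecture is not public in this detail.)

```python
# Schematic sketch: different models produce different views of the same conversation
# for different stakeholders (clinician, biller, patient). The stubs below stand in for
# real model calls and are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class StakeholderTask:
    audience: str
    run: Callable[[str], str]   # in practice, each would call a different model

def clinician_note(transcript: str) -> str:
    return "Assessment and plan in clinical language ..."   # stub for a web-scale generative model

def billing_extraction(transcript: str) -> str:
    return "Candidate diagnosis and procedure codes ..."     # stub for a fine-tuned extraction model

def patient_summary(transcript: str) -> str:
    return "Plain-language recap the patient can read ..."   # stub for a patient-facing model

PIPELINE = [
    StakeholderTask("clinician", clinician_note),
    StakeholderTask("billing", billing_extraction),
    StakeholderTask("patient", patient_summary),
]

def generate_outputs(transcript: str) -> Dict[str, str]:
    # Each stakeholder-facing section comes from its own model behind the scenes.
    return {task.audience: task.run(transcript) for task in PIPELINE}

if __name__ == "__main__":
    print(generate_outputs("Doctor: How has the chest pain been? Patient: Better since ..."))
```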

The relationship between Vibe-coding and traditional software development

Moderator: I know you're doing a great job in healthcare, but then I want to talk a little bit more broadly about the AI application layer. We're curious about the rise of vibe-coding, especially how it will change the work of developers. As a leader in AI applications, have you studied this?

Shiv: Yes, absolutely. This is probably a topic that we bring up every day on our internal Slack.

Vibe-coding can quickly turn ideas into testable prototypes, skipping many traditional development steps, for example creating a website without having to deeply understand the underlying code. We don't know what advances this technology will make over the next 12 months or more, but even with the help of vibe-coding, you still need to master the basics of programming languages, algorithms, AI, and machine learning to build a finished product.

As a doctor, I still go to the hospital from time to time to see patients. Every month, when no one wants to take a particular weekend shift, I take it. I'll study some of a patient's de-identified case information and feed it into an AI model to see what the AI, playing the role of a doctor, would do when faced with those symptoms or complaints. I've found that at least half the time the AI's first instinct is incorrect. I'll go back and forth with it, and sometimes it convinces me. Even when I don't engage in counterfactual reasoning or hypotheses myself, I believe we end up with a better conclusion than I started with.

(Counterfactual reasoning or hypothesis: doctors consider how a patient's condition would have progressed if a different treatment had been chosen. This way of thinking is common in medical decision-making.)

So the AI helped me broaden my thinking and make sure I didn't overlook a rare diagnosis or an unusual condition. But it still can't replace the whole diagnostic thought process. I think vibe-coding relates to traditional development in much the same way that AI enterprise applications relate to foundation models.

Moderator: Do you think vibe-coding can replace traditional code generation?

Shiv: At this stage, especially in industries like healthcare that need enterprise-scale solutions, vibe-coding is a great tool for quickly prototyping a product or feature. It helps us skip many of the steps of getting an idea off the ground or communicating ideas across the company, makes it easier for people to understand what we're building, and may let us start optimizing elements earlier, for example the user experience, so that the product performs as expected when it's generally available.

So it can shorten the entire cycle before a product is released, but it doesn't replace the groundwork we still need to do: making sure the code we ship runs accurately and scales.

AI enterprise applications need to maintain an agile team with the right skills

Moderator: The last topic I want to discuss: startups are competing with giants who have been trying to solve the same problems for a long time. Why do you have room to compete now? Does it have to do with generative AI? Or is it more that the traditional giants are big ships that are hard to turn around?

Shiv: When the underlying models are changing this quickly, as you said, the sharpness and flexibility of the application layer matter a lot. High agility means you have a team with the right skill set, including scientists who can drill down into the foundation model layer and fine-tune those models for your specific use case, while also being deeply embedded in industry workflows.

So what matters right now is combining your team's skills in AI with industry knowledge. Market timing is obviously very hard to get right, but when a huge demand emerges, your value gets highlighted. Healthcare is resonating with AI right now, and other industries will soon follow.

Moderator: Startups can move quickly, but isn't it only a matter of time? The giants have more data and can use that data wisely, right? They may also make acquisitions in the future to compete with you. So what's your moat?

Shiv: The foundation models definitely have a moat: they require enormous amounts of money for computing power, data collection, reinforcement learning, and so on, which is an extremely heavy lift.

The application layer's moat is the moat technology companies have always had, especially in the B2B market: network effects, high switching costs, access to scarce resources, or brand. All of these can be a competitive advantage for a business.

Moderator: But do the giants have a greater advantage in the distribution channel?

Shiv: I think distribution matters. But AI is also changing the rules of the distribution game and the definition of distribution channels. In just the past weeks and months, we've seen a lot of enterprise-grade technologies that can help you automate many processes and quickly reach users' browsers or their established systems. The barrier to entry for accomplishing these large-scale tasks used to be very high.

AI is redefining the importance of distribution channels, or at least of the existing ones. That gives startups room to win, as long as they have the best products.