No AI without Data with Snowflake's Christian Kleinerman
This is Cross Validated, a podcast where we speak to practitioners and builders who are making AI deployments a reality.
Today our guest is Christian Kleinerman, SVP of Product at Snowflake. Prior to Snowflake, he was a Senior Director of Product Management at YouTube and, prior to that, had spent over a decade at Microsoft in various leadership roles.
Listen and subscribe on Spotify and Apple.
Follow me on Twitter (@paulinebhyang)!
Transcription of our conversation:
Pauline: Welcome to Cross Validated, a podcast with real practitioners and builders who are making AI in the enterprise a reality. I'm your host, Pauline Yang. I'm a partner at Altimeter Capital, a lifecycle technology investment firm based in Silicon Valley.
Today our guest is Christian Kleinerman, Senior Vice President of Product at Snowflake. Prior to Snowflake, he was a Senior Director of Product Management at YouTube and, prior to that, had spent over a decade at Microsoft in various leadership roles. Thanks so much for being on the show, Christian!
Christian: Hi, Pauline! Always good to see you and hear you and thank you for having me.
Pauline: Super excited about the conversation. So I'd love to kick off with a little bit more background on Snowflake and the company's mission.
Christian: Sure. So we think of Snowflake as a company that is aiming to help any organization mobilize their data. By that, what we mean is help them tear down silos between data sets, be able to think holistically across products, customers and businesses and, more importantly, transform that data into value. How do you get insights? How do you get analytics? How do you get new ML / AI types of experiences, which are all powered by data? So it's about getting value out of data and doing so without trading off governance.
Pauline: Really appreciate that background. And what about your role as SVP of Product? How did you come into that position?
Christian: So I've been building database and data systems for a while. You mentioned both Microsoft and Google in my background, and when I originally heard about Snowflake, it was pre-product in the market. I heard some of the aspirations and goals, and I'll admit they sounded too audacious for my own taste. But when I saw the paper published (a SIGMOD paper), I remember thinking, this is true innovation. This is moving the state of the art of data management technology forward. And I also remember thinking, competing with this thing is going to be difficult. I think both of those have proven true so far, and as soon as I had the opportunity to join the team, I was like, this is going to be fun. Here we are five plus years later, and it has been fun.
Pauline: If you can't beat the enemy, you've got to join them, right?
Christian: At the time, I don't know that I would have called it directly the enemy because I was at YouTube, but I could see that it was going to be very hard to compete with this technology. And it is true that Snowflake has been in market for over 12 years now, and many of the alternative technologies have not caught up to where Snowflake was 12 years ago.
Pauline: Incredible, it's been an incredible story.
I'd love to kick off the AI component of this conversation by talking about two quotes that have come out in the last two weeks from Microsoft and Amazon. Satya at Microsoft said, “every AI app starts with data.” And then at AWS Summit, the quote is “your data is your differentiator when it comes to generative AI.” And so Snowflake as the data cloud has to be sort of right in the center of that. What have been the conversations with customers pre-generative AI? And now that generative AI has taken over the imagination of the world, how have those conversations shifted?
Christian: So we fully identify and agree with the sentiment that AI and generative AI are powered by data - enabled and amplified by data - and have been for the longest time. When I said our mission is to help organizations mobilize their data, in many ways that speaks to helping organizations have a data strategy. Data matters: what metric definitions matter, what dimensionality, what latency, what streaming technology, etc. And what I would say is the AI trend that we're living in has just increased the focus on understanding that data strategy.
A lot of our messaging at our user conference a few weeks ago was that there is no AI strategy without a data strategy, which is the sentiment of the quotes you mentioned from Amazon and Microsoft. And what we hear now, even louder, from CIOs, CTOs, even product executives in different companies is: I want to have these modern experiences, I want to have conversational UIs, I want to have modern search.
And the most common conversation is: let's talk about your data, let's talk about your data model. So I would say it has increased the emphasis, more than a pivot or a change, because I think in the last 5-10 years, most organizations have realized that data is becoming a requirement for having a competitive differentiator.
Pauline: I think one of the questions that I've been asking myself as it relates to AI is what is the time scale that we're looking at? We've had this amazing period of excitement and is this going to be more like autonomous cars where we're always ten years away, or is this more immediate? And as it relates to you need a data strategy before you have an AI strategy, I'm curious what you're seeing from your customer base, which is a lot of Fortune 500s and the most important organizations in the world. How far are they in having a data strategy before they can even get to an AI strategy?
Christian: I think you find different states of maturity. Some organizations have long been aware of data - the importance of data - as the fuel of how a company runs. Some of them are digital natives that were born with the mindset of data. Some others have, I would say, a lot of homework to do to start tearing down many of the silos that they have.
We may see some distinctions, with the financial services industry being maybe a little bit ahead from a data maturity standpoint. I think it's been a long-known fact that hedge funds and others have realized that with data, you can start to understand and predict behaviors. But I would say pretty much across industries, there is a realization that data is needed, a data strategy is needed. You'll find the full spectrum of maturity on that life cycle. But I think there's a unified desire to leverage data for AI and experiences.
And to your question on timing, I think that even though everyone is trying to integrate some gen AI into their products and experiences tomorrow, in reality, true productization - truly bringing solutions to market, bringing reliable, secure, trustworthy solutions - is going to take some time, as with all technology. There's a lot of excitement at the beginning, and then there's real work to make it happen.
Pauline: If you think about the most sophisticated customers, what is the conversation that you're having as Snowflake with them, saying: okay, you have the data strategy, now think about your AI strategy and the role that Snowflake plays in enabling that?
Christian: I think the vast majority of sophisticated organizations have realized that just accessing one of these language models directly will not produce as good a result as if they are able to provide their data and their data models as part of the context. And I'm using "context" generically here, whether it's the prompting of a model request or the finetuning of a model.
The other conclusion that I think the sophisticated companies are reaching is that you can get quite amazing results with much smaller models - think a 7 billion parameter model, maybe 10 or 15 - finetuned with the right data for the right use case, compared to using much larger models that have not been finetuned.
And the conversation that we hear on a regular basis - and this is the focus of our strategy - is how do we help customers that have already entrusted us with their data to leverage generative AI and language models without having to compromise the safety or security of that data. What that means is: I want to be able to invoke models, prompt models, and finetune models within the security context - the security perimeter - of Snowflake, and do so, again, with guarantees around safety and privacy.
Pauline: And you mentioned the safety and privacy and the governance - particularly with banks, healthcare, or pharma companies, which I have found in my conversations to be further along, it is so important. So can you talk a little bit about that, how much of it is Snowflake's priority, and what you're providing to customers?
Christian: I would say those go hand in hand: the organizations that understand the value of data are the ones that are most mature in their data governance processes. And governance includes the full spectrum: knowing what data you have, securing the data you have, having privacy policies in place, having a data quality strategy in place.
And all of them also realize that they want to be able to do generative AI, or leverage the benefits of generative AI, without compromising on those high standards and practices that they have around governance. That, I think, is the big opportunity in the enterprise world. If you listen to all the large, major players in the enterprise, they're realizing the opportunity, and in many ways the question is who is able to deliver the most seamless, easy-to-use, simple process for training a model or finetuning a model, deploying it into production, and getting assurances around the safety of that behavior. That is where the value is going to be for many of us.
Pauline: And that's such a nice segue to Snowflake Summit, which was in June. You announced a number of really incredible product releases. Can you do a quick recap of what were your most exciting product releases that you had at the conference?
Christian: All of these questions on favorites and most exciting are always difficult because all the announcements are incredibly exciting to me, but I would say we had a broad set of announcements grouped into three categories.
The first category was how do we continue to advance the core capabilities of the platform. The progress that we're making on supporting Apache Iceberg as a standard for open file and table formats is incredibly exciting. Not every organization in the world may be interested or excited about that, but the largest companies that are far along that strategy - say, with a Parquet file-based data lake - would welcome the ability to have all the capabilities and performance of Snowflake operating on files and tables that they can read based on an open standard format. So that's one exciting announcement.
We were also able to share probably a couple dozen performance enhancements. We like to highlight the fact that with Snowflake, based on our business model, faster also means better economics. We obsess about price-performance as a ratio, and that's what we aspire to deliver: improving economics for our customers on a regular basis.
The second wave of announcements at the conference was around being able to distribute data products in our marketplace. We achieved a significant milestone with what we call native apps, which is our framework for being able to deliver either a data set, a data function, or a data application - deliver it cross-cloud, cross-region in the marketplace, and monetize it via Snowflake. All of that is seamless, and I can tell you the excitement and interest that we have is through the roof right now. We have a hard time onboarding folks fast enough to deliver this.
And the third category is very critical to our conversation, which was how do we enable organizations to program the data and get value out of the data without compromising on what we talked about - governance and security and privacy. And even though we have a long list of enhancements, Snowpark is our anchor capability. Snowpark is our secure hosting of the Python and Java runtimes. We delivered new policies for package management, new runtime support, new streaming table-valued functions, and other capabilities.
But probably the headline out of the conference was the broader integration of container-based hosting as part of the Snowpark technology. We were not going to be able to continue expanding the number of runtimes and programming languages we support indefinitely, so we just said, you know what, the faster time-to-value for our partners and our customers is to integrate containers - and that's in private preview. But again, the excitement is overwhelming in a very positive way.
Pauline: I know you can't pick favorites. I would say the one that stood out for me, that I was most excited to see, was Snowpark container services - in particular, giving customers the ability to deploy and run full-stack and gen AI apps in Snowflake. You talked about the excitement, that there is so much demand waiting for this. What are you most excited about, or what are you seeing from customers, in terms of deploying apps within Snowflake?
Christian: So I would say it's interesting that Snowpark container services is an effort that started way before gen AI was our single topic of conversation in many contexts, but it is a really good mechanism to achieve what we were just talking about: how do I do gen AI, leverage my own private data, and do so with safety and security? And the broad goal for us is: instead of taking data to all sorts of different programming environments, how about bringing that computation to happen closer to the data? That is what container services does, and one of the most interesting forms of programmability that you can use today is bringing LLMs.
So we have a number of partnerships with model providers. They're bringing their models to run inside Snowflake, so you can do secure inference and secure finetuning of those models. And then there's a broader collection of capabilities around how to leverage those technologies with Snowpark container services. So I would say, just within that technology, a favorite use case - given the interest and priorities of pretty much every organization today - is the enablement of generative AI inside Snowflake.
Pauline: And actually, you showed a fun demo at Summit about building a chatbot within Streamlit. Can you talk about, at scale or deployed within an enterprise customer, how you envision that looking over the next three years?
Christian: That's also a good reminder, there were so many announcements that in a quick overview, I'm bound to miss tens, if not hundreds of announcements.
Also shared at the conference was the introduction of a new chat component as part of the Streamlit library. Because it is true that with this new capability of language models, it is very interesting and appealing and compelling to go and build conversational experiences - and there was no great, simple way to build those experiences.
What the Streamlit team set out to do was: here's a simple component, and that's what we showed at the conference. You can get a model, you can finetune it with your data in Snowflake, and you can present the conversational experience or other data experiences - very simple, very seamless, with Streamlit as the presentation layer.
And that's where we see the stack coming together. Now, expand on what I mentioned - bring the computation to the data. The computation can include a model evaluation, a model finetuning, but also a presentation layer in Streamlit. So now it is within striking distance to say: I want to deliver a conversational experience, maybe starting with folks internal to my company, but later on to my partners or customers directly.
And I can do that without having to copy my data to all sorts of contexts. That was a big part of our announcements at Snowflake Summit, and it is a big part of our stack and why we are aiming to deliver the simplest way to build these types of modern experiences.
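At its core, the conversational experience Christian describes reduces to maintaining a message history and replaying it to the model on every turn. As a minimal sketch of that pattern - the function name and role/content message format here are illustrative conventions, not Snowflake or Streamlit APIs:

```python
# Minimal multi-turn chat state: each completed turn is appended to a
# history list, and the full history is replayed to the model on every
# request so the model retains conversational context.

def build_chat_request(history, user_message,
                       system_prompt="You are a helpful data assistant."):
    """Assemble the message list a chat-style LLM endpoint typically expects."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)                                    # prior turns
    messages.append({"role": "user", "content": user_message})  # new turn
    return messages

# Simulate two turns of a conversation.
history = []
request = build_chat_request(history, "Which region had the highest sales?")

# After the model replies, both sides of the turn are recorded.
history.append({"role": "user", "content": "Which region had the highest sales?"})
history.append({"role": "assistant", "content": "EMEA, at $1.2M."})

request2 = build_chat_request(history, "And the lowest?")
print(len(request2))  # 4: system prompt + two prior turns + the new question
```

A presentation layer like Streamlit would render each entry of `history` as a chat bubble and feed new user input through the same function, which is why "bring the computation to the data" extends naturally to the UI layer.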
Pauline: You also mentioned partnerships with a number of LLM providers: Reka, AI21, OpenAI. How do you see these different partnerships serving different customers or use cases? And as a customer, how do I choose?
Christian: So actually, a very important aspect of our philosophy is to provide choice to customers, because we do think that there are going to be many models. I subscribe to the thesis that there's going to be a million models - we call it the "million model hypothesis." I don't think it's going to be two or three models to rule them all; there are going to be lots of options, specialization, and focused use cases. Right now with the large models, you can ask: are they good at summarization? Are they good at code generation? Are they good at conversation? What you see is this rich ecosystem of folks specializing.
And I'll call out our partnership with Reka as an example, where they understand the value of running in the context of an enterprise. They understand the value of being able to finetune their models in the context of enterprise private data. We want to deliver that, but in addition, choice of others. So we have other conversations in progress with folks that want to partner and bring their models into Snowflake. Our commitment to our customers is that we want to deliver as much choice as possible.
Pauline: Makes sense. And maybe we'll have model monitoring and so much more model ops potential there as well.
Christian: That's a big part of it. Getting the initial inference and the demo up and running - that's the easy part. But how do you measure quality? How do you understand what's going on? How do you measure activity? Those are all things that will come in the future, along with understanding things like safety, correctness, or even a baseline evaluation of results. There's a lot of work to be done.
Pauline: You've now mentioned your philosophy of a million models multiple times. I wasn't going to get to it until later, but given that it's come up a number of times: obviously one of the biggest announcements that's happened in open source in the last month, or even the last year, is Llama 2 - a really good, incredible open source model that can be used commercially. How has the conversation with customers changed about whether to use the best model provided by an OpenAI or Anthropic or Cohere versus an open source, smaller model? What's been the shift that you've seen in those conversations with the enterprise?
Christian: I think the Llama 2 release was very meaningful for the industry, not only for the capability of the model, but, as you alluded to, for the commercial license behind it. There have been previous, reasonably good open models, but the licenses were restrictive. Now this has unlocked all sorts of additional innovation - not that there wasn't a lot of innovation already, but now there's innovation plus an actual desire to productize.
I think it depends a little bit on the organization, but the more mature organizations, the regulated organizations, will all overindex on the importance of what we talked about: security, safety, reliability. I don't want to send my data somewhere - what if someone is training on my data, knowingly or not? Whereas with the ability to host a model running close to the data, we can guarantee that the data is not being used for training or finetuning. That is an important conversation for enterprises, and the fact that Llama 2 is available now means that if you want to take a model, you can download it yourself, install it in, say, Snowflake, and leverage it for either inference or finetuning. That has simplified, or lowered the bar on, "would I be comfortable using this in my enterprise use case?" The day after the Llama 2 announcement, we actually put out a blog post showing how simple it was to just go take the model, host it inside Snowflake, and you're off to the races. That has drawn a decent amount of interest from our customers. Because again, you want gen AI, but you want it with safety and security. I cannot say this enough times.
Pauline: I think one of the things we also touched on is this idea that a lot of people are experimenting. People wanted LLMs or gen AI deployed very quickly - yesterday, if they could have - but there's a period happening now of product discovery, of building LLMs into the product in a very secure way. How do you think about the barriers that stop an enterprise from going from, hey, I'm just going to build a chatbot that I can play around with on Streamlit, to fully deploying it into production with data, with finetuning, with the safeguards that are needed to touch millions, if not billions, of customers?
Christian: Those safeguards and that safety are, in my mind, the most important things organizations need in order to be comfortable putting one of these models into production. The capabilities of these models are both the pro and the con of the technology. You can submit any question in natural language, and the model will do its best to identify patterns and give you responses based on those patterns. But that also broadens the surface area of what your product, with that integrated model, can do - and testing and validating those use cases becomes an overwhelming task.
How do you know if someone asks inappropriate questions in your product? All of a sudden, your product is providing answers. Are those the right answers? Should you even allow the answers? You've seen progress in the industry on being able to control when these models provide answers and when they don't. So I would say that is an important vector of innovation: surfacing knobs and controls for moderation, for influencing responses. And it's difficult, because what you want is a solution based on language models that understands language broadly - you want it as broad as possible, but not so broad that it crosses the fine line of what is appropriate for your product, your industry, your country, your location.
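The "knobs and controls for moderation" described above can, at their simplest, be pictured as pre- and post-filters wrapped around the model call. This is a hedged sketch only - production systems use trained safety classifiers rather than keyword lists, and every name, topic, and stand-in model here is illustrative:

```python
# Illustrative guardrail: screen user prompts before they reach the model,
# and screen model answers before they reach the user. A keyword list is
# only a stand-in for a real moderation classifier.

BLOCKED_TOPICS = {"medical diagnosis", "legal advice"}  # per-product policy
REFUSAL = "I'm sorry, I can't help with that topic in this product."

def violates_policy(text, blocked=BLOCKED_TOPICS):
    """Return True if the text touches a topic the product disallows."""
    lowered = text.lower()
    return any(topic in lowered for topic in blocked)

def guarded_answer(prompt, model_fn):
    """Wrap a model call with input and output moderation."""
    if violates_policy(prompt):
        return REFUSAL            # block before inference
    answer = model_fn(prompt)
    if violates_policy(answer):
        return REFUSAL            # block after inference
    return answer

# A stand-in "model" for demonstration only.
echo_model = lambda p: f"Here is an answer about {p}."

print(guarded_answer("quarterly revenue trends", echo_model))
print(guarded_answer("give me a medical diagnosis", echo_model))
```

The design point mirrors Christian's tension: the blocklist (or classifier threshold) is exactly the knob that trades breadth of understanding against what is appropriate for a given product, industry, or locale.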
Pauline: A very delicate balance indeed and I'm very hopeful and optimistic time will help us work all of that out, but certainly it's not there now. Before we move off of Summit, I want to also touch on the new or deepened partnerships that you announced, particularly with NVIDIA and Microsoft being the most notable. What was the catalyst for those conversations and what do the customers get out of these tighter partnerships?
Christian: With Microsoft, it was more a focus on collaboration in the field. It's no secret that the dynamics with the cloud providers have a large element of us being a customer and a large element of us being a partner - but there are also some competitive dynamics with some of the products that they have. The reality is, if we all figure out where to partner and where to compete, the results are quite good. What we've proven with AWS is, I would say, amazing, and a lot of what you saw in the announcements with Microsoft was about that type of deeper collaboration between organizations.
Probably more significant was our partnership announcement with NVIDIA. The truth of the matter is NVIDIA may be known for hardware and GPUs, but they have an amazing software stack that has made it simple to leverage those GPUs. The more we talked to each other - the collaboration started with a very simple "here's RAPIDS, a library that lets you do data science in Python accelerated by GPUs," and the results were like 5x, 10x faster than what you could get otherwise.
And then we started learning and collaborating a lot more on the other software solutions they have: they have NVIDIA AI Enterprise, and they have something called the NeMo framework, which is a set of software solutions to build a language model, prompt a language model, and finetune a language model. And it also happened that most of these solutions were readily available as Docker containers.
So one thing led to another, and very quickly we were able to commit to a partnership where we will bring many of the software solutions that NVIDIA has, which will simplify for our mutual customers the ability to create a model, finetune a model, and get results - and, if you're not tired of hearing me say it, do so with private data and with safety and security. That was the genesis, and we're very excited about the partnership with NVIDIA.
Pauline: I know that Frank Slootman, the CEO of Snowflake, and Jensen Huang, the CEO of NVIDIA, kicked off Snowflake Summit with an amazing fireside chat. So I'm very excited to see where the partnership between Snowflake and NVIDIA leads.
We've now talked about how you envision customers of Snowflake using AI and LLMs. I'd like to shift focus and talk about how Snowflake is using AI internally. Certainly, this idea that Snowflake allows you to utilize your data faster, better, cheaper very much resonates. So what role do you see AI playing in making Snowflake itself more efficient at serving customers?
Christian: We've been leveraging machine learning and AI within our product for quite some time. For example, our internal estimation of how many machines are needed at any one point in time, to make sure that our customers don't end up having to wait for instances - all of that has been AI-driven for quite some time. And the beauty of that type of use case is that it not only delivers a great customer experience, it also helps us optimize from a cost management perspective. So that's one example; there are 20 others where we have been leveraging AI or machine learning within the Snowflake technology.
Probably more interesting is: okay, what about gen AI? We actually showcased at the conference some of the use cases that are quite far along. We have a marketplace, and we would love to have a much more robust and modern search experience in the marketplace to help customers be connected to the right data products. Some of that experience we showed at the conference, and we'll be rolling it out in the next few weeks, so it's quite imminent. We also showed at the conference how some of our query editors will be augmented with text-to-SQL generation.
And I am aware that there is a large number of companies trying to pursue this type of solution, but for us - maybe "table stakes" is too strong - we have data, we have metadata, we have a large body of SQL queries. It is the place where customers would expect this type of solution to work, and we not only showed it at the conference, we have upcoming launches. So we have been using AI for quite some time at Snowflake, all in service of a better customer and user experience, and we'll continue doing so, inclusive of LLMs and gen AI.
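The advantage Christian points to - having data and metadata next to the query editor - shows up concretely in how text-to-SQL prompts are built: the warehouse's schema is serialized into the prompt so the model generates SQL against real table and column names. The sketch below illustrates that general pattern; the prompt template, function names, and example schema are assumptions for illustration, not Snowflake's actual implementation:

```python
# Sketch of schema-grounded prompting for text-to-SQL: table metadata the
# warehouse already holds is rendered as DDL and placed in the prompt, so
# the model's generated SQL references real column names.

def schema_to_ddl(table, columns):
    """Render table metadata as a CREATE TABLE snippet for the prompt."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns)
    return f"CREATE TABLE {table} (\n  {cols}\n);"

def build_text_to_sql_prompt(question, tables):
    """Combine schema DDL and the user's question into a single model prompt."""
    ddl = "\n\n".join(schema_to_ddl(t, cols) for t, cols in tables.items())
    return (
        "You translate natural-language questions into SQL.\n\n"
        f"Schema:\n{ddl}\n\n"
        f"Question: {question}\nSQL:"
    )

# Illustrative schema metadata, as it might come from an information_schema query.
tables = {
    "orders": [("order_id", "INT"),
               ("customer_id", "INT"),
               ("amount", "NUMBER(10,2)")],
}
prompt = build_text_to_sql_prompt("What is the total order amount per customer?", tables)
print(prompt)
```

The resulting prompt would then be sent to a language model; richer versions of this pattern also include sample rows or previously validated queries as few-shot examples, which is where a large existing body of SQL becomes an asset.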
Pauline: What's a challenge that you had to overcome as Snowflake itself has been using AI and ML, where maybe you can help customers as they go along on their own journeys?
Christian: I think understanding the bounds of what these solutions can do. In traditional AI / ML, there was a lot of conversation around explainability, and that's still 100% true and an important priority. If at some point some AI solution tells me that I need around 100,000 machines, I should be able to understand why that is being produced as a recommendation or an answer. The same thing applies to everything else we do, inclusive of gen AI. If our marketplace search is going to be augmented with language model technology, we need to be able to understand why things are behaving the way they are.
I think something that the cool LLM demos don't capture well is that the demo is good and the happy path looks happy, but when it's not producing the right results, do organizations have the right tools to understand why it did what it did, and how they can influence that? There's a lot of work these days on what the right prompts are and what the right ways are to influence what these technologies produce. I think that's going to be a big part of the focus for us as a platform, and for customers, over the next many months or years of deploying these solutions.
Pauline: Certainly, I think explainability is something we need to solve before this is built into every product. Last question before we move on to Rapid Fire: what would you tell startup CEOs trying to navigate competing against big tech companies like Microsoft or Google or Snowflake that have both the distribution and the capital to deploy AI?
Christian: I would say, if you look at the journey that we've been on at Snowflake, it is: innovate and create true, differentiated technology. My litmus test is, if it was created in a weekend or in a week, it's probably not that hard for someone else to replicate. Then overlay that with focus on a customer, focus on a use case, and go do everything you need to make that one customer successful. The rest follows from there.
We are incredibly focused at Snowflake on our customers and our customers' success, and that is how you end up doing better market-wise and business-wise than organizations that have 100,000 times more resources than you. But I would also say: you included us, Snowflake, in the list of larger companies that a startup may want to compete with, and I would say we're extremely partner-focused. Probably my bigger message to any startup CEO is: have you considered building your solution as a Snowflake solution, as a Snowflake application? Then we can help with distribution, we can help with monetization, we can help with shorter sales cycles, simplifying things like security and procurement and legal. That is my bigger message, more so than how to compete.
Pauline: I love it. Certainly, the number of startups announced as partners during Snowflake Summit is a testament to that. I do have one follow-up before I move to Rapid Fire: we've been longtime investors in Snowflake, and I think what has been really amazing to watch, even when Snowflake was early in its startup journey, was its ability to win enterprise customers. For a lot of AI startups, getting into Fortune 500 customers is really important. Any advice on that specific point?
Christian: I would probably start with less regulated industries - tech companies, digital native companies. Getting into the top five banks in the US is a much, much steeper climb, for obvious and good reasons: wherever all of us store money, we want them to have the right controls. It's easier to go and find - I don't want to put any company or industry in the spotlight - segments with fewer requirements. A lot of the early success of Snowflake was with ad tech and digital native companies, and that may be an easier place to start. Not easy, but easier.
Pauline: Wonderful. And with that, let's move on to the Rapid Fire round. First question, what's your definition of AGI and when do you think we'll get it?
Christian: Probably getting technology to accomplish tasks in a way that is comparable to human performance on those tasks. And it can include a variety of considerations - not just doing the task, but the emotional implications, the social implications, all the other things that we as humans simply do.
How soon will we get it? For the absolutely generic use case, I think it's far out, but for subsets of use cases, we're entering a stage where it's blurry whether we've reached AGI for some of them. So some of it is soon; some of it is far out.
Pauline: A spectrum. I like that answer. Second question: what is your regulatory mental framework?
Christian: It has to be regulated, as all powerful technologies should be. I do believe that it needs to be regulated. I don't know if it's more analogous to, I don't know, encryption technology, or more analogous to copyright. So I don't know the right comparison, because it is quite unique, and I've heard a lot of commentary and podcasts. This is a category in its own right, but I do believe that it has to be regulated, probably based on use cases. If you have critical infrastructure for the country running on this technology, you would expect it to have higher scrutiny than if you're just using it on your laptop for your own use case.
Pauline: That makes sense. Third, what is the biggest challenge facing AI practitioners or researchers today?
Christian: I think the topic of being able to vouch for the lineage of the data, or the lineage of the results - it's a hard problem. I think we're only seeing the beginning of copyright complaints and issues. I've heard now from many large customers that they will not deploy in certain sensitive use cases if they cannot vouch for the entire lineage of the data. So I think understanding what data went in, what answers have been provided, and what guarantees can be put on those answers - it's a big, difficult problem.
Pauline: And actually on that note, as a follow-up, how does that in your mind tie into this debate of open-source versus closed-source models?
Christian: I think it's orthogonal. You could imagine someone in the closed-source world saying, this is exactly what we're willing to represent and vouch for in terms of this model. It could also be done in the open source. So I think it's orthogonal, because even the training of the open-source models right now is not a completely open, reproducible process. It's orthogonal, but there is an opportunity to be clear on what can be represented or not for any of these models.
Pauline: Makes sense. Second to last question, who is one or two of the biggest influences on your mental framework on AI?
Christian: That's a really hard question. I don't know that I have any one person or any two or three people. I do like to read and listen to all sorts of opinions. I like to find contrarian views. What's the case for why we should be scared of OpenAI, or of AI in general? When there was this letter saying let's halt development, I wanted to hear the case for it. And then when it's, let's keep going, what is the case for that?
Of course I pay attention to what Sam Altman is saying. At Summit, we had Andrew Ng at the conference and I got a chance to chat with him. So it is mostly a privileged time to hear from so many talented people, and I think it's the collective that shapes my understanding.
Pauline: The number of opinions I will say on AI is mind-boggling and wonderful. I think we need all sorts of different opinions, because this will impact everyone. So I love that answer.
Christian: By the way, it's very stimulating to hear all these opinions.
Pauline: Last question: what is one thing that you believe strongly about the world of AI that you think most people would disagree with you on?
Christian: I don't know about the disagree part of it, but we already touched on this. I strongly subscribe to this million-model thesis. Language models in many ways are going to get commoditized, and I think the key question is going to be what data is used to prompt and fine-tune these models, because that will determine the better results, which goes back to probably where we started. If you have good data and a good data strategy, you're going to have better results on top of a technology that I think is going to be commoditized and available to everyone.
Pauline: And with that, we've come full circle. Christian, really appreciate you coming on and having this wonderful conversation. Really excited about what Snowflake announced at Summit, and even more excited to see what comes ahead.
Christian: Pauline, thank you for having me. And likewise, we're very excited about our announcements. And there's a ton more innovation following shortly after that.
Pauline: Thank you again.