Gen AI Use Cases in the Enterprise with Microsoft's Ali Dalloul
This is Cross Validated, a podcast where we speak to practitioners and builders who are making AI deployments a reality.
Today our guest is Ali Dalloul, a VP of Azure AI at Microsoft. He spent nearly 25 years with Microsoft in various leadership roles, most recently as the VP of Strategy and Commercialization at Azure AI.
Listen and subscribe on Spotify and Apple.
Follow me on Twitter (@paulinebhyang)!
Transcription of our conversation:
Pauline: Welcome to Cross Validated, a podcast with real practitioners and builders who are making AI in the enterprise a reality. I'm your host, Pauline Yang. I'm a partner at Altimeter Capital, a lifecycle technology investment firm based in Silicon Valley.
Today our guest is Ali Dalloul, a VP of Azure AI at Microsoft. He spent nearly 25 years with Microsoft in various leadership roles, most recently as the VP of Strategy and Commercialization at Azure AI. He's also a board member at the Open Voice Network, a community of the Linux Foundation. Thanks so much for being here, Ali.
Ali: Thank you for having me, Pauline. Really appreciate the opportunity.
Pauline: You've been at Microsoft for a long time. How did you first get involved with AI?
Ali: I've been at Microsoft for 25 years. I've worked across a lot of different product groups, and about 8-10 years ago I was working in Office, and prior to that, in Bing. AI at Microsoft has been going on since the 1990s, back in the days when we were a PC company (believe it or not, there was such a thing). People asked Bill, what's the future of Microsoft? And Bill said, one day, computers are going to see, they're going to hear, they're going to understand, they're going to comprehend.
Very few people truly understood that, Pauline. And Bill has always believed, along with Paul Allen and Steve Ballmer thereafter, in the magic of software. Fast forward to where we are today: the world runs on software, which is remarkable. As deep learning started picking up in a much more scalable way, I was in my current capacity at that point, working for a leader called Qi Lu. We wrote the first paper on the significance of AI for Microsoft.
There were different pockets of AI across different product groups. Steve Ballmer was the CEO at that point in time, and we believed there was going to be a need for critical mass. Thereafter, Satya took over the leadership and led an incredible turnaround for the company.
But there was momentum picking up. A division was formed back then called AI and Research, where the Bing workload was one of the major AI workloads. And search continues to be one of the largest workloads. Then, as the company evolved and we regrouped and reorganized across a variety of different things, I've been very fortunate and blessed to have the support and sponsorship of leadership to continue to be engaged.
And when the company realized that we had to consolidate all of the different AI teams across the company, it formed one platform team, currently under the leadership of Eric Boyd, whom I report to.
Today, we are the platform that runs pretty much all of the Microsoft first-party products. It's the same API: whether you're using Bing Chat, M365 Copilot, Dynamics, or what have you, that is the same API available to all of our third-party customers across the board, whether it's our machine learning tooling or Azure Cognitive Services. Our Applied AI services today run both Microsoft and third-party workloads.
So it's been a very interesting journey. Lots of learnings. It started with really mapping to the opportunity in the market and how we saw the world evolving and getting ahead of that curve and making sure that Microsoft is set up to capture that wave.
And recently, of course, Satya has also put the company on the track of the generative AI work. And it's been going on for many years, but it came to the forefront in the past few months and been the talk of town. But that's a bit of the background.
Pauline: Yeah, that's very helpful. I'd certainly say Microsoft has been the talk of the town, particularly with the partnership with OpenAI and what have you.
I think Amy Hood and Kevin Scott did an AI discussion last week where they were saying that generative AI and AI more broadly is going to be the fastest growing $10 billion business in Microsoft's history. Help us contextualize what that means from Microsoft's customers’ perspective and how you guys are helping them with their AI journeys.
Ali: Let's take one step back and look at the big picture before we talk about Microsoft in particular. Goldman Sachs released a pretty interesting study about what are the key economic flywheels that will drive the global economy over the next decade, which is today roughly about $100 trillion economy. And they projected that generative AI and AI broadly is one of the most important economic flywheels that will drive economic productivity. And economic productivity is a function of labor, capital and the output in the economy.
And the first conclusion was roughly about $7 trillion of economic value-add will be created over the next decade through AI. That's about a 7% growth in the global economy. The second thing is on a conservative basis, generative AI or AI can help boost US labor productivity by 1.5-3 points, which translates into a sustained economic GDP growth.
And the third is that AI as a percentage of GDP spend could be in the range of 1% or more. If you look at numbers of that magnitude, you're entering the domain of healthcare spend, advertising spend, defense spend, and other very large categories that drive the US economy.
And I think Microsoft is going to be one of the beneficiaries, along with the broader ecosystem. So that's one macro view.
The second point I would land is today, if you look at where the value creation is happening in AI, there is clearly the infrastructure, and the silicon layer, and the cloud. There's also the application layer where Microsoft has the lead today with our Copilot offerings.
The minute we came out with our first platform service back in January, we started launching Bing Chat, and then Dynamics Copilot, Teams Premium, Viva Sales, M365 Copilot, LinkedIn, and many, many other Copilots.
So when you look at Microsoft's ability to harness the value of AI responsibly from the cloud, from the infrastructure, all the way up to the application layer, we are fortunate as a company that over the past five decades we've built that entire stack and the broader ecosystem.
The third I would say is if you look where we've come from and where we are today, our mission as a company is really to power every person and every organization in the world to achieve more. Fundamentally that is a productivity mission. And if you look at the power of generative AI today, it is about augmenting productivity and catalyzing creativity.
And if you look at the tools available to our customers across enterprise, consumer, and small and medium business, they are one way or another using some form of our services, applications, and tooling, or even the platform services where the team we sit in today operates. And all of these components create that value.
Today if I'm inside Word, I can accelerate my productivity through a Copilot, in terms of planning perhaps for this podcast or writing a research paper. That level of productivity historically has not been available through traditional technologies.
Take one of the most complex human domains where AI is really coming to the forefront today, and that is writing code. Back in college, it was the holy grail: fourth-generation languages, 4GLs, and so on, to really be able to autogenerate code, and code that is meaningful and can be used in a real application environment, a production environment. Today, that is available.
And in some of the studies we've done with GitHub, we found north of a 50% gain in developer productivity. We found north of 75% of developers reporting satisfaction, because it takes away the drudgery and the very repetitive parts of the work.
But the last point I would add to that, Pauline, is that when we use Copilot, it is not just a branding and a good marketing message. It's also fundamentally a product philosophy, because really it's a human-centric approach to building AI, and the human is at the center.
It is really about empowering the individual, where AI is an assistant. It's a co-pilot, and the human is the pilot. And that is a fundamentally important point when we look at our role in the world and how we can responsibly guide the evolution of technology and the mainstreaming and how we partner with the broader civil society, academia, public and private partnerships that help foster this debate.
So yes, I think, we are on the cusp of real innovation and real economic growth. It is something we are going to have to navigate responsibly and do in a very careful way. I'm an optimist and I feel it's a great, great opportunity for the world right now to really embrace this responsibly and in a way that really catalyzes more creativity and drive more productivity for sure.
Pauline: You guys work with such a wide spectrum of customers, whether it's enterprise, whether it's consumers, whether it's SMBs, and you bring up this very important concept of augmenting productivity, augmenting intelligence. How do you think about the most popular use cases that you are seeing that your partners are using and deploying within their own organizations?
Ali: Great question, Pauline. I mean, I'd say generative AI in general is broadly speaking very horizontal in nature, first and foremost. And if you look at what these large language models do really, really well, they do really well in understanding unstructured data. They do really well in understanding language, hence large language models.
They're very good in generating and understanding text. They're very good in generating images from text. They're very good in generating code. And they have their own limitations in structured data and of course transactions. And we work through these through the broader Azure AI services. But it is important to understand what they do and they do very well.
First, they're really, really good in summarization of very large data sets and complex topics. They're really good in generating content. They're really good in writing assistance. They're really good in terms of natural language to code. They're also really good in looking at old legacy code and refactoring that code. They're really good in terms of, of course the thing that brought them to the forefront of society: conversation, conversational AI through chat and so on.
When you bring these broad capabilities together, including of course images and so on, you are able to transpose that into use cases specifically in the enterprise, in areas like customer service: self-serve conversational interfaces in a contact center.
If I turn the mic back to you and say, tell me when was the last time you were frustrated by calling customer service for any service or even your credit card, and I don't think it's going to take you a long time to answer that question.
It is not the best experience regardless of what service you are dealing with because we have a lot of requests, and companies have limited resources and ability to respond. And more importantly, Pauline, the products and services in our economy and globally in the world are becoming very complex.
By the time you bring in the right people, you train them and you want to empower them with these tools to support a customer. It is a pretty lengthy and complex process. And then you have employee churn and so on. So how do you preserve organizational knowledge? How do you empower and increase the productivity of these service agents?
We solve a lot of these customer scenarios. One example that is now public, which I can talk about, is what we just launched with AT&T for their users; it's on their website right now. That is one clear use case at a telco of the size of AT&T, a legendary company in their own right. We owe so much to Bell Labs, which came out of AT&T: the C language, everything that came out of that amazing engineering institution. And today they're transforming their entire customer experience through Azure OpenAI and conversation. So that's one of the largest use cases out there in the enterprise.
The second really is what I touched upon earlier is code assistance. I was with a lot of customers outside the US but even within the US also, there is a shortage of software developers, Pauline. And there's a shortage of also seasoned developers who will be able to help onboard newbies onto the scene.
And every company, not just that company, but every company today, as we said earlier: the world runs on software. And even with generative AI, you still need software. You still need computer scientists. You still need computer engineers. Those jobs are increasing, if anything. But how do we democratize that access and make that skill available to a broader segment of society globally?
And it's remarkable how much you can now do with GPT-4, for example, in terms of writing very complex code, and how you can generate it from plain English. It's not an exaggeration to say that English is probably one of the most common programming languages these days because of that.
But also, you have developers who've got one or two years of experience and need a buddy, somebody more seasoned who can review the code, give them coaching, and so on. Well, today GitHub Copilot can be that buddy. The human is still involved, but that's a pretty big use case across the board.
There’s a third bucket, which is business process optimization. If you look at enterprises, what are they looking for? They're either trying to increase the top line (how do I generate more revenue?) or they're trying to take cost out of the equation, whether it's customer service or optimization of certain processes internally.
But that is a pretty big use case as well. And we find a lot of customers are trying to understand how do I optimize a lot of these core functions, whether it's employee self-serve inside an enterprise, basic things like managing travel expenses, managing HR benefits, stuff that most people would look at and say, oh, that is like in the domain of operations.
But guess what? That's one of the largest line items in a budget for an organization, including Microsoft. And how can we optimize that and take costs out? And drive efficiencies there.
You also have cases around access to information. One of the largest use cases we're dealing with right now is enterprise search. So we've launched our Azure Search with vector database support. Not to get into the technical weeds of this, but one of the largest problems is that a lot of organizations are sitting on massive amounts of data, and I mean proprietary data that needs to be preserved within the enterprise firewall, but they don't really have a mechanism to access that data. And they don't have a mechanism to reason over that data.
You can probably put it through SQL database or something if it's structured data. You can probably pull it from other databases or other repositories or data stores. But how do you reason over that? When you're a very large Fortune 100 company or Global 100 company, it's very complex. You can't even apply enough manpower on it.
And with vector databases, you can solve the word embedding problem and the indexing and the ingestion in a way that also increases productivity. So we're trying to help enterprises with a lot of tools, not just OpenAI, Azure OpenAI, generative APIs, but all of the complementary AI services that help address enterprise scenarios.
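To make the vector-search idea concrete, here is a minimal, self-contained sketch of similarity-based retrieval. It uses a toy bag-of-words "embedding" purely for illustration; a real deployment like the Azure Search scenario described above would call an actual embedding model and a vector index, and the document strings here are invented examples.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real system would call an embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, documents, top_k=2):
    # Rank documents by similarity of their vectors to the query vector.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "How to reset your router password",
    "Quarterly revenue report for 2022",
    "Troubleshooting guide: router will not connect",
]
print(search("my router cannot connect", docs, top_k=1))
```

The same shape scales up: swap the toy `embed` for a real embedding API and the linear scan for an approximate-nearest-neighbor index, and you have the ingestion-plus-retrieval loop Ali is describing.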
And another bucket really is the creative side of the enterprise, like the marketing side. Among the public use cases, the obvious one of course is Mattel, which we spoke about last year at our Build conference. Mattel wanted to go and design a completely new car.
Mattel is the largest toy company in the world, for people who don't have kids. And Hot Wheels is one of their hottest brands. I grew up with Hot Wheels; it's that old of a brand, and I'm a car person. And they wanted to create a completely new design, very modern, that relates to the current generation.
And they wanted to accelerate the design process with their designers. They applied the DALL-E model back then, and we created a car from scratch. Now, of course, here you say, wow, that's pretty amazing. But that didn't take away the role of the designer. You still need the designer, you still need the engineer, you still need the manufacturing person, you still need the marketing person, because they've got to go figure out: okay, it's this car, how do I build it? What is the bill of materials? How much is it going to cost? How do I package it? What's the quality control? Will certain parts of this car break and cause risk to a child playing with it? What's the competitive positioning? All of that still completely requires the human domain. We're just accelerating it. So there are a lot of use cases, like responding to social marketing campaigns at the speed of what is happening and changing in the market.
Automotive is looking at how do we build a car in less than five years? It takes five years to actually design and build the car for the next generation. Can we reduce that? I mean, that's billions of dollars of economic value creation and differentiation in the market.
And recently, we also announced our partnership with Mercedes for a chat interface in the car. How do you delight the end user at different endpoints? What does that consumer experience look like? Probably the second-largest item you would buy, aside from your house, is a luxury car like a Mercedes.
These are a lot of the different use cases we're seeing in the enterprise, and customers are embracing them in a way that is, again, thoughtful, responsible, grounded, but really driving productivity across the broader spectrum: customer service, business process, manufacturing, delighting users, creativity, and many, many other domains. Historically, and especially in this inflationary environment and restricted labor environment, there hasn't been enough of a talent pool or resource pool to tap into to address the scale of how the global economy is evolving. So that's kind of the broad spectrum of what we're seeing.
Pauline: I mean, you touched on so many different use cases and across so many different domains: across creativity and coding and more business ops related. I think one thing that it feels like Fortune 500 companies in particular are thinking a lot about is that value of proprietary data. And you sort of touched on it in your use cases. And certainly when we talk to people it sounds like there is a concern of, what if my data goes to OpenAI or Microsoft? What's your perspective on that and what do you tell customers and how have you seen maybe the mind shift of some of these companies change as it relates to their own data?
Ali: Yeah, I mean, it's the number one question I deal with every single day at the C-suite of every enterprise, bar none. So, fantastic question, Pauline. And it is a very important question; not just important, it's an existential question for enterprises, and we are very, very clear about that.
So let me walk you through how the Azure OpenAI service in particular, but also AI and data handling and protections at Microsoft broadly, work. When you use any of our AI services, but in particular Azure OpenAI given the conversation here, the user in this case is an enterprise. The enterprise is signing into an Azure tenant. That data and that service sit in their own tenant. Microsoft cannot see that data; it is encrypted at rest. It stays within their own tenant. We don't use any data to train any models. It is completely eyes-off on that data.
Enterprises can go all the way up to requesting certain data logging also to be removed. And they would assume responsibilities for global GDPR and other regulatory compliance. But that data sits within their tenant just like an M365 account sits inside Azure.
It is 100% their data. And Microsoft does not really at all get involved in that data. That is foundational, that is like the core promise. It is in the terms of service of the service. It is in all of the documentation and we walk customers through it in a lot of detail on technically how that works.
The second part of it is, it is also important to understand that when customers use their own data inside the service, whether it's inside a prompt, or inside, for example, a search service, or inside any other complementary AI service, the same treatment applies: that data stays within their own tenant.
That data is their data. There's no data that is flowing back to Microsoft. There's no data that is being used by Microsoft for any training of any models. We also provide all of the security and compliance of the underlying cloud platform, which is Azure.
So you have full role-based access control. We have content moderation built into the service that we announced also as a separate API recently at our Build conference that has multiple levels of lockdown that help you control and filter the risks that could come out, whether it's hate content inside the chat, whether it is PII content.
There are multiple levers you can apply. We also give a lot of training and advice on how you can lock down the system message, or the prompt, in a way that confines it to a sandbox of outputs, and therefore you reduce the risk of hallucination. So we give guidance on that. And again, a prompt in general is stateless. So unless that data is logged by the customers themselves, as some of our customers do for their own compliance, that data is not logged anywhere at Microsoft.
We also provide, in addition to all of that, a set of governance mechanisms on responsible AI from a legal perspective. We have an office of responsible AI. We have a responsible AI standard in terms of policies and best practices of how to handle broader regulatory and policy issues.
We have a mechanism of how we gate certain services so that there is no abuse, whether it's biometrics or things of that sort. We publish tooling on GitHub like InterpretML and our responsible AI dashboard inside Azure ML to help customers understand what's happening inside the model, with the data explorer, etc.
We have broader mechanisms across the company, all the way down to our systems engineering, that ensure we are in full compliance. And all of that is governed by a set of very, very strict principles that, to the highest degree possible, take the emotion out of conversations around privacy and safety. Satya always comes out and says privacy is a human right, and we fundamentally support that, along with the safety of these models.
We have some of these models, not just generative AI but broadly our AI services, running in very complex environments. Take, for example, computer vision. We train these models on skin-tone and color fairness. Why? Because if you're applying this in a medical environment, you don't want a misdiagnosis of potential skin cancer because the color of the skin was not well thought through.
So there are a lot of different areas where we apply a lot of thinking on these principles and make sure that we are very transparent with customers to ensure that promise is kept. And the last thing I'll say is that Microsoft runs on trust. To the best extent we can, we are not in a position where we want to compete with our customers; we always want to be complementary. The business models are complementary. We're not monetizing data, we're not using that data, and we're not applying any other approach where that data is in any way compromised. In all cases, that data belongs to the customer.
The last thing is that we just launched a new service called Azure OpenAI on your data. That was launched last month at our Build conference. One of the challenges with LLMs, again to keep it very simple, is that they generate their output from the knowledge inside the model.
But that creates risks of not just hallucination but completely wrong answers coming out of the model. When you apply your own enterprise data inside the prompt, what we call grounding the prompt, you ground the prompt with your data and you're able to get a very specific answer to your question inside the enterprise.
So we now actually do that in a very seamless way. You can go into the Azure OpenAI Studio, click on a dropdown box, and choose your data source. It pulls in, for example, from your Azure Search, from a file, from a Cosmos DB database, from a SQL database; you apply that data, it pulls it into the prompt, and it automatically creates the word embeddings for you.
So now you've also solved more complex issues around the data: the output is grounded with your data plus the power of the generative knowledge of the LLM, with citations done for you automatically. You therefore have referential integrity. And you solve the recency issue, the gap between the model's training cutoff date and so on.
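As an illustration of the grounding pattern described here, below is a minimal sketch of how retrieved enterprise snippets can be injected into a prompt with numbered citations. The instruction wording, the `build_grounded_prompt` helper, and the Contoso policy snippets are all hypothetical; Azure OpenAI on your data assembles this kind of grounded prompt for you behind the scenes.

```python
def build_grounded_prompt(question, snippets):
    # "Grounding": inject retrieved enterprise snippets into the prompt so the
    # model answers from them (and cites them) rather than from its own
    # parametric memory, which reduces hallucination and recency gaps.
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer ONLY from the sources below. Cite sources as [n]. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Hypothetical retrieved snippets from an enterprise knowledge store.
snippets = [
    "Contoso's refund window is 30 days from delivery.",
    "Refunds are issued to the original payment method.",
]
prompt = build_grounded_prompt("What is the refund window?", snippets)
print(prompt)
```

The completed prompt would then be sent to the chat model as usual; because every answerable fact appears in a numbered source, the model can emit citations like "[1]" that map back to specific documents.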
So again, I wanted to give a very comprehensive answer because this is so important. If there's one takeaway from the whole podcast, it's that, fundamentally, we are about protecting your data. We are about making sure that we are operating with the utmost trust, with full security, with full compliance. And that is the baseline of any conversation we have with the enterprise. So I hope that gives you a bit of an overview, Pauline.
Pauline: That's very helpful. And I'm sure, again, it does come up quite often. And so I appreciate the depth at which you walk through that.
The other debate that I'd say we hear a lot about is this open source vs. closed source. And certainly Microsoft has a partnership with OpenAI, which has a closed source model in GPT-4. What's your perspective and what are you seeing from the customer's view of the open vs. closed source model debate?
Ali: It's a great question. If you look at the story of innovation in general, there's room for a lot of different models and approaches in the market, and we support both. Our point of view is that these models are closed because of what we just discussed, the security, the compliance is very critical.
You want to have commercial-grade models in the enterprise where there is a level of SLAs, a level of investment made in making sure these are hardened. A two-trillion-dollar-plus enterprise runs on these models in its Copilot services and offerings.
And we serve the largest public and private customers in the world. It is very important that we are able to live up to that promise and to have a level of oversight and control over it for sure. That said, we also respect that there is a proliferation of innovation happening in the open source movement, and we embrace that, and that is great, and we support that.
And you can bring your own model into our model catalog inside Azure ML. You can host your models on Azure. We support open source. We are also the owner of GitHub, the largest open source code repository in the world. So we are all in on the support of open source.
I think it's just a matter of the positioning of what does the customer really want, if the customer wants a model with a commercial company backing it up and supporting it, but also wants to augment that model with their own open source creation or any other third party creation. We give them the tooling.
So one is a model with all of the enterprise grade features and the others are models that are also innovating at a very rapid pace. They could be complementary. We leave it up to the enterprise as customer choice. We provide the tooling to support both, and the customer needs to make that choice and decide what works best for them.
But we are certainly very clear that the opportunity in the market is for both, and it is going to continue evolving in that direction.
Pauline: Are there specific criteria that you'd say open source model or you see customers going towards open source model because of X or Y or Z?
Ali: I think there are a lot of models right now that are happening. If you look at, for example, vertical models like Bloomberg launched BloombergGPT as an example. That's based on the Bloom LLM, and it's a mix of their own private data and public data as well. But that's like oriented towards a finance vertical.
And our point of view is sure, there is a room for that. But if you look at GPT-4, it's a horizontal model. It is very rich and can address a very large set of use cases. And in some cases, some customers may feel they have a need for a much more verticalized implementation, and we say great. If that's your use case and if you found a way to implement it, through an open source model or another closed model, that's one area we see that.
Of course, there are other areas where there is grassroots innovation happening. And there are costs, of course. The Azure OpenAI Service is, by definition, a premium service and has certain economics around it.
And some startups want to have different economic parameters in terms of their cost structure and so on. And in some cases they may not need all the premium features. That's okay with us as well. If you can serve your needs through that, then that's all goodness. And if you can do it on Azure, that's great, and if you can do it with our tooling, that's even better. So we support that per se.
I'd say that's bucket two, and then bucket three really is that, especially in the enterprises, there could be legacy work that has happened that is all open-source based. A lot of enterprises have built their own ML platform, for example, all on open source; take Uber with Michelangelo, and many others. So there's sometimes that institutional knowledge and know-how they want to capitalize on and build upon.
There are areas we're seeing also where, on a global level, certain countries or certain geographies feel that the language support within GPT-4 is not as good as, for example, US English, and it's of course getting better every day.
And we say, listen, you can run an Azure machine translation on top of it and get even better results. But some of them also want to build their own local language. They want to be language Y or X or so on. And I'm not going to name any particulars but they want to kind of do something like that.
These are all great things and we support them. And we don't see any contradiction to just the innovation cycle. And I think if anything, if we can, collectively as an industry and as a global market, continue to understand how to drive the innovation forward and how to do it in a responsible way, I think at the end, it's all good for our customers collectively and the market is big enough for and can be served by a lot of different players. So that's totally fine with us.
Pauline: Certainly Microsoft is well positioned to serve a wide spectrum of use cases and types of ways the customers want to deploy, which is great.
What are you seeing are the biggest bottlenecks or areas that customers fall down on whether it's talent or cleanliness or ability to work on the data? What's stopping some of these customers from deploying as successfully as they would like?
Ali: Yeah, a couple of things. The first is that in the enterprise, a lot of customers get the mandate from the top. There's the excitement phase, and then they realize, well, this is not point-and-click; there's some real work here. There's integration work that has to happen. You've got to connect the back end, you've got to connect to the line-of-business applications. There's real work, and sometimes between the CEO mandate and the reality, especially in very large, complex, established enterprises, it takes time for the work to be done, and there's sometimes that challenge of, well, my CEO wants something next week.
This is not the consumer version of chat. It doesn't work that way. So I think enterprises are beginning to understand that the promise is definitely there. These models work and they work very well for the use cases we discussed earlier, but this is still a complex enterprise implementation.
You still have to go through what we just discussed on security, the integration, the data sources. There's backend work that has to happen, front end work that has to happen, middle tier work that has to happen. All of these things need a little bit of time. So what we're seeing is the adoption and the demand are very high.
And I would say there is more to come, because now enterprises have been doing the work and are beginning to launch the services. Like we just launched with AT&T. That took a few months, but we got it done, with great leadership there as well. Or Mercedes, or many of our other clients.
That's one aspect, I think. The second, honestly, depending on the enterprise and the organization, there's also a cultural dimension. The change management process is something that customers need to work through, and even with the most sophisticated enterprises, what we're seeing is they do need to have that process in place.
They need to have their own set of principles and guidance of how they want to use it, how they want to work through the policy issues, how they want to work through the business process issues, how they want to work through their own products and services and so on.
So there's a cultural, there's a change management, there's a journey. And Microsoft itself went through a journey. Like it's not like we overnight had the service available. I mean, we've been working on this for many years. The AI journey has been going on for three and a half decades.
So there's always that change management bucket. I would say the companies that will thrive and survive are the ones doing buckets one and two really well, in a very thoughtful way.
And then I would say the third really is that it has to start from the top. It needs leadership sponsorship. Where I see things go really well is a CEO mandate or an SLT mandate, like a CEO direct report, especially for large enterprises. I'm not talking about small pilot projects; we're talking about complex implementations. There is leadership at the top that understands the strategic value and the nature of the transformation needed here and why it's imperative to move fast responsibly.
Again, I emphasize that phrase, move fast responsibly, do it in the right way. Build something durable, scalable, compliant, and secure. When you have leadership sponsorship at the top, of course, you are able to help overcome organizational inertia, because big organizations have their own cultural challenges and dynamics and so on. And you need to align key leaders in that change management process and make sure that they see the end game. They see the end result, they see where this vision is heading, and they all have a role to play in that.
It has to be an inclusive process, especially when it's cutting across multiple departments at an enterprise, and they have to rally towards that. When you bring these three things together, we see just a tremendous success rate and tremendous gains, and we've seen that with a lot of the customers I mentioned.
Certainly in Microsoft, I mean, it starts at Satya. Satya is just an incredible leader who has really mobilized the company in the right direction, but done it also in a way where he's brought all of Microsoft together towards that common goal.
And I think we are kind of like one of those case studies that you look at and say we've transformed from the PC era. We didn't do so well in mobile, but we did great with cloud, and now we're building on that and going into AI. And there's that kind of transformative leadership at the top that we are just grateful for, that deep, deep leadership.
So those are the things we see in general. Of course, I'm not getting into the complex technical detail. A lot of our customers are way smarter than us, and we learn from them every day. Like, I go into meetings and we start discussing certain integration issues and it's like, wow, that's a really smart way of implementing it in your enterprise. We don't have all the answers, but we know our products and services well enough that we complement the customer conversation.
We have great partners, great customers. They're all just amazing in their own way. And every day is a great opportunity to learn from them, get challenged and bring our best game in that conversation and help them in that journey, help them transform, help them bring value to their employees, their customers, their stakeholders. And that learning feeds back into better products and better also responsible frameworks of how we navigate this role.
Pauline: Certainly a full circle, and Microsoft and Satya have done an incredible job, to be sure. So that's wonderful.
Last question before we head into rapid fire. I think there's a lot of startup CEOs who are trying to figure out how to navigate the Big Tech competitive landscape, and certainly Microsoft is number one or number two on that list. What are pieces of advice that you would give to startups as they're trying to build companies and build value for their customers when you have giants like Microsoft who can offer the infrastructure, the foundation model and apps?
Ali: Great question. I work with a lot of startups and I advise a lot of them. I'd say, you have to have a clear sense of the problem you're going after and your sense of purpose around that problem and your conviction level.
And you need to have an understanding of a durable business model that can attach to the product that is solving that particular problem. And I mean problem in a very positive way, because a problem is basically an opportunity that somebody is going after. Like you're trying to solve a pain point in the market that somebody has not solved, or somebody has solved but not in the best possible way.
And I don't want to name any particular startups but startups who either solve it and fill in that niche that nobody has solved or solve it in a better way than somebody else and have the ability to sustain that with a good, solid product and a sense of purpose and a durable business value, those startups will definitely thrive and survive and they will grow. And there's a big market for that.
So I'd say that as a principle. I would also say that it is important to understand what that startup's DNA is. And only the founder can say that; nobody else can comment on it. If the DNA is an enterprise DNA, don't try to go build a consumer service. If the DNA is a consumer DNA, don't try to go build an enterprise service, and vice versa. So it's understanding what that DNA is, what that core skill set is that you bring to the table that is going to address those earlier points, the problem statement, the business model and so on, because they go hand in hand.
The third really honestly is focus. This is such a broad market. Even at the scale of Microsoft, we can't play in every part of the market nor do we want to play in every part of the market. Because we, even at our size, need to be clear where we are focusing to create that value for our customers and our partners and continue to innovate. Because even at the scale of Microsoft, resources are limited. So focusing your energy on the problem and how you're solving it is very, very important.
And then my last point, it may sound cliche, but resilience is important. I've been at Microsoft for 25 years, and it's great to be where we are today, but we've had our fair share of challenges and continue to have them in other areas.
Having the conviction that you're working on a problem that is truly going to solve a pain in the market, being able to adjust and pivot accordingly, being resilient, and applying a growth mindset every day is important, especially if you're very early stage versus an established company pivoting.
Like, if you're very early stage, those are the days where survival is going to be complex in terms of how you navigate the shifting markets. But what I've seen from a lot of the startups that "made it", I would say crossed the chasm, to use Geoffrey Moore's vernacular from the technology adoption lifecycle.
There are companies who solved that problem, whose niche became mainstream, crossed the chasm, and achieved a certain level of scale. They were able to continue to innovate. They have a durable business model, they have a core DNA that is complementary to that, and they've been able to persevere through the very complex ups and downs of the market landscape.
So that is my general advice, and again, it's general because every startup is different. They're way smarter than me and they know what they're doing, but what I've observed at the macro level are some of these things, for sure.
Pauline: Focus, clarity, resilience, all great things right now in this market.
With that, let's move on to the rapid fire round. First question: what's your definition of AGI, and when do you think we'll get it?
Ali: I think today's LLMs have a generalized level of intelligence, AI models that can reason over very large sums of data across multidisciplinary domains and provide a level of output that is very close to human-level output. And we are seeing that with the examples we gave earlier.
We are in the early phase of generalized intelligence. It's anybody's guess when we truly achieve AGI. Somebody may call it the singularity, somebody may call it something else. I will quote Sam Altman in saying these are tools. They're not creatures. They really are tools, and if you understand the tech, they are statistical, probabilistic models that just work really well.
And I'm of the conviction that the human will always remain at the center. And my personal opinion, it's really that: these are tools that complement the human, and they're getting really, really good. And they will continue getting better.
But I don't have any timeframe in mind. I think we're in the very early stages of breakthroughs in the generalized intelligence domain, and it will continue to improve.
Pauline: Fair enough. Second question: how much do you worry about the existential risk that powerful AI poses to human civilization?
Ali: Personally, I'd say there's clearly a dialogue happening in the industry, and there's clearly a discussion, and we want to acknowledge that discussion. We want to acknowledge that there's a certain set of questions being posed, good questions, good dialogue, and it's a healthy dialogue.
But I go back to what I said earlier. I mean, these are tools and we have control over these tools, in how we implement them. And we welcome the dialogue around how we make sure that these tools are managed responsibly and managed in a way that improves broader society and human life going forward.
So, again, I'm an optimist. I think, as with every innovation, there's always an area that we need to think through. And I won't belabor the point that every invention has had its own adjustment. There's going to be an adjustment that we're going to have to work through.
Pauline: Makes sense. Who are one or two of the biggest influences on your mental framework on AI?
Ali: Great question. I would say a leader I worked for in Microsoft, Qi Lu. Back then when I worked for him, Qi was one of the top AI brains and really helped shape a lot of what I know today and my thinking. And I started the journey with him. And he was an incredible leader in Microsoft and continues to be in his own right today and really contributed a lot to great things, at Microsoft, whether it's Bing or Office or the AI journey itself. So definitely he's way up there.
I think more broadly, I do follow all of the key luminaries out there. Certainly, of course, OpenAI and Sam Altman today are super influential. Our own internal leadership, whether it's Kevin Scott at Microsoft, Satya, our own chain of command, Scott Guthrie. They're all great leaders in their own right.
I do read a lot of the literature from, of course, the different points of view, whether it's Yoshua Bengio, Geoffrey Hinton, or Yann LeCun, or many of those key leaders in the industry. I believe in the diversity of knowledge. And I believe that making sure you read these diverse points of view is very important for me as a human being to shape my thoughts. So I listen to a lot of podcasts, I watch a lot of videos, I read a lot of books, and I read, of course, our internal literature.
But there aren't enough hours in the day, so maybe I'll use generative AI to summarize that for me at some point, so I can create a bit more time in my busy schedule. But certainly those are great leaders, and I've been lucky to have been shaped by some of them, for sure.
Pauline: A biased answer, but I'll allow it. Last rapid fire: what is one thing that you strongly believe about the world of AI today that you think most people would disagree with you on?
Ali: As I said, I'm an optimist. I believe really in the power of technology to improve human life. I genuinely deeply believe in that, at every level, not just AI, whether it is adopting the latest gadget or implementing technology and creating value in the enterprise or the consumer or healthcare.
It'll probably be a biased answer as well, given I'm in technology, but I look at how much our lives have improved as a result of technology. I'll give you a near and dear story: one of my best friends, his son started driving, and at age 16 and a half had the right, of course, to drive by himself.
At 17, unfortunately, he had a very bad accident, and it was all the technology in the car that saved his life. Literally, the airbags deployed, and the car itself immediately sent a signal to first responders. They dispatched the first responders, who came in, found the car by its location, and rescued the teenager.
And if that had been 20 years ago, it would've been tragic. So, I mean, again, there's AI in that, because there's computer vision in that. There's a lot of different components to that. But deeply, I would say some people may disagree about the benefits of technology.
I'm not saying everything in technology is beneficial to broader society, but in aggregate, I would say I'm an optimist. I believe in the power of technology to improve human life. And I think AI is one of those technologies that, done right, done responsibly, is going to bring us a lot of value going forward.
Pauline: Well, we love the optimism. We certainly need it now more than ever, potentially. Ali, thank you so much for spending the time to go through, in such great detail, all these different parts of the AI stack and landscape and use cases. I really appreciate you coming on today.
Ali: Oh, thank you, Pauline. Thank you for having me and really appreciate you doing these podcasts and keeping the dialogue going.
I think it's very important that a partner like you in the VC community, working on the cutting edge with the most important startups in the industry, continues to have that dialogue. So I truly, genuinely appreciate the partnership.
Pauline: Thank you, Ali.