This is Cross Validated, a podcast where we speak to practitioners and builders who are making AI deployments a reality.
Today’s guest on Cross Validated is Manju Rajashekhar, who is the VP of Engineering at Etsy. Manju has spent over seven years at Etsy, was an early employee at Twitter, and has spent time at VMware and Microsoft. Etsy is an e-commerce platform connecting millions of creative buyers and sellers around the world.
Listen and subscribe on Spotify and Apple.
Follow me on Twitter (@paulinebhyang)!
Transcription of our conversation:
Pauline: Welcome to Cross Validated, a podcast with practitioners and builders who are making AI in the enterprise a reality. I'm your host, Pauline Yang, and I'm a partner at Altimeter Capital, a lifecycle technology investment firm based in Silicon Valley. Today, our guest is Manju Rajashekhar. He is the VP of Engineering at Etsy. Previously, he co-founded Blackbird AI, an ML company focused on search, which Etsy acquired in 2016. He was also early at Twitter and has spent time at VMware and Microsoft. Thanks so much for joining the podcast, Manju.
Manju: Thank you, Pauline. And I'm really excited to be here.
Pauline: Let's kick off with a quick background on Etsy. I'm sure a lot of viewers and listeners know what Etsy is, but certainly always love to hear it from the horse's mouth a little bit about the company's mission and then your role.
Manju: Yeah, Etsy is a two-sided marketplace of handmade and vintage goods. We have more than 100 million items, and our mission has always been about keeping commerce human. We want to keep the human aspect of commerce, and that mission is really important to us.
Over my seven-year tenure at Etsy, I've led multiple different organizations. I've led search, ads, and recommendations. I've built the knowledge base and the core buyer experience. And this year, I'm leading two enabling groups: machine learning enablement and personalization engine. The way to think about enabling groups is that, unlike product groups like search, ads, and recommendations that are responsible for top-line GMS and top-line revenue, enabling groups are responsible for creating enabling or platform solutions over which products can be built.
So enabling groups, in a sense, have a second-order impact on our top line. The reason we built machine learning enablement and personalization this year is that over the years we have made tremendous impact in those three product areas, that is, search, ads, and recs, but we really wanted to take the learnings we had from them, especially the machine learning learnings, and expand them to the rest of the product landscape. So the goal of these two enabling groups was democratizing machine learning and accelerating machine learning across our product landscape.
Pauline: There's so much to dive into there. I'd love to double-click into search, ads, and recs, particularly for a two-sided marketplace. I imagine that's really important, and so what is the role that AI and ML play in that particular use case, for example?
Manju: Yeah, so Etsy has had a long history with machine learning and the primary problem that any buyer ends up encountering when they come to Etsy is how do you find what you're looking for? And sometimes you don't even have the right language to describe what you're looking for.
And even if you do, have the sellers described the items in a way that maps to the buyers' language? A good example is: I might go to Etsy and search for retro lamps, but what I really wanted was also for vintage lamps to show up. How do you get semantic matching? We call that problem the semantic gap: the gap that exists between the buyers' language and the sellers' descriptions of the inventory. A lot of the core construct there is matching those two things, matching a buyer and a seller, so that a buyer is mapped to the right items from the seller, whether through their implicit intent or their explicit intent.
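To make the semantic gap concrete, here is a toy sketch (not Etsy's actual system) of embedding-based matching: queries and listings are mapped into a shared vector space, and a listing matches when its vector is close to the query's, even with zero keyword overlap. The three-number embeddings are hand-made stand-ins for a learned model.

```python
import math

# Hand-made stand-in embeddings; a real system would learn these.
EMBEDDINGS = {
    "retro lamp":        [0.90, 0.80, 0.10],
    "vintage lamp":      [0.85, 0.82, 0.15],
    "teardrop earrings": [0.05, 0.10, 0.95],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_match(query, listings, threshold=0.9):
    """Return listings whose embedding sits near the query's embedding."""
    q = EMBEDDINGS[query]
    return [l for l in listings if cosine(q, EMBEDDINGS[l]) >= threshold]

# "vintage lamp" matches "retro lamp" despite sharing no keyword.
matches = semantic_match("retro lamp", ["vintage lamp", "teardrop earrings"])
```

In a keyword system the same query would return nothing here; closing the gap is exactly moving the match from words to vectors.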
Pauline: That's really interesting. So we talked a little bit about search. How does that connect to the ads and recommendations component?
Manju: Yeah. The big piece to all of that is actually ranking, and the problem of ranking is: let's say you go to Etsy today and search for teardrop earrings, and you'll find 45,000 results. No buyer will be able to go through all of those 45,000 results.
And so ranking is: of those 45,000 results, which ones should you show on the first page of the search results? And on that first page, which ones should you show that are most relevant to the buyer's information need, the buyer's need for teardrop earrings? So the problem really maps to: how do you sift through this entire inventory of relevant results and show the most relevant to the buyer?
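As a sketch, that sift-and-show step amounts to scoring every candidate with a relevance model and keeping only the top of the list; the scores and page size below are invented for illustration.

```python
def rank_first_page(candidates, score_fn, page_size=4):
    """Sort candidates by model score, highest first, and keep one page."""
    return sorted(candidates, key=score_fn, reverse=True)[:page_size]

# Hypothetical candidates with precomputed relevance scores standing in
# for the output of a ranking model.
candidates = [
    {"listing": f"teardrop earrings #{i}", "score": s}
    for i, s in enumerate([0.12, 0.93, 0.40, 0.88, 0.71, 0.05])
]
first_page = rank_first_page(candidates, score_fn=lambda c: c["score"])
```

At 45,000 results, everything interesting happens inside `score_fn`: that is where the models discussed below live.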
Pauline: And how much of that includes personalization? Because certainly personalized content is a big trend that I think is now finally being enabled with some of these new gen AI use cases.
And we'll talk about gen AI in a little bit but how does my history as a buyer on Etsy impact the search or the ads or the recs that I get?
Manju: Yeah. Let's go back to the same example of teardrop earrings. Perhaps you have expressed a preference for a really minimal style, and someone else might have expressed a preference for a color, gold vs. silver.
So where personalization comes in is learning this implicit intent, without buyers explicitly telling you what that intent is, from what they have done before on Etsy. Personalization really allows results for one set of users to look very different from the results for other sets of users.
So that's one. But personalization could also use where the search results are being shown to show different results. Like a jacket: if you type in the query jacket, the results in the U.S. should look very different from the result set in Australia, because they are in two different seasons, right?
So one might want more summer jackets, but the other might want more winter jackets. So the spectrum of personalization is huge. It goes all the way from coarse personalization to fine-grained personalization. And really the problem there is which one you should choose and what kind of personalization you want to do at which points in the buying journey.
Pauline: Yeah. And you've been at Etsy for a little bit now, I think seven years or so, which is a long time, particularly in AI/ML. Even if you just stick to the search, ads, and recommendations use case: if you think back over the last three years, what has been the biggest unlock that you've seen to get better results on the KPI side that you're trying to tackle?
Manju: Yeah, maybe a good answer to that is to look at the framing of what's the value of machine learning work, and that has shifted over the years. But the way we have been looking at the value of machine learning work is machine learning work could either help at the top line or could help at the bottom line.
So, when machine learning work helps with the top line, what it really enables is that there are more users coming to your marketplace, buyers find what they're looking for, there are more repeat purchases, and overall, there's more revenue flowing through the system. That's what happens when you grow the top line.
And when machine learning helps with the bottom line, it really comes down to improvements in the margins, so you would see automation and operational efficiency happening. The places where you needed more resources now need fewer resources because of machine learning.
So that's the value of machine learning work. And that immediately leads to the following question: how do you create this value? The more interesting problem is how you create repeatable value. There's value creation, and then there's repeatable value creation.
And the value creation question can be answered in two ways. One is we make machine learning better, and the other is we make machine learning faster. I'll talk about these two things because both aspects have changed over the last three years. Making machine learning better implies you can make your models better.
And if you look at the history of machine learning, the way we applied it at Etsy is that we started out applying linear models to the ranking problem, then we went to nonlinear models, then we brought in tree models, which were state of the art when we were applying them in 2016-2017, and then we brought in deep learning.
And so now you have deep learning models being applied. But the moment you have deep learning models, it actually changes the landscape, because now you can ask: should we optimize for a single objective? Should we optimize for multiple objectives? Should we have multitask models? The way that would look is that you could have ranking models that are just focused on improving purchases, or the purchase rate, but the moment you bring in multiple objectives you can say, let's look at not only improving purchase rate but also the long-term lifetime value of a buyer, and let's optimize both.
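A minimal sketch of that multi-objective setup, assuming a shared feature vector feeding two hypothetical linear "heads", one predicting purchase rate and one predicting lifetime value, blended into a single ranking score. All weights here are invented, not Etsy's model.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Hypothetical heads over shared listing features,
# e.g. [relevance, price_fit, seller_quality].
PURCHASE_HEAD = [0.7, 0.2, 0.1]  # short-term purchase-rate objective
LTV_HEAD      = [0.2, 0.3, 0.5]  # long-term lifetime-value objective

def multi_objective_score(features, alpha=0.6):
    """Blend both objectives; alpha trades purchases vs. lifetime value."""
    purchase = dot(PURCHASE_HEAD, features)
    ltv = dot(LTV_HEAD, features)
    return alpha * purchase + (1 - alpha) * ltv

score = multi_objective_score([0.9, 0.5, 0.8])
```

In a real multitask deep model the heads share a learned representation rather than raw features, but the blending step is the same idea.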
So that has been a big shift in how the machine learning work has evolved, and it has happened incrementally. And every stage in that incremental path has added value, moving the value creation lever more and more.
There's actually the other aspect of making models better, which is to make the data better. With deep learning that has gotten easier because the more data you give to deep learning models, they typically get better. And I say typically because it's not always the case.
And when you have these models working at scale, the more data you have, the better model performance becomes, and with more data you can also build models with more parameters, more complex models, and the evolution keeps improving. So that's the making-the-models-better aspect of how you can create value with machine learning.
Pauline: I love this framework, and as an investor, I'm all about frameworks, so I love that you've now put a framework around this. If you think about the last three years and had to put a percentage on it, how much of the ML value has come from the top line vs. the bottom line? I ask because I think there's now this big debate about how you make the unit economics of AI/ML work in, let's say, Google search queries, right? So I'm curious: if you break down 100 percent, is it 50/50? Has it leaned more towards the revenue side or the cost-saving side? What's been your experience?
Manju: You have to actually do both; you can't focus on one or the other. And here's one way to think about it, especially with large language models, though this was even true with deep learning models: when you apply them to surfaces that require results to be served really quickly, which is true with ads, true with search, and also true with recommendations, you want to make sure that these models can operate with really low latency.
The moment you cross a given latency threshold, you will see a drop-off in terms of buyers just bouncing from your experience. But you only get below that latency threshold if you're really diligent about how you're building these models, what kind of features you're using for them, and what latency profiles are going to work for you.
And that directly maps to cost, because you have to be really diligent about how expensive it is to run these models. So you can do these trade-offs: if the cost to run a model is more expensive than its marginal value, then it's a net negative. But if you can build a framework around the cost to run these models, [where] the marginal benefit is not there but there's a long-term benefit of running them, then they end up making sense. So there are all of these tradeoffs today around when to run these models and when not to, where we can apply them and where we shouldn't.
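That tradeoff fits in a back-of-the-envelope check: serve the model only when the value it adds per request, counting any long-term benefit, exceeds its serving cost. The numbers below are invented for illustration.

```python
def is_model_worth_serving(lift_per_request, cost_per_request,
                           long_term_value_per_request=0.0):
    """Net value per request must be positive, counting long-term benefit."""
    return lift_per_request + long_term_value_per_request > cost_per_request

# A model adding $0.002 of marginal value per request but costing $0.003
# to serve is net negative on its own...
alone = is_model_worth_serving(0.002, 0.003)
# ...but can make sense once its long-term benefit is counted.
with_long_term = is_model_worth_serving(0.002, 0.003,
                                        long_term_value_per_request=0.002)
```

The hard part in practice is estimating the long-term term, which is exactly why lifetime value shows up as a modeling objective above.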
Pauline: Yeah. No, I really appreciate that. You came into Etsy through an acquisition, right? And I think one of the big debates with enterprise buyers is this build vs. buy. How do you think about it? And what's been your experience moving from the acquiree to now the acquirer of these technologies?
Manju: That's actually a great question. Right now, I'm running a group called machine learning enablement. We started out this year. It's a newly created group and the mission of the group is to accelerate machine learning at Etsy. And it's also to democratize machine learning.
And as we were putting together our strategy, one of the questions was: what are our strategic pillars? What should our prioritization framework be? And one of the core principles that we all aligned on was to move up the stack. That's the principle. What we mean by that is: let's play in spaces that are our core competency, and let's not play in spaces where things are going to get commoditized.
And we can see what's slowly getting commoditized: even before the arrival of gen AI, all of the cloud providers were providing a machine learning platform. You have Amazon SageMaker, you have Vertex AI, you have a similar thing from Databricks, and you have a similar thing from Azure.
So each of the cloud providers has a platform, and they all give you a way to do distributed training, they all give you a way to do inference, they all have some sort of a feature store, and they all have some sort of evaluation mechanism.
And trying to recreate that is not our core competency. As a two-sided marketplace, our core competency is the products that we're building and the data from those products that we can use to train our machine learning models. And so that's where we end up playing. So in terms of build vs. buy, the first question we ask is: is this going to be our core competency? The moment it is not, it's better for us to buy something there rather than building it ourselves. So that's one.
The other is what is the opportunity cost in terms of time and resources? Even if you decide to build it, there's an opportunity cost in terms of time because we have to hire, we have to spend the time building it. And it's not really clear whether the final product that we end up building will create value for us. So that's the second piece that we end up looking at.
Pauline: It's interesting because we follow the cloud players really closely, and we've followed various waves of AI. Etsy has had a long history of deploying models, whether it's decision tree models or deep learning models and now with more of these Gen AI models, it certainly does feel like delivery through an API has become more rampant than ever. And so what does this mean more broadly for the ecosystem?
Manju: Yeah, the big shift that we see happening with gen AI is that it's lowering the barrier to entry to experiment with machine learning. Before the arrival of gen AI, we could only experiment with machine learning and deploy it at scale in areas where we knew the financial impact, the ROI, was pretty huge. That's why I talked about areas like search, ads, and recommendations, because those are really the value creation machines of a two-sided marketplace.
But with the arrival of gen AI, what's happening is that non-experts can experiment with machine learning and build prototypes, because all they need is prompt engineering or an API. So that's definitely true with gen AI: building a prototype has changed the landscape, and there's this Cambrian explosion of different ideas showing up. What, however, is not true yet, though I'm betting we'll get there in some timeframe, is productionizing.
The process of productionizing is not really straightforward. It's not straightforward from a cost perspective. It's not straightforward because many of these models are really stochastic in nature. So it breaks the paradigm for someone who's not coming from machine learning to just use it as an API and find that the same query gives different results in two different contexts, right?
And so that hasn't fully formed yet, because the whole gen AI ecosystem and landscape is still maturing pretty rapidly. We don't have a canonical or paved road for a gen AI stack, but we do have one for machine learning, the classical machine learning platform.
I think there's a paved road for it. There are standard building blocks that you can piece together to build a machine learning model and deploy it. But that's not really true with gen AI yet. So the experimentation, the prototyping, has gotten really great. But the other parts, productionizing, experimenting, and coming back and iterating on it, are still hard today.
Pauline: People have been talking about well, what happens when these models move into production? And so can you just dive a little bit deeper as to what the challenges are and what gives you confidence that we will get there on a standardized production path?
Manju: I think some confidence just comes from history, and here's why. What I'm unsure about is how long it's going to take for that to happen. One of the big questions we all have with large language models is: why is it so expensive to run them? But if you look back in history, today you can buy a phone and it barely costs you $200, or even for $50 you can find a smartphone. That wasn't true 15 years ago, when, if you wanted to buy a laptop or a computer, it might cost you $10,000, right?
So the cost curve is going to come down as a given item becomes a commodity. So if you believe that more and more use cases are going to leverage gen AI, the cost curve is going to come down.
And when that happens, I think it's going to become really economical not just to prototype with gen AI, but also to productionize with gen AI. The timeframe is still unknown, how long it's going to take to get there. But that's the optimist in me, which says that's how it usually works: things that become commodities end up competing on price so that they're accessible to all of us in the long run.
Pauline: Which brings me to my favorite trillion-dollar debate question: open source vs. closed source models. What is your sense of which will win? Especially given the context that these large language models are really expensive, and costs need to come down for society to deploy them as ubiquitously as we all think it should.
Manju: Yeah, I'll take a leaf again from history. With open source vs. closed source, it's as if we were at the time when Linux was just being built and Microsoft was the dominant operating system. Many of us were skeptical about Linux even succeeding in this space. In fact, there were other open source operating systems besides Linux, like FreeBSD, and SunOS had its own version of licensing.
And Linux came into the space, and the change that happened over a long period of time was that more and more developers wanted the open source model rather than the closed source model. Because there's one thing that's true about developers: this unending desire to know how a particular system is built. That's the heart of every engineer. We want to know: what are the different ingredients that make this thing work? With a closed source model, we can poke at it from different angles to figure out why the model is behaving a particular way and how it might have been built, but we would still want to build something in open source to validate that.
So while the closed source model might get the early mover advantage, the open source model might start behind but eventually catch up. It's like the trend we see with Microsoft today, where they are more open source friendly than they were fifteen years ago. And the same thing is going to happen with these large language models too.
Pauline: Pretty strong open source advocacy from your end, which I love. How do you think about the use cases where people should be thinking more about using the closed source models, the OpenAI and Anthropic models? In which scenarios would you just say, hey, you should go use those?
Manju: I believe that finding product-market fit trumps everything else. So if you take that mindset, a lot of us in the gen AI space right now are looking at where it makes sense and where we have the attention of consumers if we end up adopting it.
And where is that moat, right? But once we have product-market fit, whether it's open source or closed source is a non-issue. So it's really about figuring out the product-market fit first.
Pauline: Got it. Makes sense.
Manju: There are a lot of gen AI applications out there, but none of them has really gotten the attention of users yet. We all play around with them, but over time, people drop off. The novelty effect dies down. So there's still a place where product-market fit with gen AI hasn't been figured out. But we will get there.
Pauline: Would you say that even with GitHub Copilot being as well adopted as we read it is, would you still argue that it doesn't have that consumer or user love just yet, or the longevity?
Manju: It has, but the way I see GitHub Copilot, it's all about efficiency, right? It's making developers more productive. If you go back to the earlier model I was talking about, does it help with the top line or does it help with the bottom line, GitHub Copilot is really about doing the same work more efficiently and getting there a lot sooner. But we don't yet have a gen AI application whose core value is part of a value creation cycle.
Pauline: Got it. Got it. Okay. Yeah, that makes sense. And yeah, the way that you're bifurcating it makes sense.
So, bringing it back to Etsy and not leaving gen AI yet, how do you think about the most exciting potential use cases of gen AI whether it's been deployed in production or not within Etsy and e-commerce companies more broadly?
Manju: I feel like all of us are still in the cycle of trying to hone product-market fit with gen AI. For Etsy, one of the things we've been looking at is search especially. When we look at search, Google has trained us on keyword search for decades. For the first time now, we have natural language that is accessible to all of us, natural language that a computer can understand and interpret in a clear way.
So that leads to the question: should there now be a different paradigm in terms of how a buyer would go about finding what they want to find, or how buyers would go about discovery? A good prompt would be, let's say I want to decorate my living room, and I say: help me decorate my living room in a mid-century modern style.
Now that query is incredibly hard to ask in a keyword-based search interface. But that query is now possible, and that kind of discovery becomes possible if you change the interface for search. So at least at Etsy, we're exploring that particular direction to see what makes sense for buyers and what wouldn't.
Pauline: What about from the efficiency perspective, like what are you most excited about in terms of making your org more productive?
Manju: GitHub Copilot has definitely been really helpful for developers. Part of the reason is that it's already part of the normal developer flow. Developers are very comfortable with autocomplete. They're very comfortable with all of the tooling an editor provides them, and GitHub Copilot integrates into that really well. So a recommendation that comes out of GitHub Copilot, developers just view as autocomplete. They either use it or they don't.
But what we see internally is that developers have been accepting some of the suggestions that come out, they accept it and they modify it a little bit and move to the next stage. So it's helping on the efficiency side for developers.
Pauline: Yeah. As you think about the tech stack, has something been unlocked because of a technology that you've built or bought? Is there something that comes to mind where you would say, oh, this has really created a lot of unlock for Etsy internally in my tech stack?
Manju: It's not related to gen AI, but we've seen that deep learning has unlocked a lot of things for us. Really, what deep learning has given us is that it's made the process of getting to production a lot easier. It's allowed us to focus on multi-objective and multitask models, which is really important for a two-sided marketplace, because with any two-sided market there are always network effects in play. So there are these second-order impacts that we can now start optimizing for, by having models that are really good at not just one objective, but multiple objectives.
Pauline: Yeah. And as you think about the biggest opportunities or gaps that you see in building or deploying AI, ones that you think Etsy or another company should address, what would those be?
Manju: The biggest opportunity, or the biggest unknown that we still haven't solved, is model explainability and interpretability. Part of the reason is that it's a hard problem. We have all been focused on making models really performant, but we always end up in this place of: why is it more performant?
How does it really work? And what are the ingredients that make it work? There are some unintended consequences that we are not aware of, and that area is still catching up. It is still a valuable problem that I think we need to solve from a society standpoint where we need to know why some of the models work the way they do.
We need to be able to explain them in a way that all of us can understand. What we have today is that we can make sense of the models based on how they behave, but we still don't know why some of the ingredients make it work the way they work today.
Pauline: Yeah. And you mentioned that you're also leading up an ML enablement team. What are some of the findings that you've had to help empower everyone else in the org to use ML?
Manju: I think the big piece is that at Etsy, we are a very learning-driven culture. A good analogy is that if you look at the non-machine learning experiences we build today, we get from idea to production in just a week or so, but that's not true with machine learning.
With machine learning, when we decided to build the machine learning enablement group, we looked at what it takes, once you have an idea, to get it all the way to production, and that turnaround time was pretty huge. So a big question we asked is: can we shorten that turnaround?
Can we make machine learning work look very similar to pure product experiences? That led to the thesis that we need to accelerate machine learning all the way from concept to production, but do it in a matter of weeks, which means there would be more hypotheses out there that we can all experiment with to find the right direction in terms of product-market fit. So that was one.
The other is that some machine learning tasks are very similar; they're just in different domains. I was talking about search and recommendations, but search and recommendations at their core are very similar problems. In the end, it's a ranking problem. In search, you're ranking based on the query that was asked. In recommendations, you're ranking based on user preferences, right? So they're kind of the same problem. And so we try to standardize as many of those pieces as possible into what we call paved roads or paved paths, and that standardization allows us to focus on simplifying access to those building blocks.
And if you simplify access to those building blocks, it opens up that ecosystem to non-experts, because then they can just focus on the building blocks and interact with them.
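A rough sketch of a paved road in this spirit: search and recommendations call one shared ranking primitive and differ only in where the scoring context comes from (a query vs. preference history). The names and toy scoring are illustrative, not Etsy's actual stack.

```python
def rank(items, context_score):
    """Generic ranking building block shared across product surfaces."""
    return sorted(items, key=context_score, reverse=True)

def search_rank(items, query):
    # Context is the query: naive term overlap stands in for a real model.
    terms = set(query.split())
    return rank(items, lambda item: len(terms & set(item.split())))

def recs_rank(items, user_prefs):
    # Context is the user's preference history: count preference hits.
    return rank(items, lambda item: sum(pref in item for pref in user_prefs))

items = ["gold teardrop earrings", "silver hoop earrings", "vintage lamp"]
by_search = search_rank(items, "teardrop earrings")
by_recs = recs_rank(items, user_prefs=["silver", "hoop"])
```

Non-experts then only supply the context function; the ranking machinery, serving, and evaluation come from the paved road.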
Pauline: Right. Yeah, I love that. And you mentioned earlier that you thought data is a main differentiator for Etsy, right? That is one of the strategic pillars. What does that mean as you think about production going forward? For example, we've now heard more buyers saying that because they know how valuable their data is, it's going to make them think twice before bringing on a vendor who's going to use all of it, because they want to keep it to themselves.
Does that resonate with you or not? What does that mean for your role, now that data is being accepted as a lot more strategic than it ever has been?
Manju: I would say that for us, Etsy's inventory is very unique, which also implies that what we learn about our data is unique. A really good example is that sellers make items that are just one of a kind; there's only one of them available. You sell it on the marketplace, and that's it.
So common techniques that are applied in other marketplaces are not transferable to Etsy because you don't have any history of the item and that makes it really, really unique in one aspect.
The other aspect is that there is no standard taxonomy of items. People make personalized items that don't fall under a given taxonomy. There is no specific categorization of items. When you look at the cohorts of items on Etsy, a hundred million items of inventory, you'll find niches that are not found anywhere else, just on Etsy.
So the uniqueness of the inventory, the fact that there's the human aspect to it, makes it really unique, but it also makes all of the data associated with it very unique, because no other marketplace has that kind of breadth and depth of inventory that Etsy has.
Pauline: Right. Right. That makes sense. And so what does that mean for how you work with external companies?
Manju: Yeah, it goes back to how you create value with machine learning: you can focus on improving your models, or you can focus on the data.
So they both become really important in that puzzle. Without both of them, we can't create repeatable value with machine learning, which also means that the data and the model aspects are really critical to us; they become our competitive edge.
Pauline: Certainly. We've heard that from a lot more enterprise buyers, certainly people operating at scale across 100 million items, as you mentioned. And so that makes a ton of sense. And with that, let's go to rapid fire.
First rapid-fire question: what's your definition of AGI? And when do you think we'll get there?
Manju: I think I mentioned this: I'm optimistic by nature, and I'm also excited about the future of what AI is bringing to this space. What I don't know is when we are going to get there, but we will get there in some timeframe, either in my lifetime or the next. My definition of AGI is just in terms of the outcome: it frees us from all of the mundane things and allows us to focus our time on areas that really matter.
Pauline: I like that. That's the first time I've heard that. And I like that definition.
Manju: Do you really want to drive a car from your home to your workplace? Or you have a car that drives you there while you use the time for something else. And that's a great use of your time. So I think AGI, if you just focus it in terms of the outcomes that we can enable, there is a space that you could enable outcomes that frees up time for us in ways that we would never have imagined.
Pauline: Your answer is also very distinctly human-centric. A lot of the other definitions I've heard have been about the technology: what can the technology do, versus what should the technology do for us? So I appreciate your very human-focused definition. It'll certainly benefit all of us if we focus on that rather than blindly on what the technology can do.
Manju: I'm optimistic that that's the place that we end up at as humanity.
Pauline: We certainly need it, so let's cross our fingers. Second rapid-fire question: what is the biggest challenge facing AI practitioners or researchers today?
Manju: I alluded to model interpretability and explainability. I feel like we still don't fully understand how these large language models work and why they work the way they do.
I'm confident that we will be able to unpack it soon, but we aren't there yet. And that has a big impact on how we think about this: the societal impact is huge, and addressing it is super important.
Pauline: I think, right now, it's still too much of a black box for us.
Second to last rapid-fire question: who are one or two of the biggest influences on your mental framework on AI?
Manju: I love Andrew Ng. He has both the technical depth and the ability to explain a really complex topic so that anyone can understand it, and to talk about the implications as the technology progresses. So that's one.
The other isn't an AI figure, but Richard Feynman. He's a very famous physicist, and many of us are aware of him. He had this ability to think from first principles and approach everything with a curious mind, and I really appreciate that.
Pauline: Yeah, I've appreciated how multidisciplinary AI as a practice is. So I love that answer.
And last rapid fire: what is one thing that you strongly believe about the world of AI today that you think most people would disagree with you on?
Manju: One thing: when a technology shift happens, we all believe that everything is going to change immediately. But often that's not true. Sometimes change happens really, really rapidly, and sometimes it takes many, many decades before you even see it.
So I believe the change with AI is going to happen. What's not clear is whether it's going to happen soon or whether it's going to take many decades before we really see it.
And history is a great teacher. If you want to double-click into it, look at when cars came about: over many years, cars have gotten smarter and smarter, but that didn't happen instantly, even though we wanted it to.
Pauline: So does this mean you believe that deployments of AI, and really integrating it into our everyday lives, are going to take longer than people think? Is that fair?
Manju: Exactly. It's going to take longer than we think. And overall it's going to make humanity better.
Pauline: Yeah, we both live in the Bay Area, so it's been really fun watching the Waymo cars. If AI is anything like autonomous cars, it'll always be three to five years away. And yet in 2023, I think we're finally starting to see it actually arrive for autonomous cars. So let's see how long it takes for AI as well.
Manju, thank you so much for coming on the podcast. It was such a delight to chat with you. It'll certainly be a very dynamic next few months, and we'll have to circle back and see how some of your predictions play out.
Manju: Great. Well, thank you so much for having me here. I really enjoyed being here.