Scaling AI Systems Responsibly with PayPal’s Hui Wang
This is Cross Validated, a podcast where we speak to practitioners and builders who are making AI deployments a reality.
Today our guest is Hui Wang, who is the VP of Data Science and Machine Learning at PayPal. She spent nearly 20 years at PayPal and before that got a PhD in Statistics.
Listen and subscribe on Spotify and Apple.
Follow me on Twitter (@paulinebhyang)!
Transcription of our conversation:
Pauline: Welcome to Cross Validated, a podcast with real practitioners and builders who are making AI in the enterprise a reality. I'm your host Pauline Yang, and I'm a partner at Altimeter Capital, a lifecycle technology investment firm based in Silicon Valley.
Today our guest is Hui Wang, who is the VP of Data Science and Machine Learning at PayPal. She spent nearly 20 years at PayPal and before that got a PhD in Statistics. Hui, thank you so much for being on the show today.
Hui: Thank you for having me.
Pauline: We're super excited about this conversation. Would love to kick off with a little bit about PayPal, the company's mission and a little bit about your role.
Hui: Sure. I hope all of you use PayPal or Venmo or one of our brands, one way or the other. PayPal has remained at the forefront of digital payment innovation for more than 20 years. Today we have a two-sided global network that connects people and businesses around the world in more than 200 markets.
Our mission is to democratize financial services, to ensure that everyone, regardless of background or economic standing, has access to affordable, convenient, and secure products and services to take control of their financial lives.
A few numbers just to show you PayPal's scale: we have 435 million active consumers and merchants globally. As of 2022, we processed $1.36 trillion in total payment volume across 22.3 billion payment transactions. So that's PayPal's global scale.
As for my team: I manage a team called Global Analytics and Data Science. Our mission is to empower PayPal's vision with best-in-class analytics, data science, and AI/ML capabilities. My team is responsible for providing the analytical insights and foresight that drive the business, and we are also at the forefront of deploying AI/ML technologies that drive automation and enable the best customer experience and business strategy. Most recently, we have also started to act as a center of excellence for all the generative AI efforts across the enterprise.
Pauline: Well, I'm so excited to dig into so many of the things you talked about. Just to get started, can you talk a little bit more about the role AI/ML plays within PayPal in ensuring that you hit your company's goals?
Hui: PayPal has actually been an early adopter of AI and a leader in the fintech industry when it comes to AI adoption. We've been building and leveraging our AI capabilities and expertise for over a decade — really a long time. I'll give you a few examples. As early as 2013, we started to build our in-house graph capabilities. In 2017, we were actually the first in the fintech industry to put a real-time fraud detection model based on a deep learning architecture into production. And we've also been employing transformer-based deep learning, which is the same technology that powers ChatGPT.
So today we use AI/ML across a very broad spectrum of PayPal businesses, from fraud detection, risk management, and customer protection all the way to personalization and commerce enablement. And it has led to tremendous impact for PayPal. Over the past few years, as our business has grown and our volume has almost doubled, we've been able to keep our fraud loss rate at around 11 bps, which is the lowest in our industry — and that's all thanks to the advancement in AI and ML technologies.
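For readers less familiar with the unit, a basis point (bp) is one hundredth of a percent. A rough back-of-envelope calculation — assuming the quoted loss rate is measured against the total payment volume cited above, which the conversation does not specify — gives a sense of the dollars at stake:

```python
# Back-of-envelope only: what an 11 bps loss rate implies at the 2022 scale quoted above.
# Assumption (not stated in the conversation): the rate is measured against total payment volume.
total_payment_volume = 1.36e12      # $1.36 trillion processed in 2022
fraud_loss_rate = 11 / 10_000       # 11 basis points; 1 bp = 0.01%

implied_losses = total_payment_volume * fraud_loss_rate
print(f"Implied fraud losses at that rate: ${implied_losses / 1e9:.2f}B")  # ~$1.50B
```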
Pauline: It'd be great to double-click on fraud detection in particular, given that in your role you've done a lot in risk modeling and risk sciences. Talk about how that use case has evolved over the last five years, and how upcoming technologies — whether transformer-based models or otherwise — have made it better, or what's been unlocked.
Hui: Fraud detection is actually a very interesting and, I would say, the fastest-evolving domain within the broader AI/ML adoption at PayPal. We always say this is a cat-and-mouse game, because there are people on the other side — fraudsters on the other side of the world — who are also trying to attack us with very advanced technology.

So we can't stay put; we have to stay at the forefront of technology. In 2017, we pushed our first real-time fraud detection model based on a deep learning architecture into production, and by 2019, all our fraud detection models were based on sophisticated deep learning architectures.

Ever since then, we have continued to evolve. Today, thanks to the advancement in compute power and the availability of humongous amounts of data, we're able to very rapidly, and sometimes automatically, update our fraud detection algorithms without humans in the loop.

So that's really thanks to modern technology, as well as machine learning architectures that combine both robustness and adaptiveness in our fraud detection algorithms.
Pauline: It's interesting, because at Altimeter when we talk about AI and the definition of AI, our definition is really using massive amounts of compute and data to help humans make better and faster decisions on everything. And it's funny that you bring up those exact two things: the increase in compute power and the increase in data. How do you contextualize 1) how the amounts of compute and data have scaled over the last five years, and 2) what that means for the fraud detection use case — that you can use those to do it at a much higher scale?
Hui: It's a great question. Thanks to the compute power and the availability of humongous amounts of data, we are able to sift through and see much more granular patterns within our universe, and with that, make sure all the good customers have the best experience.

In our risk prevention terms, the goal was to save the dolphins that used to get caught in our net. Now, because of the amount of data, we're able to see the most granular patterns, and that enables even a dolphin to avoid getting caught in our net. And the way we do it actually involves a lot of trade-offs and balancing that go into the consideration.

It's not that we throw the most advanced technology and the biggest amount of data at everything. We're actually carefully strategizing to make sure we use the right amount of data and the right amount of compute power on the right problems, and with that we accomplish not just the business outcomes we aspire to, but do it in a way that's the most economical and efficient for our business.
Pauline: Again, I love that because I think the business challenges that I'm seeing a lot of companies deal with is how to think about the ROI. These technologies require so much investment, whether it's the data, the compute, the talent. How do you think about or how does PayPal think about the ROI of this product and the investment that goes into it?
Especially as everyone's talking about these massive models — trillion-parameter models now — I think there's a question of inference and the cost of inference and whether that makes business sense. Would love to get your commentary, just because you have such a rich history with this particular use case, as well as many others, at scale.
Hui: We actually take a very practical approach: it's about choosing the right tool for the right problem. I would even say not every problem needs to be tackled by AI. There is so much noise out there around AI that people feel like AI can solve every single problem. Not necessarily.

Our philosophy is that if a simple methodology can work, you don't want to go more complex than that. So we take a very careful tiered approach, depending on how complex the problem is and how critical it is to our business. We design a tiered approach where we use the most data and the most efficient technology to tackle the most complex problems that matter the most for our business.

Meanwhile, we use simpler approaches where they can achieve the same business outcome — there's really no need for us to be more sophisticated there. It's a very careful tiered approach to picking the right technology for the right problem, based on the ROI we want to achieve.
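To make that tiered philosophy concrete, here is a rough, purely illustrative sketch of what a "right tool for the right problem" policy can look like in code. The tiers, criteria, and thresholds are hypothetical and are not PayPal's actual decision rules:

```python
# Hypothetical sketch of a tiered model-selection policy: simpler tools first,
# heavier models only when complexity and business criticality justify them.
from dataclasses import dataclass

@dataclass
class Problem:
    name: str
    complexity: str   # "low" | "medium" | "high"
    criticality: str  # "low" | "medium" | "high"

def choose_model_tier(problem: Problem) -> str:
    if problem.complexity == "low":
        return "rules or logistic regression"        # cheap, interpretable, easy to maintain
    if problem.complexity == "medium" or problem.criticality != "high":
        return "gradient-boosted trees"              # strong tabular baseline at moderate cost
    return "deep learning (e.g. transformer-based)"  # reserved for the hardest, highest-stakes problems

print(choose_model_tier(Problem("marketing email ranking", "low", "low")))
print(choose_model_tier(Problem("real-time fraud detection", "high", "high")))
```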
Pauline: How does that tiered approach work for the models themselves? Obviously, the trillion-dollar question — or debate — is really closed-source, proprietary models vs. open source. And with Llama 2, the open-source community has a lot of energy. I'm curious what your opinions are on that very critical debate.
Hui: We leverage a lot of open source — these big foundation models, the big large language models. As we all know, I think by now the whole world knows, those models are very expensive and very costly to train and to maintain. There is no reason, I would say, for anyone to start from ground zero anymore. You should always be able to build on top of them. I say today's LLMs hold all the knowledge of humankind. We think everybody should start from that instead of really starting from scratch.

Then we can focus all our energy, our compute power, and our brains on bringing these to the next level — customizing them for our own business and on our own data — so that every step we take only makes us better.
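As one concrete illustration of "starting from an open-source foundation model and customizing it on your own data," here is a minimal parameter-efficient fine-tuning sketch using the open-source Hugging Face transformers and peft libraries. The model name and hyperparameters are placeholders for illustration; nothing here describes PayPal's actual stack:

```python
# Minimal sketch: adapt an open-source LLM with LoRA instead of training from scratch.
# Model name and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "meta-llama/Llama-2-7b-hf"  # any open-source causal LM you have access to
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Attach small trainable adapter matrices; the base weights stay frozen,
# so customization costs a tiny fraction of pretraining.
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here, fine-tune on domain data with the standard Trainer API (omitted for brevity).
```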
Pauline: That makes total sense. If I take a step back — you've been at PayPal for 20 years, so you've seen, I'm sure, multiple waves of AI/ML. Can you give one or two examples of something that wasn't possible five years ago but is possible with today's technology? Would love to hear some anecdotes from the front lines.
Hui: I mentioned one example earlier: our ability to automatically and rapidly refresh our deep learning models, which used to be unheard of.

Not long ago, a deep learning model would take a week or a month to train, just because we needed to tune so many hyperparameters. But now we're able to refresh our deep learning models rapidly, within days. And this, by the way, is critical for fraud detection, because fraud patterns change in real time and all the time. I can offer another example.
Pauline: And before we go to that example — is it that you're just continuously training on more data, or how does that work when you say you automatically refresh your deep learning models?
Hui: There are many different ways, and we pick the right tool for the right problem. The simplest way is just running new data through the same architecture and the same process. It sounds very simple, but even by doing that, our deep learning models are able to learn the latest patterns coming from our data, and it provides a lot of value. In this process, we can also iteratively introduce more advancements — we can add a new feature, we can tune and optimize the network architecture — but again, our philosophy is that we only do it when it's necessary. So we're picking the right tool for the right problem.
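A stylized version of that "same architecture, new data" refresh loop might look like the sketch below. The function names and the promotion criterion are hypothetical placeholders, not a description of PayPal's pipeline:

```python
# Hypothetical sketch of a "same architecture, new data" refresh with a promotion gate.
# train_fn / evaluate_fn stand in for an existing training recipe and evaluation harness.
def refresh_model(train_fn, evaluate_fn, latest_data, holdout_data, champion, champion_score):
    candidate = train_fn(latest_data)                  # same architecture, same training recipe
    candidate_score = evaluate_fn(candidate, holdout_data)

    # Automation does not mean unconditional promotion: the refreshed model
    # only replaces the current champion if it performs at least as well.
    if candidate_score >= champion_score:
        return candidate, candidate_score
    return champion, champion_score
```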
Pauline: Got it. And then I interrupted you before you talked about your second use case.
Hui: Another example: as I mentioned, PayPal has a two-sided network — 430 million customers, 200 markets. So our data is very much connected; you can imagine it almost as a graph. But 10 years ago it was super hard, almost impossible, for us to analyze our data as a real graph and to leverage machine learning or AI technology on a native graph data architecture.

But nowadays, thanks to the advancement in silicon and compute, and thanks to all the advancement in graph databases, we are able to analyze the data within a humongous graph and, with that, discover a lot more new patterns that can be used to drive our business.
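As a toy illustration of what "analyzing payments as a graph" means, here is a tiny example using the open-source networkx library. It is purely expository; the conversation does not describe PayPal's in-house graph stack:

```python
# Toy example: accounts as nodes, payments as edges, connected components as a
# very simplified stand-in for graph-based pattern mining.
import networkx as nx

G = nx.Graph()
payments = [
    ("acct_A", "merchant_1"), ("acct_B", "merchant_1"),
    ("acct_B", "merchant_2"), ("acct_C", "merchant_3"),
]
G.add_edges_from(payments)

# Accounts that transact with the same counterparties end up in the same component;
# unusually dense clusters are one classic signal worth deeper review.
for component in nx.connected_components(G):
    print(sorted(component))
```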
Pauline: Super interesting examples. Really appreciate you walking through those. I'd like to shift gears a little bit and talk about generative AI, obviously the topic du jour right now. What do you think the opportunity is for generative AI within PayPal and Fintech companies more broadly? What are the biggest use cases that you think will come out of this?
Hui: I'll start before generative AI. In general, we believe there is a lot of potential coming from the power of AI in general, and then from generative AI in particular. Luckily, at PayPal, given our scale, we have a lot of data — we actually have more than 300 petabytes of data within our universe.

That, I would say, is our real asset and our competitive advantage: it enables us to use AI or gen AI to provide a customer experience that no one else could. And particularly for gen AI, we believe the potential is real. There is a lot of potential in how gen AI can empower us to improve our productivity and efficiency at a scale that no one could even have imagined a few years ago, and, along the way, remove the manual work from our people's day to day. So we believe there is real, huge potential coming with gen AI. At the same time, we want to make sure we do this with ultimate responsibility. We want to do it safely and responsibly, because we are in an industry where we need to protect our people — trust is in our brand, so it's always front and center. Factors like quality, IP, security, privacy, transparency — all those responsible AI principles are front and center in all of our gen AI efforts.

So we want to make sure we can balance both sides: while we unleash the full potential of generative AI, we also do it in an efficient and responsible way.
Pauline: I really appreciate that you bring up the responsibility. I'm going to pause on that really quickly and come back to the potential.
If you look five years out, if gen AI hits all the potential that you think it has the opportunity to, what does that look like for fintech companies and as consumers of those fintech companies like myself?
Hui: Let me separate internal and external. Internally, there is a vision: can we get rid of all the manual work in everybody's day? This actually goes beyond PayPal — gen AI comes with that promise. So that's one side.

The other is: can we leverage gen AI to power consumer experiences that we never thought about? Everybody will have a virtual assistant, and this virtual assistant can guide you or assist you with anything you want to do. That's really blue-sky potential, but I do think gen AI has the promise to get us there.
Pauline: So you mentioned that one of the biggest opportunities with gen AI is really removing manual labor. What form do you think that takes?
Hui: We talk a lot about balance — balancing opportunity and risk when it comes to AI, and to gen AI in particular. We believe a copilot approach is probably the one that comes with the most promise, and one we can actually start on now.

What I mean is that initially we expect these AI tools to act as an assist to humans. That can be in many different areas — in risk, in customer support, or in our software development process. The AI tool comes in as an assist, and that makes our people a lot more productive and a lot more efficient.

That also balances the risk vs. the reward, to put it that way: we can reduce the risk tremendously while getting maybe 80% or 90% of the benefit almost immediately. As we evolve, of course, we can train the AI to be smarter and smarter, all the way to the point where it acts more like an autopilot while we still have a human playing a validation role.

That's the next step after the AI acting as a copilot, and we can keep going from there. Arguably, we just need the AI to be as accurate as or more accurate than humans — no one is 100% accurate in their own decision making. So that shows the path, and it shows a lot of the promise of what AI and gen AI can do for all of us.
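One simple way to picture that copilot-to-autopilot progression is a confidence gate: the model acts on cases it is very sure about and routes everything else to a person for review. The threshold and labels below are made up for illustration only:

```python
# Hypothetical confidence gate: auto-apply high-confidence decisions, route the rest to a human.
AUTO_THRESHOLD = 0.95  # illustrative value; in practice tuned against risk appetite

def route_decision(model_score: float, suggested_action: str) -> str:
    if model_score >= AUTO_THRESHOLD:
        return f"autopilot: apply '{suggested_action}' automatically"
    return f"copilot: show '{suggested_action}' as a suggestion for human review"

print(route_decision(0.98, "approve refund"))   # autopilot path
print(route_decision(0.70, "approve refund"))   # copilot path
```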
Pauline: Absolutely. What are one or two of the highest-potential use cases that you see being deployed within PayPal, or that you think will be deployed soon, in this copilot model?
Hui: We are evaluating a lot of different use cases internally. We also talk about how we want all hands on keyboard — every employee in the company adopting this new technology and seeing what it can bring us. So we're internally testing many, many use cases, and like everybody else, we see a lot of promise and potential in the productivity and efficiency domain.

But again, there are a lot of use cases we are evaluating, and whenever we see a good outcome, we'll be on the path to push it to production, or really push it out to the entire company.
Pauline: I love this concept of having all of your employees' hands on keyboard. Are there any tips or tricks that you've found for incentivizing that behavior and incentivizing experimentation right now, in this period?
Hui: So a few things we do. So we created a COE (Center of Excellence unit) within the company that takes on the responsibility of education and awareness. So that's kind of number one.
And secondly, thanks to executive sponsorship from our CTO, we ran a company-wide hackathon. We said, okay, get all the ideas on your mind out. We offered prizes ranging from monetary rewards all the way to face time with our executives, things like that, to encourage crowdsourcing all the creative ideas from our employees. That has actually been quite successful — we were able to get hundreds and hundreds of great ideas.

Then, with the help of the Center of Excellence, we're able to sift through these ideas and find the ones with the highest potential to deliver the biggest impact. And as a company we then start driving a concerted effort toward those high-impact initiatives. So that has proven to work for us.
Pauline: I love that funnel system. Thank you so much for sharing that.
I do want to come back to responsible AI because again, it is so important that it's talked about and it sounds like PayPal and your team and you have thought a lot about this.
What do you think is the biggest barrier or challenge to getting there? Because I imagine many leaders would hear your statement and say, absolutely, I want that. The question is how — so what is the biggest obstacle that you see for us to get there?
Hui: I talked about scale being a kind of competitive advantage for any AI application. The scale could be in your compute or in your data.

Actually, that same theme is at least one of the biggest challenges when it comes to doing it responsibly. In responsible AI, you want to make sure, for example, that your machine learns from accurate data — but when your data gets beyond a certain scale, how do you make sure it's accurate?

All these hallucination risks people talk about for LLMs — all these large language models out there — come from that. When the scale gets beyond a certain threshold, it becomes very hard to make sure the inputs are high quality, accurate, free of bias, and inclusive. All of those things become harder to do. So this is one challenge that comes with these bigger and bigger AI/ML applications when it comes to responsible AI.
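Input-quality checks of the kind Hui alludes to are often automated in data pipelines. The sketch below shows two very simple checks — completeness and distribution drift against a trusted reference snapshot — with thresholds chosen purely for illustration, not as a description of PayPal's controls:

```python
# Illustrative input-quality checks for a training dataset: completeness and drift.
# Thresholds are placeholders; real pipelines tune these per feature and use case.
import pandas as pd

def check_training_data(df: pd.DataFrame, reference: pd.DataFrame, max_null_rate=0.01) -> list[str]:
    issues = []
    for col in df.columns:
        null_rate = df[col].isna().mean()
        if null_rate > max_null_rate:
            issues.append(f"{col}: {null_rate:.1%} missing values")
        # Crude drift check: compare means against a trusted reference snapshot.
        if pd.api.types.is_numeric_dtype(df[col]) and col in reference.columns:
            ref_mean, new_mean = reference[col].mean(), df[col].mean()
            if ref_mean and abs(new_mean - ref_mean) / abs(ref_mean) > 0.2:
                issues.append(f"{col}: mean shifted from {ref_mean:.3g} to {new_mean:.3g}")
    return issues
```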
Pauline: What have you seen work, or where would you want to see more research, for us to tackle this very big issue of hallucinations? Particularly when it comes to financial data, I imagine consumers are even more sensitive — they don't want virtual bots quoting them the wrong information. So how do you think we can tackle this?
Hui: First of all, as you mentioned, because of the industry we are in at PayPal, we always take responsible AI very seriously.

We always say we innovate with responsibility. We have had a responsible AI program within the company for many years, and we've been applying responsible AI processes and policies to all the advanced AI solutions we use for the business. Gen AI just slots into that. The lucky thing is we don't have to start from scratch: we already have a pretty mature responsible AI program within the company, and we can apply a similar approach, process, and policies to generative AI-related initiatives.

I think the key here is really a combination: a centralized force that can drive discipline, process, and governance, while leveraging all the energy around the enterprise to cultivate the spirit of using AI or gen AI to get value. So it's a combination of a focused, centralized force that drives policy and process, and, with that, enabling the whole company to capitalize on AI and generative AI.
Pauline: I think that's really important — especially if you believe that one of the big potential use cases is removing manual labor, it certainly shouldn't just be for the consumer; it should be for every worker, and most workers are consumers too. So I think it is very important that everyone at PayPal internally also has access to this.
Shifting gears a little bit. I'd love to hear about how your tech stack has evolved, particularly as you think about the framework of buying vs. building. How has that changed in the last few years or how the tooling has changed in that period?
Hui: Today, our internal AI tech stack is really a combination of home-grown components and many open-source tools. This has changed over the years. Fifteen years ago, when we first started introducing advanced AI into PayPal's systems, we had to rely on a home-grown enterprise AI management system, because back then there was not much out there for us. Things have evolved: over the last 10 years, we've been integrating and incorporating more and more open-source tools — TensorFlow and so on — into our MLOps platform.

Our philosophy in the AI domain, again, is that there is no silver bullet. It's not like we need deep learning for everything. Back to what I said: we choose the right tool for the right problem. As a company, we want to provide the whole suite of tools to our machine learning scientists and engineers, so that we enable and empower them to pick the right tool for the right problem. That has been our philosophy.

The other thing, along the same line, is that we want to make sure our MLOps system is suitable for different use cases. I'll give you an example. There are certain risk use cases that require a very high service level — very short SLAs, things like that — so we build out the full real-time stack to support those. We also have other use cases that don't have such high service level or SLA requirements, and there we can rely on a more offline deployment platform that still enables the same business goal. So we take a very careful approach here as well.
Pauline: Sorry, just to make sure our listeners can crystallize these use cases in their minds — can you give an example of something that does need high uptime and tight SLAs, and something that doesn't?
Hui: For example, fraud detection. Because we enable fast and secure payments between buyer and seller, our fraud detection has to happen within a few hundred milliseconds, while our customer is on our website trying to complete the payment — so that we protect our customer and stop the bad guys from doing damage to our people. That has to happen with a very high SLA and high availability, and only very short latency is allowed.

Versus some others — for example, in the customer service domain, if we are processing a response to an email. If a customer sends us an email, we have to respond, but that kind of use case has lower service level requirements, especially in terms of turnaround time, because it's meant to be an asynchronous process. So there we do not have to build a real-time system to support use cases like that.
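Schematically, that routing decision can be expressed as a simple rule over a use case's latency budget. The tier names, budgets, and use-case entries below are assumptions for illustration, not PayPal's architecture:

```python
# Hypothetical routing of use cases to serving tiers based on latency needs.
USE_CASES = {
    "fraud_decision_at_checkout": {"latency_budget_ms": 200, "synchronous": True},
    "email_response_drafting":    {"latency_budget_ms": 60_000, "synchronous": False},
}

def serving_tier(use_case: str) -> str:
    spec = USE_CASES[use_case]
    if spec["synchronous"] and spec["latency_budget_ms"] <= 500:
        return "real-time scoring service (high availability, millisecond-level SLA)"
    return "asynchronous / batch pipeline (relaxed SLA, cheaper to operate)"

for name in USE_CASES:
    print(f"{name} -> {serving_tier(name)}")
```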
Pauline: That makes sense. And I can't help but ask — we talked a lot about fraud detection, and I think it's such an interesting use case. But in all the surveys we've seen, and in talking to enterprise customers, customer service is certainly an important one too.
And so would love to hear how that connects to the gen AI opportunity that we've been talking about and what that looks like again at scale because PayPal serves so many different customers in so many different geographies, right? And so can you talk a little bit about that use case?
Hui: This is definitely a domain where we think gen AI, or large language models, can really change the game. I guess by now all of us have experienced these large language model-based chatbots — how good they can be, how human they can appear to be. Internally at PayPal, we've actually been on this journey of using AI to innovate and really change the game in our broader customer service domain for years.

I think it's been more than five years. We've built a lot of AI components that automatically help our customers resolve whatever problems they have, or whatever service a customer is looking for. Now, with large language models, we are really thinking about bringing that to the next level and making it a lot more human-like. So I truly believe this is the area with potentially the biggest upside when it comes to gen AI applications.
Pauline: It just feels like one of those win-win-win scenarios that gen AI can help transform. So I'm very excited to see that come out as well.
As you think about data, and how important data is — I think so many Fortune 500 leaders are thinking about how to protect my data, how to protect my customers' data. And fintech is one of the more highly regulated industries. How do you think about data privacy, data security, data lineage — all of those topics? And what are the biggest trends that you've seen within your org that you would call out?
Hui: We put data security and privacy front and center in our overall strategy. So rest assured, we protect our customers' data — we do everything we can to do that. Our customers never need to worry about security and privacy when it comes to what we do with their data.

In the way we approach it, there is a technical path to accomplishing the same goals without compromising on data security and privacy. For your listeners, I'll refer you to Professor Shafi Goldwasser from Berkeley. She has been talking about safe machine learning for quite a few years — a technical way to solve the data privacy and data security problem. So there is a way we can build powerful machine learning solutions that deliver business impact, but do it in a way that is sustainable long term and protects our customers' data to the greatest extent, so that no one needs to worry.
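The field Hui is pointing at spans several families of techniques, such as secure computation, encrypted inference, and differential privacy. As one tiny, generic illustration of the underlying idea — extracting aggregate value while masking any individual record — here is a differential-privacy-style noisy count. It is a teaching sketch only, not a description of Goldwasser's research or of PayPal's systems:

```python
# Toy example: answer an aggregate query with calibrated noise so that no single
# customer's record can be confidently inferred from the output.
# Generic teaching sketch; not a description of PayPal's privacy controls.
import numpy as np

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    # Laplace mechanism: the sensitivity of a counting query is 1.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(noisy_count(12_345))  # close to the true count, with individual contributions masked
```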
Pauline: I'll have to put that in the show notes — I don't think I was familiar with her work before, so I'll have to do some reading. Last question before we get into rapid fire: what are some of the biggest opportunities or gaps that you see in building or deploying AI that you'd like to see either PayPal or another company address?
Hui: I'll keep going along the same line: how do we deploy these very big, humongous AI applications responsibly? As I said, because the data scale is increasing all the time, it actually becomes more and more challenging. I'll give you an example: we want to make sure our AI solutions come without bias and are inclusive.

How do we make sure of that? It's all about what we give the AI machine to learn from. How do we ensure our data actually comes without bias, and that our data is inclusive? These are capabilities we are building and have to continue to enhance, and that the whole industry should think about tackling.

One way is making sure our workforce is inclusive and diversified. That helps ensure that at every step of what we do, we have the mindset that we need to build our solutions in the same inclusive and diversified way.
Pauline: My guess is that work will never be done, and so it will be an ongoing social and commercial problem.
First question for rapid fire, what is your definition of AGI, and when do you think we'll get there?
Hui: AGI to me is really like a human. So a robot will really act like a human, and that's AGI from my point of view.
As for how long it's going to take us to get there — I think AI actually needs the whole ecosystem to win. If you only ask about the technology, I think we will get there sooner rather than later; with LLMs, the speed at which we got to where we are was faster than I had anticipated. But the whole ecosystem will take longer. What I'm trying to say is, if the Industrial Revolution took mankind 80 years, I think AGI will take us decades rather than 80 or 100 years, given the speed of technology advancement.
Pauline: When you say ecosystem, what do you think will take the longest?
Hui: When we talk about the ecosystem for an AI solution, we talk about the first mile, then the AI compute, and then the last mile. The first mile is the data — we've talked a lot about that: how do we make sure it's responsible, things like that. Another important part of the ecosystem is the last mile: what kind of use cases do we want to adopt, which AI technology do we want to adopt, and after we do it — so what? Does it really make a difference in people's minds, and in what way?

If you think about flying an airplane, autopilot works very well, but there are still copilots. What does this really mean for people's lives? That's along the lines of what I meant by the entire ecosystem. Technology will advance very fast, but these other things need to catch up for the whole end-to-end AI to really deliver impact for human beings. That's what I meant.
Pauline: So is your framework then — not to say that the data piece is done — that we are still in the scale and compute piece, and we're still trying to figure out that last mile? Is it sequential when you say the ecosystem has three parts?
Hui: It's very important that all three actually advance — not necessarily together, but all three have to advance. Technology will often be leading: the neural networks we talk about today are technology that was developed back in the 1960s. So technology will be leading, but without the rest of the ecosystem — the other pieces — catching up, we're not able to get all the potential or all the impact out of AI solutions.
Pauline: Yeah, that makes sense. Okay, second question, what is your AI regulatory mental framework?
Hui: In my mind, it's really a balancing act. We are in fintech: the financial industry calls for responsible AI, and the tech industry brings a lot of innovation. Put them together and it's really about responsible innovation — a balancing act. That's the mental model.
Pauline: Makes sense. I think it will be a hard balancing act from this point forward, but certainly one we have to keep pushing on. Third question: what do you think is the biggest challenge facing AI practitioners or researchers today?
Hui: One challenge is on the first mile: how do we make sure we can manage the humongous amount of data in a way that is usable and also responsible? Then there's the last mile: where does it make sense for us to employ AI, and where does it actually not?

And last but not least, how do we make sure we pick the right technology for the right problem, combining all the factors — ROI (return on investment), go-to-market — that come together to decide where to deploy AI in the most optimal way?
Pauline: Amazing. And second to last rapid fire question: who are one or two of the biggest influences on your mental framework on AI?
Hui: Yeah, I mentioned Shafi Goldwasser. So I won’t talk about her again.
The other influence for me is Professor Michael Jordan. I took his machine learning class — the very first machine learning class I ever had — and that opened the door to machine learning for me. In recent years, he has talked a lot about machine learning and economics; one of his talks is about how, where machine learning fails, economics might help. It goes back to my mental model: it's actually not only about AI, not only about machine learning. You want to put all these factors together, and that's how AI solutions can really start delivering impact and value to businesses and to human beings.
Pauline: I love that. And last rapid fire question. What is one thing that you believe strongly about the world of AI today that you think most people would disagree with you on?
Hui: Technology is actually the least of my concerns. We can always build the best LLM or the most advanced AI technology. It's these other things — going back to my point about the ecosystem, the first mile and the last mile — that have to run at a similar speed. They cannot be left too far behind; otherwise, even the best technology doesn't mean anything to us.
Pauline: With that, Hui, thank you so much for doing this. It's been a wonderful conversation and I appreciate you talking through all these big use cases that I think are really the tip of the iceberg for AI and really appreciate all the commentary.
And so thank you so much for being on!
Hui: Thank you. It was a pleasure.