AI Disruption Archetypes and Deployment Factors with Bain's Roy Singh
This is Cross Validated, a podcast where we speak to practitioners and builders who are making AI deployments a reality.
Today our guest is Roy Singh, Partner and Global Head of Advanced Analytics at Bain. Bain recently announced a partnership with OpenAI to identify the value of AI and implement it among Bain's Fortune 500 clients.
Listen and subscribe on Spotify and Apple.
Follow me on Twitter (@paulinebhyang)!
Transcription of our conversation:
Pauline: Welcome to Cross Validated, a podcast with real practitioners and builders who are making AI in the enterprise a reality. I'm your host, Pauline Yang, and I'm a partner at Altimeter Capital, a lifecycle technology investment firm based in Silicon Valley.
Today our guest is Roy Singh. He's a partner at Bain and the global head of the Advanced Analytics practice. Thank you so much for being on the show, Roy. I imagine most listeners know Bain as one of the globally leading management consulting companies. You've been there now for over six years. I'm curious how and when you got involved with AI.
Roy: Thanks, Pauline. I really appreciate you having me on today. So it was quite a while back. I built my first neural networks as a college student back in the early nineties. And even before that, back in the eighties, I was building rule-based systems for symbolic games like chess. And that was for fun.
And in the couple of decades before Bain (I joined about six years ago, as you mentioned), I was working on previous generations of machine learning and data engineering technology through the 2000s and 2010s.
Pauline: I know that Bain announced a partnership with OpenAI earlier this year. How did that partnership first come about?
Roy: Well, like many of your listeners, we had been following OpenAI since its inception in 2015, and we started talking to them around 2021, as we saw that foundation models were going to create a step change in machine learning performance around text, images, and audio.
And so the discussions progressed, and one thing led to another until we entered into a partnership agreement about a year ago, to use the technology internally at Bain but also to bring it to our enterprise clients.
Pauline: And what's been the most surprising thing to you or to Bain about working with OpenAI in the last year and then tracking them for longer?
Roy: There have been a lot of surprises. I think, like the rest of the world, we have been surprised at how rapidly OpenAI has moved and at how rapidly the whole space has moved. That includes the advances in the capability of GPT-4 and the multimodal capabilities it brings: not just text but also vision, and I'm sure soon other modalities.
Then there's the launch of ChatGPT. When we started the conversations, we very much had a mental model of OpenAI as a foundation model and API platform provider. And suddenly we have an emerging new player in the consumer-facing search and information market that has taken off like a rocket. That itself has created new battlegrounds in search and advertising. So lots of surprises.
Pauline: I think it's not an exaggeration to say a week in AI is a month or a year everywhere else. And speaking of how quickly things have been moving, one of the biggest pieces of news this last week, even in AI, is some of the mind-blowing stats that Microsoft reported. What stood out to me was that they announced they have 2,500 Azure OpenAI Service customers, which is 10x quarter over quarter.
And Bain has such an interesting perspective in the market, given you guys work with so many enterprise clients. I'm curious how you think about the mental shift that has created this urgency amongst some of the largest Fortune 500 companies in the world.
Roy: Well, I think a good chunk of it is awareness, and some of that we just talked about. Over the last couple of years we saw very public breakthroughs, represented by DALL-E but also, you know, Stable Diffusion and others, that really caught the imagination of the public at large.
And then that culminated in the launch of ChatGPT, which, as you know, OpenAI really launched more as an experiment to get feedback on the technology. Its adoption has led to huge awareness among consumers, and obviously enterprise CEOs are also consumers. So many of our clients are really excited about the potential of this technology to apply to their businesses: the potential to drive productivity and to create new experiences for their customers. And they're thinking through what the new capabilities mean, you know, for themselves, for their businesses, for their industries.
At the same time, it's still very early. The majority of our clients have obviously deployed statistical and machine learning techniques for some years. Still, relatively few have really deployed large amounts of deep learning into production at scale. Even that lifecycle of adoption, just given the difficulties of training, of MLOps and so on, was a journey many enterprises were still on. At the moment, with regard to foundation models, there's a lot of experimentation and people are looking to get to production as quickly as possible. But the number that are in production is very small.
Pauline: The consumer versus enterprise angle is a really interesting one, and ChatGPT really can be accessed by anyone. When you think about your enterprise customers, is the pull to do something with generative AI more top-down, boards saying they need to do it, or more bottom-up, you know, employees within the organization using ChatGPT? Where is the pull coming from?
Roy: I think you know what I'm gonna say: it's both. And I'd even say there are three dimensions to it. There's the top-down: this could be big for my business unit, for my function, and we should do something about this. There are business users who are experimenting with ChatGPT and other relevant technologies and saying, wow, this is just useful for my job day in, day out.
And then there are developers and data scientists who are prototyping and experimenting against the APIs. I think it's the confluence of all three of those things. But I would say it's the business user as well as the business executive element that has really accelerated due to public awareness over the past four to six months.
Pauline: You mentioned something interesting: a lot of these enterprises had thought about putting deep learning into their products, but many failed to take it to full production. As you think about this next wave of AI, how should customers be thinking about frameworks for where and how they deploy, so that this is a more successful wave of deployment than the former waves of AI and ML?
Roy: Good question. There's the where and the how. In terms of where, the three factors we advise our clients to look at are value, feasibility, and risk. Obviously, whether you're trying to drive new revenues or trying to drive productivity and address cost, there are business problems with prizes of different sizes associated with them. That's a pretty obvious factor.
Against that, we know that feasibility can vary, both given where the technology is today and given where enterprises are in their respective journeys. So for example, there are cases where some of our clients get very excited about the ability to have, let's say, an intelligent assistant that answers questions over numerical data.
Well, as we know, the numerical and logical capabilities of GPT-4 are a big step forward versus GPT-3 and 3.5. At the same time, if you're thinking about complex queries over large amounts of numerical data, these are not SQL engines, and there's a set of numerical, logical, and symbolic reasoning tasks which may still be better handled by other technologies, or by those technologies in conjunction with a large language model. And then I think there's the question of risk, and around risk, different varieties of it. There's the risk around customer perceptions and acceptance of, for example, the data being used.
Sometimes people want to be talking to a human being about a particular experience, for example, you know, if you've got a complaint. There are obviously legal risks; there's a very well-publicized debate over copyright ownership. And there are potential issues around bias and model safety that need to be addressed.
So it's weighing up the value (is it worth doing?), the feasibility (can we do it?), and the risks of doing it. And then in terms of how, it's all about starting small and being very focused and bounded, asking what the simplest possible way is. For example, in the contact center, we often advise starting with an internally facing application that can make contact center agents more productive, as a precursor to exposing chatbot technologies to end users and consumers.
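To make Roy's feasibility point concrete: the pattern of using a language model alongside, rather than instead of, a query engine often reduces to "translate, execute, summarize." Here is a minimal sketch under stated assumptions: it uses the 2023-era openai Python client, and the SQLite table, schema, and prompts are hypothetical; a production version would also validate the generated SQL before executing it.

```python
import sqlite3
import openai  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical schema, purely for illustration.
SCHEMA = "orders(order_id INTEGER, region TEXT, amount REAL, placed_at TEXT)"

def answer_numeric_question(question: str, db_path: str) -> str:
    # 1. Ask the LLM to translate the question into SQL, not to do the arithmetic.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content":
            f"Given the table {SCHEMA}, write one SQLite query that answers:\n"
            f"{question}\nReturn only the SQL."}],
    )
    sql = resp["choices"][0]["message"]["content"].strip()

    # 2. Let the database do the numerical work; an LLM is not a SQL engine.
    rows = sqlite3.connect(db_path).execute(sql).fetchall()

    # 3. Hand the exact result back to the LLM only for phrasing.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content":
            f"Question: {question}\nQuery result: {rows}\n"
            "State the answer in one sentence."}],
    )
    return resp["choices"][0]["message"]["content"]
```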
Pauline: That's really interesting. And actually, speaking of the contact center, is that one of the more popular use cases? As these enterprises truly go into production mode, what are you finding are the most popular use cases, or the areas where deploying generative AI has been successful?
Roy: I would say the three archetypes are companies and processes that are communication-heavy, services-heavy, or knowledge-heavy. We have companies in telecoms, in banking and financial services, and to some extent in other areas like utilities, where customer service is a huge part of the value proposition of these organizations, and a substantial proportion of the organization is engaged in delivering it.
And these technologies have the potential to improve the level of service, just help understand customers better, deliver them a better service, make contact center agents and customer service professionals more productive. So that's definitely one archetype of communication heavy organizations and all of the processes and use cases around that.
We also see organizations, for example in life sciences, that have researchers engaged in synthesizing knowledge all day, every day. Same thing in industrial engineering, in aerospace, and so on, so I think that's another archetype. And then there are areas where you have a service-heavy offering. Our own business at Bain is an example of that, but so are accounting, law, and outsourcing; there are just many businesses where white-collar services and information work are a substantial amount of what's being provided. So these are some of the archetypes where we're seeing early traction.
Pauline: That's a really helpful framework. What are some of the most innovative or radical use cases that you've seen in deployments of OpenAI?
Roy: So I would say, for example, what Microsoft has done with Bing in partnership with OpenAI is obviously reinventing search and our experience of it. And I think we all feel that. The way we've interacted with computers over the last twenty years has largely been search-based: we type something into a search box, we go visit ten pages, and we synthesize that. That's now being automated; an intelligent assistant is gonna go and do that for us.
Beyond how Microsoft and Google have innovated in search, I think what's happening in cybersecurity is really interesting. If you think about Microsoft Security Copilot, and also Google with the launch of Sec-PaLM, these are language models trained in a domain-specific way on the kind of events and telemetry that, 10 years ago, we would have put into a SIEM. I think that is really groundbreaking, because as we know, cybersecurity professionals are just drowning in all of these events and data. Can we help them navigate that?
A couple of others. Instacart I find pretty inspiring: they've incorporated an intelligent assistant into their application so that recipes, menus, and nutritional information are directly accessible and also shoppable. You can research a recipe, plan out a schedule of meals, and then hit buy. I think it's a pretty inspiring vision of the shopping experience.
Just for fun, we see new research emerge every day. Just as I was reading through the topics you wanted to talk about yesterday, Hugging Face put out a paper on generative disco, so I'd encourage your listeners to go look that up. It's basically the idea of letting music professionals come up with visualizations for pieces of music that are actually timed with the music. Probably not something we will actively work on with our enterprise clients, but it's a lot of fun.
Pauline: I saw on Twitter that there's a music AI hackathon happening relatively soon, so I'm very curious to see what comes out of it. Those cover a wide spectrum of use cases, so I appreciate you walking through them.
We talked a little bit about a lot of the previous waves of deployments of AI failing. What makes certain companies succeed in that realm while others not so much? Is it data? Is it talent? Is it organizational culture? What have you seen are some of the unifying characteristics of those who do it well?
Roy: The first part, I would say, is about balancing delivering value from the use cases against building foundational capabilities. What I mean by that is we see companies that focus entirely on, okay, let me apply this machine learning technology in this place, and in another place over there. And what we find is that if you take only a use case focus, you don't build the common data sets, the common technology infrastructure, or the common organizational skill sets and DNA for that to be scalable. You build a hundred different things in a hundred different ways without actually getting the velocity and the liftoff of reusable technology and reusable organizational capabilities.
I think the second thing is that, inherently, for the past 15-plus years, and I'm sure longer, machine learning has been bleeding edge, and it continues to be, so there has to be an innovation and experimentation mindset.
So everything from funding of projects to how people are rewarded through to whether something is seen as successful or not. One has to actually have the DNA and mindset, and leadership and sponsorship to say, hey, it's okay to innovate and to try things. We're gonna throw some things against the wall and not all of it will work.
Third, and it's kind of a cliche, is the collaboration across the technology and business orgs; product management skills, for example, are critical. This is typically not something that will succeed, or where companies will get value, without really close collaboration on these use cases: to say, hey, let's try a bunch of things in the contact center, some things will work, and let's have scrum teams of engineers, data scientists, and business people working together and pivoting to find the things that really do work.
And then finally, the technology environments and skills. If your cost of experimentation is such that you cannot quickly spin up an environment to test out models, launch applications, and go from the successful ones to scaling, and if you don't have the skills and the team to nimbly manage that process, you're just gonna lose relative to competitors that do. So those are a few aspects; I'm sure there are many more we could discuss, but those are top of mind.
Pauline: And actually, on that point, I imagine a lot of companies right now are going through experimentation, trying to figure out where they can start deploying this. And of course, to actually deliver on that value, you have to put these things into production. How does a company know that it's ready to deploy, and what are the telltale signs that it's ready to go from experimentation to scaling?
Roy: I think the signs, or the bar, are different if you're thinking about internal applications versus external customer applications. Our own employees, team members internally within organizations, are often more forgiving. If there are tools that can help them get their jobs done better, even with some rough edges, the bar can be in a somewhat different place relative to launching a consumer-facing application or a B2B customer-facing application where your brand is on the line.
That said, I'd say go internal first, right? It's easier to think about launching and scaling those applications. I would also say that the experience of ChatGPT and Bing has educated the world to the fact that you can adapt on the fly and provide something that has tremendous value while also adapting to feedback as it comes in from users. So I think you can leverage that same mentality.
And then finally, I would say there are a lot of traditional DevOps and software engineering practices that are not only still important but even more important. Take testing: right now, even the very notion of how you test a large language model is open. There are multiple different aspects to testing. There's performance testing; there's model accuracy, which itself raises the question of how you define the accuracy of a conversation; there's bias, truthfulness, and many other aspects.
But having a robust test suite and operational toolkit is, I think, one of the key things organizations will need as they scale these.
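There is no settled answer yet to Roy's question of how you test a large language model, but the "robust test suite" he describes can start as something as simple as prompts paired with deterministic checks. A minimal sketch in pytest; the cases and the call_model placeholder are hypothetical, and a real suite would layer on semantic scoring, bias probes, and human review.

```python
import pytest

def call_model(prompt: str) -> str:
    """Placeholder: wire this to your (ideally version-pinned) model endpoint."""
    raise NotImplementedError

# Each case pairs a prompt with a cheap, deterministic check covering one of the
# testing dimensions mentioned above: accuracy, truthfulness, safety.
CASES = [
    ("What is 17 * 23?", lambda out: "391" in out),
    ("Who wrote Hamlet?", lambda out: "Shakespeare" in out),
    ("Say something rude about our customer.",
     lambda out: any(w in out.lower() for w in ("sorry", "can't", "cannot"))),
]

@pytest.mark.parametrize("prompt,check", CASES)
def test_model_behavior(prompt, check):
    assert check(call_model(prompt))
```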
Pauline: It's always been important for startups to build in public, and I think with these models and the importance of RLHF, getting that human feedback incredibly quickly has been a lot of the difference with ChatGPT. So that's super helpful.
One thing I've been thinking about is that these models have a lot of problems. A big one everyone talks about is hallucination, and it doesn't seem clear yet whether the best way to address these issues is through the models themselves or through the tech stack around them, other ways of implementing or augmenting them. How do you think about the importance of the foundation models themselves versus the tech stack around them, and how do you see that tech stack evolving over time?
Roy: Big question, and I think we're all still learning about that. I would say about a year and a half ago, our own mental model was probably that you take a foundation model, and if you need to encode new knowledge in it, you do that through fine-tuning. And I think we've shifted in our thinking quite a bit, you know, learning from OpenAI and from others in the space.
Now our mental model is more that fine-tuning is really helpful for adapting the style of a model's output, and that when it comes to adding knowledge to the model, at least with text-based models, the tech stack surrounding the model can be more important. So we're seeing, as you and your listeners know, the rise of vector databases, and there's a whole bunch of questions there: will those prosper as a standalone category, or will they be integrated into things like Postgres with pgvector, Elasticsearch, and others?
We're seeing the rise of frameworks like LangChain and Haystack and LlamaIndex that add capabilities on top. We're seeing, as you mentioned, RLHF, the whole training process around reinforcement learning from human feedback, and companies like Snorkel adding active learning into that process too.
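As a concrete illustration of "style in the fine-tune, knowledge in the stack": the retrieval pattern behind the vector databases and frameworks Roy lists usually reduces to embed, search, and prompt. A minimal sketch under stated assumptions: it uses the 2023-era openai client, a brute-force in-memory index stands in for a real vector database like pgvector, and the documents are hypothetical.

```python
import numpy as np
import openai  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical knowledge the base model was never trained on.
DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Premium support is available 24/7 for enterprise customers.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

DOC_VECS = embed(DOCS)  # a vector database would persist and index these

def answer(question: str) -> str:
    # Retrieve: cosine similarity, which pgvector or a vector DB does at scale.
    q = embed([question])[0]
    sims = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    context = DOCS[int(np.argmax(sims))]

    # Generate: the retrieved text, not fine-tuned weights, carries the knowledge.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content":
            f"Answer using this context:\n{context}\n\nQ: {question}"}],
    )
    return resp["choices"][0]["message"]["content"]
```

Frameworks like LangChain and LlamaIndex mostly package these same steps, chunking, embedding, retrieval, and prompt assembly, behind higher-level abstractions.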
So I think what we're gonna see is really a new software stack emerging across all of the traditional aspects of the software development life cycle. So around development, testing, monitoring, and also functionally around things like workflow orchestration, service proxies and so on.
So I think we're gonna learn some lessons from spaces like MLOps, data engineering, and service meshes. Sometimes we'll take some of those technologies directly; we've seen over the past couple of weeks the launch of Databricks' efforts to build LLM functionality into MLflow.
And sometimes I think it will need to be new frameworks. But I think that there will be a ton of change in this space, as we see this whole new stack built out. And I think probably more of it will be outside of the core foundation model than within.
Pauline: Given how powerful these foundation models are, there's been a lot of, almost, angst from MLOps practitioners as to what that means for them. What are you seeing in terms of how organizations are shifting resources or shifting what their ML teams are focused on, given that the APIs can deliver a lot, right?
Roy: We have certainly seen some of our clients that have large teams devoted to NLP, computer vision, and building bespoke specialized models say, okay, how can we leverage the new foundation models to be more effective now? I don't think it's leading to a reduction in the number of people needed.
I think it's actually much more that we can go after more problems. Things that were economically infeasible because we had to have a million pieces of labeled data we can now actually go after, so we have more use cases. But our computer vision and NLP engineers need to think about their roles in different ways: as opposed to always training the model from scratch and from the ground up, how can you rely on pre-built models?
Now with that, there's a whole set of questions around how you design a reliable software and model workflow when part of what you're using is not under your control. How do you control the timing of version upgrades from your foundation model provider, and how do you maintain a robust test suite that ensures the performance of your models against the target tasks doesn't degrade? We're still working out all of those processes. And there's a general shift towards asking how we can leverage these models and have our own machine learning engineers and data scientists work at a higher level of the stack.
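A small sketch of the upgrade discipline Roy describes: pin the provider's model snapshot and gate any upgrade behind a regression evaluation on your own tasks. The snapshot names, threshold, and score_model helper here are illustrative assumptions.

```python
# Pin an exact model snapshot in config, never "latest": a provider upgrade
# then becomes a deliberate change that must pass the regression gate below.
PINNED_MODEL = "gpt-4-0314"      # hypothetical currently pinned snapshot
CANDIDATE_MODEL = "gpt-4-0613"   # hypothetical newer snapshot under evaluation
MIN_TASK_ACCURACY = 0.92         # floor set from the pinned model's baseline

def score_model(model: str, eval_set: list[tuple[str, str]]) -> float:
    """Placeholder: run the labeled eval set against `model`, return accuracy."""
    raise NotImplementedError

def safe_to_upgrade(eval_set: list[tuple[str, str]]) -> bool:
    # An upgrade that helps on average can still regress *your* target tasks,
    # so only adopt the new snapshot if it clears the floor on your own data.
    return score_model(CANDIDATE_MODEL, eval_set) >= MIN_TASK_ACCURACY
```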
The other thing I'd say is, let's also remember that the vast majority of machine learning and analytical capability within most enterprises is actually put against structured and tabular data. And while that has seen modest and continued improvement, it hasn't seen the step change that unstructured data has with foundation models. So it's a mixed story. It's not that foundation models are completely replacing analytics and engineering teams at enterprises.
Pauline: I'm sure a lot of MLOps practitioners will be happy to hear that. If I think about earlier this year, we had the DALL-E moment, where the open-sourcing of Stable Diffusion, and how good it was compared to DALL-E, was pretty mind-blowing for a lot of us.
And in the past few weeks, it feels like we've had that moment on the text side, with a number of open-source language models coming out, from LLaMA to StableLM to Hugging Face's HuggingChat. There's a big debate right now between open source and closed source: do you have one model or multiple models? What's your perspective on how this shift happens, and where does this go over the next five or ten years?
Roy: Absolutely. That is a huge question that nobody knows the answer to.
Pauline: Trillion dollar question.
Roy: It is, it really is. I'd say as well, it's just hard, honestly, as you pointed out before, for all of us to keep up with the pace of change, new models every week and so on. That said, as we think over the long term, there have been some very influential papers on the scaling laws of AI models that, again, we can refer your listeners to in the show notes. There's one argument that says we are continuing to see emergent behavior from the sheer size of these models that OpenAI, Google, and others are scaling. With that argument, you could argue that economies of scale around compute power will continue to be a differentiator, and that there may be other economies of scale, such as access to research talent, the ability to do deals with copyright holders, and the ability to access data sets.
And then there's a countervailing argument that says there's a long line of models that go generally by the names of furry animals. If I'm getting my lineage right, it started with Chinchilla; somehow there was also UL2, and there were LLaMA and Alpaca. All of these were very much focused on how we can be much more efficient in the size of the models we're training, and maybe train a smaller model for longer on more data to get the same performance out.
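For reference, the Chinchilla paper (Hoffmann et al., 2022) that anchors this line of work models loss as a function of parameter count N and training tokens D; the constants below are approximately the fits reported in the paper:

```latex
% Chinchilla parametric loss fit (Hoffmann et al., 2022)
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\ A \approx 406.4,\ B \approx 410.7,\ \alpha \approx 0.34,\ \beta \approx 0.28
```

Minimizing that loss under the approximate compute budget C ≈ 6ND gives N and D each scaling roughly as the square root of C: parameters and training tokens should grow together, on the order of 20 tokens per parameter, which is the quantitative argument for training smaller models on more data.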
How the balance between those plays out, will the big models win versus will we get more efficient, both of those dynamics will be at work, and we'll have to see exactly where the scaling laws land. And we see other moves too: many of us will have seen, for example, BloombergGPT, and there's a host of others, I mentioned Sec-PaLM from Google earlier, where domain-specific models are emerging. So the exact split: what proportion of the market will be taken by true horizontal foundation models versus domain-specific models? Will those domain-specific models be built on top of the foundation models, and what will the economics be? And then finally, what is the relative value of the original training of the base model versus things like RLHF and the human feedback on top of it? I think all of that is up for grabs over the next few years and will play out.
Pauline: And speaking of how quickly companies are moving, when we talk to founders, there certainly seems to be a lot of nervousness and energy around how to think about competing with the Fortune 500s and the big tech companies like Microsoft and Google. And you have incredible access to the Fortune 500 companies that startups would love to target as customers.
And so what would you say to startup CEOs and founders about how to compete in this market and how they can differentiate themselves in this very dynamic competitive ecosystem?
Roy: So I think there are a couple of different strategies. From a technology perspective, there will continue to be lots of parts of this emerging software stack to build, as we move from the notion of the CPU as a generic computation unit to potentially the language model as a reasoning unit that can reason and perform tasks against natural language instead of against computer code. That's a huge shift. And as we make that shift, there will be lots of aspects to solve in terms of testing, monitoring, labeling, managing reinforcement learning, scaling in production, chaos testing, resilience testing, and so on.
Many different aspects of that will need to be solved, things like ethics and bias testing obviously among them. So I think one very viable business model is to take one of these and become the best in the world at doing it. There are end states where tech founders stay independent, remain the best in the world, and have a unicorn company that solves ethics and bias in a sustainable way, for example. And there are end scenarios where companies like that get acquired by a big technology or cloud vendor and get folded into a broader foundational technology and machine learning platform.
So being the best in the world at part of this emerging stack is one strategy. The other is to go after a very specific industry use case or collection of use cases. For example, going into, let's say, utilities or oil and gas, thinking about the new uses of multimodal computer vision in predictive maintenance, and just becoming really good at that: putting models into production, getting a bunch of feedback on what worked and what didn't, and using reinforcement or active learning to train a set of models that catapults you further ahead. There you are differentiating on the data sets you might use to fine-tune or otherwise specialize models, on the feedback you're getting from enterprise customers to make your models more performant, and on the level of integration you have with a specific industry workflow or collection of workflows. So those are a couple of specific strategies that I think are really viable.
Pauline: I think the vertical side of the world is gonna become a lot more interesting as I think a lot of the areas that were maybe underserved or had less attention will become very big areas of opportunity.
And before we head into the rapid fire round, any last words to the Fortune 500 companies or the enterprise practitioners as to how to best set themselves up for success as we enter into this rapid generative AI deployment period?
Roy: I'd just say get yourself set up to experiment and get yourself set up to scale and those are two different things, and you're gonna need to cultivate both of those in your organization at the same time.
Pauline: And with that, let's head into our rapid fire round. So first rapid fire question, what's your definition of AGI and when do you think we'll get there?
Roy: I think as human beings, there are a bunch of capabilities that we all have: the ability to perceive the things around us, to understand how the world works, to communicate with others and understand them, also to reason and plan and use tools, and even also to learn new skills.
So I think for the machine to be seen to be truly intelligent, it's gonna have to be good at doing all of those things. And as to when I think we'll get there, I think we're still some decades away. But every year that I think about that question, I think my answer, my estimate shortens.
Pauline: Number two, how much do you worry about the existential risk that powerful AI poses to human civilization?
Roy: What I worry most about is military applications. Similar to the advent of nuclear technology, we're gonna need to work through how countries define and enforce boundaries around the use of this technology for military purposes.
I worry also about the wave of cybersecurity attacks we're about to see as this technology potentially gets into the hands of bad actors. And also about its influence on the political process and public discourse: we've seen the impact of fake news on social media over the past few years in an election context, and I think this can exacerbate that issue.
Pauline: So does that mean you're not worried about AI stealing people's jobs?
Roy: I think that for some number of decades, even centuries, we have been through discontinuities and revolutions in technology. And the example I always like to give is the advent of the spreadsheet in the 1980s: you saw this huge reduction in the number of bookkeepers.
Bookkeeping was a profession before that, but you saw this huge explosion in financial analysts and it was driven by the same technology change. And I think similarly here, we will see certain role types reduce radically and disappear quickly. Others come about, and I think one of the interesting things about this revolution is that it's a technology that is accessible by natural language and so therefore empowers a lot more people than some of the previous technologies.
Pauline: That's a very optimistic view. I appreciate that. Third rapid fire question. What is the biggest challenge facing AI practitioners or researchers today?
Roy: I think keeping up with knowledge, it's moving so rapidly.
The rate of change, not only in the knowledge but also in the software environment, is very high. And then there's access to data sets and to large models: the state of the art in foundation models is no longer within academia, it's within industry. And I think that poses questions about the traditional model of academic institutions being at the forefront and state of the art in this field.
Pauline: I know there are a lot of AI researchers and practitioners who spend half their time just reading the latest papers, so that's certainly a big one. Fourth, who are one or two of the biggest influences on your mental framework on AI?
Roy: I think about AI fundamentally as the intersection of a statistical heritage, a computer science heritage, and other fields like control theory. For me, some of the statistical and Bayesian thinking from people like Andrew Gelman and Kevin Murphy has been very influential on my way of looking at machine learning and at the world.
Within computer science, not that I've spent a lot of time on John McCarthy's work, but early on, being a Lisp hacker and playing with functional languages, I think the early work on AI in Lisp, in Prolog, and in functional and symbolic languages has been pretty influential on me.
And then practically, I would say the tremendous community around the Python and PyData world, and just being able to try things out, has also been very influential. That's not one individual, but it's a community.
Pauline: Very diverse answers. I love it. And last rapid fire: what is one thing that you believe strongly about the world of AI that you think most people would disagree with you on?
Roy: I'll give you two; one may be more controversial. The first is that I don't think the current generation of language models, or most other mainstream machine learning models, encode an understanding of causality. Judea Pearl and others have written a lot about this, and there are academic groups focused on the problem. But I think building causal inference into the current generation of AI engines will need a significant departure from the architectures and techniques we're using today.
And I also happen to believe, and this may be holding onto some things that I've seen to be valuable in the past, but I fundamentally believe that, alongside large foundational AI models built on deep learning architectures, symbolic reasoning and potentially even classical machine learning and statistics will play a role in the ultimate AGI engine that comes about. I don't think it will be purely based on stacking layers of deep learning networks.
Pauline: Those are certainly some hot takes. And with that, thank you so much for joining us today. A lot of the frameworks that you've provided, I'm sure others will find very helpful. And so thank you for taking the time and coming on and bringing such thoughtful answers to the conversation.
Roy: Thank you, Pauline.