AI in Gaming and Enabling Disruption with Unity's Danny Lange
This is Cross Validated, a podcast where we speak to practitioners and builders who are making AI deployments a reality.
Our first guest is Danny Lange, SVP of AI at Unity Technologies and former Head of Machine Learning at Uber and Amazon, with guest host Rob Toews, Partner at Radical Ventures.
Listen and subscribe on Spotify and Apple.
Follow me on Twitter (@paulinebhyang) as well as Danny (@danny_lange) and Rob (@_RobToews)!
Transcription of our conversation:
Pauline: AI has captured the world's attention and imagination and will change nearly every aspect of how businesses are built and run. Business leaders will need to rethink their architectures and adapt to remain competitive. Join us on cross validated as we speak to practitioners and builders who are making AI deployments in the enterprise a reality. We'll explore the challenges and opportunities of this transformative technology and discover how it is being used to drive innovation, efficiency, and growth.
Danny: Imagination and innovation go hand in hand here. And I think that's what teams need to look for: really less hardcore software engineering execution, if you think of it that way, and more of seeing the opportunities for changing the way that a business operates through these new technologies.
Pauline: Welcome to Cross Validated, a podcast with practitioners and builders who are making AI deployments in the enterprise a reality. I'm your host, Pauline Yang, and I'm a partner at Altimeter Capital, a lifecycle technology investment firm based in Silicon Valley.
Rob: And I'm your guest host, Rob Toews. I'm a partner at Radical Ventures, a venture capital firm focused entirely on AI.
Pauline: Thanks so much for being our first guest on the show, Danny. You've been involved with AI since the early two thousands and have been leading teams building AI at Microsoft, Amazon, Uber, and now Unity. Would love to just start with you telling us what Unity does and what your role is as the Senior Vice President of AI.
Danny: Thank you. And thank you for having me. I've really been looking forward to this session. Unity is a game platform, or game engine, company. I've had the pleasure, as you said, of working for some really well-known brands, and there are a lot of people who don't know Unity. But about 75% of mobile games are built on our platform.
So about 500 new games ship every day. We are really the preferred platform out there for video game developers. And it's a very interesting place to be a VP of AI, because the job actually spans pretty wide compared to what most people would believe.
So at one end of the scale, my job is to actually enable monetization for game developers. That's what some people call the boring AI or machine learning stuff, but it's really about getting ads in front of game players so that game developers and studios can finance their development.
And then at the very other end of the scale is creativity: basically using AI, a lot of the stuff that we're talking about right now with generative AI, and getting that into the hands of our game developers so they can deliver higher-quality games with much higher productivity than in the past.
Rob: Thanks for that overview, Danny. It's really exciting where Unity's positioned, right in the middle of the gaming ecosystem and, increasingly, the generative AI ecosystem as well. We wanted to start the conversation at a high level by talking about the flurry of news related to AI and generative AI that's been coming out as a steady drumbeat over the past several weeks.
There have been a lot of big announcements and releases, but arguably the most newsworthy of all has been OpenAI's release of GPT-4, along with last week's announcement of ChatGPT plugins, which enable developers to connect OpenAI's large language models to external applications and other external data sources. I'm curious to hear your reaction when you saw these announcements over the past several weeks.
Danny: Let me first say, and the pun is intended here: it's a game changer. That's a fact. I would also admit, having been in this area for over two decades, I keep getting amazed and surprised at the same time.
Things are progressing much faster than I thought, than anyone thought. So I want to be really humble here and just basically say that GPT-4 has actually shocked me. I know how large language models work. I'm an expert in reinforcement learning. I just never expected it to be this good, this quickly.
And it goes back... there have been other epic moments, such as AlphaGo and AlphaZero from DeepMind, where a system could play Go better than any human. That happened five to ten years before anyone expected it, so we are on a rapid trajectory here.
Pauline: I'm curious, what's been the most surprising application that you've found from GPT-4?
Danny: I think its ability to play along with you. Remember, a large language model is really just a compressed version of all the information on the internet, all the textual information. But the fact that it, through reinforcement learning, has learned a policy that allows it to engage in a dialogue with the user, that is pretty innovative.
It’s more than writing a summary of something. It's more than finishing a paragraph, but it is playing along in character, and working with you on your premises, on your instructions. That is really remarkable.
Pauline: That makes sense. I remember when I was first talking to developers about their experience with ChatGPT, it seemed like the magic came from having this thought partner, which required this back and forth that you're talking about.
And as you take a step back and assess these developments as they relate to Unity, what has been the biggest or coolest unlock for the business?
Danny: There's no doubt that if we look at the graphical side first, it is tools like Midjourney and Stable Diffusion. The fact that you can create graphics of awesome quality that can be used directly in a game: that's already had a huge impact. At a fireside chat five months ago, I asked the 150 studios in the room, has anyone played with Midjourney or Stable Diffusion? And I was prepared to explain what they were, since the stuff was brand new at that time. Every single hand came up, and a few studios, five months ago, had already shipped their first games generating some of their graphic assets using Stable Diffusion. That is really an epic moment of change.
Rob: As we consider how a lot of these cutting-edge new technologies will impact Unity's business, one thing that I think is really interesting is that Unity is working on launching a generative AI marketplace for video game developers. Can you explain to our audience the concept behind this and what developers can expect?
Danny: It's really a two-sided marketplace. It's really important to understand how it works: there are AI companies, startups out there, who create this fantastic new technology, and it's a race and a competition to get it to market. On the other side of the market, there are all the game developers.
They're looking for productivity improvements. They're looking for new technology to dazzle their players. So what we are doing with this marketplace slash ecosystem is to make it easier for the startups, for the technology providers, to meet their future customers, the game developers. That's really the key essence: it's both a push and a pull.
And I think it's a very important moment to be completely upfront here. At Unity, we have basically concluded that developments are going so fast right now that we can't be the builders of all of this. We need to bring the builders to our customers, the game developers, and that's what we are doing with this marketplace.
Pauline: It's interesting that you talked about the 150 studios and everyone raising their hands. How do you weigh the importance of speed versus quality versus distribution, and given the role Unity plays in the game development world, how do you balance all of these things?
Danny: That's a very good question. The way you have to look at it is that there is a wide range of creators in this world. Some of them are the well-known AAA studios. They spend millions of dollars. They work on games for years. I tend to call them the Steven Spielbergs of game creation.
And then at the other end of the scale are the individual game developers, and their ability to reach a market very quickly, because they work on their own dime. They finance this their own way, mostly through advertisement. They have very different demands.
The big studios can spend all the money they have on fancy graphics. The little people out there, the indies, one to four people in a game studio, have to use all available tools, and they have to hit the market quickly enough to generate the revenue to create their next game.
It's really a lifestyle business. It's a business of love. They do what they love every day, but they have to live, so they have to make money on it. And I think that the ecosystem and marketplace we are launching addresses the needs of that indie end of development.
Pauline: I know that when Altimeter invested in Unity, one of the big reasons we were so excited about the platform was that Unity was an important part of enabling these individual game developers.
If you think about some of the biggest challenges you've found in unlocking this value from generative AI, what are they, and how is Unity addressing or overcoming them?
Danny: It's really a matter of ease of use. I came into the gaming industry six years ago with an AI and machine learning background, and there was a lot of stuff I wanted to do that I found very difficult to do, because there are a lot of artistic and creative needs on the game creation side.
It's not just about highly efficient infrastructure, etc. So often we would have a piece of technology, and game developers would look at it and say, well, I don't really have time to deal with that because I have a tight deadline here. And that goes back to this productivity need.
And what we have found is essentially that a lot of the technology that has been available so far is just too complicated to use. If you are an AI team in a big enterprise, you can study papers and you can experiment all day, and then six months later you have something. That's not how game development works.
It's not how the business works. So the real challenge here is to bring this technology to developers without making them AI technology experts; they have to be power users of it. So ease of use is really what matters here, and bringing value. This is not an academic exercise. I think that's the most important learning I've had.
Some of my teams have developed incredible technology that could animate NPCs, or characters in a game, through reinforcement learning. But it would take weeks, a month, and it would take lots of compute resources, and you would have to build infrastructure for it. And a lot of developers look at that and say, yeah, it's really cool, but I don't really have time to become a specialist in this. I can't hire anyone for this. I really need tools that allow me to hit the ground running much, much sooner.
Rob: And Danny, on this topic of the challenges of getting up and running with building AI: one thing that I think everyone has come to appreciate much more over the past couple of years is just how expensive and how challenging it is to build AI, in terms of team resources, compute requirements, and so forth. I'm curious how you would advise companies that are earlier on in their journey with AI on making those costs manageable and deploying AI in their businesses in a way that's practical.
Danny: That is actually a very important question. I think we are, to a certain extent, very lucky right now, especially around LLMs, large language models. There's this whole concept of foundation models, where all the heavy lifting has already been done, and as a smaller company you are able to customize, tweak, or fine-tune the big model to your particular domain. That's very fortunate.
It may not be something we'll see in all cases. So let me get back to actually answering the real question, which is: as a smaller startup, how are you gonna go up against companies that have, literally, billions of dollars in cloud consumption funding?
I'm not saying they have it in cash, but they have it in cloud resources. How can you compete against that? I think it's very important here to think back to the mid-nineties and the internet. At that time, everybody wanted to be the next AOL. Everybody wanted to be the thing that brought the internet to people.
But at the end of the day, that really came down to phone companies: AOL offering dial-up lines, phone companies offering DSL, and cable companies offering high-speed cable connections. And of course, small companies could not go in and compete in that space.
But 99%, or maybe 99.9%, of the internet is actually the services provided on it, not the infrastructure, not being the fiber-optic company behind it. And I think when you look at AI, and I want everybody out there to really take this in, you have to see that 99.9% of AI is not about being the large language model; it's about bringing those models into brand new areas that none of us can even imagine at this point.
So novel uses: be the next Facebook. Facebook doesn't run the internet; they offer social media. I can tell you, in the mid-nineties, social media was not a word. Nobody had thought about it yet. So that's where I think we really need to look when it comes to AI: not being too humbled by the big players, but really going in and saying, nobody's doing this. I can use large language models, I can use generative AI in graphics, to create things that nobody has ever been able to do before, and focus there.
Pauline: Actually, Danny, on that point, can you give us a tangible example where you've gone through that at Unity, and how do you think through what exactly you're gonna implement?
Danny: I think we are very careful not to spread ourselves too widely and try to do everything. We are trying to go in and create something that virtually nobody has done before. And I will give an example of that: we have actually spent the last five years creating a deep neural net inference machine, or engine. So basically you can run inference of a deep-learned model, a deep neural net, right there within your game on any device that Unity supports. That ranges from a Sony PlayStation, Xbox, PC, or Mac all the way to an iOS or Android device. We support somewhere between 20 and 30 different platforms.
But we will allow you to run deep neural nets across all of those platforms, locally on the device. Nobody else is doing that. This is what Unity is famous for; that's why we are so popular: you can write once and run on many different platforms, and we are bringing that to the AI space. That's where we put our focus.
We don't really wanna do chatbots that everybody else can do better than we can. So I think that's an example: we find that particular spot where we can make a difference and support our creators. Just think about not having to go back to a cloud service in every single interaction with a player.
You can use these graphical models as part of the graphics component of your game because there's no latency; it's all local on your device. If you use a chatbot, you don't have to go all the way back to some cloud service that's gonna charge you for their service. You can do it all locally. I think that's an example of how we try to focus our resources where they will matter the most for our developers.
Pauline: I think every organization is gonna have to figure out what is most important and what they can deliver for the most important persona in their organization.
Given your experience across so many respected organizations, would love to talk about the most interesting tips of the trade or hacks for fellow practitioners or people who are just earlier in their journey of deploying AI.
You've been at this a very long time. If you reflect on the last 20 years, what would you say is the biggest shift in how AI systems are deployed within companies and their processes over this period?
Danny: AI and machine learning have had a profound impact on a wide range of industries. Early on, in the early 2000s, it was in very narrow, specific areas; recommendations were a big one. And when I look at what has happened over the years, I see the impact of teams that did hands-on work. They saw an opportunity, and I'll give you a few examples of that, to really bring the business forward by picking a very interesting area and having, I would say, a disruptive impact on it.
And I can mention a few examples. At Amazon, it was moving from a classic, data-scientist-driven recommendation method for Prime Video recommendations, which had worked for years and years, to one where you basically sucked in everything you knew from IMDb, trained a deep-learned model, and let it do the recommendations.
And boom, it beat the data scientists and created a 30-plus percent lift overnight, because apparently all the information from IMDb helped a lot. Traditionally, at that time, the ground rule for data scientists was noise in, noise out. But people who actually played with deep neural nets found that they have this ability to learn what's noise and what's signal and filter out the noise. That's part of the nature of deep neural nets. So by trying this in practice and actually deploying it in A/B testing, you could demonstrate to a big company like Amazon that there's something here that will be a game changer for you.
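As a back-of-the-envelope illustration of the kind of A/B result Danny is describing (the conversion rates here are made up for the example, not Amazon's numbers), relative lift is just the treatment arm's rate measured against the control's:

```python
def relative_lift(control_rate: float, treatment_rate: float) -> float:
    """Relative lift of the treatment arm over the control arm."""
    return (treatment_rate - control_rate) / control_rate

# Hypothetical A/B result: the classic recommender converts at 10%,
# the deep-net recommender at 13%, i.e. a 30% relative lift.
print(f"{relative_lift(0.10, 0.13):.0%}")
```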
I will give another example that I love a lot, from Uber, something that TK did not like initially. The vision for Uber was transportation as easy as running tap water, something like that. You need a taxi, you'll get a taxi.
So he resisted a reservation mechanism, because he did not want this idea that a driver would sign up to pick you up at a given time, all that stuff. You should always just get an Uber when you need it. But we all knew reservations were really something that people wanted.
So a team basically developed a machine learning model to predict when to make that call. If you need a car at eight tomorrow morning, they will run the inference engine, the machine learning model, and try to predict when to place that call to a driver. So you did not actually get a reservation.
They used machine learning to accurately predict when the driver would be at your house. That was a very neat application, and that's actually how reservations at Uber worked for a long time. Again, a practical example that shows the people around you that machine learning and AI can actually do magic and change the way you do business.
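The dispatch trick Danny describes can be sketched in a few lines. This is a toy, of course: `predict_travel_minutes` stands in for the actual learned travel-time model (here replaced by a naive distance-over-speed estimate), and the five-minute buffer is an invented safety margin, not Uber's.

```python
from datetime import datetime, timedelta

def predict_travel_minutes(distance_km: float, avg_speed_kmh: float = 30.0) -> float:
    # Stand-in for the learned travel-time model: naive distance / speed.
    return distance_km / avg_speed_kmh * 60.0

def dispatch_time(pickup_at: datetime, distance_km: float,
                  buffer_minutes: float = 5.0) -> datetime:
    """When to place the real-time driver request so the car arrives on time."""
    lead = predict_travel_minutes(distance_km) + buffer_minutes
    return pickup_at - timedelta(minutes=lead)

# Rider wants a car at 8:00 tomorrow morning, 10 km away: the "reservation"
# is really just a well-timed real-time request placed at 7:35.
print(dispatch_time(datetime(2023, 4, 3, 8, 0), distance_km=10))
```

The neat part, as in the anecdote, is that no reservation system exists at all; only the timing of an ordinary request is predicted.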
So my recommendation to teams is: question some of these traditional, deterministic ways of doing things and start thinking in probabilities. Very often you'll find very neat solutions, and they will pave the way for a bigger change, where everybody starts saying, this is the future, this is how we're going to do things.
Rob: Fascinating examples of early applications of machine learning in really high-impact use cases at both Uber and Amazon. Thanks for sharing those. On this topic of tips of the trade for operationalizing AI at scale within enterprises, I'm curious to hear your thoughts from a team perspective: what do you think is most important and most necessary in terms of team composition and coordination for a company to successfully deploy AI? And maybe specifically, Danny, to your earlier analogy, within those 99.9% of companies that are not necessarily building the foundation models themselves, but are looking to integrate and productize AI.
Danny: I think that imagination and innovation go hand in hand here. And I think what teams need to look for is really less hardcore software engineering execution, if you think of it that way, and more of seeing the opportunities for changing the way that a business operates through these new technologies.
I think it's very, very important not to treat this like old-fashioned software engineering, what I call Newtonian thinking: that everything can be computed, now, past, and future. Which is not true; that's why we have Heisenberg, from quantum mechanics, and his concept of uncertainty. This is how machine learning and AI operate: by probabilities, by uncertainty.
And that's how our teams need to think as well. That is my biggest, most important advice to these teams: look broadly, and look at opportunities within your business to actually deploy these technologies in ways that completely change how you do business.
And have that imagination. I think that's the biggest shortcoming; it's actually much harder than you think to have these ideas, to disrupt your own business. I gave the example of reservations at Uber. I thought that was, at the time, a very neat example, and it was literally a hack week thing.
One team demonstrated this, and everybody said, huh, we can do reservations without having a reservation system. It's completely upside down. It sounds easy, but it's not easy for teams to come up with this. That's why we need teams that have, I would say, greater diversity and broader experience, and that are probably also less risk-averse.
Pauline: I'm curious then, as a follow-up: do you keep this team separate from the teams that are running the rest of the business? How do you staff them up or organize them?
Danny: I think it would actually be very wrong to keep it separate for a very long time. You can initially do that, but it's very important to understand that the impact of this technology really makes a change when it's broad, across the company. When I worked at Amazon, machine learning was very near to Jeff Bezos. We were not a department hidden somewhere deep, deep down. That was important. And we actually put a version of the internal machine learning system on AWS, as the first AI/machine learning service on AWS, because to Jeff it was so important to get this out to the world. Then you can go to Uber. This was a TK thing. TK wanted to run Uber more efficiently than anyone else. He knew machine learning and AI were gonna do that, including the self-driving cars, which I think was a stretch.
And then at Unity, John Riccitiello, our CEO, hired me to have a similar impact on Unity: not just doing it in a corner, but doing it broadly across the company. As I mentioned in our opening, part of my work has been on the monetization side with advertisement networks, something people don't think is very sexy. But this is what enables creators to make a living making games. So some people may not like these advertisement networks, but they're actually very important for the creators and for all the indies.
Otherwise they can't finance the development of their games. And then all the way over to animation and graphics creation and reinforcement learning for NPCs, etc. A company needs to look at this as something that goes across the company, not just in a corner of exclusive development.
Rob: And on this topic of how the practice of machine learning has evolved over the years and decades, I'm curious to hear your reflections on how you've seen the available tooling evolve. And as part of that question, how would you advise listeners and their respective organizations to approach the question of when to buy versus when to build tooling internally?
Danny: That's actually a variation of one of my favorite questions, Rob, because I've been through this. Early on, we would spend five years developing something from the ground up; there was absolutely nowhere to go to get any tools or anything. You had to write all the code yourself.
Then gradually, I experienced at Microsoft that there were certain things we could rely on, certain internal tools, certain components, that enabled you to create something in a couple of years from scratch. When I joined Amazon and took over the machine learning team there, we basically relied on six or seven other services in AWS and added more of our own components, so we were looking at that saying, oh, I can maybe now develop a service in 12-18 months. When I joined Uber, we went all open source. We just used a bunch of open source tools and hired a lot of smart people, and we could get going in 6-12 months.
And where we are today, I'm gonna risk something here and say I think that maybe open source has sort of outplayed itself a bit, and we have come full circle back to vendor-based software. I am actually starting to become a huge proponent of this: do not invent it yourself, that's number one, and do not try to download it and fix it yourself. There are vendors out there for all kinds of services, whether that's labeling services for annotating data, MLOps, the ability to monitor that your models are performing well, etc.
All that stuff we had to invent at Amazon; it didn't exist. Today, you can actually purchase it from vendors. You can demand certain quality from them, certain support, and you can focus on how to change your business instead of messing around with a lot of software. And I think we are at that transition now, where a lot of companies are either looking, or will soon have to look, to bring in vendors to a degree that we haven't seen in a long time in our space.
Pauline: I'm very curious whether that statement extends to large language models, given that there are so many open models, whether it's Alpaca or Databricks' Dolly. Do you include those in that category?
Danny: I'll give a very concrete example. A lot of people have noticed that the imagery Midjourney creates gets better and better; it just improves. And I know a lot of people who spend all day using Midjourney to generate graphics content for game development. At the same time, look at Stable Diffusion, the model distributed open source by Stability AI. It's sort of the same as it was two or three months ago.
So the difference is that there's no feedback loop. You build a model in open source, you share it with the world, the world is happy, but then the world uses it without actually making it better. So one of the most important elements here is what I would characterize as the network effect.
The thing that made Google better and better is that every time we click, we help Google figure out what's important and what's not. That feedback loop: Midjourney gets it. The open source projects don't really get it. They can retrain, they can share experience, but they're really losing out.
So I think the thing that makes ChatGPT and GPT-4 better than anything else is the feedback loop. At first, that feedback loop was hired people, contractors paid by OpenAI and Microsoft, providing human feedback for the reinforcement learning algorithm. That was the first step.
But now the genie is out of the bottle and we are all using it and making it better through our use. When we like something, we use it. When we don't like it, we tell them we don't like it. That's how they are gonna make it better. So with a lot of these systems, we are gonna look toward the network effect, that effect of making it better. And that's again a bit of a game changer. It's not just a matter of downloading software, wrapping it up, and moving on. Now this is something you have to maintain, and keep maintaining as long as it's operational.
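The network effect Danny is pointing at is, at its core, a running tally of user signals steering what gets served next. Here is a minimal toy sketch of that loop; the class, the identifiers, and the plus-one/minus-one scoring rule are all my own illustration, not OpenAI's or Midjourney's actual mechanism.

```python
from collections import defaultdict

class FeedbackLoop:
    """Accumulate thumbs-up/down signals and serve the best-scoring output."""

    def __init__(self):
        self.scores = defaultdict(float)  # output id -> cumulative score

    def record(self, output_id, liked):
        # Every use is a training signal: likes push a candidate up, dislikes down.
        self.scores[output_id] += 1.0 if liked else -1.0

    def best(self, candidates):
        # Serve whichever candidate the accumulated feedback favors.
        return max(candidates, key=lambda c: self.scores[c])

loop = FeedbackLoop()
loop.record("caption_a", liked=False)
loop.record("caption_b", liked=True)
loop.record("caption_b", liked=True)
print(loop.best(["caption_a", "caption_b"]))  # → caption_b
```

The point of the sketch is the "maintain as long as it's operational" part: the scores only exist while the loop keeps running, which is exactly what a one-time open-source snapshot lacks.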
Pauline: That's certainly a hot take.
So, with that in mind, whether it's opportunities in MLOps or more broadly, what are some of the biggest opportunities or gaps you see in building or deploying AI that you'd like to see Unity or some other company address?
Danny: I think the most important one to me is transparency: the ability to better understand whatever has been generated. Where does it come from? If you're looking at text: what were the sources for this information? Can I trust that number is right? Can I go and look it up? Tell me where you got it from. In graphics: what inspired this particular image?
I wanna make sure that I don't stumble into some kind of copyright issue, that it actually looks very much like something and I just didn't know it. So that kind of transparency is completely lacking today. A lot of people talk very abstractly about ethics and bias in models, etc.; try using Midjourney to generate images of a powerful CEO, and the vast majority you get will be white, middle-aged males. It would be nice to know more about how models derive this. And the reason I'm smiling here (you can't see this on a podcast) is that with deep neural nets, that is increasingly difficult to do.
Transparency is very difficult. These networks have their own ways of doing things, and it's very hard for us, and even hard for their creators, to explain some of the outputs. So I think that's a big challenge, a big area of improvement, and something we really need.
Rob: On another topic, Danny, there's a lot of discussion and even angst right now amongst startup founders around how quickly the world of AI is moving and what that means for building an AI startup and finding a product to focus on and a customer problem to go after. This is something you've already touched on a little bit, but having spent time in senior leadership roles at large companies like Amazon and Microsoft and now Unity, what advice would you have for early-stage startup CEOs today in terms of navigating big tech companies like the Googles and Microsofts of the world and the formidable distribution and capital advantages that those companies have?
Danny: I think the most important part of my answer here is what I talked about previously: don't try to compete with the big guys head on. Don't try to do what they're doing. Try to do what they cannot necessarily do.
So try to work out applications. There's a reason that Google didn't ship a thing like ChatGPT first: it's really not great for the search business. I remember when I was at Microsoft and Google Docs came out (and there was OpenOffice), and we said, we can't make Office free. It's a very important business with 10,000 employees in that division. We can't make it free. That kind of competition is not very constructive. What I really recommend is to look at these AI and machine learning technologies and the extremely fast progress being made, and then work on surprising applications that people just didn't think were possible.
We see a lot of work on how to build 3D structures from plain video, because when you play a video, the different angles you see of objects give you a sense of the 3D structure. We have also seen a lot of work in animation, which is a really hard problem.
A lot of generative AI, if you look at it, is text-based. And then it goes from text to 2D graphics. We still have a ways to go on 3D graphics, so 3D assets, and 4D, where you add motion over time. Then it gets really complicated. There's a lot of stuff that can be done right now where you don't have to be Google-sized to deal with it. It's really just a matter of imagination.
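The multi-view intuition Danny describes, that seeing an object from different angles pins down its 3D position, is the classical triangulation step behind structure-from-motion and video-to-3D pipelines. Here is a minimal sketch with NumPy, using made-up camera parameters (the intrinsics, baseline, and point are all illustrative assumptions, not anything from the conversation):

```python
import numpy as np

# Two calibrated cameras viewing the same scene from different angles.
# A projection matrix P = K [R | t] maps homogeneous 3D points to pixels.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])                              # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted 1 unit

def project(P, X):
    """Project a 3D point X (shape (3,)) to 2D pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation: recover the 3D point seen at
    pixel x1 in camera 1 and pixel x2 in camera 2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

X_true = np.array([0.3, -0.2, 4.0])   # a point 4 units in front of the cameras
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, x1, P2, x2)
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

One viewpoint alone leaves depth ambiguous; the second viewpoint is what makes the linear system solvable, which is exactly why video, with its many angles, is such a rich source of 3D structure.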
Pauline: I'm sure entrepreneurs are gonna be so happy to hear that. You don't have to have Google's size or distribution or CPUs; all it takes is imagination. So thank you for that. And with that, let's move on to our rapid fire round.
So first question of five, what's your definition of AGI, and when do you think we'll get there?
Danny: So AGI is a complicated thing.
It's not a human system, it's a different species. It's a system that will be way smarter than us in certain areas and not as smart as we are in others. It's like when you look at animals: dogs have a good nose, we don't have a good nose. They have certain capabilities that we don't have.
So AGI is not … remember the early days when airplane makers tried to make airplanes with flapping wings? Trying to make an AI system that mimics our brain, forget about it. That's not AGI. So I think AGI is this other species: very smart in certain areas, not at all in others.
And it could be a fantastic partner for us, and we'll be very impressed with it. We are not there yet. I think this is probably another 10 years out, but not necessarily much longer than that either. And I'm taking a risk here, because it's difficult to predict.
Pauline: When will you know, though? What needs to happen for you to know we've gotten to AGI?
Danny: It needs to be highly competitive with a human's level of intellect. So understanding very complex relationships in a 3D world, whether that is a truly self-driving car, which is still way out there. It's really understanding the complexities that go on between humans when they interact.
One of the remarkable things about humans is that in the last hundred thousand years, our brains have not been updated. Our brains are the same hardware as when we were in the jungle in Africa. But we have since been able to put a person on the moon, purely by developing communication, language, and organization between us.
So when we look at AGI, it means probably multiple AI systems working together. And we are gonna see the steps towards these systems that are gonna create their own feedback loops, are gonna do stuff where we are like, wow, humanity could not have done that.
Rob: All right. Second rapid fire round question related to the AGI topic.
This is a topic that's been on a lot of folks' minds recently. How much do you worry, Danny, about the existential risk that a powerful AI could pose to human civilization?
Danny: I believe there is an existential risk there. I think it is very important that we are not too humble here and that we decide who controls AI systems. Control has to stay with humans, because these systems, and I know this up close, are incredibly good at optimizing. They will optimize, they will work around you, they will find loopholes, and they will do whatever they can to be more efficient.
And humanity cannot allow them to just work around us. That would be very dangerous. I do wanna say that I think that climate change is gonna kill us way sooner than any AI and that we actually need really smart AI to help us solve that global crisis. But that's probably a different topic.
Pauline: Third, what is the most exciting AI application you've seen outside of your company?
Danny: I mentioned GPT-4 as a chatbot, as an in-character chatbot. The fact that you can ask it: hey, you are now Super Mario, I want to have a conversation with you, and you have to stay within that character.
That's pretty impressive, that you can now build games with characters that just operate in character, without having to script everything. I think that's gonna have a profound impact on the development of games. Graphics, which I already talked about, that's a big one too.
Rob: All right, next question.
This is the classic Peter Thiel question. What is one thing that you believe strongly about the world of AI today that you think most people out there would disagree with?
Danny: I’m more optimistic about when AI is gonna happen and about the true impact of AI. A lot of people think it's gonna be sometime in the future, far, far out. A lot of people are gonna be very disappointed in the very short term that we are not making more progress.
I worked very closely on the early days of the internet, and I have seen the impact it has had, impact that we could never have imagined. So I am probably more bullish on AI than most people, actually. I'm not that scared of it, because I think we actually need it to help us.
Pauline: And last rapid fire question, what is the biggest challenge facing AI practitioners or researchers today?
Danny: Imagination. Every time you have an idea, you start looking around and someone is already working on it. It’s really become extremely challenging to be out in front.
And I see a lot of incremental work. I see a lot of repetition out there. I would really love to see much more disruptive and really innovative stuff. And I know it's hard. It's really hard. But we really need to push the boundaries.
Pauline: Danny, thank you so much for coming on and being our first guest on Cross Validated.
This has been such an illuminating conversation and I wanna thank Rob for joining us as well.
Rob: Yeah, thanks for having me, and thanks for chatting, Danny. Really great conversation.
Danny: Wonderful questions. I really enjoyed this, and I hope it's useful to you and to your listeners.
Thanks. Thank you.