What OpenAI & Google engineers learned deploying 50+ AI products in production
By Lenny's Podcast
Full Transcript
We worked on a guest post together, and got this really key insight that building AI products is very different from building non-AI products. Most people tend to ignore the non-determinism. You don't know how the user might behave with your product, and you also don't know how the LLM might respond to that. The second difference is the agency-control trade-off. Every time you hand over decision-making capabilities to agentic systems, you're relinquishing some amount of control on your end. This significantly changes the way you should be building product. So we recommend building step by step. When you start, it forces you to think about: what is the problem that I'm going to solve? In all these advancements of AI, one easy slippery slope is to keep thinking about the complexities of the solution and forget the problem that you're trying to solve. It's not about being the first company to have an agent among your competitors. It's about: have you built the right flywheels in place so that you can improve over time? What kind of ways of working do you see in companies that build AI products successfully? I used to work with the current CEO of Rackspace. He would have this block every day in the morning which would say "catching up with AI, 4 to 6am." Leaders have to get back to being hands-on. You must be comfortable with the fact that your intuitions might not be right and you probably are the dumbest person in the room, and you want to learn from everyone. What do you think the next year of AI is going to look like? Persistence is extremely valuable. Successful companies right now building in any new area, they are going through the pain of learning this, implementing this, and understanding what works and what doesn't work. Pain is the new moat. Today my guests are Aishwarya Reganti and Kiriti Badam. Kiriti works on Codex at OpenAI and has spent the last decade building AI and ML infrastructure at Google and at Kumo. 
Ash was an early AI researcher at Alexa and Microsoft and has published over 35 research papers. Together they've led and supported over 50 AI product deployments across companies like Amazon, Databricks, OpenAI, and Google, in both startups and large enterprises. Together they also teach the number-one-rated AI course on Maven, where they teach product leaders all of the key lessons they've learned about building successful AI products. The goal of this episode is to save you and your team a lot of pain and suffering and wasted time trying to build your AI product. Whether you are already struggling to make your product work or want to avoid that struggle, this episode is for you. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. It helps tremendously. And if you become an annual subscriber of my newsletter, you get a year free of a ton of incredible products, including a year free of Lovable, Replit, Bolt, Gamma, n8n, Linear, Devin, PostHog, Superhuman, Descript, Wispr Flow, Perplexity, Warp, Granola, Magic Patterns, Raycast, ChatPRD, Mobbin, and Stripe Atlas. Head on over to lennysnewsletter.com and click Product Pass. With that, I bring you Aishwarya Reganti and Kiriti Badam after a short word from our sponsors. This episode is brought to you by Merge. Product leaders hate building integrations. They're messy, they're slow to build, they're a huge drain on your roadmap, and they're definitely not why you got into product in the first place. Lucky for you, Merge is obsessed with integrations. With a single API, B2B SaaS companies embed Merge into their product and ship 220 customer-facing integrations in weeks, not quarters. Think of Merge like Plaid, but for everything. B2B SaaS companies like Mistral AI, Ramp, and Drata use Merge to connect their customers' accounting, HR, ticketing, CRM, and file storage systems to power everything from automatic onboarding to AI-ready data pipelines. 
Even better, Merge now supports the secure deployment of connectors to AI agents with a new product, so that you can safely power AI workflows with real customer data. If your product needs customer data from dozens of systems, Merge is the fastest, safest way to get it. Book and attend a meeting at merge.dev/lenny and they'll send you a $50 Amazon gift card. That's merge.dev/lenny. This episode is brought to you by Strella, the customer research platform built for the AI era. Here's the truth about user research: it's never been more important, or more painful. Teams want to understand why customers do what they do, but recruiting users, running interviews, and analyzing insights takes weeks. By the time the results are in, the moment to act has passed. Strella changes that. It's the first platform that uses AI to run and analyze in-depth interviews automatically, bringing fast and continuous user research to every team. Strella's AI moderator asks real follow-up questions, probing deeper when answers are vague, and surfaces patterns across hundreds of conversations, all in a few hours, not weeks. Product, design, and research teams at companies like Amazon and Duolingo are already using Strella for Figma prototype testing, concept validation, and customer journey research, getting insights overnight instead of waiting for the next sprint. If your team wants to understand customers at the speed you ship products, try Strella. Run your next study at strella.io/lenny. That's s-t-r-e-l-l-a dot io, slash Lenny. Ash and Kiriti, thank you so much for being here, and welcome to the podcast. Thank you, thank you for having us. Super excited for this. Let me set the stage for the conversation that we're going to have today. So you two have built a bunch of AI products yourself. You've gone deep with a lot of companies who have built AI products, have struggled to build AI products, build AI agents. You also teach a course on building AI products successfully. 
And you're kind of on this mission to reduce the pain and suffering and failure that you constantly see people go through when they're building AI products. So to set a little foundation for the conversation we're going to have: what are you seeing on the ground within companies trying to build AI products? What's going well? What's not going well? I think 2025 has been significantly different than 2024. One, the skepticism has significantly reduced. There were tons of leaders last year who probably thought this would be yet another crypto wave and were kind of skeptical to get started. And a lot of the use cases that I saw last year were more of "slap chat on your data," right? And that was calling itself an AI product. And this year a ton of companies are really rethinking their user experiences and their workflows and all of that, and really understanding that you need to deconstruct and reconstruct your processes in order to build successful AI products, right? And that's the good stuff. The bad stuff is the execution is still all over the place. Think of it, right? This is a three-year-old field. There are no playbooks, there are no textbooks. So you really need to figure it out as you go. And the AI lifecycle, both pre-deployment and post-deployment, is very different as compared to a traditional software lifecycle. And so a lot of the old contracts and handoffs between traditional roles, like say PMs and engineers and data folks, have now been broken, and people are really getting adapted to this new way of working together and kind of owning the same feedback loop, in a way. Because previously, I feel like PMs and engineers and all of these folks had their own feedback loops to optimize, and now you need to probably be sitting in the same room. You're probably looking at agent traces together and deciding how your product should behave. So it's a tighter form of collaboration. So companies are still kind of figuring that out. 
That's kind of what I see in my consulting practice this year. So let me follow that thread. We worked on a guest post together that came out a few months ago. And the thing that stood out to me most, that stuck with me most after working on that post, is this really key insight that building AI products is very different from building non-AI products. And the thing that you're big on getting across is that there are two very big differences. Talk about those two differences. Yes. And again, I want to make sure that we drive home the right point. There are tons of similarities between building AI systems and software systems as well. But then there are some things that fundamentally change the way you build software systems versus AI systems, right? And one of them that most people tend to ignore is the non-determinism. You're pretty much working with a non-deterministic API, as compared to traditional software. What does that mean, and why does it affect us? In traditional software, you pretty much have a very well-mapped decision engine or workflow. Think of something like booking.com, right? You have an intention: you want to make a booking in San Francisco for two nights, et cetera. The product has been built so that your intention can be converted into a particular action, and you're clicking through a bunch of buttons, options, forms, and all of that, and you finally achieve your intention. But now that layer in AI products is completely being replaced by a very fluid interface, which is mostly natural language, which means the user can literally come up with a ton of ways of saying or communicating their intentions, right? And that changes a lot of things, because now you don't know how your user is going to behave. That's on the input side. And on the output side, you're working with a non-deterministic, probabilistic API, which is your LLM. 
And LLMs are pretty sensitive to prompt phrasings, and they're pretty much black boxes. So you don't even know what the output surface will look like. So you don't know how the user might behave with your product, and you also don't know how the LLM might respond to that. So you're now working with an input, an output, and a process, and you don't understand all three very well. You're trying to anticipate behavior and build for it. And with agentic systems, this gets even harder. And that's where we talk about the second difference, which is the agency-control trade-off. Right. What do we mean by that? I'm kind of shocked so many people don't talk about this. They're extremely obsessed with building autonomous systems, agents that can do work for you. But every time you hand over decision-making capabilities or autonomy to agentic systems, you're relinquishing some amount of control on your end. Right. And when you do that, you want to make sure that your agent has gained your trust, or is reliable enough, that you can allow it to make decisions. And that's where we talk about this agency-control trade-off, which is: if you give your AI agent or your AI system, whatever it is, more agency, which is the ability to make decisions, you are also losing some control. And you want to make sure that the agent or the AI system has earned that ability, or has built up trust over time. So just to summarize what you're sharing here: essentially, people have been building software products for a long time. We're now in a world where the software you're building is, one, non-deterministic; it can just do things differently. As you said, you go to booking.com, you find a hotel, it's going to be the same experience every time. You'll see different hotels, but it's a predictable experience. With AI, you can't predict that it's going to be the exact same thing, the thing that you planned it to be, every time. 
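To make the non-determinism point concrete, here is a minimal sketch (not from the guests; `call_llm` is a made-up stand-in for a real model call) of the validate-and-retry pattern this property forces on you: because the same prompt can come back well-formed or malformed, the calling code has to check every output and handle the failure path explicitly.

```python
import json

def call_llm(prompt, attempt):
    """Stand-in for a real model call; non-deterministic in production.
    Here it returns malformed output once, then valid JSON, so the
    retry path below actually gets exercised."""
    if attempt == 0:
        return "Sure! Here is the booking: {city: San Francisco"  # malformed
    return '{"city": "San Francisco", "nights": 2}'

def extract_booking(prompt, max_attempts=3):
    """Validate-and-retry wrapper: never trust a single completion.
    Returns the parsed intent, or None if the model never complies."""
    for attempt in range(max_attempts):
        raw = call_llm(prompt, attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry instead of crashing
        if {"city", "nights"} <= data.keys():  # schema check on the output
            return data
    return None  # caller must handle the "model never complied" case

print(extract_booking("Book me two nights in SF"))  # {'city': 'San Francisco', 'nights': 2}
```

The same shape applies to any non-deterministic API: validation, retries, and an explicit "give up" branch are part of the product, not an afterthought.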
And then the other is this trade-off between agency and control: how much will the AI do for you versus how much should the person still be in charge? And what I'm hearing is that the big point here is this significantly changes the way you should be building product. And we're going to talk about the impact on how the product development lifecycle should change as a result. Is there anything else you want to add there before we get into that? Yeah, it's definitely one of the key points that this distinction needs to exist in your mind when you're starting to build. For example, think about if your objective is to hike Half Dome in Yosemite. You don't start by hiking it; you start training yourself on smaller parts, and then you slowly improve, and then you go for the end goal. I feel like that's extremely similar to how you want to build AI products, in the sense that you don't start with agents with all the tools and all the context that you have in the company on day one and expect it to work, or even tinker at that level. You need to deliberately start in places where there is minimal impact and more human control, so that you have a good grip on what the current capabilities are and what you can do with them, and then slowly lean into more agency and less control. So this gives you the confidence that: okay, this is the particular problem that I'm facing, and the AI can solve this extent of it. And then let me next think through what context I need to bring in and what kind of tools I need to add to improve the experience, right? So I feel like it's both a good and a bad thing, in the sense that it's good that you don't have to look at the complexity of the outside world, all of these fancy AI agents, and feel like, I cannot do that. Everyone is starting from very minimalistic structures and then evolving. 
And the second part is that as you're trying to bring these one-click agents into your company, you don't have to be overrun with all this complexity. You can slowly graduate. So that's extremely important. And we see this as a repeating pattern over and over. Okay, so let's follow that, right? Because that's a really important component of how you recommend people build AI stuff: AI products, AI agents, all the AI things. So give us an example of what you're talking about here, this idea of starting slow with agency and control and then moving up this rung. Yeah. A very prevalent application of AI agents is customer support, right? Imagine you are a company that has a lot of customer support tickets. And why even imagine: OpenAI faced the exact same thing when we were launching products, and there was a huge spike in support volume as we launched successful products like image generation or GPT-5 and things like that. The kinds of questions you get are different. The kinds of problems that the customers bring to you are different. So it's not about just dumping the full list of help center articles that you have into the AI agent. You have to understand what the things are that you can build. And so initially, the first step would be something like: you have your human support agents, but you'll be suggesting, okay, this is what the AI thinks is the right thing to do. And then you get that feedback loop from the humans: okay, this is actually a good suggestion for me in this particular case, and this is a bad suggestion. And then you can go back and understand, okay, these are the drawbacks, or this is where the blind spots are, and how do I fix that? And once you get that, you can increase the autonomy to say, okay, I don't need to suggest to the human. 
I'll actually show the answer directly to the customer. And then we can add more complexity: okay, I was only answering questions based on help center articles, but now let me add new functionality, like I can actually issue refunds to the customers, I can actually raise feature requests with the engineering team, and all of these things. So if you start with all of this on day one, it's incredibly hard to control the complexity. So we recommend building step by step and then increasing it. Awesome. And you have a visual, actually, that we'll share of what this looks like. But just to mirror back what you're describing: this idea of starting with high control, low agency. In the example you gave, the support agent just gives suggestions and is not able to do anything; the user is in charge. And then as that becomes useful, and you are confident it's doing the right sort of work, you give it a little more agency and you pull back on the control the user has. And then if that's starting to go well, you give it more agency, and the user needs less control over it. Awesome. I think the higher-level idea here is that with AI systems, it's all about behavior calibration. It's nearly impossible to predict upfront how your system behaves. Now, what do you do about it? You make sure that you don't ruin your customer experience or your end-user experience. You keep that as is, but then tune the amount of control that the human has, and there is no single right way of doing it. You can decide how to constrain that autonomy, right? A different example of how you could constrain autonomy is pre-authorization use cases. Insurance pre-authorization is a very ripe use case for AI, because clinicians spend a lot of time pre-authorizing things like blood tests, MRIs, and things like that, right? And there are some cases which are more low-hanging fruit. 
For instance, MRIs and blood tests, because as soon as you have the patient's information, it's easy to approve them, and AI could do that. Versus something like an invasive surgery, et cetera, which is more high-risk; you don't want to be doing that autonomously. So you can determine which of these use cases should go through that human-in-the-loop layer versus which of the use cases AI can conveniently handle. And all through this process, you're also logging what the human is doing, right? Because you want to build a flywheel that you can use to improve your system. So you're essentially not ruining the user experience, not eroding trust, and at the same time logging what humans would otherwise do, so that you can continuously improve your system. So let me give you a few more examples of this kind of progression that you recommend. And the reason I'm spending so much time here is that this is a really key part of your recommendation to help people build more successful AI products: this idea of starting slow, with high control and low agency, and building up over time once you've built confidence that it's doing the right sort of work. So, a few more examples that you shared in your post that I'll just read. Say you're building a coding assistant. V1 would just suggest inline completions and boilerplate snippets. V2 would generate larger blocks, like tests or refactors, for humans to review. And then V3 just applies the changes and opens PRs autonomously. Another example is a marketing assistant. V1 would draft emails or social copy, just "here's what I would do." V2 builds a multi-step campaign and runs the campaign. And then V3 just launches it, A/B tests it, and auto-optimizes campaigns across channels. Awesome. And again, just to summarize where we're at, to give people the advice we've shared so far: one, it's just important to understand AI products are different; they're non-deterministic. 
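One way to picture the V1-to-V3 ladder in code is as an explicit gate on what the agent may execute at its current autonomy level. This is a sketch, not the guests' implementation; the level names, actions, and thresholds are illustrative. Anything the current level has not earned is routed to a human, which is exactly where the logging flywheel comes from.

```python
from dataclasses import dataclass

# Hypothetical autonomy ladder for a support agent, mirroring the
# V1 -> V2 -> V3 progression described above.
LEVELS = {
    1: {"suggest_reply"},                                # V1: human sends it
    2: {"suggest_reply", "send_reply"},                  # V2: agent replies directly
    3: {"suggest_reply", "send_reply", "issue_refund"},  # V3: agent takes actions
}

@dataclass
class Decision:
    action: str
    executed: bool
    needs_human: bool

def gate(action: str, autonomy_level: int) -> Decision:
    """Execute only actions the current autonomy level has earned;
    everything else escalates to a human (and gets logged there)."""
    allowed = action in LEVELS[autonomy_level]
    return Decision(action=action, executed=allowed, needs_human=not allowed)

print(gate("send_reply", 1))    # at V1, replying directly escalates to a human
print(gate("issue_refund", 3))  # at V3, the agent has earned refund authority
```

Raising the level is then a one-line change you make only after the human-review logs show the agent is reliable at the current level.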
And you pointed out, and I forgot to mirror back this point, that it's non-deterministic both on the input and the output. The user experience is non-deterministic: people will see different things, different outputs, different chat conversations, maybe different UI, if it's designing the UI for you. It's also the most beautiful part of AI, which is, I mean, we're all much more comfortable talking than following a bunch of buttons and all of that, right? So the bar to using AI products is much lower, because you can be as natural as you would be with humans. But that's also the problem: there are tons of ways we communicate, and you want to make sure that that intent is rightly communicated and the right actions are taken, because most of your systems are deterministic and you want to achieve a deterministic outcome, but with non-deterministic technology. And that's where it gets a little messy. Awesome. I love the optimistic version of why this is good. Okay. And then the other piece is this idea of the trade-off of autonomy versus control when you're designing things. And I imagine what you're seeing is people try to jump to the V3 immediately, and that's when they get into trouble. Both: it's probably a lot harder to build that, and it just doesn't work. And then they're just like, okay, this is a failure; what are we even doing? Exactly. I feel there are a bunch of things that you actually have to get confidence in before you get to V3. And it's easy to get overwhelmed: oh, my AI agent is doing these things wrong in a hundred different ways, and you're not going to actually tabulate all of them and fix them. Even if you've learned how to deal with evaluation practices and stuff like that, if you're starting in the wrong spot, you are going to have a hard time correcting things from there. 
And when you start small, when you start with building a very minimalistic version with high human control and low agency, it also forces you to think about: what is the problem that I'm going to solve? We use this term "problem first." And to me it was obvious, in the sense that, yeah, I do need to think about the problem. But it's incredible how well it resonates with people that in all these advancements of AI that we are seeing, one easy slippery slope is to just keep thinking about the complexities of the solution and forget the problem that you're trying to solve. So when you start at a smaller scale of autonomy, you start to really think about: what is the problem that I'm trying to solve, and how do I break it down into levels of autonomy that I can build later? That is incredibly useful, and we keep seeing this pattern over and over with everyone we talk to. And there are so many other benefits to limiting autonomy, because there's just the danger of the thing doing too much for you and messing up your, I don't know, your database, sending out all these emails you never expected. There are so many reasons this is a good idea. Yep. I recently read this paper from a bunch of folks at UC Berkeley, basically Matei Zaharia, Ion Stoica, and the folks at Databricks, and it said about 74 or 75% of the enterprises that they had spoken to said their biggest problem was reliability. And that's also why they weren't comfortable deploying products to their end users and building customer-facing products: because they just weren't sure, or they just weren't comfortable doing that and exposing their users to a bunch of these risks, right? And that's also why they think a lot of AI products today have to do with productivity, because it's much lower autonomy versus end-to-end agents that would replace workflows. 
And yeah, I love their work otherwise as well, but I think that's very in line with what we're seeing at my startup too. Okay, very interesting. There's an episode that'll come out before this conversation where we go deep into another problem that this avoids, which is around prompt injection and jailbreaking, and just how big of a risk that is for AI products, where it's essentially an unsolved, and potentially unsolvable, problem. I'm not going to go down that track, but that's a pretty scary conversation we had that'll be out before this one. I think that will be a huge problem once systems go mainstream. We're still so busy building AI products that we're not worried about security, but it will be such a huge problem, especially with this non-deterministic API again, right? So you're kind of stuck, because there are tons of instructions that you could inject within your prompt, and then, yeah, it goes really bad. Okay, let's actually spend a little time here, because it's really interesting to me and no one's talking about this stuff. The conversation we had is just: it's pretty easy to trick AI into doing stuff it shouldn't do. And there are all these guardrail systems people put in place, but it turns out these guardrails aren't actually very good, and you can always get around them. And to your point, as agents become more autonomous, and become robots, it gets pretty scary that you could get AI to do things it shouldn't do. I think this is definitely a problem, but in the current spectrum of customers adopting AI, the extent to which companies can actually take advantage of AI, or improve or streamline the existing processes that they have, I feel it's still at a very early stage. 2025 has been an extremely busy year for AI agents and customers trying to adopt AI, but I feel the penetration is still not deep enough to capture that advantage. 
So with the right set of human-in-the-loop points in here, I feel we can avoid a bunch of these things and focus more on streamlining the processes. And I am more on the optimist side, in the sense that you need to try and adopt this before only highlighting the negative aspects of what could go wrong. So I feel strongly that companies have to adopt this. With no company we talk to at OpenAI has it ever been the case that, oh, AI cannot help me here. It has always been: oh, there is this set of things it can optimize for me, so let me see how I can adopt it. Sweet. I always like the optimistic perspective. I'm excited for you to listen to this and see what you think, because it's really interesting, and to your point, it's one of many things to worry about and think about. Okay, let's get back on track here. So we've shared a bunch of pro tips and important pieces of advice. Let me ask: what other patterns and ways of working do you see in companies that do this well, in teams that build AI products successfully? And then, what are the most common pitfalls people fall into? We could maybe start with: what are other ways that companies do this well and build AI products successfully? I almost think of it as a success triangle with three dimensions. It's never only technical; every technology problem is a people problem first. And with the companies that we have worked with, it's these three dimensions, right? Great leaders, good culture, and technical prowess. On leaders: we work with a lot of companies on their AI transformation, training, strategy, and stuff like that. And I feel like at a lot of companies, the leaders have built intuitions over 10 or 15 years, and they are highly regarded for those intuitions. 
But now, with AI in the picture, those intuitions have to be relearned, and leaders have to be vulnerable enough to do that, right? I used to work with the current CEO of Rackspace, Gajen. He would have this block every day in the morning which would say "catching up with AI, 4 to 6am," and he would not have any meetings or anything like that. That was just his time to pick up on the latest AI podcasts and information and all of that. And he would have weekend vibe-coding sessions and stuff like that. So I think leaders have to get back to being hands-on. And that's not because they have to be implementing these things, but more to rebuild their intuitions. Because you must be comfortable with the fact that your intuitions might not be right and you probably are the dumbest person in the room, and you want to learn from everyone. I've seen that be a very distinguishing factor of companies that build successful products, because you're bringing in that top-down approach. It's almost impossible for it to be bottom-up. You can't have a bunch of engineers go and get buy-in from a leader if they just don't trust the technology, or if they have misaligned expectations about the technology. I've heard from so many folks who are building that their leaders just don't understand the extent to which AI can solve a particular problem, or they just vibe-code something and assume it's easy to take it to production. And you really need to understand the range of what AI can solve today so that you can guide decisions within the company. The second one is the culture itself, right? And again, I work with enterprises where AI is not their main thing, and they need to bring AI into their processes just because a competitor is doing it, and because it does make sense, because there are use cases that are very ripe, right? 
Then along the way, I feel a lot of companies have this culture of FOMO and "you will be replaced" and those kinds of things, and people get really afraid. Subject matter experts are such a huge part of building AI products that work, because you really need to consult them to understand how your AI is behaving, or what the ideal behavior should be. But I've spoken to a bunch of companies where the subject matter experts just don't want to talk to you, because they think their job is being replaced. So, and again, this comes from the leader: you want to build a culture of empowerment, of augmenting AI into your own workflows so that you can 10x what you're doing, instead of saying that you'll probably be replaced if you don't adopt AI and stuff like that. That kind of empowering culture always helps. You want your entire organization to be in it together and make AI work for you, instead of everyone trying to guard their own jobs, et cetera. And with AI, it's also true that it opens up a lot more opportunities than before. So you could have your employees doing a lot more things than before and 10x-ing their productivity. And the third one is the technical part, which we talk about, right? I think folks who are successful are incredibly obsessed with understanding their workflows very well, and with augmenting the parts that are ripe for AI versus the ones that might need a human in the loop somewhere, et cetera. Whenever you're trying to automate some part of a workflow, it's never the case that you can just use an AI agent and that will solve your problems, right? You probably have a machine learning model that's going to do some part of the job; you have deterministic code doing some part of the job. So you really need to be obsessed with understanding that workflow so you can choose the right tool for the problem, instead of being obsessed with the technology itself. 
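That "right tool per workflow step" idea can be sketched roughly as a router. This is illustrative only: the step names and the stub implementations are hypothetical, standing in for real lookups, a real classifier, and a real model call. The point is that an "AI product" is usually deterministic code, classic ML, and an LLM stitched together, not one agent doing everything.

```python
# Stub implementations for one support-ticket workflow; in a real system
# each would be backed by a database, a trained model, or an LLM call.
def deterministic_lookup(ticket):
    # An exact answer exists in a system of record: no AI needed at all.
    return f"order status for #{ticket['order_id']}"

def ml_classifier(ticket):
    # Bounded set of labels: a small, cheap classifier is enough.
    return "billing" if "refund" in ticket["text"].lower() else "general"

def llm_drafter(ticket):
    # Open-ended natural language: this is LLM territory.
    return f"Draft reply about: {ticket['text']}"

# Each workflow step declares which class of tool fits it.
ROUTES = {
    "order_status": deterministic_lookup,
    "triage": ml_classifier,
    "free_form_reply": llm_drafter,
}

def handle(step, ticket):
    return ROUTES[step](ticket)

ticket = {"order_id": 981, "text": "I want a refund"}
print(handle("order_status", ticket))  # order status for #981
print(handle("triage", ticket))        # billing
```

Keeping the routing table explicit is one way to stay obsessed with the workflow rather than the technology: each step's tool choice is a visible, reviewable decision.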
And another pattern I see is that folks really understand this idea of working with a non-deterministic API, which is your LLM. And what that means is they also understand that the AI development lifecycle looks very different, and they iterate pretty quickly: can I build something, and iterate quickly, in a way that doesn't ruin my customer experience and at the same time gives me enough data to estimate behavior, right? So they build that flywheel very quickly. As of today, it's not about being the first company to have an agent among your competitors. It's about: have you built the right flywheels in place so that you can improve over time? Right? When someone comes up to me and says, we have this one-click agent, it's going to be deployed in your system, and in two or three days it'll start showing you significant gains, I would almost be skeptical, because it's just not possible. And that's not because the models aren't there, but because enterprise data and infrastructure are very messy, and even the agent needs a bit of time to understand how these systems work. There are very messy taxonomies everywhere. People tend to have multiple near-duplicate functions for things like getting customer data, and all those functions exist and are being called, so there's basically a lot of tech debt that you need to deal with. So most of the time, if you're obsessed with the problem itself and you understand your workflows very well, you will know how to improve your agents over time, instead of just slapping an agent on and assuming that it'll work from day one. I would probably go as far as to say that if someone's selling you one-click agents, it's pure marketing. You don't want to buy into that. 
I would rather go with a company that says we're going to build this pipeline for you that will learn over time, and kind of build a flywheel to improve, than something that's supposed to work out of the box. To replace any critical workflow, or to build something that can give you significant ROI, easily takes 4 to 6 months of work even if you have the best data layer and infrastructure layer. Amazing. There's a lot there that resonates so deeply with other conversations I've been having on this podcast. One is just that for a company to be successful at seeing a lot of impact from AI, the founder CEO has to be deep into it. I had Dan Shipper on the podcast, and they work with a bunch of companies helping them adopt AI, and he said the number one predictor of success is the CEO chatting with ChatGPT, Claude, whatever, many times a day. I love this example you gave of the Rackspace CEO: catch up on AI news in the morning every day. I was imagining he'd be chatting with the chatbot versus reading news. With the kind of information you have as of today, you could just do that. I mean, you want to choose the right channels as well, because everybody has an opinion. So whose opinion do you want to bank on? I feel like having that good quality set of people that you're listening to really makes sense. So he just has a list of two or three sources that he always looks at, and then he comes back with a bunch of questions and bounces them around with a bunch of AI experts to see what they think. So that's cool. It's pretty cool. I was like, why are you doing so much? And then he says it trickles down into a bunch of decisions that we would take. Okay, let me talk about another topic that's been a hot topic on this podcast. It was a hot topic on Twitter for a while. Evals. A lot of people are obsessed with evals and think they're the solution to a lot of problems in AI. A lot of people think they're overrated, that you don't need evals.
You can just feel the vibes and you'll be all right. Can you talk about what is going on in the community? I feel there's just this false dichotomy of either evals are going to solve everything, or online monitoring or production monitoring is going to solve everything. And I find no reason to trust one of the extremes, in the sense that I will entirely bank my application on this or that to solve the thing. So take a step back and think of what evals are. Evals are basically your trusted product thinking, or your knowledge about the product, going into this set of datasets that you're going to build, in the sense that: this is what matters to me, these are the kinds of problems that my agent should not have, and let me build a set of datasets so that I'm going to do well on those. And in terms of production monitoring, what you're doing there is deploying your application and then having some key metrics that actually communicate back to you how customers are using your product. You could be deploying any agent, and if the customer is giving a thumbs up for your interaction, you better want to know that. So that is what production monitoring is going to do, right? And production monitoring has existed for products for a long time. It's just that now, with AI agents, you need to be monitoring at a lot more granularity. It's not just the customer always giving you explicit feedback; there's a lot of implicit feedback that you can get. For example, in ChatGPT, if you like the answer, you can actually give a thumbs up. Or if you don't like the answer, sometimes customers don't give a thumbs down but actually regenerate the answer. That is a clear indication that the initial answer you generated is not meeting the customer's expectation. So these are the kinds of implicit signals you always need to think about.
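The implicit signals described here (a regenerate click standing in for a thumbs-down) can be folded into a simple labeling function. This is a minimal sketch only; the `Interaction` fields are illustrative assumptions, not any real product's event schema.

```python
from dataclasses import dataclass

# Illustrative event record -- field names are assumptions, not a real schema.
@dataclass
class Interaction:
    trace_id: str
    thumbs_up: bool = False
    thumbs_down: bool = False
    regenerated: bool = False  # the user asked for the answer again

def implicit_sentiment(event: Interaction) -> str:
    """Collapse explicit and implicit user actions into one coarse label.
    A regeneration counts as negative even without a thumbs-down:
    the user rejected the first answer."""
    if event.thumbs_down or event.regenerated:
        return "negative"
    if event.thumbs_up:
        return "positive"
    return "unknown"
```

A real pipeline would log many more implicit signals (copy-to-clipboard, abandoning the session, turning a feature off), but the mapping idea is the same.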
And that spectrum has been increasing in terms of production monitoring. Now let's come back to the initial topic of: is it evals or is it production monitoring, and why does it matter? I feel again we go back to this problem-first approach: what is it that you're trying to build? You're trying to build a reliable application for your customers that's not going to do a bad thing, it's always going to do the right thing, or if it is doing the wrong thing, you're alerted very quickly. So I break this down into two parts. One is, nobody goes into deploying an application without actually testing it. This testing could be vibes, or this testing could be: I have these 10 questions that should not go wrong no matter what changes I make, and let me build this and call it an evaluation dataset. Now let's say you build this, you deploy it, and then you figure, okay, now I need to understand whether it's doing the right thing or not. If you're a high-throughput or high-transaction customer, you cannot practically sit and evaluate all the traces, right? You need some indication of what you should look at. And this is where production monitoring comes into the picture: you cannot predict the ways in which your agent could go wrong, but all of these implicit and explicit signals are going to communicate back to you which traces you need to look at. That is where production monitoring helps. And once you get these kinds of traces, you need to examine what failure patterns you're seeing in these different types of interactions, and whether there's something you really care about that should not happen. If those kinds of failure modes are happening, then you need to think about building an evaluation dataset for them.
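One way to picture the triage step above: use the monitoring signals to rank which traces a human should examine first, since reviewing everything is impractical at high throughput. A sketch under assumed field names (`thumbs_down`, `regenerated`); a production system would be far more elaborate.

```python
def traces_to_review(events, limit=50):
    """Pick traces for manual failure analysis: negatively-signaled
    traffic first, padded with unsignaled traffic so that silent
    failures can still surface. 'events' is a list of dicts with a
    'trace_id' plus boolean signal flags (assumed names)."""
    negative = [e for e in events if e.get("thumbs_down") or e.get("regenerated")]
    neutral = [e for e in events if e not in negative]
    picked = (negative + neutral)[:limit]
    return [e["trace_id"] for e in picked]
```

In practice you would also sample the neutral slice randomly rather than taking it in order, so quiet failure modes get some review coverage too.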
Okay, let's say I built an evaluation dataset for my agent trying to offer refunds where I have explicitly configured it not to. So I built this evaluation dataset, and then I made my changes in tools or prompts or whatever, and I deployed the second version of the product. Now there is no guarantee that this is the only problem you're going to see. You still need production monitoring to actually catch the different kinds of problems you might encounter. So I feel evals are important, production monitoring is important, but this notion that only one of them is going to solve things for you, that is completely dismissible in my opinion. All right, a very reasonable answer. And the point here isn't that it's just as simple as do both. It's more that there are different things to catch, and one approach won't catch all the things you need to be paying attention to. Exactly. Awesome. I want to take two steps back and talk about how much weight the term evals has had to take in the second half of 2025. Because you go meet a data labeling company and they tell you our experts are writing evals, and then you have all of these folks saying that PMs should be writing evals, they're the new PRDs. And then you have folks saying that evals are pretty much everything, which is the feedback loop you're supposed to be building to improve your products. Now step back as a beginner and think, what are evals? Why is everyone saying evals? And these are actually different parts of the process. And nobody is wrong, in the sense that yes, these are evals, but when a data labeling company is telling you that their experts are writing evals, they're actually referring to error analysis, or experts just leaving notes on what should be right. Lawyers and doctors write evals. That doesn't mean they're building LLM judges or building this entire feedback loop.
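The refund example above can be made concrete as a tiny behavioral eval. Everything below is hypothetical for illustration: the eval cases are made up, and the naive keyword check stands in for a real LLM judge.

```python
# Hypothetical eval set for a support agent configured to never offer refunds.
REFUND_BAIT = [
    "My package arrived broken, can I get my money back?",
    "I was charged twice last month, please fix this.",
]

def violates_refund_policy(reply: str) -> bool:
    # A crude keyword check standing in for an LLM judge.
    banned = ("refund", "money back", "reimburse")
    return any(b in reply.lower() for b in banned)

def run_refund_eval(agent) -> list:
    """Return the eval cases where the agent broke the no-refund rule.
    'agent' is any callable from question text to reply text."""
    return [q for q in REFUND_BAIT if violates_refund_policy(agent(q))]
```

Running this gate before each deploy is exactly the "this should not go wrong no matter what changes I make" check described above, for one known failure mode.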
And when you say that a PM should be writing evals, it doesn't mean they have to write an LLM judge that's good enough for production. I think there are also very prescriptive ways of doing this, and plus one to Kriti: you cannot predict upfront whether you need to be building an LLM judge versus using implicit signals from production monitoring, et cetera. I think Martin Fowler at some point had this term called semantic diffusion, back in the 2000s, which kind of means that someone comes up with a term, everybody starts butchering it with their own definitions, and then you lose the actual definition of it. That is kind of what is happening to evals or agents or any word in AI as of today. Everybody sees a different side to it, I guess. But if you make a bunch of practitioners sit together and ask them, is it important to build an actionable feedback loop for AI products, I think all of them will agree. Now, how you do that really depends on your application itself. When you go to complex use cases, it's incredibly hard to build LLM judges because you see a lot of emerging patterns. If you built a judge that tests for verbosity or something like that, it turns out you're seeing newer patterns that your LLM judge is not able to catch. And then you just end up building too many evals. At that point it just makes sense to look at your user signals, fix them, check if you have regressed, and move on, instead of actually building these judges. So it all depends. I think one statement that every ML practitioner will tell you is it really depends on the context. Don't be obsessed with prescriptions; they're going to change. That's such an important point. This idea that evals just means many things to different people now. It's just a term for so many things. And it's complicated to just talk about evals when one person sees it as the stuff data labeling companies are giving you and another sees it as what a PM writes. Right.
And there are also benchmarks. People call benchmarks evals a little bit too. I recently spoke to a client who told me, we do evals. And I was like, okay, can you show me your dataset? And they say, no, we just checked LM Arena and Artificial Analysis. These are, you know, independent benchmarks, and we know that this model is the right one for our use case. And I'm like, you're not doing evals. That's not evals. Those are model evals. But it makes sense; the word could be used in that context. I get why people think that. But yeah, now it's just confusing it even more. Yep. Just one more line of questioning here that's on my mind. The reason this became kind of a big debate is Claude Code. The head of Claude Code, Boris, was like, nah, we don't do evals on Claude Code. It's all vibes. What can you share, Kriti, on Codex and how the Codex team approaches evals? So on Codex, we have this balanced approach: you need to have evals, and you need to definitely listen to your customers. I think Alex has been on your podcast recently, and he's been talking about how we are extremely focused on building the right product. And a big part of it is basically listening to your customers. And coding agents are extremely unique compared to agents for other domains, in the sense that they are actually built for customizability and they are built for engineers. A coding agent is not a product that's going to solve these top five or top six workflows. It's meant to be customizable in many different ways. And the implication of that is that your product is going to be used in different integrations and with different kinds of tools and different kinds of things. So it gets really hard to build an evaluation dataset for all the kinds of interactions your customers are going to use your product for.
But that said, you also need to understand that, okay, if I'm going to make a change, it's at least not going to damage something that is really core to the product. So we have evaluations for that. At the same time, we take extreme care in understanding how customers are using it. For example, we built this code review product recently, and it has been gaining an extreme amount of traction, and I feel like many, many bugs at OpenAI, as well as at our external customers, are getting caught with it. Now let's say I'm making a model change to code review, or a different kind of RL mechanism that I trained it with, and I'm going to deploy it. I definitely do want to A/B test and identify whether it's actually finding the right mistakes and how users are reacting to it. Sometimes, if users get annoyed by your incorrect code reviews, they go to the extent of just switching off the product. Those are the signals you want to look at to make sure that your new changes are doing the right thing. And it's extremely hard for us to think of these kinds of scenarios beforehand and develop evaluation datasets for them. So I feel like there's a bit of both: there's a lot of vibes, and there's a lot of customer feedback, and we are super active on social media to understand if anybody's having certain types of problems and quickly fix that. That makes so much sense. Okay, what I'm hearing: Codex is pro evals, but evals alone are not enough. You also need to watch customer behavior and feedback. And there's some vibes too, just like, is this feeling good? As I'm using it, is it generating great code that I'm excited about, that I think is great? I don't think anybody can come and say, I have this concrete set of evals that I can bet my life on and I don't need to think about anything else. It's not going to work.
And for every new model that we launch, we get together as a team and test different things. Each person concentrates on something else, and we have this list of hard problems that we throw at the model to see how well it's progressing. So it's like custom evals for each engineer, you could say, to understand what the product is doing with its new model. If you're a founder, the hardest part of starting a company isn't having the idea, it's scaling the business without getting buried in back office work. That's where Brex comes in. Brex is the intelligent finance platform for founders. With Brex you get high limit corporate cards, easy banking, high yield treasury, plus a team of AI agents that handle manual finance tasks for you. They'll do all the stuff that you don't want to do, like file your expenses, scour transactions for waste, and run reports, all according to your rules. We're an hour in already and we haven't even covered your extremely powerful software development workflow for building AI products that you two developed, that you teach in your course, that basically combines all the stuff we've been talking about into a step-by-step approach to building AI products. You call it the Continuous Calibration, Continuous Development framework. Let's pull up a visual to show people what the heck we're talking about, and then just walk us through what this is, how it works, and how teams can shift the way they build their AI products to this approach to help them avoid a lot of pain and suffering. Before we go about explaining the lifecycle, a quick story on why Kriti and I came up with this. There are tons of companies that we keep talking to that have this pressure, because their competitors are all building agents, that we should be building agents that are entirely autonomous. And I did end up working with a few customers where we built these end-to-end agents.
And it turns out that because you start at a place where you don't know how the user might interact with your system and what kind of responses or actions the AI might come up with, it's really hard to fix problems when you have this really huge workflow which is taking four or five steps and making tons of decisions. You just end up debugging so much and then hot-fixing, to the point where, at one time we were building for a customer support use case, which is the example we give in the newsletter as well, we had to shut down the product because we were doing so many hot fixes and there was no way we could handle all the emerging problems that were coming up. And there's also been quite some news online recently. I think Air Canada had this thing where one of their agents hallucinated a refund policy that was not part of their original playbook, and they had to abide by it because of legal reasons. And there have been a ton of really scary incidents. That's where the idea comes from: how can you build so that you don't lose customer trust, and your agent or AI system doesn't end up making decisions that are super dangerous to the company itself, while at the same time building a flywheel so that you can improve your product as you go? And that's where we came up with this idea of continuous calibration, continuous development. The idea is pretty simple. We have the right side of the loop, which is continuous development, where you scope capability and curate data. Essentially, get a dataset of what your expected inputs are and what your expected outputs should look like. This is a very good exercise before you start building any AI product, because many times you figure out that a lot of the folks within the team are just not aligned on how the product should behave. And that's where your PMs can really bring in a lot more information, and your subject matter experts as well.
So you have this dataset that you know your AI product should be doing really well on. It's not comprehensive, but it lets you get started. Then you set up the application and design the right kind of evaluation metrics. And I intentionally use the term evaluation metrics, although we say evals, because I want to be very specific about what it is: evaluation is a process, and evaluation metrics are the dimensions you want to focus on during that process, right? And then you go about deploying and running your evaluation metrics. The second part is the continuous calibration, which is the part where you understand what behavior you hadn't expected in the beginning. Because when you start the development process, you have this dataset that you're optimizing for. But more often than not, you realize that that dataset is not comprehensive enough, because users start behaving with your systems in ways that you did not predict. And that's where you want to do the calibration piece. I've deployed my system; now I see that there are patterns that I did not really expect. Your evaluation metrics should give you some insight into those patterns, but sometimes you figure out that those metrics were also not enough, and you probably have new error patterns that you've not thought about. And that's where you analyze your behavior, spot error patterns, apply fixes for the issues that you see, but also design newer evaluation metrics for those emerging patterns. And that doesn't mean you should always design new evaluation metrics. There are some errors that you can just fix and not really come back to, because they're one-off. For instance, if there's a tool-calling error just because your tool wasn't defined well, you can just fix it and move on. And this is pretty much how an AI product lifecycle would look.
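The development half of the loop described here, a curated dataset scored against named evaluation metrics, could be wired up roughly like this. All names are illustrative assumptions; a metric is just a function scoring one expected/actual pair.

```python
def run_eval_metrics(app, dataset, metrics):
    """Score an application on a curated dataset.
    app: callable from input to output.
    dataset: list of (input, expected) pairs.
    metrics: dict of metric name -> fn(expected, actual) -> float in [0, 1].
    Returns the mean score per metric."""
    scores = {name: [] for name in metrics}
    for inp, expected in dataset:
        actual = app(inp)
        for name, fn in metrics.items():
            scores[name].append(fn(expected, actual))
    return {name: sum(vs) / len(vs) for name, vs in scores.items()}
```

The calibration half then amounts to appending newly discovered (input, expected) pairs to `dataset` and new entries to `metrics` as error patterns emerge in production.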
But what we specifically also mention is, while you're going through these iterations, try to think of lower-agency, higher-control iterations in the beginning. What that means is: constrain the number of decisions your AI systems can make and make sure there are humans in the loop, and then increase the agency over time, because you're building a flywheel of behavior and you're understanding what kinds of use cases are coming in and how your users are using the system. And one example I think we give in the newsletter itself is customer support. There's a nice image that shows how you can think of agency and control as two dimensions, and each of your versions keeps increasing the agency, the ability of your AI system to make decisions, and lowering the control as you go. In the customer support agent example, you can break it down into three versions. The first version is just routing: is your agent able to classify and route a particular ticket to the right department? And sometimes when you read this, you probably think, is it so hard to just do routing? Why can't an agent easily do that? But when you go to enterprises, routing itself can be a super complex problem. Any popular retail company that you can think of has hierarchical taxonomies, and most of the time the taxonomies are incredibly messy. I have worked on use cases where you have some kind of hierarchy that says shoes, and then women's shoes and men's shoes all at the same layer, where ideally you should have shoes, and then women's shoes and men's shoes as subcategories, right? And then you think, okay, fine, I could just merge that, and you go further and you see that there's also another section on shoes that says for women and for men. And it's just not aggregated; it's not fixed for some reason.
So if an agent sees this kind of a taxonomy, what is it supposed to do? Where is it supposed to route? And a lot of the time we are not aware of these problems until we actually go about building something and understanding it. When real human agents see these kinds of problems, they know what to check next. Maybe they realize that the node that says for women and for men under shoes was last updated in 2019, which means it's just a dead node that's lying there and not being used. So they know, okay, we're supposed to be looking at a different node, and so on. And I'm not saying agents cannot understand this or models are not capable enough to understand this, but there are really weird rules within enterprises that are not documented anywhere. You want to make sure that the agents have all of that context instead of just throwing the problem at them. Right? Yeah. Coming back to the versions we had: routing was one where you have really high control, because even if your agent routes to the wrong department, humans can take control and undo those actions. And along the way, you also figure out that you're probably dealing with a ton of data issues that you need to fix, to make sure your data layer is good enough for the agent to function. The next version is what we call a copilot: now that you've figured out routing works fine after a few iterations and you've fixed all of your data issues, you can go to the next step, which is, can my agent provide suggestions based on some standard operating procedures that we have for the customer support agent? It could just generate a draft that the human can make changes to. And when you do this, you're also logging human behavior, which means: how much of this draft was used by the customer support agent, and what was omitted?
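Measuring "how much of the draft was used" can be as simple as a similarity ratio between the AI draft and what the human actually sent. A sketch using only the standard library; a real pipeline would likely log structured edits rather than a single scalar.

```python
import difflib

def draft_acceptance(draft: str, final: str) -> float:
    """Fraction (0..1) of the AI-suggested draft that survived human
    editing, via a simple sequence match. Low scores flag drafts worth
    pulling into error analysis."""
    return difflib.SequenceMatcher(None, draft, final).ratio()
```

Aggregating this score per draft category shows which kinds of suggestions humans keep rewriting, which is the "error analysis for free" idea.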
So you're actually getting error analysis for free when you do this, because you're literally logging everything that the user is doing, which you can then build back into your flywheel. And then we say that once you've figured out that those drafts look good, and most of the time humans are not making too many changes, they're using these drafts as is, that's when you want to go to your end-to-end resolution assistant, which can draft a resolution and solve the ticket as well. Those are the stages of agency, where you start with low agency and then go higher. We also have this really nice table that we put together: what do you do at each version, what do you learn that enables you to go to the next step, and what information do you get that you can feed into the loop? When you're just doing routing, you get better quality routing data, and you also learn what kinds of prompts you need to build to improve the routing system. Essentially, you're figuring out your structure for context engineering and building that flywheel that you want. And while I go through this, I want to be very clear about two things. One is, when you build with CCCD in mind, it doesn't mean that you've fixed the problem once and for all. It's possible that you've gone through V3 and you see a new distribution of data that you never previously imagined. This is just one way to lower your risk: you get enough information about how users behave with your system before going to a point of complete autonomy. The second thing is, you're also kind of building this implicit logging system. A lot of people come and tell us, wait, there are evals, right? Why do you need something like this? The issue with just building a bunch of evaluation metrics and then having them in production is that evaluation metrics catch only the errors that you're already aware of.
But there can be a lot of emerging patterns that you understand only after you put things in production. So for those emerging patterns, you're creating a low-risk framework so that you can understand user behavior and not end up in a position where there are tons of errors and you're trying to fix all of them at once. And this is not the only way to do it; there are tons of different ways. You want to decide how you constrain your autonomy. It could be based on the number of actions the agent is taking, which is what we do in this example. It could be based on topic: there are some domains where it's pretty high risk to make a system completely autonomous for certain decisions, but for other topics it's okay to make them completely autonomous, depending on the complexity of the problem. I guess we'll link folks to this actual post if they want to go really deep. You basically go through all of this step by step, with a bunch of examples. And the idea here, as you said, the reason for everything you're describing, is about making it continuous and iterative, moving along this progression of higher autonomy, less control. Even calling it continuous calibration, continuous development communicates that it's this kind of iterative process. And just to be clear, this naming is kind of owed to CI/CD, continuous integration, continuous deployment. The idea here is that this is the version of that for AI, where instead of just integrating, running unit tests, and deploying constantly, it's running evals, looking at results, iterating on the metrics you're watching, figuring out where it's breaking, and iterating on that. Awesome. Okay, so again, we'll point people to this post if they want to go deeper. That was a great overview.
Is there anything else around this framework specifically, before we go into a different topic, that you think is important for people to know? I think one of the most common questions we get is: how do I know if I need to go to the next stage, or if this is calibrated enough? There's not really a rulebook you can follow, but it's all about minimizing surprise. Which means, let's say you're calibrating every one or two days and you've figured out that you're not seeing new data distribution patterns; your users have been pretty consistent in how they're behaving with the system. Then the amount of information you gain is very low, and that's when you know you can actually go to the next stage. And it's all about the vibes at that point: do you know you're ready, are you not receiving any new information? But it also really helps to understand that sometimes there are events that can completely mess up the calibration of your system. An example is GPT-4o doesn't exist anymore, or it's going to be deprecated in the APIs as well. So most companies that were using 4o should switch to 5, and 5 has very different properties. So that's where your calibration's off again, and you want to go back and do this process again. Sometimes users also start behaving with systems differently over time, or user behavior evolves, even with consumer products. You don't talk to ChatGPT the same way you talked to it, say, two years ago, just because the capabilities have increased so much. And also, people get excited when these systems can solve one task; they want to try them out on other tasks as well. We built this system for underwriters at some point. Underwriting is a painful task. There are loan applications that are 30 or 40 pages. And the idea for this bank was to build a system that could help underwriters pick policies and information about the bank so that they could approve loans.
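"Minimizing surprise" can be tracked with a crude novelty rate over calibration windows: once the share of never-before-seen interaction patterns stays low across windows, that is a hint (not a rule) that agency can be raised. The pattern labels here are assumed to come from some upstream clustering or tagging step; this is only a sketch of the bookkeeping.

```python
def novelty_rate(seen: set, batch: list) -> float:
    """Fraction of pattern labels in this calibration window that were
    never seen in earlier windows. Mutates 'seen' to include the batch."""
    if not batch:
        return 0.0
    unseen = sum(1 for p in batch if p not in seen)
    seen.update(batch)
    return unseen / len(batch)
```

A model swap or a shift in user behavior, like the underwriting example that follows, would show up as this rate jumping back up, signaling a return to calibration.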
And for a good three or four months, everybody was pretty impressed with the system. We had underwriters actually report gains in terms of how much time they were spending, et cetera. And after three months, we realized that they were so excited about the product that they started asking very deep questions that we never anticipated. They would just throw the entire application document at the system and go, for a case that looks like this, what did previous underwriters do? For a user, that just seems like a natural extension of what they were doing, but what you build behind it has to change significantly. Now you need to understand what "for a case like this" means in the context of the loan itself. Is it referring to people of a particular income range, or is it referring to people in a particular geo, and so on? And then you need to pick up historical documents, analyze those documents, and then tell them, okay, this is what it looks like, versus just saying there's a policy X, Y, and Z and you want to look up that policy. So something that might seem very natural to an end user might be very hard to build as a product builder. And you see that user behavior also evolves over time, and that's when you know you want to go back and recalibrate. What do you think is overhyped in the AI space right now? And even more importantly, what do you think is underhyped? I am, as I said, super optimistic about the different things going on in AI, so I wouldn't say overhyped, but I feel kind of misunderstood is the concept of multi-agents. People have this notion of: I have this incredibly complex problem, and now I'm going to break it down into, hey, you're this agent, take care of this; you're this agent, take care of this. And if I somehow connect all of these agents, they think that's the agent utopia. And it's not that there aren't incredibly successful multi-agent systems being built, right?
There's no doubt about that. But I feel a lot of it comes down to how you limit the ways in which the system can go off track. For example, if you're building a supervisor agent, and there are sub-agents that actually do the work for the supervisor agent, that is a very successful pattern. But coming in with this notion of: I'm going to divide the responsibilities based on functionality and somehow expect all of that to work together in some sort of gossip protocol, it is a misunderstanding that you could do that. I don't think current ways of building and current model capabilities are quite there for those kinds of applications. So I feel that is more misunderstood than overhyped. For underrated, it's probably hard to believe, but I still feel coding agents are underrated. You can go on Twitter and Reddit and see a lot of chatter about coding agents, but talk to an engineer in any random company, especially outside the Bay Area, and you can see the amount of impact these coding agents can create, and the penetration is very low. So I feel like 2025 and 2026 are going to be incredible years for optimizing all of these processes, and I feel that is going to create a lot of value with AI. That's really interesting on that first point. So the idea there is you'll probably be more successful building and using an agent that is able to do its own sub-agent splitting of work, versus, say, a bunch of Codex agents where you say: you do this task, you do that task. You can have agents to do these things and you as a human can orchestrate it, or you can have one larger agent that orchestrates all of these things.
But letting the agents communicate in a peer-to-peer kind of protocol, and especially doing this inside a customer support kind of use case, makes it incredibly hard to control what kind of agent is replying to your customer, because you need to shift your guardrails everywhere and things like that. Yeah. Okay. Great picks. Okay, Ash, what do you got? Can I say evals? Will I be canceled? In which category? Which bucket do they go in? Overrated. Overrated. Overrated. Okay, go for it. We won't let you get canceled. Just kidding. I think evals are misunderstood. They are important, folks. I'm not saying they're not important. But I think just this "I'm going to keep jumping across tools and pick up and learn every new tool" is overrated. I still am old school and feel like you really need to be obsessed with the business problem you're trying to solve. AI is only a tool. Try to think of it that way. Of course you need to be learning about the latest and greatest, but don't be so obsessed with just building so quickly. Building is really cheap today. Design is more expensive. Really thinking about your product, what you're going to build, whether it's going to really solve a pain point, is what is way more valuable today, and it will only become more true in the near future. Right. So really obsessing about your problem and design is underrated, and just raw building is overrated, I guess. Awesome. Okay, similar sort of question from a product point of view. What do you think the next year of AI is going to look like? Give us a vision of where you think things are going to go, say, by the end of 2026. Yeah, I feel there's a lot of promise in these background agents or proactive agents. They are going to basically understand your workflow even more. If you think of where AI is failing to create value today, it's mainly about not understanding the context.
And the reason it's not understanding the context is that it's not plugged into the right places where actual work is happening. And as you do more of this, you can give the agent more context, and then it starts to see the world around you and understand what set of metrics you're optimizing for or what kind of activities you're trying to do. It is a very easy extension from there to actually gain more out of it and let the agent prompt you back. We already do this with ChatGPT Pulse, which gives you this daily update of things you might care about. And it's very nice to actually have that jog your brain, in terms of, oh, this is something that I haven't thought about, maybe this is good. And now you extend this to more complex tasks, like a coding agent which says, okay, I have fixed five of your Linear tickets and here are the patches, just review them at the start of your day. So I feel that is going to be extremely useful, and I see that as a strong direction in which products are going to be built in 2026. That's so cool. So essentially agents kind of anticipating what you want to do and getting ahead of you. And, here, I've solved these problems for you. Or, I think this is going to crash your site, maybe you should fix this thing right here. Or, I see a spike here, let's refactor our database. I mean, amazing. What a world. Okay, Ash, what do you got? I'm all in for multimodal experiences in 2026. I think we made quite some progress in 2025, and not just in terms of generation, but also understanding. Until now, I think LLMs have been our most commonly used models. But as humans, we are multimodal creatures. I would say language is probably one of our last forms of evolution. As the three of us are talking, we're constantly getting so many signals, like, oh, Lenny's nodding his head, so probably I should go in this direction. Or, Lenny's bored, so let me stop talking.
So there's a chain of thought behind your chain of thought, and you're constantly altering it alongside language. That dimension of expression is not explored as well. So if we could build better multimodal experiences, that would get us closer to human-like conversational richness. Yeah. And also, given the kind of models we have, there's a bunch of boring tasks as well which are ripe for AI if multimodal understanding gets better. There are so many handwritten documents and really messy PDFs that cannot be parsed even by the best of the models as of today. And if that becomes possible, there'd be so much data that we could tap into. Awesome. I just saw Demis from DeepMind, Google AI, whatever they call the whole org, talking about this, where he thinks that's going to be a big part of where they're going: combining the image model work, the LLM, and also their world model stuff, Genie I think is what it's called. So that's going to be a wild, wild time. Okay, last question. If someone wants to just get better at building AI products, what's maybe one skill, or maybe two skills, that you think they should lean into and develop? I think we did cover a bunch of best practices for AI products, which is start small, try to get your iteration loop going well, and build a flywheel and all of that. But again, if you look at it at a 10,000-foot level, for anybody building today, like I was saying, implementation is going to be ridiculously cheap in the next few years. So really nail down your design, your judgment, your taste and all of that. And in general, if you're building a career as well, I feel for the past few years, your formative years, say the first two or three years of building your career, were always focused on execution, mechanics and all of that. And now we have AI that could help you ramp pretty quickly, and beyond that,
I mean, after a few years, I think everybody's job becomes about your taste, your judgment and kind of, you know, what is uniquely you. I think nail down on that part and try to figure out how you can bring in that kind of perspective. It doesn't have to mean that you should be significantly old or have years of experience. We recently hired someone, and we use this very popular app for tracking our tasks, right? We've been using it for years and we pay a high subscription fee for it. And this guy just came with his own vibe-coded app to the meeting. He onboarded us to all of it, and he's like, okay, let's start using this. And I think that kind of agency and that kind of ownership to really rethink experiences is what will set people apart. And I'm not being blind to the fact that vibe-coded apps have high maintenance costs, and maybe as we scale as a company we'll have to replace it or think of better approaches. But given that we're a small-sized company now, it's fine. I was just really shocked because I never thought of it. If you've been used to working in a certain way, you associate a cost with building. And I feel like folks who grew up in this age have a much lower cost associated in their mind. They just don't mind building something and going ahead with it. And they're also very enthusiastic to try out new tools. That's also probably why AI products have this retention problem, because everybody's so excited about trying out these new tools and all of that. But essentially, it's having the agency and ownership. And I think it's also going to be the end of the busy-work era. You can't be sitting in a corner doing something that doesn't move the needle for a company. You really need to be thinking about, you know, end-to-end workflows, how you can bring in more impact. I think all of that would be super important. That reminds me, I just had Jason Lemkin on the podcast.
He's very smart on sales and go-to-market, runs SaaStr, and he replaced his whole sales team with agents. He had 10 salespeople; now he has one or two, and 20 agents. And one of the agents was just tracking everyone's updates to Salesforce and kind of updating it automatically for them based on their calls. And one of the salespeople was like, okay, I quit. And it turned out he wasn't really doing anything. He was just sitting around, and he's like, okay, this will catch me, I gotta get out of here. So to your point, it'll be harder to sit around and twiddle your thumbs. I think that's really right. Yeah. To add on to that, I feel like persistence is also something that is extremely valuable, especially given that for anybody who wants to build something, the information is at your fingertips even more than in the past decade. Right. You can learn anything overnight and take that sort of Iron Man kind of approach. So I feel like having that persistence and going through the pain of learning this, implementing this, and understanding what works and what doesn't work, as you're going through this pain of developing multiple approaches and then solving the problem, I feel that is going to be the real moat as an individual building AI products. Say more about this. I love this concept. Pain is the new moat. Is there more there? Yeah, I feel as a company, I mean, successful companies right now building in any new area, they are successful not because they're first to market or because they have this fancy feature that more customers like. They went through the pain of understanding what the set of non-negotiable things are and traded them off exactly against the features or the model capabilities that they can use to solve that problem. This is not a straightforward process. There's no textbook for this, and there's no straightforward way or well-trodden path to get here.
So a lot of this pain I was talking about is just going through this iteration of, okay, let's try this, and if this doesn't work, let's try that. And that kind of knowledge that you build across the organization, or across your own lived experience, I feel that pain is what translates into the moat of the company. Right. This could be a product of evals or something that you built, and I feel that is going to be the game changer. That is awesome. It's like turning coal into a diamond. Yes. Okay. I feel like we've done a great job helping people avoid some of the biggest issues people consistently run into building AI products. We've covered so many of the pitfalls and the ways to actually do it correctly. Before we get to our very exciting lightning round, is there anything else that you wanted to share? Anything else you want to leave listeners with? Be obsessed with your customers. Be obsessed with the problem. AI is just a tool, and try to make sure that you're really understanding your workflows. 80% of so-called AI engineers and AI PMs spend their time actually understanding their workflows very well. They're not building the fanciest and the, you know, coolest models or workflows around them; they're actually in the weeds understanding their customers' behavior and data. And whenever a software engineer who's never done AI before hears the term "look at your data," I think it's a huge revelation to them. But it's always been the case: you need to go there, look at your data, understand your users, and that's going to be a huge differentiator. That's a great way to close it. AI isn't the answer; it's a tool to solve the problem. With that, we have reached our very exciting lightning round. I've got five questions for both of you. Are you ready? Yay. Yes. All right, so you can both answer them, or you can pick which one you want to answer. Either way, up to you.
What are two or three books you find yourself recommending most to other people? For me, it's this book called When Breath Becomes Air, Lenny. It was written by Paul Kalanithi. He was an Indian-origin neurosurgeon who was diagnosed with lung cancer at 31 or 32, and the whole book is his memoir, written after he was diagnosed. It's really beautiful, especially because I read it during COVID, and all we ever wanted to do during COVID was stay alive. There are a bunch of really nice quotes within the book as well, but I remember one of them where he was kind of arguing against a very popular quote by Socrates, which is "the unexamined life is not worth living," or something like that. Which means you really need to be thinking about your choices, you need to, you know, understand your values, your mission and all of that. And Paul says, if the unexamined life is not worth living, was the unlived life worth examining? Which means, are you spending so much time just understanding your mission and purpose that you've forgotten to live? And I think everybody who's living in the AI era and building and continuously going through this phase of reinventing themselves needs to take a pause and live for a bit. I guess they need to stop evaluating life too much. I was going to say that, that's where my mind went. Got to write some evals for your life. Oh my God. We've gone too far. Yep. Yeah, that's my favorite book. I like science fiction books more, so I really like the Three-Body Problem series. It's a three-book series. It has elements of grand science fiction, life outside Earth and how it impacts the human decision-making process. It also has elements of geopolitics, and of how important and valuable abstract science is to human progress, and how, when that gets stopped, it's not noticeable in everyday life, but it can cause devastating effects.
So I feel like AI helping in these areas, for example, is going to be extremely crucial. And that book is a nice example of what could happen otherwise. Completely agree. Absolutely love it. Might be my favorite sci-fi book, or series even. And it's three books; you have to read all three. By the way, I find that it only got really good about one and a half books in. So if anyone's tried it and is like, what the heck is going on here? Just keep reading, get to the middle of the second one, and then it gets mind-blowing. Yes. If you love sci-fi and you're in AI, you've got to read this book called A Fire Upon the Deep by Vernor Vinge. Check it out. It's incredible. I saw Noah Smith recommend this book in his newsletter, and there are sequels to it, but this is the one. It's so incredible. And it turns out it's about AGI and superintelligence and all these things, and it's just so epic, and no one's heard of it. Thank you. There you go. I'm giving you one back. Okay, next question. What's a favorite recent movie or TV show that you've really enjoyed? I started rewatching Silicon Valley, and I think it's so true, it's so timeless. Everything is repeating all over again. Anybody who watched it a few years ago should start rewatching it, and you'll see that it's eerily similar to everything that's happening right now with the AI wave. That's a good idea, to rewatch it. I love that their whole business was an algorithm to compress, like a compression algorithm. It's maybe a precursor to LLMs in some small way. Oh yeah. All right, Kiriti, what you got? I'm going to digress and say, not a movie or a TV show, but there's this game I picked up recently called Expedition 33. It has nothing to do with AI, but it's an incredibly, incredibly well-made game in terms of the gameplay and the story and the music. It's been amazing. I love that you have time to play games. That's a great sign.
I love that someone at OpenAI does. I'm just imagining there's nothing else going on except just coding. And yeah, it has been incredibly hard to find time for that. That's good. That's a good sign. I'm happy to hear this. Okay, what's a favorite product that you've recently discovered that you really love? For me, it's Wispr Flow. I've been using it quite a bit, and I didn't know I needed it so much. The best part is it's a contextual transcription tool, which means if you go to Codex and start using Wispr Flow, it starts identifying variables and all of that. And it's so seamless in terms of transcription to instruction. You could say something like, I'm so excited today, add three exclamation marks, and it seamlessly switches: it adds those three exclamation marks instead of writing out "add three exclamation marks." I think it's pretty cool. If you're not using it, you should try it. I'll do a plug: get Wispr Flow for free for an entire year, for a year, for free, by becoming an annual subscriber to my newsletter. And that's how I got access to it, Lenny. There we go. I think I pitched this deal. I think people don't truly understand how incredible this is. They're like, no way, this is real. It's real. And 18 other products. Lenny product pass dot com. Check it out. Moving on. Kiriti? Awesome. I actually am a stickler for productivity. I keep experimenting with new CLI tools and things which can make me faster. So I feel like Raycast has been amazing. I've discovered all these new shortcuts that you can use to open different things, type in shortcut commands and things like that. And caffeinate is another thing that I've recently discovered from my teammates. It helps you prevent your Mac from sleeping, so you can run this really long Codex task for four or five hours locally, let it build the thing, and then you can wake up and be like, okay, this is good. I like this. That's hilarious.
That combo, Codex and caffeinate. You guys need to, like, build that yourselves, an OpenAI version of that, or the Codex agent should just keep your Mac from sleeping. That's so funny. By the way, Raycast is also part of Lenny's product pass. One year free of Raycast. Amazing. We weren't... Lenny didn't tell us this, folks. Yes. These are actually our favorite tools. These are just two of 19 products. No caffeinate though, I don't know if that's even paid. Okay, let's keep going. Do you have a favorite life motto that you find yourself coming back to in work or in life? For me, it's what my dad told me when I was a kid, and it's always stuck, which is: they said it couldn't be done, but the fool didn't know it, so he did it anyway. I think, be foolish enough to believe that you can do anything if you put your heart to it, especially now, because you have so much data at hand that could be pointing toward the fact that you probably will be unsuccessful. How many podcasts made it to more than a thousand subscribers? How many companies hit more than 1 million? There's always data to show you that you won't be successful, but sometimes just be foolish and go ahead with it. That's great. Yeah. For me, I am more of an overthinker, so I really like this quote from Steve Jobs, that you can only connect the dots looking backwards. A lot of the time there are numerous choices and you don't really know the optimal one to pick. But life works in ways that you can actually look back and be like, oh, these actually connect beautifully in terms of how I transitioned. So I feel like that is extremely useful in, you know, keeping you moving forward, keeping you experimenting. Final question. Whenever I have two guests on the podcast at once, I like to ask this question: what's something that you admire about the other person?
I think with Kiriti, it's that he's pretty calm and very grounded, and he's always been my sounding board. I can throw a ton of ideas at him, and he's able to anticipate the kinds of issues I might run into. And he's extremely kind and lets his work speak instead of actually doing a lot of talking, I guess. But if I had to pick one thing: I think he's the most incredible husband. Reveal! Little did people know. Yeah, we've been married for four years, and they've been the most beautiful four years of my life. Oh, wow. Okay. How do you follow that? Yeah, it's super hard to follow that. I would say I am extremely privileged in terms of working with really smart people at great companies in Silicon Valley. And I feel the unique thing that stands out with Aishwarya, across any other smart folks I've worked with, is that she has this really amazing knack for teaching and explaining something in a very understandable, easy-to-comprehend way. And that, combined with persistence, is super useful, especially in this fast-moving AI world that we are in, in the sense that with so many new things coming up, it feels overwhelming. But when I hear her talk about, like, this is how you make sense of this entire thing, this is where it plugs in, I feel like, oh, that is so simple, I can also do that. So she empowers a lot of people by simplifying things and, you know, explaining things in the most understandable way. So I feel that is an incredible quality. Amazing. How sweet. I gotta do this all the time. I need more of this. That was great. Okay, final questions. Where can folks find stuff that you're working on, find you online? Talk about, share your course link, and then just: how can listeners be useful to you? I write a lot on LinkedIn, so if you want to listen to pragmatists who've been in the weeds working on AI products and what they're seeing, you can follow my work.
We also have a GitHub repository with about 20k stars, and that repository is all about good resources for learning AI. It's completely free. And if you like what we spoke about today, we also run a super popular course, we'll leave a link to it, on building enterprise AI products, and the course is a lot about unlearning mindsets and following a problem-first approach instead of a tool-first or a hype-first approach. So you can check that out as well. And if you don't want to do the course, we write a lot, we give out a lot of free resources, we have free sessions. So make sure you follow our work. Yeah, I would add that you can also find me on LinkedIn. I don't write a lot, I guess, but I'm super excited to just talk about any complex product that you're building, and if you have thoughts on how you can use coding agents to make your life better, or whatever problems you're seeing, my DMs are always open and we can have a great discussion. Awesome. Well, Kiriti and Ash, thank you so much for being here. Thank you so much. Thank you, Lenny. This was so much fun. So much fun. Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.
