Why Everyone’s Sleeping on Google’s AI Tools — And How To Use Them

I sat down with Elizabeth Knopf for what turned into one of the most eye-opening conversations I’ve had about accessible AI tools and no-code development.

Elizabeth walked me through Google’s newly released Nano Banana AI (their Gemini 2.5 Flash image model), demonstrated the surprisingly powerful features hiding in Google AI Studio, and showed me how she used Claude Code to build her own custom RSS feed aggregator and project management system—all without traditional coding experience.

What makes this conversation particularly valuable is that Elizabeth didn’t just explain these tools theoretically; she demonstrated them live. We built a history trivia game for kids in real-time, analyzed video content to count Christmas lights, and explored how the Stream feature in Google AI Studio can provide real-time debugging feedback as you work. Perhaps most impressively, Elizabeth revealed how she replaced a $20/month subscription service by building her own solution in Claude Code, complete with AI agents that automatically manage her daily tasks and project workflows.

This article includes the main topics we covered, a link to watch the full podcast conversation, and the complete transcript below.

Key Topics Covered

In this episode, Elizabeth and I explored cutting-edge AI tools and practical applications for entrepreneurs:

  • Nano Banana AI & Visual Memory Technology – Understanding Google’s Gemini 2.5 Flash image model and its breakthrough “visual memory” feature that maintains consistency across multiple images (solving the long-standing problem of character/object persistence in AI-generated content), plus its superior precision in editing specific elements within images
  • Google AI Studio Deep Dive – A complete walkthrough of Google’s underutilized AI Studio platform, including the game-changing Stream feature that lets you share your screen or webcam for real-time debugging and learning, plus ready-made app templates and integration with Google’s ecosystem (Maps, YouTube, Sheets)
  • Building Custom Apps Without Coding – How Elizabeth used Claude Code to replace her $20/month RSS feed tool by building a custom aggregator, plus her sophisticated project management dashboard with AI agents that automatically pull daily todos and manage workflows—demonstrating the arbitrage opportunity available to non-technical entrepreneurs
  • No-Code Platform Comparison – Practical insights on when to use Google AI Studio versus Claude Code versus Lovable/Replit, including live demonstrations of building apps directly in the browser and understanding the different workflows each platform offers

Watch Now

Watch the full conversation below, or click here to watch on YouTube.

Full Transcript

Nik Hulewsky: You’re saying, I could have just had this talk to Gemini live on and it would’ve been looking at what I was trying to build and like give me real time feedback?

Elizabeth Knopf: Absolutely.

Nik Hulewsky: That’s insane. If it could generate from this one episode, then I’m gonna hang this call up and end early ’cause I’m done for the week. This is incredible. Why the f didn’t I know—this is insane. Why the freak wouldn’t I have ever done this? This is amazing.

[Intro Music]

Nik Hulewsky: Okay, Liz, I am excited to talk to you today because you are going to show me how to actually use some amazing tools, but the things we’re going to talk about today are the things that are currently taking the AI world by storm. The first one is Nano Banana, which is the model that Google just released, and the second one is you’re actually going to show us with Claude Code how you replaced one of your apps and created your own app using Claude Code natural language. You’re not a coder, you’re not a developer, but you’re going to show us how we can do that ourselves. So, first question: what the freak is Nano Banana? I keep seeing this crap all over the internet.

Elizabeth Knopf: So, you know how a couple weeks ago we were talking about how Google sucks at marketing? Well, I think they were listening to our pod, because someone stepped up their game and they did a great job finally releasing a product called Nano Banana. And all that is, is their Gemini 2.5 Flash image model. So, it’s just another image model, but it’s actually a pretty awesome image model.

Nik Hulewsky: I found this Reddit thread, I’m going to show it to you, and these are some examples of Nano Banana, of what they did. So this is a picture of Ronaldo, he’s drinking a water bottle, and they’ve Photoshopped a Coca-Cola, they also Photoshopped a jersey on him. The face looks Photoshopped to me just because I see them next to each other, but pretty cool. Here’s one I think is nuts. They took this picture on the left, it’s two guys in the middle of the day with a Toronto skyline behind them, and then they Photoshopped that to at night. It’s literally the same guys in the same picture, you wouldn’t have known that. Incredible editing. Oh, this guy took a picture in front of his house, it looks like he’s in Nova Scotia, and then they changed his clothes and the lighting and it looks like he’s in a flea market in Morocco or something like that. So, really, really good image generation. My question is, like, what are people using this for? Is it just like a novelty trick?

Elizabeth Knopf: A couple of key challenges have come up with a lot of the image generation models. First off, replacing things and doing simple edits could be hard to do. Even with ChatGPT, you would say, change this one element, and sometimes it would work and sometimes it wouldn’t. Same with Google’s previous models. Now it has become much more precise with specific edits and with iterating on those edits. Changing the shirt, changing the product: it’s much more accurate in taking whatever you’re prompting it to change and making those changes. I would say Adobe, with Photoshop, was probably still leading the way in some of that editing versus going to one of these image models, but Nano Banana has really advanced the ball there. The second key feature, and I think the more exciting and really big deal, is what’s called visual memory. This is essentially where, from one image to the next, it persists as the same person, same object, same thing. That has been the biggest flaw in image generation: you would see these short ads, and there would be a person, but they’d look slightly different in the next cut, with different bone structure or something. Persistence has now improved dramatically, and that’s because they came up with some interesting technology in how they’re building the model. But those are really the big deals, yeah.

Nik Hulewsky: Oh, that’s really cool. So if I was starting to use these image generating models—that was always a problem, right? It’s like, cool, I generated this cool logo but how do I get it to stay the same logo across multiple iterations? Whereas this is like, well now you can do that. You can keep the same person, the same image, or the same logo. It stays persistent. Here is my question to you: we used to talk about multimodal—and maybe this is just me because I’m a caveman—but we used to talk about multimodal models where you’re like, “Oh, I can upload an image, and then I could upload text, and I could upload a video.” Those are all three different data types and in the past it was only text, but now you can kind of upload some images and video’s getting there. However, my perception of multimodal has actually changed a little bit. I think of multimodal now as having an LLM that I use for synthesizing data, I have an AI image generator I can use to create logos, etc. I can even get these small language models that are specific use cases… maybe they’re healthcare, maybe they’re tech-related… or maybe there’s a deep research function. So those are actually different modes of AI. I think Google has done a really good job of creating this studio… it kind of incorporates a lot of these different modes but people don’t really use it or know about it. You were telling me about Google AI Lab—I can’t remember exactly what it’s called. What is it?

Elizabeth Knopf: Okay, so since they have so many different locations and properties, they’re terrible. It’s bad. It’s called Google AI Studio, at aistudio.google.com. It’s one of the places you can access Nano Banana, or of course you can go straight to Gemini and access it there too. Google AI Studio has now started incorporating that model into being able to build out little apps and other tools. So that’s pretty exciting, I think, and we’ll dive in here in a second just to see what’s possible.

Nik Hulewsky: Okay, so this is it. I’m in Google’s AI Studio. First of all, I don’t know, it must have been two months ago or so, but you showed me a version of this and it was overwhelming. This chat interface is clean. I mean, it looks like the OpenAI chat interface. I don’t get instant anxiety looking at it. Okay, so on the left side I have Chat, which looks the same as OpenAI, but then I have Stream. What is Stream?

Elizabeth Knopf: So Stream is actually a pretty cool thing. This is where you can basically use your webcam or share your screen. So like, if you are debugging something, if you want to learn something, if you want Google to watch a video with you and give you feedback or see you do something live and talk with you, it is actually doing that and capturing it in real time. This is like—people are sitting on this feature. It is pretty incredible. I use it a lot for just debugging and when I’m like doing something or following instructions and I’m like, “What am I getting wrong?” or “Look at my screen.” So I don’t have to take a gazillion screenshots. I’ll do that, or I’ve tried putting like a YouTube video and having it sort of walk through it with me. So there’s a lot of different cool things you can do with this feature that again is like totally slept on.

Nik Hulewsky: I did all these interviews with people. I wanted to look at all the interviews to be able to pull out information that maybe I’m not seeing. But there was a problem cleaning the data. So I had it all stored in Google and one of the coolest tools, Google Apps Script, I was writing a script. I was like, “Claude, can you help me write this script?” And so this is me just going back and forth, it’s writing the script. And then every time that it would screw up, I had to take a screenshot of the execution log and then I would copy it and I would just literally paste it and it was like, “Oh, this error was because of blah blah blah.” You’re saying, I could have just had this talk to Gemini live on and it would’ve been looking at what I was trying to build and like give me real time feedback.

Elizabeth Knopf: Absolutely. Yes.

Nik Hulewsky: That’s insane.

Elizabeth Knopf: Exactly. I mean, I will say that I love Claude, so sometimes I think Claude is better at feedback. Here they’re using the Gemini chat model, so it depends on your opinion, but Gemini’s been getting a lot better. But absolutely, you can do that. And if people haven’t been using Veo, it’s pretty spectacular for videos. They also have other sorts of media you can produce now, whether it’s speech generation or audio, so you can start doing a lot of cool stuff in here. One thing also worth noting on Nano Banana, related to that persistence element, is that you can do step-by-step guides. I haven’t yet tried it for software, but I have tried it for fitness: “Show me a step-by-step guide with images of how to properly lift weights or do an exercise,” and it will take the same person through each of those steps. So you can get very creative and imaginative with those step-by-step guides. They have a recipe one here, which I think is a pretty cool feature. And as you go in here, they have a bunch of cool examples and templates you can be inspired by.

Nik Hulewsky: That’s insane. Yeah, yeah, yeah.

Elizabeth Knopf: Okay. So now another cool place that again people don’t even know about, and this is a helpful way for beginners who are a little bit intimidated to get started in building apps. If you go to “Build,” this is similar to some of the other LLMs that will have sort of like little mini apps examples. They used to call these “Gems,” which was just really confusing, now they just call them apps. Basically, they have little starter apps and examples, some of which have incorporated in Nano Banana already. And there’s others that have sort of showcased some of the real power of what Google and Gemini can do, which again is really on the sort of video/multimedia components that I would say the other models each sort of have elements of, but I really think that Google is excelling and doubling down on that capability of both animation, video, audio and streaming.

Nik Hulewsky: Is this kind of an answer to Claude Code?

Elizabeth Knopf: Sort of. I would say they’re very different in terms of the workflow, but everybody’s trying to build out some kind of coding app, because everyone sees that personal software is what’s happening. That’s what’s being cultivated through LLMs, and I would say it’s the biggest area of annihilation or augmentation right now. So everyone’s building apps. You can do that now with ChatGPT, with Anthropic’s Claude, with Grok; you can build within the browser in all of them. So you can build it from here in the browser too. There are still a couple of steps to get it into production, but Google has that full workstream of getting from a little mini app into production, whereas some of the other guys don’t have the same kind of infrastructure that Google has.

Nik Hulewsky: So it’s probably more similar to a Replit or Lovable than it is to Claude Code.

Elizabeth Knopf: Sort of. But again, with all of them you can start prompting to build something. This one, I would say, is still for keeping to very simple apps. Claude Code is still the best at going from idea to production in the same environment and building out the infrastructure. Gemini is doing well, but there’s something about their interface that I’m just not as thrilled about, so I think it’s personal preference.

Nik Hulewsky: I literally just put this prompt: “Build me a game I can play with my kids, it’s a history trivia game with rounds of questions that is tailored to everyone’s age, the app should keep score.” It’s been running for 75 seconds and I can see it real-time building this out. So like in this sense it reminds me of Lovable and that I’m like watching it build real time. Boom, there it is. And now I have basically an MVP to test around with. So I could say Nik, age 40, and then I’m going to add another player and we’re going to say Liz, age 29.

Elizabeth Knopf: Oh, thank you.

Nik Hulewsky: And then we’re going to start the game and we’re playing. It’s generating the questions. This is me: “Which Roman Emperor famously appointed his favorite horse, Incitatus, to the position of consul…?” I have no idea. I’m going to say Nero. Not quite! Caligula. All right, Liz, here’s your question. “Which explorer is often credited with the first circumnavigation of the Earth, despite dying halfway through the journey?”

Elizabeth Knopf: Was that Magellan?

Nik Hulewsky: Ferdinand Magellan. Correct! He died in the Philippines, I think. This is freaking cool. I just built this!

Elizabeth Knopf: It’s awesome. And so if you want to go back in a second, I’ll show you, because I think this is cool. Claude Artifacts can do this, ChatGPT can do this, Grok can do this, but I think there are a couple of pieces with the Nano Banana element and some of the features they have that are pretty cool. Scroll down… Maps. People don’t talk about map making or leveraging maps. Map Explorer. You can incorporate map elements. I started doing an exploration on Google Maps for my kid’s game—like, “Where in the World is Carmen Sandiego?” meet something else. And maps is just embedded and native to Google’s stack, so you can start doing cool things with maps.

Nik Hulewsky: Oh, this would be so cool. I could do a total game with my kids. Gobekli Tepe in Turkey, which was honestly just found recently. I could say, “Take me somewhere ancient.” And my son… I know my oldest son’s going to be like, “Let’s go to Gobekli Tepe.” And then all of a sudden we could just go to YouTube… like I could build this out and be like “Take me somewhere cool and then give me the top three YouTube videos to link to” just because it’s in the Google ecosystem. That’s awesome.

Elizabeth Knopf: Totally. All right, let’s go to another cool feature they have. It’s called Video Analyzer. We all sort of want to understand—like, you can get a transcript from a video, but like if we’re doing this podcast, we want to say, “Where’s our body language bad?” or “What is happening in this video?” and you can literally in their little app here add in a video.

Nik Hulewsky: I made this video a couple of months ago of people hanging Christmas lights with some music on it. I’m going to add it here. Explore this video via… let’s do a chart. A chart of the total number of lightbulbs seen. I have no idea if that will work, I just thought it’d be interesting to try.

Elizabeth Knopf: Wow. That’s pretty cool. You can start imagining to like—it can actually ingest video. Wow. That’s pretty cool.

Nik Hulewsky: Now I have to go count that! But yeah, here’s the video. You can see this guy’s just setting up all the lightbulbs. I believe it. I am not going to go count them, but I believe it. How cool is that? That’s freaking cool.

Elizabeth Knopf: It’s pretty cool. So again, like now you can start like ingesting video as a medium and doing more than just getting a transcript, which I just think is pretty cool.

Nik Hulewsky: I just think with this AI Studio that Google has, man, for like novices like me, I can have the video running while it’s watching me go through stuff, it can tell me in real time what things I’m messing up, I can create apps… it takes the good of Lovable, Replit, and it takes Claude which helps me debug things. And then it has a good image generator. Oh, and then I can integrate my YouTube video, I can integrate my Google Sheets… like that’s one thing that bugs the crap out of me… when I ask Claude to parse data and then it’s like “Sure, go ahead, just copy and paste.” I’m like “No, just give me the CSV.”

Elizabeth Knopf: Yeah, and it’s interesting, because right now, for people that are watching and paying attention, this is a great arbitrage opportunity for any of this stuff. Again, people don’t know about Google AI Studio, so you can literally take that code, find a good URL that has some good traffic already, put it up with minimal changes, and collect ad revenue or whatever for a free tool.

Nik Hulewsky: That is such a freaking good point. It’s an amazing arbitrage because it used to be that software development was only for a select few who had gone through a ton of school and understood how to code. Now we can do it with natural language and something like Google’s AI Studio. You have no coding experience and you just told me you built an app recently that replaced one of your apps. What was that?

Elizabeth Knopf: Okay, let me share my screen and show you what I’ve been working on. I used to pay for an RSS feed tool that basically aggregates websites, keywords, different social media sources into a feed so that you can see everything in one spot versus going to a gazillion different sites or having to read through a million newsletters. So I was paying $20 a month for this.

Nik Hulewsky: And what was it going out and scraping? Was it going to Twitter, YouTube…?

Elizabeth Knopf: Yep, it was scraping specific sites like xAI news, OpenAI blogs… I input all these. I had my own Twitter feed here. I was paying 20 bucks a month for this.
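A minimal version of the feed aggregator Elizabeth describes can be sketched in Python using only the standard library. The feed URLs, field names, and overall structure here are illustrative assumptions, not her actual setup:

```python
# Sketch of a personal RSS aggregator: fetch several feeds, merge the items
# into one newest-first list. Feed URLs below are placeholders.
import urllib.request
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEEDS = [
    "https://openai.com/news/rss.xml",  # placeholder feed URL
    "https://blog.google/rss/",         # placeholder feed URL
]

def parse_rss(xml_text, source="feed"):
    """Extract item dicts (source, title, link, published) from an RSS 2.0 document."""
    channel = ET.fromstring(xml_text).find("channel")
    name = channel.findtext("title", default=source)
    items = []
    for item in channel.findall("item"):
        pub = item.findtext("pubDate")
        items.append({
            "source": name,
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": parsedate_to_datetime(pub) if pub else None,
        })
    return items

def aggregate(feed_urls):
    """Fetch every feed and merge the items into one newest-first list."""
    merged = []
    for url in feed_urls:
        with urllib.request.urlopen(url, timeout=10) as resp:
            merged.extend(parse_rss(resp.read(), source=url))
    # Newest first; undated items sink to the bottom.
    merged.sort(
        key=lambda i: i["published"].timestamp() if i["published"] else 0.0,
        reverse=True,
    )
    return merged

if __name__ == "__main__":
    for item in aggregate(FEEDS)[:20]:
        print(f"[{item['source']}] {item['title']} - {item['link']}")
```

Note this only handles RSS 2.0; Atom feeds use different element names (`entry`, `updated`), so a fuller version would branch on the feed type or lean on a library like `feedparser`.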

Nik Hulewsky: This is incredible. Why the f didn’t I know—this is insane.

Elizabeth Knopf: So I just decided let me see if I can build this myself. This is my project management dashboard that I started building in Claude Code.

Nik Hulewsky: Of course you built a project management dashboard! How did you build this? Is it automatically pulling in new projects? Do you have to populate it every time? What am I looking at here?

Elizabeth Knopf: No, no, no. This is like a project management dashboard because as I start building projects, and me being an obsessive multitasker especially with Claude Code, I have too many projects and I need to keep track of what’s what. I started building out todo lists, agents, different frameworks that I was starting to leverage. So I first went to GitHub to see if someone might have already solved this. This one was my own because again, you sort of have to decide what should you build for yourself versus what should you build for others. I just prompted Claude and told it what I want to achieve. Assume that if you haven’t done it before, you’re probably going to have to redo the whole project more than once until you refine what you need.

Nik Hulewsky: What were those agents doing?

Elizabeth Knopf: I have a couple different agents. I have a security agent that will help just make sure I have security best practices going. I have an agent that pulls my todo list on a daily basis and adds them to that dashboard. That runs independent of me.

Nik Hulewsky: So would it be fair to say that this is an SOP folder?

Elizabeth Knopf: It is more though, because it’s both an SOP plus the execution. So I have actual agents in there, which again all I did was tell Claude “Make an agent that can do X, Y, and Z things.” An agent that will get all of my todos from every project on a daily basis and aggregate them and add them to that dashboard.
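A drastically simplified version of that daily todo agent could look like the sketch below: scan each project folder for a todo file, collect the unchecked checklist items, and write them into one dashboard file. The `projects/*/TODO.md` layout and Markdown checkbox format are assumptions for illustration, not Elizabeth’s actual setup:

```python
# Sketch of a daily "todo agent": gather unchecked Markdown checklist items
# from every project's TODO.md and merge them into one dashboard file.
# The folder layout and file names are illustrative assumptions.
from pathlib import Path

PROJECTS_DIR = Path("projects")
DASHBOARD = Path("dashboard/daily-todos.md")

def collect_todos(projects_dir):
    """Return {project_name: [open todo texts]} from each project's TODO.md."""
    todos = {}
    for todo_file in sorted(projects_dir.glob("*/TODO.md")):
        open_items = []
        for line in todo_file.read_text().splitlines():
            stripped = line.strip()
            if stripped.startswith("- [ ] "):  # unchecked items only
                open_items.append(stripped[len("- [ ] "):])
        if open_items:
            todos[todo_file.parent.name] = open_items
    return todos

def write_dashboard(todos, dashboard_path):
    """Render the merged todos as one Markdown dashboard file."""
    lines = ["# Daily Todos", ""]
    for project, items in todos.items():
        lines.append(f"## {project}")
        lines += [f"- [ ] {item}" for item in items]
        lines.append("")
    dashboard_path.parent.mkdir(parents=True, exist_ok=True)
    dashboard_path.write_text("\n".join(lines))

if __name__ == "__main__":
    write_dashboard(collect_todos(PROJECTS_DIR), DASHBOARD)
```

Run daily (via cron, or a Claude Code agent that invokes it), this keeps a single dashboard in sync with every project’s open tasks, which is the pattern Elizabeth describes.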

Nik Hulewsky: Holy crap. This is insane. Personal Operating System—and then it has all the files.

Elizabeth Knopf: Exactly. I have a file where I have rules of what I want for that project. I tell it: “Create a todo file that breaks down every part of the project into bite-sized tasks and then update.” I’ve had to rebuild this like a few times.

Nik Hulewsky: I know this just comes easy and naturally to you, but I think there’s a huge opportunity to start businesses just around this—around helping people and companies get organized with their data infrastructure with respect to AI. You always blow my mind, Liz.

Elizabeth Knopf: You know what agent we really need to build though—or maybe I’ll start working on this—is basically all the stuff we talk about from the pod that can then go and build off those little apps for us.

Nik Hulewsky: That would be awesome. All right, next week. It’s a date.

Elizabeth Knopf: Sounds good.

Nik Hulewsky: All right, hopefully you liked that episode. If you’ve made it this far, you’re either really committed or you’re stuck doing yard work and you can’t actually skip on your phone. So while I have you, the show is growing but I have a favor to ask of you. Will you please help me grow the show? I want to reach more people. There’s a couple things you can do. Like and subscribe is the simplest thing. Obviously you want to get notifications for when the next episode is coming out. But if you go the next step, will you leave me a review? Five star on Spotify or Apple. What that does is it tells the algorithm that, oh hey, this is a high-value podcast because more people are leaving reviews for it and it then pushes it out to more people. So that’s why when people are like “Will you log in, subscribe and put a five-star rating?” it’s not just to make themselves feel better, it’s actually to get more exposure for the show. So if you do that for me, I would greatly appreciate it and I’ll see you next time.