How to go from zero to AI-first in 3 hours
Feel behind? Here are 3 hours of curated YouTube videos to take you from AI-curious to AI-capable before your Monday sync. Catch up on everything that matters right now (and will be obsolete in 6 months).
Let’s be real. As a product manager, you have a hundred things to do on your current projects. You haven’t had time for a deep dive on Generative AI tools. But you finally look up from your backlog, and it feels like you’ve been in a cave for a year.
Suddenly, everyone is obsessed. It’s all over Slack, it’s popping up in meetings, it’s in every discussion with your colleagues, and top management is asking about it in town halls. Your team is using terms like RAG, AI agents, Gems, and MCP, and you’re just... lost. And it’s not just curiosity; there’s a real urgency to be AI-first.
Your starting line
Why AI-first?
The why is simple. The primary goal is to make “Ask AI first” your new default reflex. We’re not doing this just to use new tech; we’re doing it to increase our collective speed, creativity, and quality. For any task you start, your first question should be: “How can AI help me?”
If that sounds daunting, I’ve got you. This article is the crash course. It’s a 3-hour weekend playbook designed to catch you up.
A quick warning: AI is moving so fast that this playlist will probably be obsolete in 6 months. The good news? You can master everything that matters right now in the time it takes to watch a long movie.
What this playbook is
This is a 100% practical, hands-on how-to guide.
It’s a foundational introduction to build your confidence with core AI productivity tools.
It focuses on the mindset and principles, which are permanent, not just the tools, which will constantly change.
What this playbook IS NOT
This is NOT a deep dive into complex AI theory or abstract futurism.
This is NOT a legal or privacy guide (your company might have its own separate, existing training for that).
This is NOT a comprehensive masterclass. I can’t possibly cover every tool, job function, or advanced topic (like QA, HR, or Marketing). I’ll focus on a core stack common in many companies: Slack, Gemini, NotebookLM, Miro, GitHub Copilot, and Cursor.
If your company uses different tools (like ChatGPT, Claude, etc.), that’s great! The principles you learn here are universal.
How to use this guide
This is the starting line for your self-learning journey. To get the most out of it, you must follow the 3-step learning loop for each module.
Watch: View the curated video (most are 10-15 minutes).
To-Do (THE MOST IMPORTANT STEP): I’ll give you a practical to-do list. You must build the AI-first habit by doing. The real learning comes from doing, not just watching.
Quiz: Complete the fun level-up quiz to check your grasp of the key principles.
The curated videos total about 3 hours of watch time, but you should plan to set aside additional practice time throughout your week to turn these actions into true, lasting habits.
The agenda
Here’s the full agenda for this crash course.
Module 1: Slack AI: Search, summarize, and recap
Module 2: Gemini: A full feature tour
Module 3: Prompting: Persona, task, context, and format (10 min)
Module 4: Deep Research: Scope, write, and find gaps (15 min)
Module 5: Gems: Build your personal AI assistant
Module 6: Google Meet: Transcribe, summarize, and customize (10 min)
Module 7: Canvas: Outline, prototype, and quiz
Module 8: NotebookLM: Analyze docs, videos, and websites (20 min)
Module 9: Miro AI: Brainstorm, diagram, and summarize (5 min)
Module 10: GitHub Copilot: Suggest, explain, and automate (15 min)
Module 11: Cursor: Build, debug, and visually edit
Module 12: Figma Make: Prompt, visualize, and refine ideas (10 min)
Module 13: AI agents: Instruct, delegate, and automate (10 min)
Module 14: MCP: Give your LLM new skills
Get support and share your wins
Got questions? Stuck? Have a better video or a new to-do idea to suggest?
Leave a comment below! This is a living document. I’ll do my best to reply and update the playbook with the best ideas from the community.
Let’s begin!
Module 1: Slack AI: Search, summarize, and recap
PM problem: You’re back from a 3-day weekend, or you’ve just been pulled into a new project. You open Slack and see a mile-long thread and a project channel with hundreds of unread messages. Your old reflex: spend 30 minutes scrolling (and losing your mind).
AI-first reflex: Instantly summarize the thread, ask the channel for a 7-day summary, and get a daily recap of your most important projects. This module is your first quick-win for reclaiming your time.
Tutorials:
Key concepts:
Get answers, not just results: The biggest shift is how you search. Stop using keywords. Ask Slack AI questions in natural language, just like you’d ask a teammate (e.g., “Who is doing the creative for Project Eagle?”).
Your catch-up superpower: You can instantly summarize any thread or channel. This is perfect for a PM. You can summarize a project channel to see what progress has been made, an account channel to learn about a customer, or a user feedback channel to find trends and themes.
Control the timeframe: When you summarize, you can choose a specific timeframe, like just your “unread messages” or (most powerfully) the “last 7 days”.
Your proactive daily digest: The “Recap” feature is your personalized digest. You select only the channels you can’t monitor 24/7 but need to stay informed on. It gives you a daily summary of missed messages so you can stay in the loop without being overwhelmed.
Trust but verify (with sources): This is critical for PMs. Every answer or summary Slack AI gives you provides source links. You can click them to jump directly to the original message in the conversation for full context.
To-do: The real learning comes from doing. Go try this right now in your Slack workspace.
Ask a question: Go to the Slack search bar and ask a natural language question about one of your projects (e.g., “What is the status of [Project Name]?” or “What is the latest update on the design sprint?”).
Summarize a thread: Find a long, unread thread in one of your channels and hit the “Summarize” button. See if the summary gives you what you need in 10 seconds.
Summarize a channel: Go to a busy project channel. Use the “Summarize” feature and select “Last 7 days”.
Set up your recap: Click the “Recap” button in your sidebar and add 3-5 of your most important project, team, or feedback channels.
Quiz:
Module 2: Gemini: A full feature tour
PM Problem: You have a blank page for a new PRD. You have a 50-page user research PDF to synthesize. You know the answer to a question is somewhere in your Google Drive, but you can’t find it.
AI-first reflex: You stop starting with a blank page. Gemini is your new Swiss Army knife. You use it to upload files for summaries, brainstorm first drafts, search your Drive, and even get a podcast-style overview of a document for your commute.
Tutorial:
Key concepts:
It’s more than a chatbot: This is the key takeaway. You can upload files (PDFs, docs) and even images for analysis. (e.g., “Summarize this 30-page PDF” or “Analyze this screenshot of a competitor’s flow”).
It’s your Google Drive search engine: You can connect your Google Drive (in Settings > Apps). This is a game-changer. You can now ask, “What were the main takeaways from the ‘Project Eagle Q3’ doc?” and it will find and read the file for you.
Deep Research for complex tasks: For big, open-ended questions (like market research or planning), use the Deep Research mode. It will search hundreds of websites to create a detailed, condensed summary.
Gems for repeatable tasks: Gems are custom versions of Gemini you create for specific, repeatable tasks. Think of them as productizing your best prompts. Examples: “Meeting Recap Generator,” “PRD First Draft Writer,” or “User Feedback Synthesizer.”
Audio overview for your commute: When you upload a document, you can often generate a podcast-style audio summary. This lets you read your project docs while you’re walking or driving.
Fact-check everything: AI can and does make mistakes. Always use the built-in fact-checker. Click the three-dot menu and select “double-check the response,” which uses a standard Google search to verify its claims.
To-do: Time to build the habit. Go to gemini.google.com.
Summarize a file: Upload a PDF or Google Doc (like a project brief or research doc) using the plus (+) icon and ask: “Summarize this in 3 bullet points.”
Connect your Drive: Click the settings gear (left menu) > “Apps” and connect your Google Drive. Start a new chat and ask a question about a specific file you know is in your Drive.
Try Deep Research: Go to the Deep Research tab (at the top) and try a PM prompt like: “Create a competitor analysis plan for a new B2B SaaS product.”
Explore Gems: Click Gems (left menu) and explore the pre-built Editor. Paste in a paragraph of your own writing (from an email or doc) and ask it to “refine this to be more clear and concise.”
Quiz:
Module 3: Prompting: Persona, task, context, and format
PM problem: You try using AI, but the results are generic, vague, and useless. You ask it to “write a PRD” and it gives you a high-school-level template. You get frustrated and go back to a blank page.
AI-first reflex: You stop chatting and start briefing. You learn to give the AI a spec, just like you’d give to an engineer or designer, using a simple 4-part framework.
Tutorial:
Key concepts: This module teaches Google’s 4-part formula for writing prompts that actually get you what you want. Think of it as your new AI-first brief.
The 4-part AI-first brief (PTCF):
P = Persona: Who do you want the AI to be? (e.g., “You are an expert Product Manager at a B2B SaaS company...”)
T = Task: What, specifically, do you want it to do? (e.g., “...write 5 potential user stories for a new ‘admin dashboard’ feature.”)
C = Context: What background info does it need? (e.g., “The feature is for non-technical admins. The goal is to give them a high-level view of user activity...”)
F = Format: How should the output look? (e.g., “Put this in a table with columns for: User Story, Acceptance Criteria, and Business Value.”)
This framework is universal: This isn’t just for Gemini. The PTCF framework works effectively across all major AI tools, including ChatGPT, Claude, and others.
The magic prompt-refiner: If you get a weak result, or you’re not sure what context to add, just ask the AI: “What questions do you have about this prompt?” It will literally give you a checklist of the exact details it needs to give you a better answer.
Don’t accept the first draft: Always iterate on the output. Get a result, then give follow-up commands like: “Make this more concise,” “Put this in a 90-day plan,” or “Now re-format this as a table.”
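If it helps to see the framework as a template, here’s a minimal Python sketch that assembles a PTCF brief. The helper name and the example strings are my own for illustration, not something from the video; the framework itself is the point.

```python
# Illustrative sketch: the PTCF framework as a reusable prompt template.
# The function name and example wording are hypothetical.

def build_prompt(persona: str, task: str, context: str, fmt: str) -> str:
    """Assemble a Persona-Task-Context-Format brief into one prompt."""
    return "\n".join([
        f"Persona: {persona}",
        f"Task: {task}",
        f"Context: {context}",
        f"Format: {fmt}",
    ])

prompt = build_prompt(
    persona="You are an expert Product Manager at a B2B SaaS company.",
    task="Write 5 potential user stories for a new 'admin dashboard' feature.",
    context="The feature is for non-technical admins who need a high-level "
            "view of user activity.",
    fmt="A table with columns: User Story, Acceptance Criteria, Business Value.",
)
print(prompt)
```

You don’t need code to use PTCF, of course; the sketch just shows that a good prompt is a structured brief you can reuse, not a one-off chat message.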
To-do: Time to build the most critical habit.
Write a PTCF prompt: Go to Gemini. Draft your next real work request (e.g., writing an email, brainstorming ideas, summarizing notes) using the full Persona, Task, Context, and Format framework.
Use the magic refiner: In a new chat, write a simple, weak prompt (e.g., “Write a blog post about AI”). Then, ask the follow-up: “What questions do you have about this prompt?” Watch as it tells you exactly what to add.
Iterate on output: Take any AI response you just got. Ask it: “Now put this into a table.” If it works, try the “Export to Google Sheets” feature.
Quiz:
Module 4: Deep Research: Scope, write, and find gaps
PM problem: You’re starting a new initiative in a domain you know nothing about. You need to get up to speed, fast. Your old reflex: spend days in a rabbit hole of browser tabs, articles, and competitor teardowns.
AI-first reflex: You use Gemini’s Deep Research mode as your personal research assistant. You ask it to “map the field,” “find knowledge gaps,” and “summarize opposing arguments.” It scans hundreds of websites and gives you a complete overview in minutes.
Tutorial:
Key concepts:
Create an instant map of any topic: Deep Research mode scans years of peer-reviewed research and articles to give you a complete map of a topic. You can instantly find the major subtopics, key trends, and, most importantly, the key knowledge gaps.
Write your first 0.1 draft: This is the ultimate blank page killer. Ask it to generate a first draft of a literature review, market analysis, or introduction. It will do the grunt work, saving you hours. You can then export it directly to Google Docs to edit and refine.
Find weaknesses in your argument (rigor): This is critical for PMs. Before you go into a stakeholder meeting, ask Gemini to find the main critiques or alternative perspectives on your argument. This makes your own ideas more robust and prepares you for any challenge.
Find adjacent opportunities: AI makes it easy to find overlaps with adjacent fields. This is a powerhouse for innovation. You can ask, “How have researchers in [Adjacent Field] approached the problem of [Your Problem]?” to discover new techniques or collaboration opportunities.
Bonus tip - Canvas mode: The video mentions a bonus feature called Canvas. This is an interactive mode where you can edit individual sections of an AI-generated document. You can highlight one paragraph that has no references and ask, “Find 3-5 references that support this statement.”
To-do: Time to become a super-researcher.
Scope a topic: Open Gemini and (in Deep Research mode) use a prompt like: “Based on market research from the past 3 years, what are the major knowledge gaps in the [Your Area of Interest, e.g., ‘customer onboarding SaaS’] market?”
Test your argument: Take a core assumption you have (e.g., “users prefer a self-serve model”). Ask Gemini: “What are the main critiques or alternative perspectives challenging the idea that [Your Core Argument]?”
Find collaborators: Ask Gemini: “How have companies in adjacent fields like [e.g., ‘B2C gaming’] approached the problem of [e.g., ‘user engagement and retention’]?”
Quiz:
Module 5: Gems: Build your personal AI assistant
PM problem: You do the same, tedious tasks every week. Maybe it’s formatting your weekly project report, drafting similar replies to user feedback, or summarizing customer inquiries. It’s low-value work that makes you sigh every time it appears on your calendar.
AI-first reflex: You automate it. You build a custom Gem to act as your personal expert for that one specific task. You productize your prompt so you never have to write it again.
Tutorial:
Key concepts:
Gems are productized prompts: A Gem is a custom version of Gemini you build to act as your personal expert for any repetitive task (like drafting emails or formatting reports).
Use the PTCF framework (again!): The best way to build a Gem is to use the Persona, Task, Context, Format framework you learned in Module 3 for its instructions.
Give your Gem knowledge: You can upload files (like PDFs with FAQs, project specs, or business information) directly into your Gem. It will use this document as its single source of truth.
The real time-saver: The magic is that the Gem already has all the instructions and context saved. You don’t have to write a complex prompt every time. You just give it the new input (like “Here is the latest customer email”) and it does the work.
(New) Gems are shareable: You can now share your Gems with your team, just like you’d share a Google Doc. Build a “User Feedback Summarizer” and share it with your whole product team.
To-do: Let’s build your first AI assistant.
Brainstorm your annoying tasks: List 3-5 of your most repetitive, tedious weekly tasks.
Pick one: Open the “Gem Manager” in Gemini and start a “New Gem.”
Write the spec: Write instructions for your Gem using the Persona, Task, Context, and Format framework. (Or, just write simple notes and ask Gemini to rewrite the instructions for you!)
Add knowledge: Upload a relevant document to the “Add knowledge” section to give your Gem context (e.g., a sample of a good report, a PRD, an FAQ doc).
Test your Gem: Give your new Gem a simple prompt related to its task (like pasting in a chunk of data or a customer email) and see it work.
Quiz:
Module 6: Google Meet: Transcribe, summarize, and customize
PM problem: You’re in a critical stakeholder meeting. You’re either frantically typing notes and missing the conversation (and the body language), or you’re fully engaged but end the call with zero record of who agreed to what. Both are terrible.
AI-first reflex: You automate. You turn on Gemini in Google Meet to get a perfect transcript. Then, you use a custom Gem to turn that transcript into your perfect notes: action items by owner, key decisions, and a concise summary.
Tutorial:
Key concepts: This workflow is a game-changer for your meetings. Here are the core concepts to understand:
You can activate Gemini directly in Google Meet to automatically transcribe the entire call and generate a summary.
After the call, all notes and transcripts are automatically saved in a Google Drive folder called “Meetings recordings”.
A critical benefit is that Gemini’s transcript knows who said what. This is essential for creating accurate, custom summaries and assigning action items to the right person.
The pro move is to copy the raw transcript (not the generic summary) and paste it into a custom Gem.
You then train your custom Gem by giving it specific instructions (a system prompt) on exactly how you want your notes formatted (e.g., “list action items by person,” “extract key decisions,” “create a one-paragraph summary”). This gives you perfect, tailored notes every time.
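To make that concrete, here is one hypothetical set of instructions you might save in a “Notes Gem,” wrapped in a small Python template. The exact wording is an assumption; adapt it to your own note-taking style.

```python
# Hypothetical system instructions for a "Meeting Notes" Gem.
# The wording below is illustrative, not a prescribed format.

NOTES_GEM_INSTRUCTIONS = """\
You are a meeting-notes assistant for a Product Manager.
When I paste a raw Google Meet transcript, produce:
1. A one-paragraph summary of the meeting.
2. Key decisions, as bullet points.
3. Action items grouped by owner, using the speaker names in the transcript.
Keep it under 200 words and use plain language."""

def notes_prompt(transcript: str) -> str:
    """Combine the saved instructions with a fresh transcript."""
    return f"{NOTES_GEM_INSTRUCTIONS}\n\nTranscript:\n{transcript}"

print(notes_prompt("Alice: I'll ship the fix Friday. Bob: Agreed."))
```

In practice you’d paste the instructions once into the Gem and only supply the new transcript each time; that reuse is exactly where the time savings come from.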
To-do: Let’s make sure you never lose an action item again.
Turn it on: In your very next Google Meet, click the button to “Start” Gemini note-taking.
Find the folder: After the call, find the “Meetings recordings” folder in your Google Drive and locate the transcript.
Build your “Notes Gem”: Create a new “Gem” (or start a new chat) and give it your custom prompt for meeting notes.
Run the transcript: Copy the full transcript from the Google Doc and paste it into your new Gem.
Compare and iterate: Compare your custom output to Google’s generic summary. Edit your Gem’s instructions to dial it in perfectly.
Quiz:
Module 7: Canvas: Outline, prototype, and quiz
PM problem: Your engineers are talking about “stateless requests” for a new “REST API,” and you’re just nodding along. You need to learn this new technical concept fast to be effective, but reading dry documentation is slow and passive.
AI-first reflex: You stop just reading and start interacting. You use Gemini’s Canvas feature to build a personal, interactive study guide. You highlight terms you don’t know, generate mini-prototypes to see how things work, and quiz yourself, all in one place.
Tutorial:
Key concepts:
It’s an interactive study guide: Gemini Canvas is a powerful tool for learning new skills by letting you build an interactive study guide from a single prompt.
Get in-context explanations: This is the killer feature. You can highlight any text (like “JSON” or “stateless”) in the study guide and ask Gemini to “explain this in more detail” right inside the canvas. The document dynamically updates with the new info.
Learn by doing with prototypes: Canvas can use its code generation to create interactive prototypes (like an API simulator). This lets you understand complex concepts visually by doing instead of just reading.
Instantly test your knowledge: Once you’ve built your guide, you can ask Gemini to create an interactive quiz based on the content it just generated, helping you lock in the knowledge.
Learn on the go: The audio overviews feature lets you transform your entire study guide into a podcast-style summary to listen to on your commute.
To-do: Let’s learn a new skill in 10 minutes.
Pick your topic: Open Gemini and pick a simple concept you need to learn for your job (e.g., “What is ‘Agile methodology’?” or “Explain ‘feature flags’”).
Create the guide: Use the prompt: “I’m new to [Your Topic]. Create a simple study guide for me. I am a Product Manager and not a technical expert.”
Get clarification: Find a term in the response you don’t understand, highlight it, and ask Gemini: “Explain this to me in simple terms.”
Test your knowledge: Ask Gemini: “Now, create a 5-question multiple-choice quiz based on this study guide to test my knowledge.”
Quiz:
Module 8: NotebookLM: Analyze docs, videos, and websites
PM problem: Your project info is scattered. You have 10 PRDs in Google Drive, 5 user research PDFs, 20 competitor website links, and a 1-hour YouTube video of a user interview. You need to find one specific answer (like “What was the main feedback on feature X?”), but it’s buried in one of those 36 sources. Your old reflex: open 36 tabs and start skimming.
AI-first reflex: You build a single source of truth. You create one NotebookLM project and upload all 36 sources. You’re now chatting with your entire project. You ask, “What was the main feedback on feature X?” and it answers with citations, showing you exactly which doc and page the answer came from.
Tutorial:
Key concepts:
It’s source-grounded: This is the key concept. NotebookLM is an AI assistant that only uses the documents you upload to provide answers. It stops hallucinations because it’s not guessing from the whole internet; it’s grounded in your sources.
Upload your entire project: You can create notebooks for different projects and upload your PDFs, Google Drive files, website links, and even YouTube video links. This is perfect for centralizing user research, competitor links, and project specs.
Chat with your docs and get cited answers: You can ask direct questions to all your sources at once. NotebookLM will answer with citations, showing you exactly which document (or video timestamp) the information came from.
Instant summaries, mind maps, and podcasts: With one click, you can get a summary of a 1-hour video, a mind map of a complex PRD, an FAQ, or an audio overview (a podcast) of your research docs for your commute.
To-do: Let’s build your project brain.
Create your notebook: Go to notebooklm.google.com and create your first new notebook (e.g., “[Project Name] - Research”).
Upload your sources: Upload at least 3 different sources: a Google Doc (like a PRD), a website link (like a competitor’s homepage), and a PDF (like a user research report).
Get an instant summary: Click on one of the sources you just uploaded to get an instant summary and list of key topics.
Chat with your project: Use the chat box to ask a specific question that spans all your documents (e.g., “What are the main user pain points mentioned across all sources?”).
Try a power feature: Click “Audio Overview” to generate an instant podcast based on your documents.
Quiz:
Module 9: Miro AI: Brainstorm, diagram, and summarize
PM problem: Your Miro board after a brainstorming session is a glorious mess of sticky notes, screenshots, and disconnected arrows. Now, you have to spend an hour manually sorting, clustering, and turning it all into a structured project plan or competitor analysis doc.
AI-first reflex: You use the AI built into the board. You generate a mind map from a text prompt to start. Then, you select all your messy sticky notes and screenshots and ask Miro AI to “summarize the selected content,” “compare these competitor examples,” or “turn this into an actionable project plan,” and it does, in seconds.
Tutorial:
Key concepts:
AI is built into the board: Miro AI is right on your toolbar. It can generate new content from scratch (Docs, Tables, Diagrams, Mind maps, etc.) without you ever leaving your workflow.
The context-aware superpower: This is its most powerful feature. You can select existing items on your board (sticky notes, images, diagrams) and use them as context for a new prompt.
Seamless workflow: This context-aware ability lets you move from one phase to the next instantly. You can go from a brainstorm (a mind map) to competitive analysis (a document) to an execution plan (a table) all in one continuous flow, using the previous step’s output as the input for the next.
To-do: Let’s turn chaos into clarity.
Generate a diagram: Open a new Miro board, click the “Create with AI” icon, and ask it to generate a Diagram > Mind map on a topic (e.g., “create a mind map of marketing channels for a new product”).
Add your own context: Add a few sticky notes or images to the board (e.g., screenshots of competitor websites).
Use context-aware AI: Select your new mind map AND your sticky notes/images. Open Miro AI and choose Document.
Summarize and analyze: Use the prompt: “Summarize the selected content and compare the ideas in the mind map to the competitor examples.”
Quiz:
Module 10: GitHub Copilot: Suggest, explain, and automate
PM problem: You’re in a sprint planning meeting, and your tech lead says a simple feature request is 5 story points because the legacy code is confusing. Later, you look at a Pull Request (PR) and have no idea what the wall of red and green text actually means. You feel disconnected from the technical work.
AI-first reflex: You understand that your dev team now has an AI pair programmer. You can now ask, “Can you have Copilot explain this confusing code to us in plain English?” You know they can generate tests faster, write documentation instantly, and build features by assembling AI-suggested code, not just typing every line by hand.
Tutorial:
Key concepts: You don’t need to know how to use Copilot, but you must know what it’s capable of.
It’s an AI pair programmer: AI is no longer just a fancy search. It actively suggests and writes entire blocks of code for developers in real-time. This frees them from repetitive tasks to focus on building better, more creative products.
The universal translator (/explain): This is your new best friend. A developer can highlight any confusing block of code and ask Copilot to explain it in plain English. This shatters the barrier between technical and non-technical team members.
Plain English commands: Developers can now edit code using natural language. Just like in the video, they can highlight code and type, “change this to allow ‘r’ for rock”, and the AI does the refactor. This means faster iteration and bug fixes.
Automated quality and documentation: Copilot helps automate related tasks. It can write automated tests to improve quality and write summaries of code changes (commit messages). This means you get clearer, faster updates on what actually got done.
To-do: Your mission is about collaboration, not coding.
Schedule 15 mins with an engineer: Ask an engineer on your team to give you a live demo of how they use Copilot.
Ask for an explanation: Find a recent PR or a piece of code you’re curious about. Ask your engineer to use the /explain command and share the plain-English summary with you.
See a commit written: Ask them to show you how they use Copilot to generate a commit message (a summary of their changes).
Quiz:
Module 11: Cursor: Build, debug, and visually edit
PM problem: The feedback loop is slow. You ask for a simple change (e.g., “make that button yellow”). Your engineer has to find the code, change it, push it, deploy it to a staging site, and then you can finally see it... only to realize you actually meant a different yellow.
AI-first reflex: You understand your dev team can now use vibe coding tools. You can join a screen share, watch them click the button, and tell the AI in plain English, “Make this button yellow”. You see the change instantly in a live preview, allowing you to give feedback in seconds, not hours.
Tutorial:
Key concepts: This tool shows the future of development, where building an app feels more like editing a visual design.
Chat + Live preview: This is the game-changer. AI coding tools now combine the chat with a live browser preview. Developers can talk to their code and see their changes (like a button changing color) happen instantly, dramatically speeding up the build-and-test cycle.
Visual editing with plain English: Developers can now visually edit apps by just clicking on an element (like a button or a title) and telling the AI in plain English what to do.
Faster debugging: The built-in browser has console logs (a list of errors). Developers can feed error messages directly to the AI, which acts as an expert assistant to help find the solution much faster.
Run an AI horse race: Instead of just one AI’s solution, a developer can ask multiple AI models (like GPT-5, Claude 3.5, and Composer) for a solution to the same problem, then pick the fastest or most efficient one.
New habits (“New Agent”): To get the best results, devs are learning to start a new agent (or chat) for each new task. This keeps the AI focused and stops it from getting confused by old, unrelated instructions.
To-do: Your mission is to experience this new feedback loop.
Schedule a 15-min vibe coding session: Ask an engineer on your team to share their screen while using Cursor (or a similar tool).
Suggest a visual edit: Suggest a simple visual change (e.g., “change the text on that button” or “change that title’s color”).
Watch the magic: See if they can use the visual click-and-tell feature to make your change. Watch it happen instantly in the live preview.
Ask about models: Ask them if they ever use multiple models to solve a tricky bug.
Quiz:
Module 12: Figma Make: Prompt, visualize, and refine ideas
PM problem: You have a great idea for a new feature, but it’s hard to explain. Your “back of the napkin” sketch or text description needs to be translated by a designer into a low-fi wireframe, which takes time and can lead to misinterpretations.
AI-first reflex: You stop waiting. You (or your designer) use an AI tool to generate a web interface in minutes from your simple text prompt or even a photo of your sketch. This allows your team to visualize, debate, and collaborate on a tangible prototype almost instantly.
Tutorial:
Key concepts:
3 ways to create: AI can now generate complex interfaces from three simple inputs:
A text prompt: (e.g., “Create a modern web dashboard with 4 KPI cards and a data table”).
An existing Figma file: You can copy/paste an existing design to have the AI iterate on it.
An image: You can upload a simple sketch or a screenshot and have the AI turn it into a functional UI.
Speed and collaboration: This technology massively speeds up the process from “idea” to “usable prototype.” It allows anyone on the team (not just designers) to visualize an idea, making collaboration faster and more concrete.
Mindset > Tool: We’re looking at Figma Make, but this is a bigger trend (v0.dev, Lovable.dev, bolt.new, Google AI Studio, etc.) The specific tool will change, but the AI-first mindset of using AI as a creative partner is what’s important.
Generate, then edit: The AI gets you an 80% complete first draft. From there, your team can use AI-powered editing to refine the design, like changing all fonts or colors at once with a single command.
To-do: Time to visualize an idea.
Generate from a prompt: Open Figma Make (or a similar tool: v0.dev, Lovable.dev, bolt.new, Google AI Studio) and try a prompt: “Create a modern login screen for a mobile app with a logo, email field, password field, and a ‘Sign Up’ button.”
Generate from an image: Take a screenshot of an app you like. Upload it to the AI and use a simple prompt like: “Create this interface.”
Try AI editing: In the generated design, select a text element and use the AI editor to “change all button text to be bold.”
Quiz:
Module 13: AI Agents: Instruct, delegate, and automate
PM problem: You’ve finally mastered using AI chatbots (LLMs) like Gemini. But now, in meetings, your tech lead and leadership team keep using a new term: “AI agents.” They sound completely different, and you’re not sure why. You’re missing the key concept.
AI-first reflex: You can clearly articulate the evolution from a passive chatbot to an autonomous agent. You understand the massive change that happens when the AI stops just following your steps and starts making its own decisions to achieve a goal. This clarity is essential for future roadmapping.
Tutorial:
Key concepts: This video breaks down the 3-level evolution of AI.
Level 1: LLMs, the chatbot: This is what you use today. Standard AI chatbots are passive. They respond to your prompts based on their training data but can’t access your personal, real-time info (like your calendar or email).
Level 2: AI workflows, the recipe: These follow a predefined path set by a human. You are the decision-maker, telling the AI, “First, do this, then do this”.
What is RAG? You’ll hear this acronym constantly. RAG (Retrieval Augmented Generation) is just a fancy name for an AI workflow where the AI “looks something up” (like a library of private documents) before answering.
Level 3: AI agents, the game-changer: This is the massive change. The LLM becomes the decision-maker. You stop giving it steps and start giving it a goal.
The ReAct framework: Most agents use this framework. The AI autonomously decides what to do. It Reasons (makes a plan), Acts (uses tools like Google Search or your calendar), and Iterates (critiques its own work and tries again) until the goal is achieved.
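To make the Level 2 “recipe” concrete, here is a toy sketch of the RAG workflow: look something up first, then hand the retrieved text to the model. Everything here is invented for illustration; `llm` is a stub standing in for a real model call, and the “library” is a two-entry dictionary.

```python
# Toy RAG workflow: retrieve, augment, generate. The llm() function and the
# DOCS "library" are stubs; a real system would call an actual model and
# search a real document store.

DOCS = {
    "pricing": "Our Pro plan costs $20/month.",
    "roadmap": "Dark mode ships in Q3.",
}

def retrieve(question):
    """Naive retrieval: return any doc whose key appears in the question."""
    return [text for key, text in DOCS.items() if key in question.lower()]

def llm(prompt):
    return f"Answer based on: {prompt}"  # stub model call

def rag_answer(question):
    context = " ".join(retrieve(question))                 # 1. Retrieve
    prompt = f"Context: {context}\nQuestion: {question}"   # 2. Augment the prompt
    return llm(prompt)                                     # 3. Generate

print(rag_answer("What is our pricing?"))
```

Note that the human still defines every step here; the AI never decides what to do next. That is exactly the line an agent crosses.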
To-do: This is a conceptual module, so your mission is to apply the concept.
Analyze a task: Think of a common PM task, like “Write a competitor analysis report for our top 3 competitors.”
Write a workflow prompt: Write down the steps you would give an AI to complete this (e.g., “1. Search for competitors. 2. Find their pricing pages. 3. Summarize them in a table...”). That’s a workflow.
Write an agent goal: Now, write the goal you would give an AI Agent (e.g., “Produce a full competitor analysis report for our top 3 competitors, identifying their key strengths, weaknesses, and pricing models.”). The Agent would then decide how to do all the steps itself.
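The agent goal above can be sketched as a ReAct-style loop. This is a minimal, hypothetical skeleton, not a real framework: `llm` is a scripted stub that stands in for the model’s Reason step, and the tool registry holds two fake tools, so only the control flow (Reason, Act, Iterate until done) is real.

```python
# Minimal ReAct-style agent loop. llm() is a scripted stub standing in for a
# real model; it "reasons" by choosing the next action from the history.

def llm(goal, history):
    """Hypothetical model: plans the next step from the goal and history."""
    if not history:
        return ("act", "search", "top 3 competitors")        # Reason: plan a search
    if len(history) == 1:
        return ("act", "summarize", history[0][2])           # Iterate: act on the result
    return ("finish", "Competitor report: " + history[-1][2], None)

TOOLS = {
    "search": lambda q: f"results for '{q}'",
    "summarize": lambda text: f"summary of {text}",
}

def run_agent(goal):
    history = []
    while True:                                  # Iterate until the goal is achieved
        kind, action, arg = llm(goal, history)
        if kind == "finish":
            return action
        history.append((kind, action, TOOLS[action](arg)))   # Act: call the chosen tool

print(run_agent("Produce a competitor analysis report"))
```

The key difference from the workflow version: the loop never contains your steps, only the goal. The model decides what to do at each turn.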
Quiz:
Module 14: MCP: Give your LLM new skills
PM problem: Your AI is stuck in a box. It doesn’t know what’s on your calendar, it can’t read your unread Slack messages, and it can’t pull data from your team’s Jira board. It’s a generalist, not a true personal assistant that can actually do things for you.
AI-first reflex: You understand that for your AI to be truly useful, it needs a secure key to unlock your other apps. You know this key is a standard called MCP (Model Context Protocol). You’re now thinking about which trusted servers you can use to build these connections, so your AI can finally go from a generalist to a specialist that actually works for you.
Tutorial:
Key concepts:
Your AI is in a box: By default, LLMs can’t see your personal, real-time data (like your calendar, a database, or a specific website).
MCP is the key: MCP (Model Context Protocol) is the technology that lets your AI get out of its box and connect to your external tools and data.
It’s a universal language: This is the most important part. MCP is a standardized protocol. It allows any AI app (the client, like Cursor or a chatbot) to talk to any MCP server (the middle-man that holds the keys to all your tools).
The goal is personalization: This is the tech that will finally let your AI check your calendar, summarize your Slack, or pull data from your team’s tools.
Security is the whole point: This system involves giving a server your personal API keys and tokens. The entire protocol is built around security, which is why it’s critical to only use trusted servers.
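To make “universal language” concrete, here is a sketch of what a message on the wire roughly looks like. MCP is built on JSON-RPC 2.0: the client (your AI app) asks the server to run one of the tools it exposes. The tool name and its arguments below are invented examples, not part of any real server.

```python
import json

# Sketch of an MCP-style JSON-RPC request. The "read_channel" tool and its
# arguments are hypothetical; a real MCP server advertises its own tools.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",                  # invoke a tool the server exposes
    "params": {
        "name": "read_channel",              # hypothetical Slack-reading tool
        "arguments": {"channel": "#product-feedback", "hours": 24},
    },
}
print(json.dumps(request, indent=2))
```

Because every client and server speaks this same shape, any AI app can use any MCP server without custom integration code.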
To-do: This is conceptual, but you can map out the logic.
Brainstorm your connections: List the top 3 apps you wish your AI could connect to (e.g., Google Calendar, Slack, Jira, Figma).
Find a server (conceptually): Look at a trusted service that acts as an MCP server (like the Zapier example in the video).
Map the flow: Think about a prompt you would use if it was connected, like: “Read my #product-feedback channel in Slack for the last 24 hours, summarize the top 3 themes, and create a new ‘AI Feedback’ doc in my Google Drive.” This is the future MCP enables.
Quiz:
Module 15: A2A: How agents talk and work together
PM problem: You’re starting to see the future. You have an AI Agent (Module 13) that can use MCP (Module 14) to access your own apps. But it’s still an island. It can’t talk to the Salesforce Agent at your company, the Jira Agent from Atlassian, or the Slack Agent. How do you build a truly powerful, end-to-end workflow (e.g., “Summarize feedback from Slack, find the customer in Salesforce, and create a ticket in Jira”) if all the best agents can’t talk to each other?
AI-first reflex: You realize the walled garden problem needs a common language. You understand the A2A (Agent-to-Agent) protocol is that open-source standard. You’re no longer just building a single agent; you’re thinking about an ecosystem. You know your agent will publish an agent card (like a public API doc) to advertise its skills, securely authenticate, and collaborate, all while keeping its internal logic private.
Tutorial:
Key concepts: This is the final piece of the puzzle, explaining how a mesh of agents will work.
The common language standard: The A2A (Agent-to-Agent) protocol is an open-source standard (housed by the Linux Foundation). It provides a common language so agents from different companies (e.g., a flight agent and a hotel agent) can work together to solve complex tasks.
The 3-stage handshake: It works in three main stages:
Discovery: An agent finds another agent and its skills by reading its public agent card (a JSON file listing its capabilities).
Authentication: They securely verify each other’s identity.
Communication: They exchange tasks and information using common web standards (JSON-RPC over HTTPS).
Privacy is the key benefit: Agents are treated as opaque. This is critical for enterprise use. It means your agent can collaborate with an external agent without revealing its internal logic, proprietary tools, or private data.
Handles long-running tasks: For tasks that take time (like waiting for human input), the protocol streams status updates using SSE (Server-Sent Events). This means your client agent isn’t just left with a loading spinner.
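Here is a simplified sketch of an agent card: the public JSON file another agent reads during the Discovery stage. The field names and values below are illustrative and do not follow the exact A2A schema.

```python
import json

# Illustrative agent card: what an agent publishes so others can discover
# its skills. Field names are simplified, not the exact A2A schema.

agent_card = {
    "name": "feedback-agent",
    "description": "Summarizes user feedback and files related tickets",
    "url": "https://agents.example.com/feedback",   # hypothetical endpoint
    "skills": [
        {"id": "summarize_user_feedback", "description": "Condense raw feedback into themes"},
        {"id": "identify_sentiment", "description": "Tag feedback positive or negative"},
    ],
    # Note what is NOT here: prompts, internal tools, private data.
    # The agent stays opaque; only its advertised skills are public.
}
print(json.dumps(agent_card, indent=2))
```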
To-do: This is a thought experiment to map your agent’s future.
Map your agent ecosystem: Think of a complex workflow you’d want to automate (e.g., “Analyze user feedback and file bugs”). List the other agents your agent would need to talk to (e.g., Slack agent, Jira agent, Google Calendar agent).
Define your agent card: What skills would your agent advertise to the world? List 2-3 (e.g., summarize_user_feedback, identify_sentiment, find_related_tickets).
Explain the opaque benefit: In one sentence, explain to a stakeholder why it’s a good thing that your agent is opaque when talking to the Jira agent. (Hint: It protects your proprietary logic and company data).
Quiz:
Your journey begins now!
Congratulations! You’ve just completed a 3-hour crash course that took you from the basics of prompting to the future of collaborative AI agents. You now have the full vocabulary and mental models to lead your product teams with confidence.
But this isn’t the finish line. This is the starting line.
This playbook was never just about the videos or the quizzes; it was about building a new reflex. The real learning begins tomorrow, when you start applying these skills to your own work, your own problems, and your own ideas. Your goal is to make the AI-first mindset second nature.
Your mission
Apply one thing: Don’t try to boil the ocean. Pick one task this week (a document, a competitor analysis, a meeting summary) and force yourself to use AI for it. Start small.
Keep practicing: The AI-first mindset is a muscle. You have to use it every day, even for just 10 minutes, to make it strong.
Be curious: The tools you saw today will be different in six months. The principles will not. Stay curious, keep exploring, and subscribe to a newsletter like TLDR AI to stay current in 5 minutes a day.
Adopting an AI-first mindset is the essential first step: force yourself to use these tools now. From there, you can progress to the next stage of using AI with discernment: knowing when it’s relevant, understanding its nuances, and picking the right tool for the job.
You are not alone
Don’t be afraid to get stuck. Everyone does. When you find a prompt or a trick that saves you 20 minutes, post it in the comments below! Your win will help 10 other people.
Likewise, if you get a confusing result, ask for help in the comments. We can all learn from each other.
By adopting this mindset, you are not just making your own work faster or better. You are actively helping build the next version of our products and our companies, ones that are faster, smarter, and more innovative.
Thank you for being a part of this. Go build!

