Ep 48: From GPTs to AGI: What’s Next?

Listen to the Podcast Episode for a deeper dive


AI for Interior Designers™ Podcast

We have moved past the "what are these tools?" phase. The next shift — from siloed, prompt-dependent AI to fully autonomous agentic systems — is arriving faster than most designers realize. Here is what it means and what to do now.

This blog was written using AI as a recap from the recording, then edited by the author for accuracy and details.
Key Takeaways
  • We are in the gap between useful-but-disconnected AI tools and fully integrated agentic systems. Right now, powerful tools exist but do not work together. What is coming next eliminates the manual handoffs between them.
  • Agentic AI acts like a smart teammate — it understands the end goal, breaks it into steps, chooses the right tools, takes action, and reviews its own output. The designer stays the decision-maker. The logistics get handled.
  • AGI is further out but conceptually important: AI that can think and adapt across contexts rather than completing one defined task. The grocery list example makes it concrete — find recipes, build a meal plan, generate an Instacart order — applied to design project management.
  • Trust, privacy, and ethics are not afterthoughts. As these systems become more autonomous and have access to more client data, the questions around secure implementation, data handling, and ethical use become more urgent, not less.
  • We are the first generation building the norms for this. There is no established playbook for how designers navigate AI at this level. The designers who stay curious, share what they learn, and stay grounded in their values are the ones who will help shape what comes next.

The Three Phases of AI — Where We Are and Where We Are Going

Jenna returned from several weeks on the road — conferences, speaking, disconnecting from daily AI routines — and came back to what felt like a different landscape. That experience of coming back and finding everything shifted is a useful frame for understanding the pace of change. Not slow evolution. Actual discontinuities, visible in weeks.

The clearest way to understand the trajectory is through three distinct phases, each with meaningfully different implications for how designers work.

Where We Are Now
Powerful Tools, Disconnected Systems
ChatGPT, Midjourney, Visual Electric, Zapier, Make — each individually capable, but not natively connected. Manual handoffs, formatting, and troubleshooting eat the time that the tools theoretically save. It works, but the friction is real.
What's Arriving Next
Agentic AI — The Seamless Teammate
AI that understands a goal, breaks it into steps, chooses the right tools, executes independently, reviews its output, and hands you the result. No prompts for each step. No manual file movement. You stay the decision-maker — the logistics are handled.
Further on the Horizon
AGI — Context-Adaptive Intelligence
Artificial General Intelligence — AI that thinks and adapts across a wide range of contexts rather than completing one defined task. Give it a project goal and it manages the full scope: research, sourcing, communication, scheduling, delivery. Early glimpses are already visible in everyday tools.

The Current Patchwork — Why Great Tools Still Create Friction

Most designers working with AI right now are running a collection of genuinely useful tools that do not talk to each other. The output from one requires manual processing before it can be used as the input to another. That handoff work — copying, reformatting, uploading, checking — is invisible from the outside but significant in practice.

  • ChatGPT: Ideation, content drafting, proposals, custom GPTs for client onboarding
  • Midjourney / Firefly / Visual Electric: Concept visuals, mood boards, branded imagery, generated separately and assembled manually
  • Zapier / Make / Lindy: Workflow automation connecting apps; powerful but fragile, breaks without warning, requires maintenance
  • Project tools (MyDoma, Studio Designer, Canva): Each excellent within its domain, but each requires manual data entry from outside sources

The honest assessment: this setup works, and the tools are genuinely powerful. But it is not seamless. Every tool transition is a potential friction point, and the time spent on transitions is time not spent designing. That is exactly the problem agentic AI is being built to solve.

What Agentic AI Actually Looks Like in a Design Practice

The scenario Jenna describes: you walk into a client meeting. An AI assistant listens, picks up on "coastal kitchen" and "natural wood cabinetry," and begins pulling relevant visual ideas in real time. By the time the meeting wraps, it has already started building a sourcing list, generating a proposal using your pricing structure, and formatting it into your branded presentation template. No prompts between steps. No switching apps. No friction.

This is not speculative — it is a description of what Jenna is seeing in conversations with teams at Google and other companies actively building these systems. The timeline is closer than most designers currently expect.

The key distinction from current AI tools: agentic systems do not wait for a command at each step. They understand the outcome, determine the path, execute independently, self-review, and deliver. The human stays in the loop for decisions, but the logistics happen without being managed manually.

Learns Your Preferences
Vendors, palettes, design styles, pricing structures, communication tone — an agentic system builds a working model of how you work and applies it consistently without re-explaining.
Creates Branded Deliverables
Mood boards, spec sheets, proposals, and presentations auto-formatted to your templates and brand standards — triggered by meeting content, not manual input.
Writes in Your Voice
Client emails, vendor communications, and follow-ups drafted in your established tone — reviewed by you before sending, but not written from scratch.
Manages Sourcing and Pricing
Tracks vendor updates, pricing changes, and availability without manual checking — surfaces relevant changes in the context of active projects.

"You're still the decision-maker. But all the time-consuming logistics? They're handled."

— Jenna Gaidusek

AGI — What It Is and Why the Grocery List Example Matters

Agentic AI handles complex multi-step tasks within a defined domain. AGI — Artificial General Intelligence — goes further: it is AI that can think and adapt across contexts the way a human does, without needing the task to be pre-defined within its training.

The grocery list example makes this concrete: give a current AI tool a grocery list and it can help you check prices. Give the same task to AGI and it finds relevant recipes, generates a weekly meal plan, identifies what you already have, builds an Instacart order, and organizes it for checkout — all from the original input, with no additional prompting for each step.

Applied to interior design: give AGI a project brief and it could research comparable projects, identify relevant sustainable materials, cross-reference building code requirements, generate a preliminary space plan, draft the client questionnaire, schedule the site visit, and prepare the onboarding package — as a connected sequence, not as individual tasks.

AGI is further out than agentic AI. Early glimpses are already appearing in everyday consumer tools. The meaningful arrival in professional design applications is likely still a few years away — but the trajectory is visible now, and understanding where it is headed shapes how to build skills and systems today.

Trust, Privacy, and the Ethics Questions That Get More Urgent, Not Less

The more autonomous and capable these systems become, the more consequential the questions around how they handle sensitive information. Interior design involves high-end clients, proprietary business information, personal preferences, and detailed knowledge of how specific people live — all of which are exactly the kind of data that an agentic system would need to access to function well.

Platforms like Google's Agent Space are building with secure, encrypted systems — but as Jenna notes, that promise cannot be fully evaluated until these systems are tested at scale in real-world conditions. The track record does not yet exist. That makes thoughtful implementation particularly important: which information gets fed to which systems, what access levels are granted, and how client data is handled all require intentional decisions, not default trust.

"As these tools become more capable, issues around privacy, data security, and ethical use must remain front and center. These are not conversations for later. They are for now."

— Jenna Gaidusek

Jenna dedicated the preceding episode entirely to ethical AI in design — a signal that this is not a footnote to the tools conversation but a parallel track that deserves equal attention. The designers who engage with the ethics questions now are the ones best positioned to use these systems responsibly when they arrive at scale.

Frequently Asked Questions
How is agentic AI different from the AI tools designers use now?
Current AI tools respond to prompts: you give an instruction, they produce an output, and you decide what to do next. Agentic AI understands a goal, determines the steps needed to reach it, selects the appropriate tools for each step, executes those steps independently, reviews its own output for quality, and delivers the completed result. The difference is like the gap between an assistant who does whatever you ask in the moment and one you can hand an entire project to, trusting them to handle the execution while you focus on other things. The human stays in the decision-making role, but the coordination and logistics happen without step-by-step supervision.

When will agentic AI actually be available to design businesses?
Jenna's assessment at the time of this recording: sooner than most designers expect, based on direct conversations with teams building these systems. Early versions are already appearing in consumer and enterprise tools. Google's Agent Space, which integrates NotebookLM and other agents, had either just launched or was imminent at the time of recording. Mainstream availability of reliable, professional-grade agentic systems for design businesses is likely within the next one to two years, though the full capability Jenna describes — attending meetings, generating proposals, managing sourcing — will arrive incrementally rather than all at once.

Can designers prepare for agentic AI now?
Yes — specifically in two ways. First, document your preferences, processes, and standards in forms that AI systems can learn from: written brand voice guides, proposal templates, vendor preferences, pricing structures, and communication style guidelines. These are the inputs that will allow an agentic system to work in your style rather than generically. Second, build familiarity with current AI tools even if the process still requires manual steps. The muscle memory and judgment you develop now — knowing what good output looks like, how to evaluate AI suggestions, when to override — is exactly what will make you effective when the systems become more autonomous.

What is Google Agent Space, and why does it matter for designers?
Google Agent Space is Google's platform for building and running AI agents — essentially custom-configured agentic AI assistants that can access your Google suite (Drive, Gmail, Calendar, Docs) and external tools including Notebook LM. For designers, the significance is the integration with tools already in use: an agent that can access your project documents, emails, calendar, and notes and execute multi-step tasks across all of them without manual coordination between apps. At the time of recording, Agent Space had recently launched for general access. Its capabilities and reliability at professional scale were still being established.

What privacy and security questions should designers ask before using agentic AI?
With more autonomy comes more access — and more access means more risk if the system is compromised or misused. Key questions to ask before deploying any agentic system in a client-facing capacity: What data does this system need to access, and what does it do with that data after the task is complete? Is client information encrypted in transit and at rest? Does the system retain conversation or project data for training purposes — and can that be opted out? How are access permissions scoped — does the agent have access to everything, or only what it needs? The answers to these questions should inform what you choose to automate and what you keep manual, regardless of how convenient full automation would be.
Stay Ahead of the Shift
The DAIly — Weekly AI Updates Built for Designers
The AI landscape changes fast enough that six weeks away feels like a new world. The DAIly keeps you current with practical, bite-sized updates on the tools and shifts that matter for your design practice — without the noise.



Previous
Ep 49: AI in the Classroom with Emily Allen Burroughs from DSA

Next
Ep 47: Is AI Saving Us or Screwing Us? Let's discuss