Ep 52: AI in Interior Design: A Candid Conversation with Sarah Daniele

Listen to the Podcast Episode for a deeper dive

AI for Interior Designers™ Podcast

Sarah Daniele flips the script and interviews Jenna — a peer-to-peer conversation between two people who have been building design technology together for years, talking honestly about what the tools actually do and where it is all going.

This blog was written using AI as a recap from the recording, then edited by the author for accuracy and details.
Key Takeaways
  • AI imagery is visual communication, not a replacement for rendering. ChatGPT with DALL-E generates quick mock-ups that speed up client feedback loops. It cannot replace a measured, detailed render — and that matters for construction work.
  • Visual Electric lets you build a branded style that stays consistent across every output. By uploading your logo and reference images, you create a preset style that applies your brand's color and feeling automatically — so you are not re-explaining your aesthetic every time.
  • Notetakers are the most underutilized category in designer workflows. Recording meetings and running the transcript through an LLM to draft proposals is one of the highest-value, lowest-effort AI applications available right now.
  • The Typeform → ChatGPT → MyDoma onboarding flow is a concrete, repeatable system. An onboarding questionnaire workflow refined over seven years, now accelerated by AI, produces a client profile that launches a project in MyDoma with all rooms pre-configured in minutes.
  • Google Notebook LM is Sarah's favorite underrated tool. Feeding multiple sources — videos, PDFs, transcripts, links — into one notebook and having it generate an audio podcast-style summary is a genuinely different way to learn and retain complex information.
Episode Guest
Sarah Daniele
Principal of Designer Solutions, MyDoma & Studio Designer

Sarah Daniele is the co-founder of MyDoma — the design project management and visualization platform she and her husband built to solve the problems they ran into while running their own design firm. MyDoma acquired Jenna's eDesign Tribe and eDesignU in 2022, and was subsequently acquired by Studio Designer. Sarah now serves as Principal of Designer Solutions across both platforms, working with thousands of design firms on workflow and technology adoption.

MyDoma · Studio Designer · Design Workflow · Product Development · Interior Design

Visual Communication: What Is Actually Working Right Now

Sarah opened by asking Jenna for her top visual tools — and the answer is clear-eyed about what AI imagery can and cannot do. The category Jenna keeps coming back to is visual communication: getting ideas on screen fast enough to get genuine client feedback before investing hours in a polished render.

The key distinction she makes throughout this episode: AI-generated images are for ideation and communication. They are not for construction documents. They are not accurate enough at scale for structural decisions. Anyone claiming otherwise "has no idea what goes into a project." That line matters — because it protects the designer's role while making the case for the tool.

"Visual communication has never been more real with AI right now. It is not replacing rendering — it is cutting down on revisions you would have to make after a render that took 13 hours."

— Jenna Gaidusek

Why Visual Electric's Style System Changes the Game for Designers

Sarah pushed on Visual Electric specifically — how does it differ from just using ChatGPT for imagery? The answer is the style system. In ChatGPT, you have to re-explain your aesthetic every conversation. It is chat-based: you describe what you want, it generates, you describe adjustments, it regenerates. There is no persistent style memory.

Visual Electric works more like Photoshop on a canvas — you work directly on an image rather than through a text thread. More importantly, you can save a brand style: upload your logo, add reference images that represent your aesthetic, and the tool generates a style description that you can apply with one click to any future output. Jenna's saved style is described as "futuristic nod to nostalgia" — blending her brand colors into a consistent visual language that is immediately recognizable across everything she posts.

For client project work, this means you could create a project-specific style based on the client's inspiration images and apply it consistently throughout the ideation phase. Instead of generating from scratch each time, you are generating within a defined aesthetic territory that matches how the client found you in the first place.

The style system also solves the "generic AI aesthetic" problem. Rather than getting outputs that look like everyone else's AI content, your Visual Electric generations look like yours — because the style is built from your actual portfolio and brand identity.

Notetakers and the Meeting-to-Proposal Workflow

Sarah uses Zoom's AI Companion. Jenna uses Loom — which now integrates directly with Zoom to record meetings, automatically generate timestamped outlines, and produce action items in her video library. For in-person meetings and job site walkthroughs, she uses Plaud: a physical device that records two-way conversations and generates clean, searchable transcripts.

But the notetaker is just the capture layer. The workflow that follows is where the value really compounds:

1. Record the initial client meeting — virtual via Zoom/Loom, in-person via Plaud or the native notes transcriber on iOS or Android.
2. Copy the full transcript into ChatGPT or Perplexity, along with a sample of your proposal format or a custom GPT trained on your template.
3. Prompt: "Here's the transcript and here's my proposal format. Write the proposal for this project." The output includes all client details, stated preferences, budget references, and scope items — nothing lost from the meeting.
4. Extract the style language from the transcript — how they described the feeling they want, their must-haves and dislikes — and use those words as a prompt in ChatGPT or Visual Electric to generate inspiration imagery for the proposal.
5. Drop everything into your proposal template in Canva and format for delivery. The text came from your meeting. The images came from the client's own words. Nothing was invented.

Jenna's observation: the post-meeting proposal process used to take several hours — gathering notes, making sure nothing was missing, writing to the client's specific requests, formatting. Now the meeting is the work. The proposal is the output of a 15-minute AI processing session.
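If you run this workflow often, the prompt-assembly step can be scripted instead of pasted by hand. This is a minimal illustrative Python sketch, not anything from the episode: the function name and sample text are invented, and the actual call to ChatGPT, Perplexity, or a custom GPT is left to whichever tool you use.

```python
# Sketch: assemble the "transcript + proposal format" prompt from step 3.
# Everything here is placeholder text for illustration.

def build_proposal_prompt(transcript: str, proposal_template: str) -> str:
    """Combine a meeting transcript with a proposal template into one
    prompt, ready to paste into your LLM of choice."""
    return (
        "Here is the transcript of my client meeting and here is my "
        "proposal format. Write the proposal for this project, keeping "
        "every client detail, stated preference, budget reference, and "
        "scope item from the meeting.\n\n"
        "--- PROPOSAL FORMAT ---\n"
        f"{proposal_template}\n\n"
        "--- MEETING TRANSCRIPT ---\n"
        f"{transcript}"
    )

# Example usage with stand-in text:
prompt = build_proposal_prompt(
    transcript="Client wants a warm, coastal living room under $15k...",
    proposal_template="1. Project overview\n2. Scope\n3. Investment\n4. Timeline",
)
print(prompt[:80])
```

The value of wrapping this in a function is consistency: every proposal prompt carries the same instructions, so the output format stops drifting from meeting to meeting.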

The Typeform → ChatGPT → MyDoma Onboarding Flow

Jenna has been using a Typeform onboarding questionnaire for seven years — originally built because she has always worked virtually and needed a way to get comprehensive project information without the friction of a phone call before the relationship was established. The questionnaire now includes AI-generated flatlays showing different room aesthetics so clients can point to images rather than trying to describe styles in words they may not know.

After the questionnaire, the workflow connects directly to project setup in MyDoma through its AI assistant, Max:

1. Typeform questionnaire download. After a client submits, export the CSV file with all their answers.
2. Combine CSV + initial call transcript in ChatGPT. Prompt: "Create a client project profile — project details, style preferences, must-haves, things they dislike." The output is a concise brief that captures both what they wrote and what they said.
3. Shorten to a brief summary with all rooms listed comma-separated. This is the exact format Max needs to set up a project in MyDoma.
4. Launch the project in MyDoma via Max. Because the rooms are comma-separated, Max creates all the product views (shopping lists) for each room automatically — a setup process that used to take up to 20 minutes now takes seconds.

"That comma separation — it sounds like nothing, but it saves half an hour on a big project because Max reads it like a spreadsheet and sets up every single room view instantly."

— Jenna Gaidusek
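For the curious, the comma-separation step is easy to automate. A minimal sketch in Python, assuming a Typeform CSV export with a multi-select rooms question — the column name and sample data below are invented, so match them to your actual questionnaire export.

```python
import csv
import io

def rooms_brief(csv_text: str,
                rooms_column: str = "Which rooms are in scope?") -> str:
    """Read the first response row of a questionnaire CSV export and
    return the rooms as one comma-separated line — the spreadsheet-like
    format described above for project setup."""
    reader = csv.DictReader(io.StringIO(csv_text))
    row = next(reader)
    # Multi-select answers are often semicolon-delimited; normalize to commas.
    raw = row[rooms_column].replace(";", ",")
    rooms = [r.strip() for r in raw.split(",") if r.strip()]
    return ", ".join(rooms)

# Example with an invented two-column export:
sample = (
    "Name,Which rooms are in scope?\n"
    "Alex,\"Living room; Primary bedroom; Kitchen\"\n"
)
print(rooms_brief(sample))  # Living room, Primary bedroom, Kitchen
```

The normalization matters: a stray semicolon or trailing space in the room list can read as one malformed room name instead of three clean ones, which defeats the point of the spreadsheet-style parsing.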

Google Notebook LM — Sarah's Favorite Tool Nobody Is Talking About Enough

Sarah brought up Google Notebook LM as one of her most-used tools, specifically for learning. The concept: instead of Googling one question and getting one answer, you feed multiple sources on a topic — YouTube videos, PDFs, transcripts, website links — into a single notebook, and then ask questions or request summaries across all of it simultaneously.

For designers specifically, the applications are wide. Put every document, transcript, and note from a client project into one notebook and search across all of it in conversation. Ask "what did we say about the pendant placement?" and get an answer that references specific points from specific meetings. Put building code updates, permit requirements, and contractor communications into a notebook and have it explain what changed and what you need to know.

Sarah's favorite feature: the audio summary. Notebook LM can generate a podcast-style dialogue between two AI voices summarizing the content of your notebook. If you are more of an auditory learner, this is a radically different way to absorb complex information — and the results are significantly better than most AI-generated audio because the source material is your actual documents, not a hallucinated summary.

Find it at: Search "Google Notebook LM" — it is a Google product accessible with any standard Google account. Free tier available.

Frequently Asked Questions

Can AI-generated imagery replace rendering or design software?
No. AI imagery is for visual communication and conceptual ideation — not for construction documents, accurate floor plans, or structural design work. Anyone claiming otherwise does not understand what actually goes into a professional design project. ChatGPT's o3 model, given a lot of dimensional information, can produce a reasonable-looking image, but the accuracy required for real-world execution is not there yet. Jenna estimates we are about a year away from meaningful progress on scale accuracy. The more likely near-term development is AI infused into existing design programs (like AutoCAD or Chief Architect) to speed up specific tasks, rather than a replacement for those platforms entirely.

How is Visual Electric different from generating images in ChatGPT?
ChatGPT is conversation-based: you describe what you want in a chat thread, iterate through text, and the image changes based on your descriptions. It is good for quick iterations but lacks persistent style settings — you re-explain your aesthetic every session. Visual Electric works more like a canvas tool, where you interact directly with the image rather than describing changes in text. Its key advantage for designers is the saved style system: you upload your brand assets once, and every future generation automatically applies your aesthetic without re-prompting. It also aggregates multiple AI image models, so you can switch between generators while staying in one interface.

What should designers use to capture notes at in-person meetings and site walkthroughs?
Jenna uses Plaud — a physical recorder that can be held in your hand or placed on a table, records two-way conversations, and generates searchable transcripts. For designers who do not want a dedicated device, both iOS and Android have built-in transcription tools in the native notes app. Start recording from a widget on your home screen, speak your notes as you walk through a space, and the transcript is waiting when you get back to your desk. Always disclose to clients when you are recording — both for ethical reasons and because in many jurisdictions it is legally required.

How does Mazing XR turn product photos into 3D models?
Upload one to three white-background JPEG images of a product — the same type of silo photography vendors provide — and Mazing XR generates a downloadable GLB (3D model) file along with an AR model at the same time. The GLB file imports directly into MyDoma's visualizer without conversion. For other platforms, free online file converters can handle the format change. Accuracy is approximate — scale may come in off, particularly if you do not specify dimensions — but the model is adjustable in your visualizer. Jenna notes that even approximate accuracy is a significant improvement over spending hours hand-modeling in SketchUp.

What is Google Notebook LM and why should designers care?
Google Notebook LM is a research and learning tool that lets you upload multiple sources — YouTube video links, PDFs, transcripts, website URLs — into a single "notebook" and then ask questions across all of them or request summaries. For designers, this means you can put every document from a client project into one notebook and search across all of it in conversation; feed building code updates and contractor documents into a notebook and ask it to explain what changed; or compile research on a design topic from multiple sources and have it synthesized rather than reading each individually. Sarah's favorite feature is the audio summary — Notebook LM can generate a podcast-format dialogue between two AI voices summarizing your notebook content. Accessible free with any Google account.
Connect with Sarah
MyDoma & Studio Designer — The Design Workflow Platforms Jenna Uses
Sarah and her team work directly with thousands of design firms on workflow and technology. MyDoma's Max AI assistant is the one Jenna uses to launch client projects in seconds. Reach out to Sarah directly at sarah.daniele@studiodesigner.com.


Previous

Ep 53: Connecting the Dots through AI Community Conversations with Julia Reinert

Next

Ep 51: Simple Ways to Start Implementing AI and What’s Coming Next