Ep 25: Is AI Skewing Our Perception of Reality?

Listen to the Podcast Episode for a deeper dive

AI for Interior Designers™ Podcast

AI-generated video is here — Luma AI, Runway, Sora — and it is producing visuals that blur the line between what is real and what is rendered. For interior designers using these tools in client presentations, that blurring raises important questions about expectation-setting, transparency, and professional integrity.

This blog was written using AI as a recap from the recording, then edited by the author for accuracy and details.
Key Takeaways
  • AI-generated video tools — Luma AI, Runway AI, and OpenAI's Sora — are producing increasingly photorealistic visuals that are genuinely difficult to distinguish from real footage. The capability has arrived faster than most people expected.
  • For designers using AI-generated visuals in client presentations, the blurring of real and rendered creates a real risk: clients who cannot distinguish the two may form expectations that no physical implementation can meet, damaging the trust relationship when the built space does not match the AI perfection.
  • Transparency about AI-generated content is both an ethical obligation and a practical trust-building practice. Labeling AI-generated visuals as such, and setting explicit expectations about what real-world implementation will and will not match, protects the client relationship rather than risking it.
  • The same technology that creates concern — hyper-realistic AI rendering — is also genuinely powerful for design communication. The goal is not to avoid these tools but to use them with the transparency and expectation-setting that responsible professional practice requires.
  • Designers who stay current with AI video tools and understand their capabilities are in the best position to use them ethically and to advise clients who encounter AI-generated content in other contexts about what they are actually seeing.

The AI Video Tools That Are Changing What "Real" Looks Like

The image generation conversation has been running for two years. The video generation conversation is newer — and the pace of improvement has been faster than most anticipated. Luma AI, Runway, and OpenAI's Sora represent the current frontier of what AI can produce in motion.

The Specific Problem This Creates for Design Practice

Realistic 3D renderings have been part of design practice for years — and the expectation management challenge they create is not new. Clients have always needed to be told that a rendering is a visual approximation, not a guarantee. The difference with AI-generated video is the degree of realism: when a client watches a Luma AI walkthrough of their future kitchen, the visual quality exceeds what most designers could produce with traditional rendering software, at a fraction of the time and cost.

The gap between that visual and the physical reality of the built space is real and unavoidable. Materials behave differently in natural light than in rendered light. Construction tolerances are not pixel-perfect. The specific product specified may arrive differently than shown. Real life is not photorealistic rendering — and no amount of AI sophistication changes that.

That gap creates four distinct risks for design practice:

Unrealistic client expectations. When AI-generated visuals look more perfect than any physical implementation can be, clients may form expectations that no execution can meet — leading to disappointment even when the design outcome is objectively excellent.
Reality/fiction blurring. As AI video becomes more prevalent, clients may encounter AI-generated content in advertising, social media, and presentations without knowing it — normalizing a visual standard that is not achievable in the physical world.
Trust erosion if not disclosed. A client who realizes that a presentation visual was AI-generated — after forming expectations based on it — may feel misled, even if the designer's intent was simply to communicate the design direction effectively.
Industry-wide perception distortion. As AI-generated design imagery proliferates across social media and marketing, clients may develop reference points for "great design" that are based on AI renderings rather than real spaces — making the physical world increasingly feel inadequate by comparison.

"When AI-generated visuals look almost too perfect, we run the risk of setting unrealistic expectations for clients. No matter how talented we are, real-life implementation rarely matches the flawlessness of an AI rendering."

— Jenna Gaidusek

How to Use These Tools Responsibly — The Ethical Practice Layer

The goal is not to avoid AI video tools — they are genuinely powerful for design communication and the designers who use them well will have a real competitive advantage in presentation quality. The goal is to use them in ways that preserve rather than undermine the client trust relationship.

Label AI-generated visuals explicitly. In any client presentation, mark AI-generated images and videos clearly — "AI-generated concept visualization" or "AI rendering of proposed design." This sets the right expectation before the client forms one on their own.
Set expectations about the rendering-to-reality gap. Have an explicit conversation about how close the physical implementation will come to the AI-generated visual — what will match closely and what will differ. This conversation protects the relationship when the built space is finished.
Use AI visuals for direction, not specification. AI-generated visuals are best used to communicate aesthetic direction and spatial feel — not as technical specifications for materials, finishes, or exact product placement. Keep the technical documentation in professional design tools.
Educate clients about what they are seeing. Many clients will encounter AI-generated content in other contexts — advertising, social media, real estate listings — without knowing it. Being the professional who explains the technology and its implications positions you as a trusted advisor, not just a visual producer.
Stay current as the technology evolves. The capabilities of AI video tools are changing rapidly. Designers who stay informed can assess new tools accurately, use them appropriately, and have informed conversations with clients about what AI-generated content can and cannot represent.

The Genuine Upside — Why These Tools Are Worth Using Responsibly

The concerns are real and worth taking seriously. So is the upside. AI video tools represent a meaningful capability expansion for how designers communicate design intent — and the designers who use them well will produce client experiences that were simply not achievable with previous tools.

A Luma AI walkthrough of a proposed kitchen renovation — labeled clearly as AI-generated concept visualization — gives a client an immersive, moving sense of how the space will feel that no static rendering can replicate. A Runway-edited video that transitions from the current state of a room to the proposed design is a more powerful sales tool than any static before/after comparison. These are genuine communication improvements, not just visual gimmicks.

The Toys R Us commercial created with OpenAI's Sora — referenced in this episode — is a useful benchmark. It demonstrates what AI-generated video can produce at the commercial production level: footage that is visually indistinguishable from traditionally produced advertising. That same technology, applied to a residential design walkthrough, produces a client experience that feels premium and considered. The tool is powerful. The responsibility for how it is used remains entirely with the designer.

Toys R Us + OpenAI Sora AI Commercial — the example Jenna references in this episode:

Watch on YouTube ↗ — a useful benchmark for understanding what AI video generation can currently produce at commercial quality.

Frequently Asked Questions
How do designers actually use Luma AI and Runway in a presentation workflow?

The basic workflow for a design presentation: generate a still rendering of the proposed space (using Midjourney, Visual Electric, or a traditional rendering tool), then feed that still image into Luma AI's Dream Machine with a text prompt describing the camera movement you want — "slow pan from left to right," "gentle camera push into the kitchen," "overhead to eye-level transition." Luma generates a short video clip animating the still. You can also prompt purely from text without a still image, though image-to-video typically produces more controlled results for design-specific content. Runway offers more editing control and is better suited for combining AI-generated clips with existing footage or adding transitions between design states.
What is Sora, and can designers use it yet?

Sora is OpenAI's video generation model — capable of producing up to 60-second photorealistic video clips from text descriptions. It was demonstrated publicly in early 2024 and made available to ChatGPT Plus and Pro subscribers later that year. As of this episode's recording, Sora was still in limited access; by the time this blog is published, availability may have expanded. The commercial-quality benchmark the Toys R Us video demonstrates — broadcast-quality advertising produced entirely with AI — shows the trajectory of what will become accessible to designers for client presentations.
How should a designer explain an AI-generated visual to a client?

A simple, clear framing: "This visualization was created using AI tools based on my design direction for your space. It gives you a realistic sense of how the space will feel and what the aesthetic direction looks like — think of it as a very detailed sketch rather than a photograph of the finished result. The actual space will be close to what you see here, but materials, lighting, and proportions may look slightly different in person — that's normal and expected. What I want you to take from this is the overall direction and feel." That conversation sets the right expectation, demonstrates transparency, and positions you as someone who explains their tools rather than hides them.
Are there situations where AI-generated visuals should not be used?

Yes — specifically in contexts where the visual is intended to serve as a technical specification rather than a directional communication tool. If a client is making material, product, or construction decisions based on a visual, that visual should be produced by professional design software with accurate dimensional and material representation, not by AI generation. AI-generated visuals excel at communicating aesthetic direction and spatial feel; they are not appropriate as technical reference for contractors, fabricators, or purchasing decisions. Using them for the former and professional documentation for the latter keeps both types of communication honest.
Is AI-generated imagery distorting expectations across the industry?

Yes — and this is the larger point Jenna is making in this episode. As AI-generated visuals of homes, products, and spaces proliferate across social media, advertising, and real estate listings, clients are increasingly exposed to a visual reference standard that is not based on real spaces. This affects the design industry in two ways: it can create unrealistic baseline expectations before a client ever hires a designer, and it can make genuinely excellent real-world design feel inadequate by comparison. Designers who are aware of this dynamic can proactively shape client expectations, contextualize AI-generated visuals they encounter in other contexts, and build practices grounded in the honest representation of what real design delivers.

Previous: Ep 26: Navigating AI Anxiety in Interior Design
Next: Ep 24: AI Enabled Smart Home Technology