Ep 25: Is AI Skewing Our Perception of Reality?
Listen to the Podcast Episode for a deeper dive
AI-generated video is here — Luma AI, Runway, Sora — and it is producing visuals that blur the line between what is real and what is rendered. For interior designers using these tools in client presentations, that blurring raises important questions about expectation-setting, transparency, and professional integrity.
- AI-generated video tools — Luma AI, Runway AI, and OpenAI's Sora — are producing increasingly photorealistic visuals that are genuinely difficult to distinguish from real footage. The capability has arrived faster than most people expected.
- For designers using AI-generated visuals in client presentations, the blurring of real and rendered creates a real risk: clients who cannot distinguish the two may form expectations that no physical implementation can meet, damaging the trust relationship when the built space falls short of AI-rendered perfection.
- Transparency about AI-generated content is both an ethical obligation and a practical trust-building practice. Labeling AI-generated visuals as such, and setting explicit expectations about what real-world implementation will and will not match, protects the client relationship rather than risking it.
- The same technology that creates concern — hyper-realistic AI rendering — is also genuinely powerful for design communication. The goal is not to avoid these tools but to use them with the transparency and expectation-setting that responsible professional practice requires.
- Designers who stay current with AI video tools and understand their capabilities are in the best position to use them ethically and to advise clients who encounter AI-generated content in other contexts about what they are actually seeing.
The AI Video Tools That Are Changing What "Real" Looks Like
The image generation conversation has been running for two years. The video generation conversation is newer — and the pace of improvement has been faster than most anticipated. These three tools represent the current frontier of what AI can produce in motion.
The Specific Problem This Creates for Design Practice
Realistic 3D renderings have been part of design practice for years — and the expectation management challenge they create is not new. Clients have always needed to be told that a rendering is a visual approximation, not a guarantee. The difference with AI-generated video is the degree of realism: when a client watches a Luma AI walkthrough of their future kitchen, the visual quality exceeds what most designers could produce with traditional rendering software, at a fraction of the time and cost.
The gap between that visual and the physical reality of the built space is real and unavoidable. Materials behave differently in natural light than in rendered light. Construction tolerances are not pixel-perfect. The specific product specified may arrive differently than shown. Real life is not photorealistic rendering — and no amount of AI sophistication changes that.
"When AI-generated visuals look almost too perfect, we run the risk of setting unrealistic expectations for clients. No matter how talented we are, real-life implementation rarely matches the flawlessness of an AI rendering."
— Jenna Gaidusek
How to Use These Tools Responsibly — The Ethical Practice Layer
The goal is not to avoid AI video tools — they are genuinely powerful for design communication and the designers who use them well will have a real competitive advantage in presentation quality. The goal is to use them in ways that preserve rather than undermine the client trust relationship.
The Genuine Upside — Why These Tools Are Worth Using Responsibly
The concerns are real and worth taking seriously. So is the upside. AI video tools represent a meaningful capability expansion for how designers communicate design intent — and the designers who use them well will produce client experiences that were simply not achievable with previous tools.
A Luma AI walkthrough of a proposed kitchen renovation — labeled clearly as AI-generated concept visualization — gives a client an immersive, moving sense of how the space will feel that no static rendering can replicate. A Runway-edited video that transitions from the current state of a room to the proposed design is a more powerful sales tool than any before/after static comparison. These are genuine communication improvements, not just visual gimmicks.
The Toys R Us commercial created with OpenAI's Sora — referenced in this episode — is a useful benchmark. It demonstrates what AI-generated video can produce at the commercial production level: footage that is visually indistinguishable from traditionally produced advertising. That same technology, applied to a residential design walkthrough, produces a client experience that feels premium and considered. The tool is powerful. The responsibility for how it is used remains entirely with the designer.
Toys R Us + OpenAI Sora AI Commercial — the example Jenna references in this episode:
Watch on YouTube ↗
Jenna is the go-to educator for design professionals who want to use technology without losing their creative edge. A designer turned tech advocate, she's a nationally recognized speaker, podcast host, community builder, and custom app builder based in Charleston, SC.
Disclaimer: This blog was written using AI as a recap from the recording then edited by the author for accuracy and details.
