Ep 20: Using AI to Fine-Tune Your Design Services and Pricing Strategy
Listen to the Podcast Episode for a deeper dive
Jenna runs the same service packaging prompt through four different AI models — ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity — and compares the results. The outputs differ in instructive ways, and the comparison reveals exactly how to use AI as a pricing strategy tool.
- Different AI models produce meaningfully different pricing and packaging suggestions from the same prompt — comparing outputs across models gives a broader range of options to evaluate than using a single tool.
- ChatGPT and Perplexity aligned on similar mid-to-high pricing tiers; Google Gemini went more budget-conscious; Microsoft Copilot came in lowest across all tiers. The variation reflects different training emphases and design philosophy assumptions baked into each model.
- The quality of AI pricing output depends entirely on the quality of the input. Providing specific information about your current services, rates, and deliverables produces more useful suggestions than a generic request.
- AI-generated pricing is a starting point, not a final answer. The outputs need to be evaluated against your specific market, your ideal client profile, and your actual cost structure — which only you know.
- This exercise is a practical demonstration of AI as a business strategy tool — not just a creative or administrative tool. LLMs can help designers think through service packaging, market positioning, and pricing structure in ways that used to require a business coach or consultant.
The Experiment — One Prompt, Four Models
The setup is simple and replicable: Jenna took a detailed, specific prompt describing her current virtual design services and hourly rate, and ran it through four different AI models simultaneously. The goal was to see what each model would suggest for three new tiered service packages — and how differently they would approach the same problem.
This is important methodology: the same prompt, run through different models, produces meaningfully different outputs. That variation is not noise — it is signal about the different assumptions, training emphases, and market perspectives each model brings to the question. Comparing across models gives you more raw material to evaluate than any single model can provide.
The key to this prompt working well: it includes specific information about current offerings and the existing hourly rate. Giving the AI context about where you are starting from produces outputs calibrated to your actual practice rather than generic pricing suggestions that assume nothing about your current positioning.
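The one-prompt, four-models workflow above can be sketched in a few lines of Python. Everything here is illustrative: the service names, the hourly rate, and the stubbed model functions are placeholders, not Jenna's actual offerings or real API clients. In practice, each entry in the model dictionary would wrap that provider's real API.

```python
def build_pricing_prompt(services, hourly_rate, goal):
    """Assemble a context-rich prompt from specifics about the practice,
    so each model is calibrated to the actual starting point."""
    service_lines = "\n".join(f"- {s}" for s in services)
    return (
        f"I currently offer these virtual design services:\n{service_lines}\n"
        f"My current hourly rate is ${hourly_rate}/hr.\n"
        f"{goal}"
    )

def compare_models(prompt, model_fns):
    """Run the identical prompt through each model and collect the outputs
    side by side for comparison."""
    return {name: fn(prompt) for name, fn in model_fns.items()}

# Illustrative usage with stand-in details and stubbed model callables:
prompt = build_pricing_prompt(
    services=[
        "Room refresh (mood board + shopping list)",
        "Full virtual room design",
    ],
    hourly_rate=95,  # placeholder number, not an actual rate
    goal="Suggest three tiered service packages with names, deliverables, and prices.",
)

stubs = {
    "ChatGPT": lambda p: "Tier suggestions A...",
    "Gemini": lambda p: "Tier suggestions B...",
}
results = compare_models(prompt, stubs)
```

The point of the structure is the second step: because every model receives the byte-identical prompt, any variation in the suggested tiers reflects the model, not the question.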
What Each Model Suggested — The Full Comparison
What the Comparison Reveals — Reading the Outputs
The four models did not produce four versions of the same answer. They produced four meaningfully different perspectives on how to package and price virtual design services — each with its own market positioning assumptions, deliverable philosophies, and price-to-value calibrations.
"The key is to provide comprehensive information about your services to receive the most accurate and helpful responses. Each AI model provided unique insights, underscoring the importance of experimenting with different tools."
— Jenna Gaidusek
Jenna is the go-to educator for design professionals who want to use technology without losing their creative edge. A designer turned tech advocate, she's a nationally recognized speaker, podcast host, community builder, and custom app builder based in Charleston, SC.
Disclaimer: This blog was written using AI as a recap of the recording, then edited by the author for accuracy and details.
