
Rendering Vision with ChatGPT

  • krishnagilda23
  • Aug 10, 2025
  • 3 min read

I like participating in Render Weekly’s prompts — it helps me think freely and design products with complete control over every detail. I also enjoy seeing what others in the community are creating.

For the latest prompt — to design a flower vase — I participated with these renders.


Over the past few prompts, I’ve been experimenting with different AI tools like Ideogram, Vizcom, Meshy, and more. This time, I decided to try ChatGPT — not just for research, but for refining my ideas and generating the final visuals.

In this post, I’m sharing my process of designing with ChatGPT.


Context

It started with some research to figure out my design direction.

I brainstormed with ChatGPT — exploring possible themes, features, and materials. The responses were insightful, but I found it harder to make design decisions with so much synthesized information handed to me instantly.

When I do my own research, I immerse myself in it — finding sources, understanding details, and slowly building context. This gives me time to connect the dots and build a story. With AI, it’s different. It’s like striking gold without having to dig — exciting, but also a bit overwhelming.

The information from ChatGPT was valuable — I just needed to get used to this faster way of working.


Initial Explorations

My first idea was to make something distinctly Indian. I took inspiration from Indian Jali patterns, which are traditionally made from materials often used for vases. 


I also wanted to solve a user experience problem, but I couldn’t find a way to do so without adding complexity. My goal is always to keep things simple.

None of these ideas felt right. While scrolling Pinterest, I came across a cement-cast vase (image below) — and that sparked a new direction.


Design Direction

I had settled on a cement-cast vase, but what exactly would it look like?

One evening, while walking, I remembered the game Monument Valley. It reminded me of M.C. Escher’s architectural illusions — which inspired the game in the first place.

That’s when it clicked: what if I made a cement-cast vase inspired by Monument Valley’s architecture? The aim was to bring an M.C. Escher–style optical illusion into the form of a vase.



Designing with ChatGPT

Since I was already using ChatGPT, I decided to try it for designing the vase itself. The initial results were far from satisfactory.

I realised ChatGPT tends to forget context and lose track of the conversation, so I had to build its contextual memory step-by-step by asking it to recall specific concepts:


  • “Do you know about M.C. Escher?”

  • “Do you know about M.C. Escher’s architecture?”

  • “Do you know about the Monument Valley game?”

  • “Do you know about cement casting?”

  • “How about we make a flower vase with cement casting?”


With this context set, I asked it to design the vase. 
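For anyone curious how this context-building works under the hood, the same question-by-question approach can be sketched as a growing message history: each reply is appended to the conversation so the next prompt carries all the earlier context. This is a minimal, hypothetical Python sketch, not ChatGPT itself; the `ask` function is a stand-in for a real chat-model call.

```python
def ask(messages):
    # Placeholder for a real model call. A live version would send the
    # full `messages` history, so every earlier turn stays in context.
    return f"(model reply to: {messages[-1]['content']})"

# The same sequence of context-priming questions from the post.
prompts = [
    "Do you know about M.C. Escher?",
    "Do you know about M.C. Escher's architecture?",
    "Do you know about the Monument Valley game?",
    "Do you know about cement casting?",
    "How about we make a flower vase with cement casting?",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    # Each answer is stored too, so later prompts build on earlier ones.
    messages.append({"role": "assistant", "content": ask(messages)})

print(len(messages))  # 10 messages: five questions plus five replies
```

The key idea is that nothing is asked in isolation; by the final question, the whole thread about Escher, Monument Valley, and cement casting is already in the conversation.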


It generated different options, but I realised it didn’t fully understand the physics or practicality of the design.

So, I decided to build the CAD model myself. With the direction clear, I started sketching and modelling, refining step-by-step until I arrived at a composition I liked. Then I locked the CAD model.



Visualisation using ChatGPT

Initially, I planned to go the conventional route: rendering in KeyShot and testing its AI features. But getting the cement material right was tricky.

I tried creating a custom procedural material and importing one from Ambient CG, but ran into UV orientation issues in KeyShot. Since my CAD was made in Plasticity, KeyShot struggled with the geometry. Unwrapping and remapping UVs was becoming too complicated, as I had to redo many surfaces.

That’s when I shifted and decided to render using AI.

First, I tried Vizcom, but I wasn’t satisfied with the results. To get what I wanted, I would have had to create a custom palette (as I explained in an earlier post) — which meant more work.

Then I tried ChatGPT — and eventually got results close to my vision.

Here’s the thing with ChatGPT: it’s not a “one-shot” artist. It usually takes 5–6 attempts with variations to get something acceptable.

At first, the renders weren’t great, but after a few iterations, the results improved. Below is one example showing the journey of a render.



By doing this, I was essentially building a reference library inside the conversation. Once the context was set, I uploaded my reference images and asked it to render them realistically with my desired lighting setup.

With some tweaks and back-and-forth, I got results close to my vision.


Review

While my experience with ChatGPT wasn’t perfect, I did manage to get visuals that matched my vision. This project involved a lot of back-and-forth between conventional rendering in KeyShot and experimenting with AI.

It was an interesting learning curve — both in using ChatGPT creatively and in adapting my design process to AI tools.

If you’ve used AI for visualising your designs, I’d love to hear how you approach it.
