
Bringing Consistency to AI-Generated Designs

  • krishnagilda23
  • Apr 4
  • 2 min read

This is my first post in a while, and I’m excited to share my recent work!

Lately, I’ve been experimenting with generative AI—mostly for quick ideation. But this time, I wanted to push it further and develop a structured workflow to bring consistency to AI-generated visuals.

To put this to the test, I decided to design a new pasta shape for Render Weekly’s #RWPasta challenge.

In this post, I’ll briefly share my process and how I combined multiple tools to create consistent AI-generated images.


My Process:


1) Exploring Initial Concepts

Since I had recently purchased an Ideogram subscription for another project, I decided to start there. I explored different pasta shapes, material textures, and lighting setups. These variations helped me shape a clear vision for the final design.



2) Creating a 3D Model

With a direction in mind, I needed a 3D model of the pasta. My first attempt was with Vizcom, but it struggled to interpret the complex shape. Then, I tried Meshy, which generates 3D models from images. While it provided some useful results, they weren’t refined enough for my needs. However, Meshy was excellent for generating realistic textures.

Since no tool could fully capture what I envisioned, I decided to manually model the pasta in Rhino 7.




3) Texturing & Refining the Renders

Once the 3D model was ready, I used Meshy to transfer textures onto it. Then, I moved to Vizcom for rendering—but a new challenge emerged. While individual images looked great, they lacked consistency across different views.

To solve this, I took a more systematic approach:
✔ Revisited Ideogram and refined my concept, selecting two specific design directions.
✔ Used ChatGPT to structure my prompts more clearly, breaking them into four sections: shape, texture, environment, and mood (see the example below).
✔ Created a custom palette in Vizcom using Ideogram visuals to guide the style.

With these refinements—textured 3D model + structured prompts + custom palette—Vizcom finally generated consistent renders from multiple angles.
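
For illustration, a prompt structured this way might look like the following. This is a simplified sketch rather than the exact prompt I used:

Shape: a shuriken-shaped pasta piece with smooth, even points.
Texture: matte semolina surface with fine ridges.
Environment: clean studio backdrop with soft, diffused lighting.
Mood: warm, appetizing, and slightly playful.

Keeping each section separate made it much easier to change one variable (say, the environment) while holding the others fixed across renders.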





4) Creating Contextual Renders

At this point, I had clean product renders, but I wanted to place the pasta in a realistic, summery, and dreamy setting. To achieve this:
✔ I made a quick KeyShot render and used it as a reference in Vizcom.
✔ Combining this base with structured prompts, I generated high-quality contextual renders that felt more refined and immersive.




5) Final Enhanced Outcomes from Vizcom






Conclusion


This was an exciting process—not just because I created a shuriken-shaped pasta, but because I developed a reliable workflow for using multiple AI tools to achieve high-quality, consistent results efficiently.

There’s still room for improvement, but this experiment felt like a step toward using AI more strategically rather than just for quick explorations.

Check out the attached images—I’d love to hear your thoughts! 🚀


PS: AI Video Creation


I also tried to create videos from image frames, but none of the results were satisfactory.





Fin.