Debabrata Pruseth

Technocrat | Traveler | Storyteller

The Evolution of AI Thinking: From Chain of Thought to Diagram of Thought

Why do some AI prompts work brilliantly while others fall flat?
In our latest blog, we break down 5 powerful AI thinking techniques—CoT, ToT, LoT, IoT & DoT—that can transform how you use AI.
If you want smarter, more accurate results, this is a must-read!
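To make the first of those techniques concrete, here is a minimal sketch of what Chain of Thought (CoT) prompting looks like in practice: the question is unchanged, but the prompt asks the model to reason step by step before answering. The question and wording below are invented for illustration, not a fixed standard.

```python
# Chain of Thought sketch: same question, two prompts. The CoT version asks
# for intermediate reasoning before the final answer.

question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

plain_prompt = f"{question}\nAnswer:"

cot_prompt = (
    f"{question}\n"
    "Let's think step by step, writing out each intermediate "
    "calculation before giving the final answer."
)

print(cot_prompt)
```

The other techniques covered in the blog build on this same idea by changing how the reasoning is structured, reviewed, or combined.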

The Evolution of AI Thinking: From Chain of Thought to Diagram of Thought Read More »

Diagram of Thought Prompting: Making AI Think Like a System

What if AI didn’t just think in steps—but connected ideas like a mind map?
In our latest blog, we explore Diagram of Thought prompting—a powerful technique that helps AI propose, critique, refine, and combine ideas for better answers.
If you want more structured and well-rounded AI responses, this is a must-read!
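As a rough illustration of the propose-critique-refine-combine pattern, here is one way a Diagram of Thought style prompt could be assembled. The role names and wording are assumptions chosen for this sketch, not a fixed specification.

```python
# Diagram of Thought sketch: instead of one linear chain, the prompt asks the
# model to branch into candidate ideas, critique them, and merge the survivors.

def build_dot_prompt(question: str) -> str:
    return (
        f"Question: {question}\n\n"
        "1. PROPOSE: suggest two or three candidate approaches.\n"
        "2. CRITIQUE: point out a weakness in each approach.\n"
        "3. REFINE: improve the strongest approach using the critiques.\n"
        "4. SYNTHESIZE: combine the surviving ideas into one final answer."
    )

prompt = build_dot_prompt("How should a small team roll out a new CRM?")
print(prompt)
```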

Diagram of Thought Prompting: Making AI Think Like a System Read More »

Iteration of Thought Prompting: Making AI Improve Its Own Thinking

Great answers don’t happen in one go—they improve over time.
Our new blog introduces Iteration of Thought prompting, a method that helps AI refine its responses through self-review and iteration.
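The draft-review-revise loop can be sketched in a few lines. Here `ask_model` is only a placeholder that echoes its input; in a real setup it would call an actual LLM API, which is the assumption this sketch makes.

```python
# Iteration of Thought sketch: draft an answer, request a self-review, then
# revise, repeating for a fixed number of rounds.

def ask_model(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"  # placeholder, no real LLM

def iterate_of_thought(question: str, rounds: int = 3) -> list[str]:
    drafts = [ask_model(f"Answer this question: {question}")]
    for _ in range(rounds - 1):
        critique = ask_model(f"List weaknesses in this answer: {drafts[-1]}")
        drafts.append(ask_model(
            f"Rewrite the answer, fixing these weaknesses: {critique}"
        ))
    return drafts  # each entry refines the previous draft

history = iterate_of_thought("Why is the sky blue?", rounds=3)
print(len(history))
```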

Iteration of Thought Prompting: Making AI Improve Its Own Thinking Read More »

Logic of Thought Prompting: Making AI Reason Like a Logician

Ever felt AI answers sound convincing… but aren’t actually correct?
In our latest blog, we explore Logic of Thought prompting—a powerful technique that helps AI reason using facts, rules, and structured logic.
If you want more accurate and reliable AI responses, this is a must-read!
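The core idea, reasoning over explicit facts and rules rather than intuition, can be shown with a toy forward-chaining example. In Logic of Thought prompting this structure lives inside the prompt itself; the propositions below are invented purely to illustrate the pattern.

```python
# Toy logical reasoning: start from known facts, apply if-then rules
# (modus ponens) until nothing new can be derived.

facts = {"it_is_raining"}
rules = [
    ("it_is_raining", "ground_is_wet"),
    ("ground_is_wet", "shoes_get_muddy"),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)   # derive the conclusion
            changed = True

print(sorted(facts))
```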

Logic of Thought Prompting: Making AI Reason Like a Logician Read More »

Tree of Thought Prompting Explained: Make AI Think Smarter

Tree of Thought Prompting Explained: Make AI Think Smarter Read More »

Earth, in a Bottle. A Robot, with Imagination. And an AI That “Knows Everything.”

Earth Models, World Models, and Global Models are three fast-growing “families” of AI systems that are often mentioned together—but they solve very different problems. This blog provides an accessible, research-oriented map of the landscape for readers who are new to AI (and for technical readers entering the field).

Earth, in a Bottle. A Robot, with Imagination. And an AI That “Knows Everything.” Read More »

Deep Learning Concepts for Beginners: Cooking Pasta Analogy

What does learning deep learning have in common with cooking pasta?

More than you might think.

In this article, I explain deep learning concepts for beginners using the familiar process of learning to cook pasta—preparing ingredients, tasting, adjusting, and improving with every attempt. It’s a simple, intuitive way to understand how neural networks train, make mistakes, and get better over time—without heavy math or jargon.

Deep Learning Concepts for Beginners: Cooking Pasta Analogy Read More »

Deep Learning Concepts for Beginners: Basketball Player Analogy

What if understanding neural networks felt as natural as watching someone learn basketball?

In this short read, I explain how a neural network learns—using the simple, familiar journey of a player practicing shots, missing, adjusting, and improving. No heavy math. Just intuition, clarity, and a fresh way to see how AI actually learns.
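That practice-miss-adjust loop is, at heart, gradient descent. The toy below has a "player" nudging a single aim parameter after each shot, based on how far it missed; the target value and learning rate are made-up numbers for illustration.

```python
# One-parameter gradient descent: shoot, measure the miss, adjust the aim.

target = 10.0        # where the shot should land
aim = 0.0            # the player's current aim (the "weight")
learning_rate = 0.3  # how boldly to adjust after each miss

for shot in range(20):
    miss = aim - target          # error: how far off this shot was
    aim -= learning_rate * miss  # nudge the aim toward the target

print(round(aim, 2))
```

After twenty practice shots the aim has converged close to the target, which is exactly how a network's weights settle during training.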

Deep Learning Concepts for Beginners: Basketball Player Analogy Read More »

A Beginner-Friendly Guide to Privacy in AI

What happens when your AI model is accurate… but not private?
Trust disappears.
In this blog, I show you how to build privacy into your models using simple concepts like epsilon, clipping, and federated learning. We even run 60 privacy experiments to find the perfect balance. A great read for students and young professionals exploring responsible AI.
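Two of those moving parts, clipping and noise, can be sketched in plain Python. The gradients, clip norm, and noise scale below are invented numbers for illustration; a real system would compute epsilon from the noise level using a privacy accountant.

```python
# DP-SGD-style sketch: clip each per-example gradient to a maximum norm,
# then add Gaussian noise to the sum before averaging.
import math
import random

random.seed(0)
clip_norm = 1.0
noise_std = 0.5

per_example_grads = [[3.0, 4.0], [0.3, 0.4], [-6.0, 8.0]]

clipped = []
for g in per_example_grads:
    norm = math.sqrt(sum(x * x for x in g))
    scale = min(1.0, clip_norm / norm)   # shrink gradients longer than clip_norm
    clipped.append([x * scale for x in g])

# Sum the clipped gradients, add noise once, then average.
noisy_avg = [
    (sum(g[i] for g in clipped) + random.gauss(0, noise_std)) / len(clipped)
    for i in range(2)
]
print(noisy_avg)
```

Clipping bounds any single person's influence on the update, and the noise hides whatever influence remains, which is what makes the result differentially private.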

A Beginner-Friendly Guide to Privacy in AI Read More »

A Beginner-Friendly Guide to Explainable AI (XAI)

Curious how AI models make decisions—or what goes on inside their “black box”?

In my latest blog, I break down explainable AI with hands-on Python examples, using real tools like SHAP, LIME, ELI5, and DALEX. Whether you’re a student, educator, or just passionate about responsible tech, this guide will help you see (and trust!) how machine learning models “show their work.”

Discover:
– What “explainability” really means
– How to interpret model explanations and plots
– Why transparency and fairness in AI matter
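The tools named above need their own libraries, but the core question behind explainability, "how much does model quality drop if we scramble one input?", can be shown with plain permutation importance on a toy model. The model and all numbers here are invented for illustration and are not taken from the blog.

```python
# Permutation importance sketch: shuffle one feature's column and measure how
# much the model's error grows. A big jump means the feature mattered.
import random

random.seed(42)

def model(x):  # toy model: feature 0 matters, feature 1 is ignored
    return 3.0 * x[0] + 0.0 * x[1]

X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]

def mse(data, targets):
    return sum((model(x) - t) ** 2 for x, t in zip(data, targets)) / len(targets)

def permutation_importance(feature):
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)               # break the feature's link to y
    for row, v in zip(shuffled, column):
        row[feature] = v
    return mse(shuffled, y) - mse(X, y)  # error increase = importance

print(permutation_importance(0), permutation_importance(1))
```

Feature 0 shows a clear importance while feature 1 shows none, matching how the toy model actually works; tools like SHAP and LIME answer the same question with more principled attributions.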

A Beginner-Friendly Guide to Explainable AI (XAI) Read More »
