Monday, October 13, 2025

Unveiling the Power and Pitfalls of AI: Part 2 - Asking The Right Question


Part I — The Binary Foundation

At first glance, asking a question of an AI seems simple. You have something in mind, so you type: “Do X on Y” or “How do I make a widget?” But there are important facts to remember when dealing with an AI system.

AI is not human, and it cannot think like a human. Humans can look up from the keyboard and take in the natural world. We have feelings that shape our interactions, whether we like it or not. We also have imagination — that unquantifiable trait that ensures no two humans are ever truly equal.

AI models, all AI models, are nothing more than mathematical engines. When you break down any computer, AI model, or even a calculator, there is one guiding law that makes it all work:

Everything is binary.

In computer science terms, binary is simply on/off or true/false. Every machine operates on this principle. Even though we dress it up with interfaces and let it “talk” to us, AI is still a machine at its core.

This matters because humans don’t think in binary terms. Even Spock, the archetype of logic, made educated assumptions from time to time. Every machine action or program, however, can be reduced to a long chain of conditional checks: when X happens, do Y.
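That reduction can be sketched in a few lines. The thermostat rule below is a hypothetical example, not anything from a real system; it shows that a machine's "decision" is nothing more than evaluating a true/false condition:

```python
# A hypothetical "when X happens, do Y" rule. The machine never intuits
# "it feels cold" -- it only evaluates conditions that are True or False,
# one check at a time.
def thermostat(temperature_f: float, target_f: float = 68.0) -> str:
    if temperature_f < target_f:  # this comparison is binary: True or False
        return "heat on"
    return "heat off"
```

However sophisticated the software, every branch it takes bottoms out in a boolean test like this one.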


Part II — Context and Ambiguity

Ask ten people the same question and you’ll almost certainly get at least one conflicting answer. Humans bring context, intuition, and contradiction into our reasoning. AI does not. It is entirely based on mathematical probabilities and predictors. That’s as close as you’ll get to imagination in a machine.

When an AI forms a sentence, it isn’t “thinking” about meaning. It is calculating the probability of the next token (a word or a fragment of a word) given everything that came before, step by step.
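A toy model makes the idea concrete. The probability table below is invented for illustration; real LLMs learn billions of such weights over tokens rather than whole words, but the step-by-step selection works the same way:

```python
# Invented next-word probabilities (a real model learns these from data).
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "widget": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
}

def next_word(prev: str) -> str:
    # Greedy decoding: pick the single most probable continuation.
    probs = NEXT_WORD_PROBS[prev]
    return max(probs, key=probs.get)
```

Starting from “the”, the model picks “cat” (probability 0.5), then “sat” (0.6), and so on. There is no understanding of cats anywhere in that loop, only arithmetic.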

Large Language Models (LLMs) are trained on enormous amounts of data. And by “data,” we don’t mean they’re simply fed a dictionary. To converse naturally with humans, they must be trained in context. The same five words can mean entirely different things depending on the situation. Context is what allows AI to approximate your meaning and generate a plausible answer.

Consider the phrase: “There’s more than one way to skin a cat.” Now imagine explaining that to something that has never seen a cat, doesn’t know what “skin” is, and must follow your instructions in exactly one way. Machines cannot guess the way humans do. They will simply formulate a plan based on a statistical estimate of your intent.

Take the instruction: “Create a method that prints my name.”

  • “Create” → does the user mean “write code,” “design,” or “invent”?
  • “Method” → implies programming, but in what language?
  • “Print” → could mean console output, sending to a printer, or drawing on screen.
  • “Name” → whose name? The user’s? A variable? A place?

What seems obvious to a human is riddled with ambiguity for a machine. Without precise context, the AI can only approximate intent.


Part III — Predictions, Memory, and Guardrails

This is why prediction is essential. To create natural conversation, AI models must be allowed to make assumptions. Without predictions, the AI would have to interrogate you endlessly to pin down exactly what you meant by a simple request.

When you give instructions, it’s worth asking yourself clarifying questions first. The more ambiguity you close off, the better the result.

Memory and Context

AI often feels like it “remembers” what you’ve said before. In reality, it remembers context, not specifics. If you’ve been working on a research paper about black bears, the AI will keep that context in mind. But it won’t recall that two days ago you mentioned the TV show Grizzly Adams.

This “context memory” is a sliding window, limited in size. Whether the window is measured in tokens, messages, or time is an implementation detail, but the principle is the same: the AI holds onto recent context, not permanent memory.
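The sliding window can be sketched in a few lines. This version counts whole messages for simplicity (real systems count tokens), but the behavior is the same: once the window is full, the oldest entry silently falls out:

```python
from collections import deque

WINDOW_SIZE = 3  # real systems measure this in tokens, not messages

# deque with maxlen discards the oldest item when a new one arrives.
context = deque(maxlen=WINDOW_SIZE)
for message in ["black bears", "habitat map", "Grizzly Adams", "diet data"]:
    context.append(message)

# "black bears" has slid out of the window; the AI no longer "remembers" it.
print(list(context))
```

Nothing is ever deliberately “forgotten”; older context simply stops being part of the input the model sees.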

Different Implementations

Over the past months, I’ve used multiple implementations of the same underlying model, each with its own quirks:

  • Windows Copilot excels at conceptualizing and shaping architectural design.
  • Visual Studio’s AI assistant is the “workhorse,” capable of building entire applications when guided properly.
  • At times, I’ve even used one AI to generate prompts for another, chaining their strengths together.

Even when the model is the same, the implementation can dramatically change the experience.

Guardrails

With enough use, you begin to notice conversational patterns — and the guardrails. Guardrails are the limits or directives built into AI systems to control what they can and cannot do.

For example, restrictions around adult or offensive material are enforced by directives. In some cases, you can even see these guardrails at work. Features like Think Deeper in Windows Copilot reveal the AI’s reasoning process as it generates a response, showing how it navigates within its constraints.


Conclusion

AI is an extraordinarily powerful tool when used correctly. If your goal is casual conversation, AI bots are designed to keep you engaged and agreeable. But if your goals are more task‑oriented, you’ll get far better results by crafting prompts with forethought and specificity.

Above all, remember: AI is not a human mind. It is a machine — a worldwide encyclopedia with an interactive interface. The clearer your questions, the sharper and more useful its answers will be.

The quality of the answers depends on the quality of the questions.



