AI Sense

/ey-eye sens/ noun
  1. The basic intuitions a person needs before using any AI tool.
  2. A practical understanding of how AI actually works, what AI can and cannot do, and why the same AI behaves differently in different apps.

Note for nerds: throughout this piece, AI means a large language model (LLM).

Every word depends on the words before it.

AI does not understand your question and then write a reply. AI predicts one word at a time. AI looks at all the words that came before, and picks the most likely next word. Then AI looks at everything again, including the word it just picked, and predicts the next one. This keeps going until the full response is built. This means every word AI has seen so far shapes what comes next. The previous words are the context. Change the context, change the output. This is why what you type matters so much. Your words are the context AI uses to predict everything that follows.
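The loop above can be sketched in a few lines of Python. The word table below is completely made up for illustration; a real model computes likelihoods from billions of learned parameters. But the loop itself is the same: look at everything so far, pick the likeliest next word, add it to the context, repeat.

```python
# A toy next-word table: given the last two words, the "most likely" next
# word. Entirely invented for illustration; a real LLM learns these odds
# from training data and considers the whole context, not just two words.
NEXT_WORD = {
    ("the", "sky"): "is",
    ("sky", "is"): "blue",
    ("is", "blue"): "today",
}

def generate(prompt_words, max_words=10):
    words = list(prompt_words)
    while len(words) < max_words:
        # Look at the words so far (here, just the last two)...
        context = tuple(words[-2:])
        # ...and pick the most likely next word.
        next_word = NEXT_WORD.get(context)
        if next_word is None:
            break  # no prediction available: stop
        # The picked word becomes part of the context for the next step.
        words.append(next_word)
    return " ".join(words)

print(generate(["the", "sky"]))  # → "the sky is blue today"
```

Notice that each picked word changes what gets picked next. That is the whole mechanism: change the context, change the output.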

AI does not know when it is wrong.

AI does not fact check itself. For AI, facts and false information are all the same. They are just words. AI picks the most likely next word based on patterns, not based on what is true. AI has no concept of truth. AI just has patterns. When you ask AI a question, AI does not look up the answer. AI predicts what a good answer would look like based on patterns from training data. If those patterns point toward the correct answer, great. If they point toward something made up, AI writes that with the same confidence. AI cannot tell the difference. This is why AI can write a perfectly structured paragraph about a historical event and get the dates completely wrong. The structure looks right because AI learned how paragraphs work. The facts are wrong because AI never learned what is true. For AI, a real fact and a made up fact feel exactly the same. Both are just the next most likely word.
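One way to picture this: the model scores candidate next words and simply takes the top score. The words and numbers below are invented; the point is only that nothing in the selection step checks truth.

```python
# Invented scores for three candidate next words. A real model assigns
# a probability to every word in its vocabulary; the selection works
# the same way.
scores = {
    "plausible_but_wrong": 0.55,  # a pattern seen often in training text
    "actually_correct": 0.40,     # seen less often
    "nonsense": 0.05,
}

# The model takes the highest score. There is no truth check anywhere.
picked = max(scores, key=scores.get)
print(picked)  # → "plausible_but_wrong"
```

A confident wrong answer and a confident right answer come out of the exact same step.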

AI forgets you the moment you leave.

AI does not remember you. When you close a conversation and start a new one, AI has no idea who you are, what you discussed, or what you asked it to do differently. Every new chat is a completely blank slate. Imagine calling a restaurant every day and giving detailed food preferences. Every single call, the person who picks up has never spoken to you before. You have to explain everything from scratch. That is what talking to AI is like. No matter how long your last conversation was, the next one starts from zero. If an app seems to remember your name or preferences, that is the app doing it, not the AI. Apps like ChatGPT work around this by asking AI at the end of a conversation "Is there anything from this conversation we should remember?" The app saves those notes behind the scenes and quietly pastes them into every new conversation. But that is the app remembering, not AI. The AI itself still starts every conversation with zero memory. Anything you shared in a chat is gone for the AI the moment the chat ends, unless the app wrote it down.
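Here is a minimal sketch of that workaround, assuming a made-up app with two functions. None of the names reflect how any real app works; this only shows where the "memory" actually lives.

```python
# Notes the APP keeps between conversations. The AI never stores anything.
saved_memories = []

def end_conversation(summary):
    # The app asks the AI to summarize the chat, then stores the
    # result itself. (Here we just pass the summary in directly.)
    saved_memories.append(summary)

def start_conversation(user_message):
    # Every new chat, the app quietly pastes its old notes above your
    # message. The AI just sees one long block of text; it "remembers"
    # nothing on its own.
    context = ""
    for note in saved_memories:
        context += "Note about the user: " + note + "\n"
    context += "User: " + user_message
    return context

end_conversation("User is vegetarian")
context = start_conversation("Suggest dinner ideas")
# The AI receives the note above the new message, as if it remembered.
```

Delete `saved_memories` and the "memory" vanishes, because it was never in the AI to begin with.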

You are not the only one instructing AI.

Remember Sense 1: AI predicts the next word based on all the words that came before. Those words are the context. Context is everything AI can see when generating a response. The more relevant the context, the better the prediction. The worse the context, the worse the output. Context is the single most important factor in what AI produces.

Now here is the part most people miss. When you type a message, you see only your words. But AI sees much more. Your message is just one part of a larger context that gets sent every time. The app assembles this context behind the scenes before AI reads a single word from you. At the top sits the system prompt, a set of hidden instructions written by the app developer. Below that is your full conversation history, every message you sent and every response AI gave, replayed from the beginning. Remember how we said AI forgets you the moment you leave? The memories the app saved from past conversations also get pasted into this context. Your latest message goes at the very bottom. AI reads all of this, top to bottom, every single time you hit send.

There is a catch. AI has a limited context window. Think of it as a fixed size page. Everything has to fit on that one page: the system prompt, the conversation history, the saved memories, and your message. As your conversation grows longer, older parts start getting pushed out or compressed. Important details you mentioned early in the chat can quietly disappear.

This leads to context rot. The longer the conversation, the more noise builds up. Your key instruction from 30 messages ago becomes a needle in a haystack. AI still reads everything in the context, but when there is too much, AI struggles to find and prioritize the parts that matter.

There is another problem. AI pays the most attention to the beginning and end of the context. Information stuck in the middle gets overlooked. So even if your important instruction is still technically inside the context window, AI can miss it simply because it is buried in the middle of a long conversation. Researchers call this "lost in the middle." This is why long conversations tend to go off track. The context got too noisy, AI lost focus, and the details that mattered slipped through.
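The assembly and the fixed-size page can be sketched like this. The window size and all the text are invented. Real windows are measured in tokens rather than words, and real apps compress old history rather than just cutting it, but the squeeze works the same way:

```python
CONTEXT_WINDOW = 20  # pretend the AI can read at most 20 words total

def build_context(system_prompt, memories, history, new_message):
    # Assembled top to bottom: hidden instructions, saved notes,
    # the whole conversation replayed, then your newest message.
    parts = [system_prompt] + memories + history + [new_message]
    words = " ".join(parts).split()
    if len(words) > CONTEXT_WINDOW:
        # Out of space: the oldest words fall off the page first.
        words = words[-CONTEXT_WINDOW:]
    return " ".join(words)

context = build_context(
    "Answer briefly.",
    ["User prefers metric units."],
    ["User: I am allergic to peanuts.", "AI: Noted, no peanuts."],
    "User: Plan a week of meals for me please now.",
)
# The assembled text runs over 20 words, so the oldest parts are gone:
# the hidden instruction "Answer briefly." has already been pushed out.
```

Make the history a few messages longer and the peanut allergy falls off the page too. That is context rot in miniature.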

The app around AI matters more than the AI itself.

ChatGPT, Claude, and Gemini all use similar AI technology underneath. What makes them feel completely different is the harness built around that AI. A harness is made up of several components. The instructions tell AI how to behave and what personality to have. The safety filters control what AI can and cannot say. The tools give AI abilities like searching the web, running code, or reading files. The memory decides whether AI remembers you between conversations. The context management controls how much information AI can hold at once and what gets dropped when space runs out.

Think of the AI model as an engine. The harness is the steering wheel, the brakes, the dashboard, and the road rules all combined. The same engine inside two different cars drives very differently. Same with AI. The same model inside two different apps behaves like two completely different products because the harness is different.
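The engine-and-car idea can be made concrete with two invented harness configurations wrapped around the same made-up model name. Nothing below reflects any real app's settings:

```python
# Same engine, two invented harnesses. Every name and value here is
# made up to illustrate the idea, not any real product's configuration.
engine = "example-model-v1"

coding_app = {
    "model": engine,                      # the engine
    "instructions": "You are a terse coding assistant.",
    "tools": ["run_code", "read_files"],  # abilities the harness grants
    "remembers_user": False,              # memory is a harness choice
}

chat_app = {
    "model": engine,                      # the very same engine
    "instructions": "You are a warm, chatty companion.",
    "tools": ["web_search"],
    "remembers_user": True,
}
```

Same model on both lines, different everything else. Swap the harness and the product changes character without touching the engine.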