Minimum AI Knowledge for Practical Use, Part 2: Context Matters More Than Prompt Wording
A practical guide to giving AI useful context: goals, constraints, logs, related files, and completion criteria.
In Part 1, I framed an LLM as a system that generates likely next answers from context.
That leads directly to the next question:
How should we provide that context?
When people start using AI, prompt wording usually gets a lot of attention.
"What sentence should I use?"
"Should I assign a role?"
"If I say 'act like an expert,' will the answer become expert-level?"
Prompt wording matters. But in development work, something matters more:
the context the model can use to make a judgment.
Without good context, even a polished prompt can produce a vague answer. With good context, even a plain request can produce something useful.
The Same Question Changes With Context
Consider this question:
This API is slow. How should I improve it?
The model can give generally valid suggestions:
- Add caching.
- Optimize database queries.
- Reduce response payload size.
- Use a CDN.
- Move slow work to an async process.
None of these suggestions is wrong, but none is specific enough to act on safely.
Now add context:
Environment:
- Edge API function
- Reads stats from a small database and cache
- Returns views, likes, and comment counts together
Symptom:
- Article detail page feels slow
- Mobile users feel it more
Constraints:
- I do not want more fixed monthly cost
- Admin preview pages must not call stats APIs
- Public posts should still increment views
Need:
- What to measure first
- Cost-safe improvement order
- Safe changes vs risky changes
Now the answer has to consider the runtime, database, preview behavior, and cost.
The key is not beautiful wording. The key is giving the model the boundary conditions it needs.
More Context Is Not Always Better
Should we paste every file and every log?
No.
Context needs design too.
Too little context makes the model guess. Too much context hides the important parts. The goal is not maximum information. The goal is relevant information.
I usually divide context like this:
| Type | What to Include | Example |
|---|---|---|
| Goal | What should change | Make article detail loading faster |
| Current structure | Systems, files, data flow | API layer, database, static HTML |
| Symptom | Where the problem appears | Slow first load, delayed stats |
| Constraints | What must not change | No cost increase, no draft leak |
| Evidence | Logs, errors, measurements | HTTP status, console error |
| Completion criteria | How to know it is done | 200 response, no public draft exposure |
This one question helps:
Will this information change the model's decision?
If not, it probably does not need to be in the first context package.
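If it helps to see this as a structure, here is a minimal sketch of the same categories as a TypeScript type. The field names are my own illustration, not any standard format:

```typescript
// A sketch of the context categories from the table above as a data
// structure. Field names are illustrative, not a standard format.
interface ContextPackage {
  goal: string;                 // what should change
  currentStructure: string[];   // systems, files, data flow
  symptom: string;              // where the problem appears
  constraints: string[];        // what must not change
  evidence: string[];           // logs, errors, measurements
  completionCriteria: string[]; // how to know it is done
}
```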
Logs Beat Vague Summaries
"The automation is broken" is not very useful.
One actual log line is often much better:
cat: /path/to/task-config.md: Operation not permitted
This immediately points toward a permission problem.
Without that line, we would have to consider cron registration, paths, tokens, network, processes, and many other possibilities.
A useful debugging context can be short:
Command:
curl -I https://example.com/admin/preview/draft-post
Result:
HTTP/2 404
Expectation:
Admin preview should return a login screen or a normal page.
Public draft URL should return 404.
That is enough to narrow the analysis.
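A few lines of script can produce this kind of evidence instead of an impression. This is only a sketch; the URL is a placeholder for whatever endpoint you are actually checking:

```typescript
// Capture the exact HTTP status instead of reporting "it seems broken".
// The URL below is a placeholder, not a real endpoint.
const url = "https://example.com/admin/preview/draft-post";

const res = await fetch(url, { redirect: "manual" });
console.log(`GET ${url} -> ${res.status}`); // e.g. "-> 404"
```

The output line is exactly what belongs in the Result field above.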
The Order of Context Helps
I prefer this order:
1. Goal
2. Current situation
3. Related files or logs
4. Constraints
5. Completion criteria
6. Desired report format
This is also easy for a human to read.
The model sees the target first, then the evidence, then the boundaries.
If we paste a huge log first and say "analyze this," the answer is more likely to become unfocused.
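To make the ordering concrete, here is a sketch that assembles a prompt from the ContextPackage shape above. The labels and layout are illustrative, not a required format:

```typescript
// Assemble the prompt in the order above: goal first, then evidence,
// then boundaries and completion criteria. Purely illustrative.
function buildPrompt(ctx: ContextPackage, reportFormat: string): string {
  return [
    `Goal:\n- ${ctx.goal}`,
    `Current situation:\n${[ctx.symptom, ...ctx.currentStructure]
      .map(s => `- ${s}`).join("\n")}`,
    `Related files/logs:\n${ctx.evidence.map(e => `- ${e}`).join("\n")}`,
    `Constraints:\n${ctx.constraints.map(c => `- ${c}`).join("\n")}`,
    `Completion criteria:\n${ctx.completionCriteria
      .map(c => `- ${c}`).join("\n")}`,
    `Report format:\n- ${reportFormat}`,
  ].join("\n\n");
}
```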
A Simple Template
For development work, this template is usually enough:
Goal:
-
Current situation:
-
Related files/logs:
-
Constraints:
-
Completion criteria:
-
Please make a short plan first, then proceed.
At the end, summarize the changes and verification results.
The closing instructions matter.
Asking for a short plan first lets you see how the model understood the task. If the direction is wrong, you can correct it before implementation starts.
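Putting the pieces together, filling the template with the stats API example from earlier might look like this, using the buildPrompt sketch from above:

```typescript
// Fill the template with the stats API example from earlier in the post.
const prompt = buildPrompt(
  {
    goal: "Make article detail loading faster",
    currentStructure: ["Edge API function", "small database + cache for stats"],
    symptom: "Slow first load, stats delayed, worse on mobile",
    constraints: [
      "No increase in fixed monthly cost",
      "Admin preview pages must not call stats APIs",
    ],
    evidence: ["curl -I on the admin preview URL returned HTTP/2 404"],
    completionCriteria: ["200 response", "no public draft exposure"],
  },
  "Short plan first, then a summary of changes and verification results",
);
console.log(prompt);
```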
Summary
Prompting is important, but in development work, context quality usually matters more.
Good context = goal + current structure + symptom + constraints + evidence + completion criteria
With this habit, AI output becomes more stable and easier to verify.
In the next post, I will look at tokens and context windows: how much information we can give the model, and how to decide what should fit inside that space.
Good context matters, but we also need to understand the size and limits of the space it has to fit into.