What I Still Check Manually When Using AI Coding Tools
A practical checklist for what developers should still verify manually when using AI coding tools: builds, tests, deployments, security, cost, policy, and user impact.
AI coding tools can make development much faster.
They can read files, edit code, update documentation, and run checks; work that once required slow manual effort can move quickly.
But there is one important trap:
If AI changed the code, that does not automatically mean the change is ready for production.
There are still areas that need human review. In fact, the faster the implementation moves, the clearer the checkpoints need to be.
This post is my practical checklist of what I still check manually when using AI coding tools.
Builds and Tests Need Real Verification
The first checkpoint is build and test output.
Even if the tool says the change is complete, the build can still fail. Type errors, missing imports, environment differences, and dependency issues can happen at any time.
At minimum, I want to check:
- Syntax checks
- Type checks
- Unit tests
- Static generation
- Local runtime behavior
- Actual UI rendering
For frontend work, looking at the screen matters. A small CSS change can create horizontal scrolling on mobile. A feature can work while a button label overflows.
In development, "it runs" and "it is ready for users" are different states.
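Part of the "actual UI rendering" check can even be made mechanical. The sketch below is my own illustration, not part of any testing framework: given rendered element boxes and a viewport width, it flags anything whose right edge extends past the viewport, which is the usual cause of unwanted horizontal scrolling on mobile.

```typescript
// Flags elements that extend past the viewport's right edge.
// All names and the 375px viewport are illustrative choices.
interface ElementBox {
  selector: string; // CSS selector, used for reporting only
  left: number;     // left edge in CSS pixels
  width: number;    // rendered width in CSS pixels
}

function findHorizontalOverflow(
  boxes: ElementBox[],
  viewportWidth: number
): string[] {
  return boxes
    .filter((b) => b.left + b.width > viewportWidth)
    .map((b) => b.selector);
}

// Example: a 375px-wide mobile viewport.
const offenders = findHorizontalOverflow(
  [
    { selector: ".hero", left: 0, width: 375 },
    { selector: ".wide-table", left: 16, width: 480 },
  ],
  375
);
// offenders → [".wide-table"]
```

In a real project the boxes would come from the browser (for example via `getBoundingClientRect` in an end-to-end test); the point is that "does it scroll sideways on a phone" is checkable, not just eyeball-able.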
Deployment Requires More Caution
Implementation and deployment are separate concerns.
Deployment affects real users. If an app, backend, Worker, and database are connected, even a small change can create an operational issue.
Before deployment, I want to ask:
- Will this change be visible to production users?
- Does it require an app update?
- Is there a database migration?
- Can it be rolled back?
- What health check should be run after deployment?
AI can help with deployment commands, but the final approval should remain human.
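One way to keep that approval honest is to turn the checklist into explicit fields that must be filled in before a deploy proceeds. This is a minimal sketch of the idea; the `DeployPlan` shape and field names are my own, not from any deployment tool:

```typescript
// A pre-deploy gate: every checklist answer becomes an explicit field.
// All names here are illustrative assumptions.
interface DeployPlan {
  userVisible: boolean;          // will production users see this change?
  requiresAppUpdate: boolean;    // must a client release ship alongside it?
  migrationId: string | null;    // database migration to run, or null
  rollbackPlan: string | null;   // how to undo the change, or null
  healthCheckUrl: string | null; // endpoint to probe after deploying
}

// Returns the unanswered checklist items; deploy only when this is empty.
function deployBlockers(plan: DeployPlan): string[] {
  const blockers: string[] = [];
  if (plan.rollbackPlan === null) blockers.push("no rollback plan");
  if (plan.healthCheckUrl === null) blockers.push("no post-deploy health check");
  return blockers;
}

const blockers = deployBlockers({
  userVisible: true,
  requiresAppUpdate: false,
  migrationId: "0007_add_likes_table",
  rollbackPlan: null,
  healthCheckUrl: "https://example.com/healthz",
});
// blockers → ["no rollback plan"]
```

Even with a gate like this, the final "yes" stays human; the gate only refuses to let an incomplete plan through.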
Security and Secrets Cannot Be Trusted Blindly
Security needs extra attention.
AI may add example tokens, temporary bypasses, admin access shortcuts, or debug logs because they are convenient during development. Those can be dangerous in production.
The checks are clear:
- Are secrets kept out of source code?
- Are admin APIs protected?
- Are local bypasses truly local?
- Do logs expose personal data or tokens?
- Do public docs reveal sensitive internal details?
Security should not be handled with "it seems fine."
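A crude automated scan can catch the most obvious leaks before human review even starts. The two patterns below are illustrative only; real scanners such as gitleaks or trufflehog ship far larger rulesets:

```typescript
// Very rough secret patterns, for illustration only.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["bearer token in code", /Bearer\s+[A-Za-z0-9\-_]{20,}/],
];

// Returns the names of any patterns that match the given source text.
function scanForSecrets(source: string): string[] {
  return SECRET_PATTERNS
    .filter(([, re]) => re.test(source))
    .map(([name]) => name);
}

const findings = scanForSecrets(
  'const auth = "Bearer abcdefghijklmnopqrstuvwxyz012345";'
);
// findings → ["bearer token in code"]
```

A scan like this is a floor, not a ceiling: it cannot judge whether an admin API is protected or whether a "local-only" bypass can be reached in production. Those still need a human reading the code.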
Cost Impact Needs Human Review
For personal projects, cost matters.
AI may implement a feature correctly, but that feature can still create Worker, database, storage, or external API usage.
Visitor counters, view counts, and likes look small, but they can become write-heavy if traffic grows.
Cost checks include:
- How many API calls does one visitor trigger?
- Do image requests go through a Worker?
- Is caching applied?
- Could the free tier be exceeded?
- Could refreshes or attacks create unexpected usage?
- Is there a kill switch?
Keeping fixed costs low means reviewing the cost path of each feature, not just its correctness.
Policies Should Be Checked Against Current Sources
App review, AdSense, privacy, payments, and platform policies change over time.
An AI tool may be working from outdated information. For anything policy-related, I verify against the official documentation.
This is especially important for:
- App Store review requirements
- Google Play testing requirements
- AdSense approval preparation
- Privacy policy and data collection disclosures
- Platform payment rules
Policy mistakes are expensive. Updating a paragraph is easy. Fixing review rejection or account issues is not.
Summary
AI coding tools are powerful, but they do not remove the need for final verification.
AI speeds up the work.
It does not take over operational responsibility.
So the goal is not to work slowly. The goal is to build quickly and verify calmly.