10 Days with Claude Code: 152 Commits and a Massively Improved Website
I have been massively productive over the last ten days, though not in the way I expected. On 2026-01-08, I started using Claude Code, Anthropic's CLI tool for AI-assisted development. What started as curiosity turned into 152 commits, 23 new features, 59 bug fixes, and a test suite that grew to 822 unit tests and 123 E2E tests.
This is not a review of Claude Code. This is documenting what we actually built together.
The Setup
Claude Code runs in your terminal and has direct access to your codebase. It can read files, write code, run commands, and execute tests. Unlike chat-based interfaces where you copy-paste code back and forth, Claude Code operates directly in your project.
The first thing I did was create a CLAUDE.md file in my repository root. This file contains project-specific instructions that Claude reads at the start of every session. Mine includes workflow rules (always create branches, always write tests, always run lint), build commands, architecture overview, and coding patterns. Think of it as onboarding documentation for your AI pair programmer.
It is important that these rules appear at the top of the file and use strong imperative language; otherwise, Claude Code will not reliably enforce or follow them.
Here is the condensed version:
## Workflow Rules (STRICTLY ENFORCED)
1. NEVER commit to main - create a new branch BEFORE writing ANY code
2. ALWAYS write unit tests for every code change
3. ALWAYS write E2E tests for user-facing changes
4. ZERO tolerance for lint/TypeScript errors
5. Run tests BEFORE every commit
6. Commit after EACH task, push when ALL tasks complete
7. CREATE A PR when done

With these guardrails in place, Claude Code follows a consistent workflow: create a branch, implement a feature, write tests, run lint, commit, push, create a PR. Every single time.
MCP and the React Best Practices Skill
Out of the box, Claude Code is a capable general-purpose coding assistant. But for a Next.js project deployed on Vercel, I wanted it to have deeper, more specific knowledge. This is where the Model Context Protocol (MCP) becomes essential.
MCP is an open standard that lets AI assistants connect to external tools and data sources. Instead of relying solely on training data that may be outdated, Claude Code can query live documentation and much more.
I configured two MCP servers that proved crucial for this project. Vercel MCP connects Claude Code directly to the Vercel platform. When I asked Claude Code to investigate and improve OG image generation, it could query the official documentation rather than hallucinating answers. Next.js 16+ includes built-in MCP support through the next-devtools-mcp package, giving Claude Code access to up-to-date information about Next.js internals and best practices.
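For reference, MCP servers are typically registered in a .mcp.json file at the project root (or via the claude mcp add command). The sketch below is an assumption based on the tools mentioned above, not my exact configuration; field names vary between Claude Code versions, so check the current MCP documentation:

```json
{
  "mcpServers": {
    "vercel": {
      "type": "http",
      "url": "https://mcp.vercel.com"
    },
    "next-devtools": {
      "command": "npx",
      "args": ["-y", "next-devtools-mcp@latest"]
    }
  }
}
```

Remote servers are declared with a URL, while local ones are spawned as a command over stdio.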
On top of that, Vercel published a structured repository containing a decade of React and Next.js optimization knowledge, packaged as an agent skill. This skill shaped much of the performance work in this project. When Claude suggested replacing Framer Motion with CSS transitions, or lazy-loading the analytics function, or applying React.memo to expensive components, those recommendations came from established patterns rather than generic advice.
The combination of these tools meant Claude Code was not just a general-purpose assistant. It had authoritative knowledge of Vercel's platform, access to Next.js documentation, and a structured framework for React performance optimization. For a project built on this exact stack, that domain-specific knowledge made a significant difference.
What We Built
Analytics Dashboard
The biggest feature was a protected analytics dashboard. I wanted visibility into page views without relying solely on Vercel Analytics and my midnight cron job, which pulled view counts from the database and sent them to me via Telegram.
Claude built the entire thing: authentication with token-based access, a summary view with total views and trends, a 30-day chart, separate sections for blog posts, talks, and other pages, and a refresh mechanism. Along the way, we discovered and fixed several calculation bugs in how daily views were derived from cumulative snapshots.
The dashboard required about 15 commits to get right. Each iteration fixed edge cases: first-day spikes when no baseline data existed, chart bars not rendering due to CSS height issues, talks not matching correctly due to date format inconsistencies. Claude would implement a fix, I would test it, find another issue, and we would iterate.
The biggest learning here is that while Claude Code understands your code and how it talks to the database, it does not have a full picture of the database itself. The breakthrough came when I asked Claude Code to generate SQL queries that I could run against the database myself. Feeding the results back gave it a comprehensive understanding of my tables and data structures.
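Such queries are mostly schema introspection. A generic Postgres sketch, not the exact queries from the session, would dump every table and column so the assistant can learn the data model without direct database access:

```sql
-- List every table and column in the public schema so the
-- assistant can learn the data model without a DB connection.
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```

Pasting the result back into the conversation gives the assistant the same mental model of the schema that you have.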
Telegram Bot
I already had a Telegram webhook for daily page view notifications. Claude extended it into a full command system with /ping, /help, /stats, /pageviews, /recent, /projects, /talks, /tools, and /deploy. The implementation included registering the command menu via the Bot API's setMyCommands method, so commands appear as autocomplete suggestions when you type / in the chat.
Claude Code provided and executed all the API calls to Telegram to extend my existing Telegram Bot. I did not have to dig into Telegram's documentation at all.
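Registering a command menu boils down to a single call to the Bot API's setMyCommands method. A sketch, with the token passed in from the environment and only a few of the commands shown (descriptions are illustrative, not the actual bot's):

```typescript
// Commands to expose in Telegram's "/" autocomplete menu.
const commands = [
  { command: "ping", description: "Check that the bot is alive" },
  { command: "stats", description: "Show page view statistics" },
  { command: "deploy", description: "Trigger a deployment" },
];

// Register the menu via the Bot API. Telegram wraps every
// response in { ok: boolean, result: ... }.
async function registerCommands(token: string): Promise<boolean> {
  const res = await fetch(
    `https://api.telegram.org/bot${token}/setMyCommands`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ commands }),
    },
  );
  const json = (await res.json()) as { ok: boolean };
  return json.ok;
}
```

Telegram requires command names to be 1-32 characters of lowercase letters, digits, and underscores; the menu updates for all users as soon as the call succeeds.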
Static OG Images
Previously, I had a dynamic /api/og route that generated Open Graph images on demand. This worked but added latency.
Claude migrated the entire system to Next.js file-based OG images. Every blog post, every talk, every static page, and every tool now has a dedicated opengraph-image.tsx file that generates a static image at build time. The QR code generator even has a dynamic OG image that updates based on the URL parameter.
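The file-based convention looks roughly like this: an opengraph-image.tsx next to the page, exporting an ImageResponse. This sketch follows the Next.js documentation for the convention; the styling and title are illustrative:

```tsx
import { ImageResponse } from "next/og";

export const alt = "Blog post title";
export const size = { width: 1200, height: 630 };
export const contentType = "image/png";

// Generated at build time for static routes, so serving the
// image adds no runtime latency.
export default function Image() {
  return new ImageResponse(
    (
      <div
        style={{
          width: "100%",
          height: "100%",
          display: "flex",
          alignItems: "center",
          justifyContent: "center",
          fontSize: 64,
          background: "#0a0a0a",
          color: "#fafafa",
        }}
      >
        Blog post title goes here
      </div>
    ),
    size,
  );
}
```

For dynamic cases like the QR code generator, the same default export can accept route params and read search parameters to vary the rendered image.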
This migration touched dozens of files and required careful handling of the metadata system to avoid conflicts between manually specified images and auto-generated ones.
Analytics Tracking
Beyond the dashboard, Claude implemented analytics tracking throughout the site. Every CTA button, explore card, blog post click, talk click, project link, table of contents navigation, code copy action, and QR generator interaction now fires tracking events.
The implementation prioritizes performance. The tracking function is lazily loaded to avoid increasing the initial bundle size. Components are split between server and client to minimize JavaScript shipped to the browser.
Accessibility
Claude audited the site against WCAG guidelines and implemented fixes: skip links that become visible on focus, proper ARIA labels on all landmarks, correct heading hierarchy across all pages, alt text verification for images, and keyboard navigation improvements.
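A skip link is a good example of the pattern: visually hidden until it receives keyboard focus, then shown. A sketch using Tailwind-style utility classes (the actual class names and target id on the site are assumptions):

```tsx
// Rendered as the first focusable element in the layout so keyboard
// users can jump straight past the navigation to the main content.
export function SkipLink() {
  return (
    <a
      href="#main-content"
      className="sr-only focus:not-sr-only focus:absolute focus:top-2 focus:left-2"
    >
      Skip to main content
    </a>
  );
}
```

The target element then needs a matching id="main-content" (and ideally tabindex="-1") so focus actually lands there.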
SEO
Multiple structured data schemas were added: BreadcrumbList for improved SERP appearance, HowTo schema for tutorial blog posts (4 posts received step-by-step markup), and Event schema improvements for talks. An llms.txt file was added to help AI systems understand and index the site. The robots.txt was updated with rules for AI crawlers while ensuring Twitterbot can still access OG images.
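BreadcrumbList markup is plain JSON-LD embedded in the page head. A minimal sketch for a blog post, with placeholder URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com" },
    { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://example.com/blog" },
    { "@type": "ListItem", "position": 3, "name": "Post title" }
  ]
}
```

The final item can omit the item URL because it refers to the current page.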
Performance
Performance work happened across multiple fronts. On bundle size, Framer Motion was replaced with CSS transitions, the Shiki/MDX bundle was optimized with tree-shaking, the analytics tracking function was lazy-loaded, JSZip was dynamically imported only when needed, and legacy polyfills were eliminated by targeting modern browsers.
On the React side, React.memo was applied to expensive components, BlueskyPost was refactored to use SWR for data fetching, polling was replaced with event-based approaches where possible, and unnecessary re-subscriptions in the useToast hook were fixed.
For Core Web Vitals, preconnect hints were added for Twitter, Vercel, and Bluesky domains, aspect-ratio was set on Bluesky images to prevent CLS, and priority loading was added to the first images in blog posts.
On the database, a PostgreSQL RPC function was created for incrementPageViews to reduce round trips, and getDailyPageViews calls were parallelized in the Telegram summary command.
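The round-trip saving comes from doing the read-modify-write in a single statement inside the database instead of a select followed by an update from the app. A generic sketch of such a function (the function name, table, and columns are assumptions):

```sql
-- Atomically upsert-and-increment a page's view counter in one
-- round trip, returning the new total.
CREATE OR REPLACE FUNCTION increment_page_views(page_path text)
RETURNS bigint
LANGUAGE sql
AS $$
  INSERT INTO page_views (path, views)
  VALUES (page_path, 1)
  ON CONFLICT (path)
  DO UPDATE SET views = page_views.views + 1
  RETURNING views;
$$;
```

The app then makes one RPC call per page view instead of two queries, and the ON CONFLICT clause keeps concurrent increments safe.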
Testing
The test suite grew substantially. 822 unit tests now cover utilities, components, API routes, and edge cases. 123 E2E tests run across Chromium, WebKit, and mobile viewports. The TimeAgo component alone has 78 tests.
Claude writes tests as part of every feature implementation. When fixing bugs, it adds regression tests. When I mention a concern about edge cases, it adds tests for those too. The E2E tests run in parallel across browser projects in CI.
Security and CI/CD
Two security-focused changes happened. A comprehensive Content Security Policy was added with proper directives for scripts, styles, images, fonts, connections, and frames. Twitter/X embed support required specific additions. Claude also upgraded hono to fix a JWT algorithm confusion vulnerability that was reported by Dependabot.
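In Next.js, a CSP like this is typically attached via the headers() option in the config file. A trimmed sketch of the idea; the real policy has more directives, and the hosts shown here are assumptions based on the embeds mentioned above:

```typescript
// next.config.ts (sketch): attach a Content-Security-Policy header
// to every route. Hosts are examples, not the site's full list.
const csp = [
  "default-src 'self'",
  "script-src 'self' 'unsafe-inline' https://platform.twitter.com",
  "img-src 'self' data: https:",
  "frame-src https://platform.twitter.com",
].join("; ");

const nextConfig = {
  async headers() {
    return [
      {
        source: "/(.*)",
        headers: [{ key: "Content-Security-Policy", value: csp }],
      },
    ];
  },
};
```

Embeds are usually where a strict policy first breaks, which is why the Twitter/X hosts need explicit script-src and frame-src entries.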
The GitHub Actions workflows were optimized with Playwright browser caching to speed up E2E tests, an ESLint job added to the PR workflow, blog content validation in PRs, consolidated RSS generation, and matrix strategy for parallel E2E test execution.
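Playwright browser caching in GitHub Actions usually means caching ~/.cache/ms-playwright keyed on the installed Playwright version. A sketch of the relevant workflow steps, not the repository's actual workflow:

```yaml
- name: Get Playwright version
  id: pw
  run: echo "version=$(npm ls @playwright/test --json | jq -r '.dependencies["@playwright/test"].version')" >> "$GITHUB_OUTPUT"

- name: Cache Playwright browsers
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ runner.os }}-${{ steps.pw.outputs.version }}

- name: Install browsers
  run: npx playwright install --with-deps
```

On a cache hit the install step finds the browsers already present and skips the downloads, which is where most of the time goes.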
The Mindset
Using Claude Code changes how you work. You become the architect. The orchestrator. You think macro and let Claude handle the micro.
This requires having the bigger picture in mind. You describe what needs to happen, then leave the details to Claude. It is like working with an expert. You stop worrying about every semicolon or exactly which utility function gets called. Results matter, not the specifics of how things get done.
However, this does not mean you can lean back. It is the opposite. You need broad knowledge across many areas to guide the system effectively.
The better you understand your stack, your tools, and the available context sources, the better the results. Knowing when to augment Claude with MCP-backed documentation, platform-specific knowledge, or established React patterns makes a noticeable difference.
This is not about replacing expertise. It rewards it. AI amplifies whatever level of understanding you bring to the table.
The shift took some adjustment. My instinct was to specify everything. Over time, I learned to describe outcomes instead. Claude often found better implementations than what I would have written myself.
What Worked
Iterative bug fixing worked really well. Complex features like the analytics dashboard required multiple rounds of fixes. Claude maintains context across the conversation, remembers what was already tried, and builds on previous attempts.
Having tests as a requirement in CLAUDE.md means every feature ships with coverage. When bugs are found, regression tests are added automatically.
Claude follows existing patterns in the codebase. If I use a certain naming convention or component structure, it matches that style. Every change goes through the same flow: branch, implement, test, commit, push, PR. This created a clean git history with atomic commits.
What Needed Guidance
Claude proposes solutions but needs direction on which approach to take. For the OG images migration, I had to specify that I wanted static file-based images rather than continuing with the dynamic API route.
When multiple issues exist, Claude will fix them all unless told to focus. Sometimes I wanted a quick fix for one thing, not a comprehensive refactor.
Tests that depend on environment variables (like Telegram bot tokens) needed guidance on how to handle missing configuration gracefully.
The Numbers
| Metric | Count |
|---|---|
| Total Commits | 152 |
| New Features | 23 |
| Bug Fixes | 59 |
| Performance Optimizations | 21 |
| Unit Tests | 822 |
| E2E Tests | 123 |
| Blog Posts Written | 2 |
| Days | 10 |
Conclusion
Claude Code is not magic. It is a tool that requires clear instructions, good project structure, and active collaboration. The CLAUDE.md file is crucial. Without explicit workflow rules, the output would be inconsistent.
What makes it effective is the combination of direct codebase access, conversation context, and the ability to run commands. Copy-pasting code into a chat window cannot replicate this. The feedback loop of "implement, test, see error, fix" happens in seconds rather than minutes.
Ten days and 152 commits later, my website has better analytics, better SEO, better performance, better accessibility, better test coverage, and better documentation. That is not a bad outcome for a side project.