Three months after launch, your client calls. A competitor site that looks like it was built during the Obama administration is ranking above them for every keyword that matters. The competitor has a clip art logo. Their font is Comic Sans-adjacent. Their mobile layout breaks on anything narrower than a 2012 MacBook.
And yet there they are, sitting on page one, while your carefully crafted project is buried somewhere on page three.
This situation is more common than most developers want to admit. The reason is almost always the same: technical decisions made during the build that nobody flagged at the time. Not a design problem. Not a content problem. A set of quiet, unglamorous issues that search engines care about and most project checklists ignore.
JavaScript Frameworks Don’t Play Nice With Google by Default
React, Vue, Angular, and similar frameworks produce fast, polished interfaces. They also tend to produce sites where the actual content lives inside JavaScript components that load after the initial HTML response.
Google crawls pages in two passes. The first pass reads whatever the server sends back immediately. If your page content is locked inside components that render client-side, that first pass returns a mostly empty document. Google then queues the page for a second full render, but for newer or lower-authority sites, that queue is backed up. Pages can sit waiting for weeks.
This is one of the core reasons older sites outrank newer ones so consistently. That 2018 WordPress site serves a complete HTML document on the first request. Every paragraph, every heading, every internal link is visible to the crawler immediately. No queue. No waiting. Indexed the same day.
The practical check: run curl with a Googlebot user-agent against your URL and compare what comes back to what you see in the browser. The difference is often much larger than expected. If you’re building with a JS framework, server-side rendering or static generation needs to be properly implemented, not just enabled in theory.
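If you'd rather script that comparison than eyeball curl output, here's a rough sketch in TypeScript using Node's built-in fetch (Node 18 or later). The URL and marker phrase are placeholders: swap in your own page and a sentence you expect to see in the raw server response.

```ts
// check-initial-html.ts
// Sketch: fetch the raw server response the way a crawler's first pass
// would, then check whether key content exists before any client-side
// rendering runs. URL and marker phrase are placeholders.
const url = "https://example.com/services"; // hypothetical page
const marker = "Emergency plumbing in Austin"; // phrase you expect in the HTML

async function checkInitialHtml(): Promise<void> {
  const res = await fetch(url, {
    headers: {
      // Googlebot's desktop user-agent string
      "User-Agent":
        "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    },
  });
  const html = await res.text();

  console.log(`Status: ${res.status}`);
  console.log(`Raw HTML size: ${html.length} bytes`);
  console.log(
    html.includes(marker)
      ? "Marker phrase found in the initial response."
      : "Marker phrase NOT in the initial response: that content only appears after client-side rendering."
  );
}

checkInitialHtml().catch(console.error);
```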
Hosting Is Treated as a Budget Decision When It’s Actually an SEO Decision
Most projects land on shared hosting because it’s cheap and the client doesn’t ask questions. The problem is that shared hosting puts your site on a server alongside hundreds of others. When Google’s crawler visits, the server is busy. Response times of two or three seconds are routine.
Google has a timeout threshold for crawls. Slow servers mean fewer pages get crawled per visit. New content takes longer to appear in the index. In competitive local markets, this shows up as a measurable disadvantage that compounds over time.
A VPS with dedicated resources typically brings response times under 200ms. SEO specialists running local SEO audits consistently see server response time flagged in crawl reports for sites that are underperforming relative to their content. Switching rarely costs more than a few extra dollars a month, and the improvement shows up in crawl frequency within weeks.
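If you want a number rather than a guess, a quick sketch like the one below gives you a before-and-after measurement when you switch hosts. It uses Node's built-in fetch and treats the time until response headers arrive as a rough stand-in for time to first byte; the URL is a placeholder.

```ts
// response-time-check.ts
// Sketch: time how long the server takes to start responding, averaged
// over a few requests. fetch() resolving roughly marks the arrival of
// response headers, close enough to TTFB for a sanity check.
const target = "https://example.com/"; // hypothetical site
const runs = 5;

async function measure(): Promise<void> {
  const timings: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    const res = await fetch(target, { headers: { "Cache-Control": "no-cache" } });
    timings.push(performance.now() - start);
    await res.arrayBuffer(); // drain the body so the connection can be reused
  }
  const avg = timings.reduce((a, b) => a + b, 0) / timings.length;
  console.log(`Timings (ms): ${timings.map((t) => t.toFixed(0)).join(", ")}`);
  console.log(`Average time to response headers: ${avg.toFixed(0)} ms`);
}

measure().catch(console.error);
```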
Crawl Budget Gets Burned on Pages That Shouldn’t Exist
Google doesn’t crawl every page of every site on every visit. Each site gets a crawl budget, a rough limit on how many pages the crawler will process per visit, based on server speed and the site’s overall authority. Waste that budget on garbage URLs and your important pages get skipped.
Several patterns generate junk URLs without anyone realizing. Session IDs appended to URLs create hundreds of duplicate pages that Google treats as separate content. Faceted navigation on e-commerce sites produces thousands of filter combinations that serve nearly identical products. Paginated archives without canonical tags spread link equity thin across dozens of shallow pages.
These aren’t bugs. Users never notice them. But they drain crawl budget quietly for months until someone actually runs a proper audit and looks at the coverage report.
Canonical tags, a well-configured robots.txt, and noindex on deep pagination fix most of it. An hour of work that rarely makes it onto a project scope because nobody asks for it.
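Here's roughly what that hour can look like in an Express app, assuming archives paginate with a ?page= query parameter and the junk variants differ only in their query strings; the domain is a placeholder.

```ts
// pagination-robots.ts
// Sketch: keep deep pagination and query-string duplicates out of the
// index. Assumes a ?page= parameter for archives; adjust to match your
// own URL structure.
import express from "express";

const app = express();

app.use((req, res, next) => {
  const page = Number(req.query.page ?? 1);

  if (page > 2) {
    // Deep pagination: crawlers may follow the links but shouldn't index the page.
    res.set("X-Robots-Tag", "noindex, follow");
  } else {
    // Session-ID and tracking-parameter variants: point every version at
    // one clean canonical URL (query string stripped).
    res.set("Link", `<https://example.com${req.path}>; rel="canonical"`);
  }

  next();
});

app.get("/blog", (_req, res) => {
  res.send("Archive page");
});

app.listen(3000);
```

If your platform appends a known junk parameter, a robots.txt wildcard rule (for example, Disallow: /*?sessionid=) stops those URLs from being crawled at all; the parameter name there is just an example.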
Your Lighthouse Score Is Not Your Core Web Vitals Score
Lighthouse in a local dev environment produces optimistic scores. No network latency, no third-party scripts competing for bandwidth, no real users on mobile with inconsistent connections. The number looks great and means very little.
Google’s Core Web Vitals measurements come from field data, real users loading real pages under real conditions. LCP balloons when the hero image ships at full resolution with no proper sizing attributes. CLS spikes when an ad or chat widget loads late and shoves the layout around. INP fails when a heavy JavaScript bundle blocks the main thread while a user tries to interact with something.
Google Search Console shows field data under the Core Web Vitals report. That’s the number that affects rankings. Not the Lighthouse score on your laptop.
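If you want the same field signal sooner and per page, the open-source web-vitals package reports LCP, CLS, and INP from real visitors. Here's a sketch that ships each metric to a hypothetical /vitals endpoint on your own backend; the endpoint and what you do with the data are up to you.

```ts
// report-web-vitals.ts
// Sketch: collect field data from real visitors and send it to your own
// endpoint. The /vitals path is a placeholder. Search Console's report is
// still the number that matters; this just surfaces it sooner.
import { onCLS, onINP, onLCP, type Metric } from "web-vitals";

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // "CLS" | "INP" | "LCP"
    value: metric.value,   // ms for LCP/INP, unitless for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    page: location.pathname,
  });
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (!navigator.sendBeacon("/vitals", body)) {
    fetch("/vitals", { method: "POST", body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```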
Schema Markup Is Still Being Skipped on Custom Builds
Structured data tells Google what your content actually is. An article, a product, a local business, a FAQ. Without it, Google makes its best guess. With it, your listing gets rich snippets: star ratings, pricing, availability, FAQs directly in the search result.
Rich snippets improve click-through rate. Higher click-through rate signals engagement quality. Engagement quality influences rankings over time. The competitor’s WordPress site probably has an SEO plugin generating basic schema automatically. The custom build almost certainly has none unless someone specifically added it.
JSON-LD schema takes less than an hour to implement and stays useful for the entire life of the site. It doesn’t require touching existing code, just a script block in the head.
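As a concrete sketch, here's what a LocalBusiness block can look like when injected from TypeScript. Every business detail below is a placeholder, and on a server-rendered or static build you'd emit the same script tag directly in the HTML so it's present in the very first response.

```ts
// inject-schema.ts
// Sketch: build a LocalBusiness JSON-LD object and append it to <head>.
// All business details are placeholders.
const localBusinessSchema = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: "Example Plumbing Co.", // placeholder business
  url: "https://example.com",
  telephone: "+1-555-0100",
  address: {
    "@type": "PostalAddress",
    streetAddress: "123 Main St",
    addressLocality: "Austin",
    addressRegion: "TX",
    postalCode: "78701",
  },
  openingHours: "Mo-Fr 08:00-18:00",
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(localBusinessSchema);
document.head.appendChild(script);
```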
The Pattern Behind All of This
Every issue described above has the same root cause. The build was evaluated on how it looked and how it performed for a user sitting in front of it. Nobody evaluated it from the perspective of a crawler trying to index it on a tight schedule.
Design quality and technical SEO are completely separate grading systems. A site can score well on one and fail the other. The competitor with the ugly layout from 2018 passed the technical exam. Your modern build may not have.
Running a crawl audit before launch, not after, is the habit that closes this gap. Tools like Screaming Frog or Google Search Console’s coverage report show every one of these issues clearly. Fixing them at launch costs an afternoon. Fixing them after six months of lost rankings costs significantly more.