Q1 2026 · Search Traffic Investigation

Why search traffic dropped and what we owe the service-seekers who couldn't find us.

Between late January and mid-March 2026, Google search traffic to sfserviceguide.org collapsed by roughly 84%. April recovered most of the loss, but the underlying cause is still in place. This report documents what we found, why it happened, and the work needed so the next algorithm shift doesn't take us offline again.

Published April 30, 2026 · Audience: ShelterTech board & askdarcel-web maintainers · Sources: Google Analytics 4, Google Search Console

What the data shows

Weekly Google search clicks to sfserviceguide.org. The shaded band marks the period of suppressed traffic. Source: Google Search Console, 1 January – 29 April 2026.

  • Peak weekly clicks: ~3,100 (mid-January, before the drop)
  • Trough weekly clicks: ~290 (mid-March, ~91% lower than peak)
  • Recovery weekly clicks: ~3,300 (late April, fully recovered)
  • Pages crawled but not indexed: 5,322 (more than 60% of known URLs)

The drop wasn't evenly distributed. Brand searches (people typing "sf service guide") barely moved: 48 clicks in February, 57 in April. The homepage was almost untouched. What collapsed was long-tail service-seeker traffic: people searching for an actual service or category and landing on a service detail page like /services/3578.

Every one of the top affected pages tells the same story: roughly zero impressions in February, then thousands in April. The pages didn't change. Google's willingness to rank them did.

Page              Feb 1–28    Apr 1–30    Δ
/services/3578    1           33,698      +33,697
/services/3131    0           11,857      +11,857
/services/2664    5           6,031       +6,026
/services/3422    0           4,086       +4,086
/services/2562    0           3,641       +3,641
/services/3560    0           3,038       +3,038
/services/236     0           2,816       +2,816
/ (homepage)      2,741       5,367       +2,626

Search Console impressions, comparing the trough month (February) against the recovery month (April). Service detail pages went from effectively invisible to ranking; the homepage was barely affected.

What didn’t cause it

Before diagnosing the cause, we ruled out the usual suspects.

  • A Google penalty: Search Console → Manual Actions shows "No issues detected"; Security Issues is clean.
  • A tracking or analytics regression: direct, referral, Bing, Yahoo, and DuckDuckGo traffic stayed flat across the period; only Google organic moved. A tracking break would have affected all sources.
  • A demand collapse: brand searches and homepage visits were essentially unchanged. People who knew about us still came; people discovering us through Google didn't.
  • A site outage or 5xx wave: Search Console reports two pages returning server errors out of 8,750, well within normal noise. Page fetches succeed when re-tested.

What did cause it

sfserviceguide.org has three structural gaps that, in combination, leave Google guessing about which of our pages are worth indexing. When Google's algorithm updated in late January, it guessed against us.

i. No sitemap is published

For developers

Requesting /sitemap.xml returns a 200 with Content-Type: text/html; the React SPA's catch-all route serves the homepage shell. The Submitted Sitemaps table in Search Console is empty.

In plain terms

A sitemap is the directory we hand Google so it knows which pages exist and matter. We've never given Google one. It's discovering our 8,750 service pages only by following stray links.

ii. No robots.txt exists

For developers

/robots.txt returns the SPA's HTML shell, not a robots file. There is no Sitemap: directive, no crawl guidance, no canonical host signal.

In plain terms

This is the standard file every reputable website publishes to guide search engines. Its absence is itself a quality signal — one Google reads as “this site is unmaintained.”

iii. Service pages render only in the browser

For developers

Every URL on sfserviceguide.org returns the same shell HTML: one <title>SF Service Guide</title>, an empty <div id="root">, and a JavaScript bundle. There is no per-page title, description, canonical link, or Open Graph metadata in the initial response. Google must execute the bundle in its rate-limited rendering pipeline before it sees real content.

In plain terms

When Google's crawler arrives, every page on our site looks identical at first glance. It has to do extra work — and decide we're worth the effort — to discover what each page actually is. That work happens on a separate, slower, less-reliable schedule than normal crawling. When Google tightens the rules on which sites get that effort, we're on the wrong side of the line.

The remediation plan

Ordered by impact

Three critical fixes, two high-value follow-ups, and ongoing operational changes. The first three close the structural gaps identified above; the rest harden the recovery.

01. Generate and submit a sitemap.xml (Critical)

Add a backend endpoint that emits an XML sitemap listing every service, organization, and category page, refreshed daily. Submit the sitemap URL inside Google Search Console.
Why: This single change tells Google what exists on our site and which URLs are canonical. Expected lift: a substantial reduction in the 5,322 unindexed pages within 4–8 weeks.
Owner: askdarcel-web backend · Effort: ~1 day
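
As a concrete starting point, a minimal sketch of the endpoint, assuming an Express-style Node backend; fetchIndexablePaths() is a hypothetical helper that would return the path of every service, organization, and category page:

```ts
import express from "express";

const app = express();
const ORIGIN = "https://sfserviceguide.org";

// Hypothetical helper: returns every indexable path, e.g. ["/services/3578", ...].
declare function fetchIndexablePaths(): Promise<string[]>;

// Register this route BEFORE the SPA catch-all so it is not intercepted.
app.get("/sitemap.xml", async (_req, res) => {
  const paths = await fetchIndexablePaths();
  const urls = paths
    .map((p) => `  <url><loc>${ORIGIN}${p}</loc></url>`)
    .join("\n");
  res.type("application/xml").send(
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
      `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
      `${urls}\n</urlset>`
  );
});
```

At 8,750 known URLs we sit comfortably under the sitemap protocol's 50,000-URLs-per-file limit, so a single file is enough for now.
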
02. Publish a real /robots.txt (Critical)

Serve a static robots.txt at the root with User-agent: *, Allow: /, and a Sitemap: directive pointing to the new sitemap. Make sure the SPA catch-all does not intercept this path.
Why: Tiny change, large signal. Its absence is currently being read as a sign of an unmaintained site.
Owner: askdarcel-web backend · Effort: ~1 hour
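
The whole fix fits in one route, again sketched Express-style; the key is registering it ahead of the catch-all:

```ts
import express from "express";

const app = express();

// Serve robots.txt as plain text before the SPA catch-all can intercept it.
app.get("/robots.txt", (_req, res) => {
  res
    .type("text/plain")
    .send(
      "User-agent: *\nAllow: /\nSitemap: https://sfserviceguide.org/sitemap.xml\n"
    );
});
```
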
03. Server-render or prerender service detail pages (Critical)

Each service page should ship meaningful HTML on first response: a unique <title>, a unique <meta name="description">, a <link rel="canonical">, and the service's name, address, and description in the body. Options ranked by effort: a build-time prerender of the top N pages, a render-on-demand service in front of the SPA, or a phased migration to a server-rendered framework.
Why: This is the durable fix. Until the HTML response carries real content, every algorithm change is a coin toss for our visibility.
Owner: askdarcel-web frontend & backend · Effort: ~1–3 weeks
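
Whichever rendering option we pick, the target output is the same. A sketch of the per-page head as a React component (react-helmet-async shown for concreteness; the Service shape is hypothetical). These tags only count once a prerender or SSR step puts them into the initial HTML response:

```tsx
import React from "react";
import { Helmet } from "react-helmet-async";

// Hypothetical shape; the real service model lives in askdarcel-web.
interface Service {
  id: number;
  name: string;
  description: string;
}

export function ServiceHead({ service }: { service: Service }) {
  const url = `https://sfserviceguide.org/services/${service.id}`;
  return (
    <Helmet>
      <title>{`${service.name} | SF Service Guide`}</title>
      <meta name="description" content={service.description.slice(0, 160)} />
      <link rel="canonical" href={url} />
      <meta property="og:title" content={service.name} />
      <meta property="og:url" content={url} />
    </Helmet>
  );
}
```
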
04. Add structured data (JSON-LD) to service pages (High value)

Embed schema.org/LocalBusiness or schema.org/Service metadata for each service: name, address, phone, hours, service area.
Why: Unlocks rich results (knowledge panels, "near me" inclusion) and gives Google authoritative facts to anchor each page's identity.
Owner: askdarcel-web frontend · Effort: ~2–3 days, after #03
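
A sketch of the JSON-LD payload as a React component, using schema.org/LocalBusiness field names; the ServiceFacts shape is hypothetical, and the real mapping should come from our data model:

```tsx
import React from "react";

// Hypothetical shape; fields map onto schema.org/LocalBusiness properties.
interface ServiceFacts {
  id: number;
  name: string;
  description: string;
  phone?: string;
  streetAddress?: string;
}

export function ServiceJsonLd({ service }: { service: ServiceFacts }) {
  const data = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    name: service.name,
    description: service.description,
    url: `https://sfserviceguide.org/services/${service.id}`,
    ...(service.phone ? { telephone: service.phone } : {}),
    ...(service.streetAddress
      ? {
          address: {
            "@type": "PostalAddress",
            streetAddress: service.streetAddress,
            addressLocality: "San Francisco",
            addressRegion: "CA",
          },
        }
      : {}),
  };
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(data) }}
    />
  );
}
```
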
05. Strengthen internal linking between services (High value)

Surface related services on each detail page. Ensure every service is linked from at least one category or results page that itself appears in the sitemap.
Why: Search Console reported "Referring page: None detected" for our top-performing service URL. Internal links are how Google measures which pages we ourselves consider important.
Owner: askdarcel-web frontend · Effort: ~1 week
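
A sketch of the related-services block; the crawl-relevant detail is that these render as real anchor tags with hrefs in the HTML, not JavaScript click handlers. getRelatedServices() is hypothetical:

```tsx
import React from "react";

// Hypothetical helper: however "related" is defined (same category, same
// neighborhood), it should return services that also appear in the sitemap.
declare function getRelatedServices(
  serviceId: number
): { id: number; name: string }[];

export function RelatedServices({ serviceId }: { serviceId: number }) {
  return (
    <ul>
      {getRelatedServices(serviceId).map((s) => (
        <li key={s.id}>
          <a href={`/services/${s.id}`}>{s.name}</a>
        </li>
      ))}
    </ul>
  );
}
```
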
06. Standing weekly review of this dashboard (Operational)

Watch the bounce-rate, weekly users, and source-mix tiles. A divergence between brand and long-tail traffic — like the one in late January — is the early signal of an indexing problem.
Why: The Q1 drop went unnoticed for weeks because nobody was watching. A standing 15-minute weekly review would have caught it within 7 days.
Owner: ShelterTech program lead · Effort: 15 min/week
07. Connect this dashboard to Search Console (Operational)

Wire the existing service account into the Search Console API and add a new section that surfaces top losing queries and pages, side-by-side with the GA panels.
Why: GA tells us what happened on our site. Search Console tells us what happened in front of it. We need both visible in the same place to diagnose future drops in minutes, not weeks.
Owner: dashboard maintainer · Effort: ~1 day
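
A sketch of the wiring, using the googleapis Node client with the existing service account via application-default credentials; the property identifier and row limit are illustrative:

```ts
import { google } from "googleapis";

// Pull the top queries for a date range from the Search Console API.
async function topQueries(startDate: string, endDate: string) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
  });
  const searchconsole = google.searchconsole({ version: "v1", auth });
  const res = await searchconsole.searchanalytics.query({
    siteUrl: "sc-domain:sfserviceguide.org", // adjust to our property type
    requestBody: { startDate, endDate, dimensions: ["query"], rowLimit: 25 },
  });
  return res.data.rows ?? [];
}
```
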

How we keep this from happening again

As an engineering practice

Treat sitemap, robots.txt, and per-page metadata as non-negotiable parts of the service-page contract. Add a lightweight CI check that fails if /sitemap.xml or /robots.txt ever stop returning their expected content types.
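
A sketch of that check as a Node 18+ script (global fetch); the expected content types match the fixes above:

```ts
// Fails CI if either endpoint stops returning its expected content type.
const checks: Array<[url: string, expectedType: string]> = [
  ["https://sfserviceguide.org/sitemap.xml", "application/xml"],
  ["https://sfserviceguide.org/robots.txt", "text/plain"],
];

async function main(): Promise<void> {
  let failed = false;
  for (const [url, expected] of checks) {
    const res = await fetch(url);
    const type = res.headers.get("content-type") ?? "";
    if (!res.ok || !type.includes(expected)) {
      console.error(`FAIL ${url}: status ${res.status}, content-type "${type}"`);
      failed = true;
    }
  }
  process.exit(failed ? 1 : 0);
}

main();
```
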

As a program practice

Hold a 15-minute traffic review every Monday using this dashboard. If the gap between brand traffic and long-tail traffic widens by more than ~30% week-over-week, treat it as a P1 signal: something between us and the people we serve has broken.
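
To make the threshold concrete, a minimal sketch of the rule; how the WeeklyClicks counts are split between brand and long-tail queries is ours to define from Search Console data:

```ts
interface WeeklyClicks {
  brand: number; // clicks on brand queries, e.g. "sf service guide"
  longTail: number; // clicks on everything else (service-seeker queries)
}

// True when long-tail traffic diverges from brand traffic by more than
// 30 points week-over-week: the signature of an indexing problem rather
// than a demand drop.
function isP1Signal(lastWeek: WeeklyClicks, thisWeek: WeeklyClicks): boolean {
  const longTailChange =
    (thisWeek.longTail - lastWeek.longTail) / lastWeek.longTail;
  const brandChange = (thisWeek.brand - lastWeek.brand) / lastWeek.brand;
  return brandChange - longTailChange > 0.3;
}
```
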

As a board practice

Include a one-line traffic note in the monthly board update — weekly users, bounce rate, and a flag for any week that looked unusual. The Q1 drop was visible in the data on day one; we just weren't looking at it together.

What the dashboard already does

The dashboard on this site's home page is now live, refreshed from Google Analytics hourly, and browsable month by month. It is the foundation for the operational and board practices above — use it.

Methodology & sources

GA: Daily and monthly user counts pulled from the Google Analytics 4 Data API against property 352293213. Source / medium attribution comes from the sessionSourceMedium dimension.

GSC: Click, impression, query, and indexing data pulled from Google Search Console for the property sfserviceguide.org, verified 30 April 2026. Period coverage: 1 January – 29 April 2026.

Site: Live HTTP probes against /sitemap.xml, /robots.txt, and the homepage shell to confirm the absence of SEO scaffolding and the SPA-only rendering pattern.

Caveats: The weekly clicks chart is an aggregation of daily Search Console data; small variances against direct GSC totals are expected. GA4 under-reports user counts by 10–30% across the board due to tracking blockers and browser tracking-prevention; the shape of the drop is reliable even if absolute counts skew low.