Competitive intelligence for AI-mediated buying decisions. Where GoGuardian wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Claude.
GoGuardian's visibility pattern reflects three compounding content gaps — technical freshness failure, content misalignment on existing pages, and five capability areas with no content at all — that interact to suppress performance across the full buying funnel.
[Mechanism] AI platforms can't distinguish GoGuardian's best content from its oldest because sitemap lastmod timestamps are absent site-wide and blog freshness averages 0.15/1.0, disqualifying well-matched pages from freshness-weighted citation decisions. Existing comparison and product pages cover the right feature categories but answer different questions than buyers at the Requirements Building and Shortlisting stages actually ask, producing positioning losses on queries GoGuardian should win. Five capability areas — cross-platform device support, hallway safety, parent engagement, interactive instruction, and feature-specific comparison pages — have thin or zero content, making GoGuardian structurally absent from 36 L3 queries regardless of technical improvements.
Early-funnel invisibility compounds the late-funnel positioning problem: buyers who form consideration sets without GoGuardian at the Problem Identification stage carry competitor-defined criteria into Shortlisting, making GoGuardian's 96.2% Shortlisting visibility harder to convert.
[Synthesis] The sitemap lastmod fix must execute first because AI crawlers use lastmod timestamps to prioritize re-crawl frequency — without it, every L2 edit and L3 content creation is treated as potentially unchanged content, suppressing freshness signals for newly published material regardless of actual publication date. The stale_content_marketing fix (refreshing the aging blog archive) must also precede L3 publication: launching new cross-platform and hallway-safety pages alongside 5-year-old blog posts on the same domain sends contradictory freshness signals that lower the domain's overall content-currency assessment.
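The lastmod fix itself is mechanical. A minimal sketch of the idea, assuming publish dates can be pulled from the CMS (the URLs and dates below are illustrative, not GoGuardian's actual sitemap):

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)

# Hypothetical publish dates; in practice, source these from the CMS.
publish_dates = {
    "https://www.example.com/admin": "2024-11-02",
    "https://www.example.com/beacon": "2024-09-18",
}

def add_lastmod(sitemap_xml: str, dates: dict) -> str:
    """Insert a <lastmod> element into each <url> entry that lacks one."""
    root = ET.fromstring(sitemap_xml)
    for url in root.findall(f"{{{NS}}}url"):
        loc = url.find(f"{{{NS}}}loc").text.strip()
        if url.find(f"{{{NS}}}lastmod") is None and loc in dates:
            lastmod = ET.SubElement(url, f"{{{NS}}}lastmod")
            lastmod.text = dates[loc]  # W3C date format per the sitemap protocol
    return ET.tostring(root, encoding="unicode")

sitemap = f"""<urlset xmlns="{NS}">
  <url><loc>https://www.example.com/admin</loc></url>
  <url><loc>https://www.example.com/beacon</loc></url>
</urlset>"""
print(add_lastmod(sitemap, publish_dates))
```

The point of doing this from the CMS rather than from file timestamps is that lastmod should reflect meaningful content changes, not deploy dates — a lastmod that updates on every deploy is as uninformative to a crawler as no lastmod at all.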
Where GoGuardian appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] GoGuardian is visible in 54% of buyer queries but wins only 23%. Converting visibility into wins is the primary challenge (a 31-percentage-point gap). High-intent visibility runs higher at 64.6%.
GoGuardian's 54% overall visibility (81/150) hides a steep funnel drop — from 96.2% at Shortlisting to 51.2% at early discovery — meaning late-stage dominance is built on a consideration set that competitors narrowed upstream.
| Dimension | Combined | Platform Delta |
|---|---|---|
| All Queries | 54% | Even |
| By Persona | ||
| Director of Curriculum & Instruction | 45.5% | Claude +5 percentage points |
| Director of Technology | 60.5% | Claude +5 percentage points |
| High School Principal | 54.5% | ChatGPT +6 percentage points |
| Director of Student Services & Safety | 60.7% | ChatGPT +7 percentage points |
| Superintendent | 41.7% | Even |
| By Buying Job | ||
| Artifact Creation | 16.7% | ChatGPT +8 percentage points |
| Comparison | 43.8% | Even |
| Consensus Creation | 30.8% | Claude +8 percentage points |
| Problem Identification | 50% | Claude +17 percentage points |
| Requirements Building | 37.5% | Claude +12 percentage points |
| Shortlisting | 96.2% | ChatGPT +15 percentage points |
| Solution Exploration | 66.7% | Claude +20 percentage points |
| Validation | 58.3% | ChatGPT +17 percentage points |
The same breakdown, split by platform:

| Dimension | ChatGPT | Claude |
|---|---|---|
| All Queries | 42.7% | 42.0% |
| By Persona | ||
| Director of Curriculum & Instruction | 36.4% | 40.9% |
| Director of Technology | 41.9% | 46.5% |
| High School Principal | 45.5% | 39.4% |
| Director of Student Services & Safety | 50% | 42.9% |
| Superintendent | 37.5% | 37.5% |
| By Buying Job | ||
| Artifact Creation | 16.7% | 8.3% |
| Comparison | 40.6% | 40.6% |
| Consensus Creation | 15.4% | 23.1% |
| Problem Identification | 25% | 41.7% |
| Requirements Building | 12.5% | 25% |
| Shortlisting | 92.3% | 76.9% |
| Solution Exploration | 33.3% | 53.3% |
| Validation | 54.2% | 37.5% |
[Data] Overall visibility: 54% (81/150 queries). Shortlisting: 96.2% (25/26). High-intent visibility: 64.6% (53/82).
Early-funnel visible: 51.2% (22/43). Lowest feature visibility: Interactive Lesson & Assessment Tools 16.7% (1/6), Digital Hall Pass & Campus Movement Tracking 28.6% (2/7). Role gap: decision-maker win rate 36.1% (13/36 visible) vs. evaluator 46.7% (21/45 visible), a -11pp decision-maker deficit.
Platform delta: 1pp (ChatGPT marginally higher).
[Synthesis] The funnel-stage visibility pattern is a structural problem, not a brand-awareness problem. GoGuardian appears in nearly every Shortlisting query but is absent from nearly half of the queries at earlier buying stages, where evaluation criteria are formed. The -11pp decision-maker win-rate deficit means the buyers with final contract authority — superintendents and district IT directors — are proportionally harder to reach than their evaluator counterparts, likely because high-authority personas ask more specific technical and financial questions than GoGuardian's current content answers.
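The role-gap arithmetic in the [Data] block can be reproduced directly from the reported counts (a minimal check, using only figures stated above):

```python
# Conditional (visible-only) win rates from the [Data] block.
decision_maker_wins, decision_maker_visible = 13, 36
evaluator_wins, evaluator_visible = 21, 45

dm_rate = decision_maker_wins / decision_maker_visible   # ~0.361
ev_rate = evaluator_wins / evaluator_visible             # ~0.467

# Deficit in percentage points, rounded as reported (-11pp).
deficit_pp = round((dm_rate - ev_rate) * 100)
print(f"decision-maker {dm_rate:.1%} vs evaluator {ev_rate:.1%} -> {deficit_pp}pp")
```

Note these are conditional rates (wins among visible queries), so the deficit measures positioning quality, not reach.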
17 queries won by named competitors · 11 no clear winner · 41 no vendor mentioned
Sorted by competitive damage — competitor-winning queries first.
| ID | Query | Persona | Stage | Winner |
|---|---|---|---|---|
| ⚑ Competitor Wins — 17 queries where a named competitor captures the buyer | ||||
| gg_075 | "Lightspeed Systems vs Securly — which K-12 filtering platform is better for mixed-device districts?" | Director of Technology | Comparison | Lightspeed Systems |
| gg_076 | "Blocksi vs Linewize for student safety — how do their alert systems and monitoring accuracy compare?" | Director of Student Services & Safety | Comparison | Linewize |
| gg_077 | "Lightspeed Systems vs Blocksi — which has the best YouTube filtering for schools?" | Director of Curriculum & Instruction | Comparison | Lightspeed Systems |
| gg_079 | "LanSchool vs Lightspeed Classroom — comparison for Chromebook classroom management in K-12" | High School Principal | Comparison | Lightspeed Systems |
| gg_080 | "Securly vs Linewize for K-12 web filtering — which has better parent communication features?" | Superintendent | Comparison | Linewize |
| gg_086 | "Linewize vs Lightspeed Systems — which K-12 filtering platform handles BYOD better?" | Director of Technology | Comparison | Lightspeed Systems |
| gg_089 | "Blocksi vs Securly vs Lightspeed — which K-12 platform is best for districts consolidating vendors?" | Superintendent | Comparison | Lightspeed Systems |
| gg_091 | "Linewize vs Securly — which has better AI accuracy for detecting student safety threats?" | Director of Student Services & Safety | Comparison | Linewize |
| gg_093 | "How do Securly, Lightspeed, and Blocksi compare on reporting dashboards and analytics for district admins?" | Superintendent | Comparison | Lightspeed Systems |
| gg_095 | "Blocksi vs Lightspeed — which handles BYOD and personal device filtering better for schools?" | Director of Technology | Comparison | Lightspeed Systems |
Remaining competitor wins: Lightspeed Systems ×4, Securly ×2, Blocksi ×1. 11 queries with no clear winner. 41 queries with no vendor mentioned. Full query-level data available in the analysis export.
Queries where GoGuardian is mentioned but a competitor is positioned more favorably.
| ID | Query | Persona | Buying Job | Winner | GoGuardian Position |
|---|---|---|---|---|---|
| gg_003 | "Teachers losing half the class to games and social media on devices — what are other districts doing about this?" | High School Principal | Problem Identification | No Clear Winner | Mentioned In List |
| gg_004 | "Our web filter blocks educational sites teachers need but students still find workarounds — how do we fix this?" | Director of Curriculum & Instruction | Problem Identification | No Clear Winner | Brief Mention |
| gg_006 | "How are K-12 districts managing device filtering when they have Chromebooks, iPads, and Windows laptops?" | Director of Technology | Problem Identification | No Clear Winner | Mentioned In List |
| gg_012 | "What criteria matter most when evaluating web filtering and student safety platforms for K-12?" | Director of Technology | Problem Identification | No Vendor Mentioned | Mentioned In List |
| gg_013 | "Build vs. buy for student web filtering — when does it make sense to use a commercial platform versus open source?" | Director of Technology | Solution Exploration | No Clear Winner | Mentioned In List |
| gg_019 | "How do schools allow YouTube for educational content while blocking everything inappropriate without blocking the whole site?" | Director of Curriculum & Instruction | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| gg_022 | "Agent-based vs. DNS-based web filtering for K-12 — which approach works better for mixed-device environments?" | Director of Technology | Solution Exploration | Securly | Mentioned In List |
| gg_024 | "What does a good K-12 device usage reporting dashboard look like for board presentations?" | Superintendent | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| gg_026 | "Role-based filtering policies in schools — how do districts set different rules by grade level and building?" | High School Principal | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| gg_028 | "Key requirements for evaluating K-12 web filtering platforms for a district with 15,000 students across 20 buildings?" | Director of Technology | Requirements Building | No Vendor Mentioned | Mentioned In List |
| ID | Query | Persona | Buying Job | Winner | GoGuardian Position |
|---|---|---|---|---|---|
| gg_030 | "Must-have vs. nice-to-have features for classroom management software in a 1:1 Chromebook district?" | Director of Curriculum & Instruction | Requirements Building | No Vendor Mentioned | Mentioned In List |
| gg_031 | "What cross-platform support should we require from a web filter if we have Chromebooks, Windows laptops, and iPads?" | Director of Technology | Requirements Building | No Vendor Mentioned | Mentioned In List |
| gg_039 | "What granularity of policy controls should we expect — per-student, per-class, per-grade, per-building?" | Director of Technology | Requirements Building | No Clear Winner | Mentioned In List |
| gg_041 | "What BYOD and guest network filtering capabilities should we require for a district that allows personal devices on campus?" | Director of Technology | Requirements Building | No Vendor Mentioned | Mentioned In List |
| gg_043 | "What policy customization should a K-12 web filter support — per-student overrides, scheduled rules, OU-based policies?" | High School Principal | Requirements Building | No Clear Winner | Mentioned In List |
| gg_044 | "Best K-12 web filtering platforms for mid-size school districts with CIPA compliance requirements" | Director of Technology | Shortlisting | Lightspeed Systems | Strong 2nd |
| gg_047 | "Which K-12 filtering platforms work across Chromebooks, Windows, Mac, and iOS from a single console?" | Director of Technology | Shortlisting | Lightspeed Systems | Mentioned In List |
| gg_050 | "Which school safety platforms give parents visibility into student device activity at home?" | High School Principal | Shortlisting | Securly | Strong 2nd |
| gg_051 | "Best digital hall pass systems for high schools that integrate with existing student information systems" | High School Principal | Shortlisting | Securly | Mentioned In List |
| gg_053 | "Which K-12 platforms have the best device usage reporting for school board compliance presentations?" | Superintendent | Shortlisting | Lightspeed Systems | Strong 2nd |
| gg_054 | "Leading K-12 classroom management tools with built-in formative assessment and interactive lesson features" | Director of Curriculum & Instruction | Shortlisting | No Clear Winner | Mentioned In List |
| gg_056 | "K-12 digital safety platforms that integrate well with Google Workspace and Microsoft 365" | Director of Technology | Shortlisting | Gaggle | Mentioned In List |
| gg_057 | "Top-rated student self-harm and violence detection platforms for K-12 districts — what are counselors recommending?" | Director of Student Services & Safety | Shortlisting | Lightspeed Systems | Strong 2nd |
| gg_063 | "Best K-12 web filtering platforms with strong CIPA compliance reporting for E-rate audits" | Superintendent | Shortlisting | Lightspeed Systems | Mentioned In List |
| gg_067 | "Top K-12 edtech platforms for districts consolidating from multiple filtering and safety vendors into one" | Director of Technology | Shortlisting | Lightspeed Systems | Mentioned In List |
| gg_068 | "Which student safety tools have 24/7 monitoring and escalation for after-hours threats?" | Director of Student Services & Safety | Shortlisting | Lightspeed Systems | Strong 2nd |
| gg_071 | "GoGuardian vs Securly for student safety monitoring — which catches real threats with fewer false alerts?" | Director of Student Services & Safety | Comparison | Securly | Strong 2nd |
| gg_072 | "Blocksi vs other K-12 web filtering and classroom management platforms — how does it compare?" | High School Principal | Comparison | No Clear Winner | Strong 2nd |
| gg_074 | "How does Securly's student safety monitoring compare to other AI-based self-harm detection tools?" | Director of Student Services & Safety | Comparison | No Clear Winner | Strong 2nd |
| gg_078 | "How does Linewize's human-moderated safety monitoring compare to fully AI-based detection platforms?" | Director of Student Services & Safety | Comparison | Linewize | Mentioned In List |
| gg_081 | "How do leading K-12 web filters compare on CIPA compliance reporting and E-rate documentation?" | Director of Technology | Comparison | Lightspeed Systems | Strong 2nd |
| gg_083 | "Which K-12 web filtering platform covers the most device types from one dashboard — Securly, Lightspeed, or others?" | Director of Technology | Comparison | Lightspeed Systems | Mentioned In List |
| gg_084 | "Gaggle vs other K-12 student safety monitoring platforms — which detects threats faster?" | Director of Student Services & Safety | Comparison | Gaggle | Strong 2nd |
| gg_085 | "How do K-12 web filtering platforms compare on policy customization — per-school rules, grade-level overrides?" | Director of Technology | Comparison | Lightspeed Systems | Mentioned In List |
| gg_098 | "LanSchool Air vs Blocksi classroom management — which works better with Chromebooks?" | High School Principal | Comparison | Blocksi | Brief Mention |
| gg_103 | "Common complaints about Lightspeed Systems from K-12 IT directors" | Director of Technology | Validation | No Vendor Mentioned | Brief Mention |
| gg_107 | "LanSchool Air problems — does it work well with Chromebooks and cloud-based environments?" | High School Principal | Validation | No Clear Winner | Brief Mention |
| gg_117 | "Lightspeed Systems student safety monitoring — how accurate are the alerts compared to other AI-based tools?" | Director of Student Services & Safety | Validation | Lightspeed Systems | Mentioned In List |
| gg_119 | "Digital hall pass system reviews — is adding a hall pass module to an existing edtech suite worth it?" | High School Principal | Validation | No Clear Winner | Mentioned In List |
| gg_121 | "Does Blocksi have reliable 24/7 student safety monitoring or is it just during school hours?" | Director of Student Services & Safety | Validation | Blocksi | Brief Mention |
| gg_124 | "How long does it typically take to fully deploy a K-12 web filtering and safety platform across a 15,000-student district?" | Director of Technology | Validation | No Clear Winner | Mentioned In List |
| gg_127 | "How to justify student safety monitoring software to a school board that thinks counselors should handle it manually" | Director of Student Services & Safety | Consensus Creation | No Vendor Mentioned | Mentioned In List |
| gg_135 | "Cost comparison of running separate filtering, monitoring, and classroom management tools vs. a single platform" | Director of Technology | Consensus Creation | Securly | Mentioned In List |
| gg_136 | "How are other districts measuring the effectiveness of student safety monitoring tools?" | Director of Student Services & Safety | Consensus Creation | No Vendor Mentioned | Mentioned In List |
| gg_137 | "Parent satisfaction data after districts implement at-home device monitoring — does it reduce complaints?" | High School Principal | Consensus Creation | No Vendor Mentioned | Brief Mention |
| gg_141 | "Build a TCO model for implementing a K-12 digital safety platform for a 15,000-student district over 3 years" | Superintendent | Artifact Creation | No Clear Winner | Mentioned In List |
| gg_145 | "Create a web filtering requirements matrix comparing BYOD support, cross-platform coverage, and YouTube filtering across vendors" | Director of Technology | Artifact Creation | No Clear Winner | Mentioned In List |
Who’s winning when GoGuardian isn’t — and who controls the narrative at each buying stage.
[TL;DR] GoGuardian wins 22.7% of queries (34/150), ranks #1 in SOV — H2H record: 78W–27L across 9 competitors.
The #1 SOV position rests on a 2-mention margin over Securly (81 vs. 79), and Lightspeed Systems at #3 wins the cross-platform, CIPA-reporting, and consolidation Shortlisting queries that drive enterprise contract decisions — freshness improvements and the five L3 NIOs (new-content initiatives) are the levers to break the tie.
| Company | Mentions | Share |
|---|---|---|
| GoGuardian | 81 | 21.9% |
| Securly | 79 | 21.3% |
| Lightspeed Systems | 78 | 21.1% |
| Linewize | 42 | 11.3% |
| Blocksi | 39 | 10.5% |
| Gaggle | 20 | 5.4% |
| LanSchool | 10 | 2.7% |
| Hapara | 8 | 2.2% |
| Dyknow | 8 | 2.2% |
| NetSupport School | 2 | 0.5% |
When GoGuardian and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each rival, so H2H totals exceed the query count.
Win = GoGuardian is the primary recommendation (cross-platform majority). Loss = a competitor is. Tie = neither is, or a third party wins.
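The expansion rule can be sketched as follows (record shape and vendor names here are illustrative, not the report's actual schema):

```python
def h2h_matchups(queries):
    """One (rival, outcome) record per competitor sharing a response with GoGuardian."""
    records = []
    for q in queries:
        if "GoGuardian" not in q["mentioned"]:
            continue  # absent queries generate no matchups
        for rival in q["mentioned"]:
            if rival == "GoGuardian":
                continue
            if q["winner"] == "GoGuardian":
                outcome = "win"
            elif q["winner"] == rival:
                outcome = "loss"
            else:
                outcome = "tie"  # no clear winner, or a third vendor won
            records.append((rival, outcome))
    return records

# One query mentioning three vendors yields two matchup records,
# which is why H2H totals exceed the query count.
sample = [{"mentioned": ["GoGuardian", "Securly", "Lightspeed Systems"],
           "winner": "GoGuardian"}]
print(h2h_matchups(sample))
```

This also explains why a strong H2H record can coexist with a modest unconditional win rate: matchups are only scored on queries where GoGuardian already appears.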
For the 69 queries where GoGuardian is completely absent:
Vendors appearing in responses not in GoGuardian’s defined competitive set.
[Synthesis] The #1 SOV position masks a razor-thin margin: GoGuardian holds 21.9% share against Securly's 21.3% — a gap of 2 mentions across 150 queries. H2H records confirm GoGuardian wins direct matchups against every tracked competitor, but an unconditional win rate of 32.9% (27/82) on high-intent queries shows that winning individual comparisons does not translate into winning most buyer decisions. Lightspeed Systems is the primary competitive threat, taking the majority of cross-platform, CIPA-reporting, and vendor-consolidation Shortlisting queries — categories where GoGuardian has thin or misaligned content.
Given that modest unconditional win rate, the H2H record should not be interpreted as dominance.
What AI reads and trusts in this category.
[TL;DR] GoGuardian has 62 unique pages cited across buyer queries, ranking #2 among all cited domains. 10 high-authority domains cite competitors but not GoGuardian.
Sixty-two unique GoGuardian pages are cited and goguardian.com ranks #2 in citation frequency, but support subdomain citations (14 instances) and a 10-query third-party gap signal that commercial pages lack the structured extractable claims AI platforms need to cite them for high-stakes buyer queries.
Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.
Non-competitor domains citing other vendors but not GoGuardian — off-domain authority opportunities.
These domains cited competitors but did not cite GoGuardian pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.
[Synthesis] Ranking #2 in citation frequency with 62 unique pages and 106 instances on goguardian.com signals established AI authority — one competitor domain outranks goguardian.com by citation volume, and that gap is addressable through the L1 freshness fixes. However, support.goguardian.com absorbing 14 citation instances suggests AI platforms are citing help documentation rather than commercial product pages for some buyer queries, pointing to structural content gaps on product pages rather than crawl-access problems. The 10-query third-party citation gap identifies queries where GoGuardian content was matched but not cited — further evidence that the content exists but lacks the extractable structure AI platforms require.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 32 priority recommendations (plus 4 near-rebuild optimizations) targeting 123 affected queries: 5 L1 technical fixes plus 2 verification checks, 20 content optimizations (L2), and 5 new content initiatives (L3).
All 32 recommendations flow in dependency order — L1 technical fixes unblock freshness credit, 20 L2 optimizations close positioning gaps on existing pages, and 5 L3 NIOs address the content voids that no optimization can fix without new content creation.
Reading the priority numbers: recommendations are ranked 1–32 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 shows #1, #2, #3, then #21) mean the higher-priority items in between belong to a different layer.
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #1 | All 4 competitive comparison pages lack visible publication dates | High | < 1 day |
| #2 | Majority of blog content is over 1 year old | High | 1-2 weeks |
| #3 | Sitemap contains 1,200+ URLs but no lastmod timestamps | Medium | 1-3 days |
| #21 | Schema markup status could not be assessed — manual verification recommended | Medium | 1-3 days |
| #22 | Three comparison pages share an identical H1 heading | Medium | < 1 day |
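For finding #21, the manual schema check can be partially scripted. A minimal sketch, assuming JSON-LD blocks embedded in fetched page HTML (the page source below is hypothetical):

```python
import json
import re

# Hypothetical page source; in practice, fetch each comparison page's HTML.
html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org",
 "@type": "WebPage",
 "name": "Example comparison page",
 "dateModified": "2024-11-02"}
</script>
</head><body>...</body></html>"""

def extract_jsonld(page: str) -> list:
    """Pull every JSON-LD block out of a page for manual verification."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    return [json.loads(m) for m in re.findall(pattern, page, re.DOTALL)]

for block in extract_jsonld(html):
    # Flag blocks missing the freshness field crawlers can use.
    print(block.get("@type"), "dateModified" in block)
```

A regex is adequate for a one-off audit; a production crawler should use a real HTML parser and also tolerate `type` attribute variants.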
Items requiring manual review before determining if action is needed.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #31 | Client-side rendering status could not be confirmed | Low | < 1 day |
| #32 | Meta descriptions and Open Graph tags could not be assessed | Low | 1-3 days |
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
The /competitor-comparison page has no financial-modeling resources — Superintendent buyers building 3-year TCO models (gg_141) need downloadable tools with named cost components, not feature-comparison tables.
Queries affected: gg_141
The /beacon page has no 'Resources' or 'Evaluation Tools' section linking to downloadable assets that Artifact Creation buyers need — gg_142 and gg_146 represent buyers in active procurement who are ready to consume structured evaluation materials. The /beacon page's current content architecture is product-focused, not procurement-focused — it tells buyers what Beacon does but does not help them justify or evaluate it in a multi-vendor review context.
Queries affected: gg_142, gg_146
The /admin page has no 'Resources for Procurement Teams' section linking to artifact-level content — district IT directors writing RFPs (gg_139) and designing pilot plans (gg_148) have no GoGuardian-authored tools to work from. E-rate funding justification content (gg_134) requires specific USAC program guidance and GoGuardian's E-rate eligibility status, which is not present on /admin in the structured format buyers need for board presentations.
Queries affected: gg_134, gg_139, gg_148, gg_143, gg_145
The /beacon page lacks a 'What Happens Without Proactive Monitoring' section that gg_130 requires — buyers building board justification for AI safety monitoring need risk-of-inaction language, not feature descriptions. The /beacon page has no 'Measuring Effectiveness' or outcome metrics section that gg_136 asks about, meaning safety specialists who need to report to school boards cannot find GoGuardian-authored measurement frameworks. The /beacon page does not address the 'counselors should handle it manually' objection raised in gg_127, which is a specific board-level resistance argument that safety specialists need GoGuardian to help them overcome.
Queries affected: gg_127, gg_128, gg_130, gg_136
The /admin/vs-competitors page does not include sections addressing common complaints about Lightspeed Systems (gg_103), Blocksi (gg_105), or Linewize (gg_106) — buyers at the Validation stage have already identified these vendors and are checking their risks before finalizing a shortlist. The page does not address the specific risk in gg_111 ('biggest risks of choosing Lightspeed Systems for a district switching from on-prem') — a question GoGuardian is uniquely positioned to answer as an alternative vendor. The page also lacks the comparison framing for gg_072 ('Blocksi vs other K-12 filtering and classroom management platforms') — GoGuardian should appear as the alternative benchmark in this query type.
Queries affected: gg_072, gg_103, gg_105, gg_106, gg_111
The /admin page describes GoGuardian Admin's capabilities but does not address the 'our filter blocks educational sites teachers need' problem framing in gg_004 — a buyer at this stage needs to see that GoGuardian Admin solves the overblocking problem, not just that it filters content. The page does not include a 'cloud-based vs. on-premises filtering' comparison section that gg_017 explicitly asks about — buyers evaluating cloud migration need this content on the product page, not just in a blog post. The page also lacks an 'Evaluation Criteria for K-12 Web Filtering' section that gg_012 requires — district IT directors at the Problem Identification stage are looking for criteria frameworks, and GoGuardian's product page should help them build one.
Queries affected: gg_002, gg_004, gg_012, gg_013, gg_017
The /beacon page answers 'what does Beacon do?' but not 'what are the main approaches to keeping students safe online?' (gg_001) — early-funnel buyers at the Problem Identification stage are looking for category education, not a product pitch. The /beacon page does not address the 'hundreds of false alerts a day' problem framing used in gg_005, meaning the page cannot be cited as a solution to alert fatigue even though Beacon's detection model is designed to address this pain point. The /beacon page lacks an 'AI-powered vs. human-reviewed monitoring' explainer section that gg_014 explicitly requires, causing GoGuardian to miss a query where its product architecture is directly relevant.
Queries affected: gg_001, gg_005, gg_014, gg_029, gg_040
The /dns page does not explicitly address BYOD and guest network filtering — a capability that gg_020 and gg_041 specifically ask about; buyers cannot confirm from /dns that GoGuardian covers personal devices and guest WiFi. The /admin page does not present requirements content for large-district scale (gg_028 asks about 15,000 students across 20 buildings) — no enterprise-scale performance or capacity claims exist on the page. The /admin page cannot serve gg_096 ('Linewize vs Lightspeed for CIPA compliance and E-rate documentation') because it doesn't present GoGuardian's E-rate compliance documentation capabilities as a direct Linewize/Lightspeed alternative.
Queries affected: gg_028, gg_020, gg_041, gg_096
The /beacon page does not include a dedicated privacy and compliance section that lists GoGuardian's specific FERPA, COPPA, and state privacy law compliance status — buyers asking gg_032 (security/privacy requirements for monitoring platforms) cannot find this information in a structured format. The /beacon page does not address the Securly FERPA/COPPA concern framing used in gg_112, missing an opportunity to differentiate GoGuardian's data practices at a moment when a competitor's privacy record is under scrutiny.
Queries affected: gg_032, gg_112
The /competitor-comparison page addresses product feature comparisons but not the 'is a single platform realistic?' consolidation framing in gg_007 — buyers at the Problem Identification stage asking about tool consolidation are not yet thinking vendor-specifically. The page lacks a 'Cost of Tool Sprawl' or 'Total Cost of Ownership: 4 Separate Tools vs. GoGuardian' section that gg_135 and gg_132 require — Securly wins gg_135 because it publishes explicit cost-comparison content. The page also does not address the ROI of a unified K-12 digital safety platform vs. separate tools (gg_126) with specific cost components (licensing, IT overhead, training, integration maintenance).
Queries affected: gg_007, gg_101, gg_126, gg_135, gg_132
The /discover page does not explicitly address Google Workspace and Microsoft 365 integration capabilities in the structured format gg_056 requires — Gaggle wins this Shortlisting query because it publishes explicit Microsoft 365 integration documentation. The page does not provide deployment timeline data for large districts that gg_124 asks about — district IT directors evaluating platform complexity need specific rollout timelines before they can recommend GoGuardian to their IT team. The /discover page lacks a 'Consolidate Your Edtech Stack' positioning section that addresses gg_067's consolidation Shortlisting query — Lightspeed wins by presenting itself as the consolidation platform of record.
Queries affected: gg_036, gg_056, gg_067, gg_124
The /admin/vs-competitors page shares the H1 'GoGuardian beats the competition' with two other comparison pages (per L1 finding duplicate_h1_comparison_pages) — this generic heading does not target the CIPA compliance or E-rate audit queries that determine Shortlisting for district IT procurement. The page has no visible publication or last-updated date (per L1 finding undated_comparison_pages), disqualifying it from freshness credit on Shortlisting queries where Lightspeed's dated compliance content wins. The page does not address the 'switching from legacy on-premises filter' concern in gg_088, missing a conversion opportunity for districts actively evaluating migration — the switching pain point is the buyer's primary risk concern.
Queries affected: gg_044, gg_063, gg_053, gg_088, gg_100
The /beacon/vs-competitors page uses the H1 'GoGuardian beats the competition' (per L1 finding duplicate_h1_comparison_pages) — this generic heading provides no specificity for queries about alert accuracy, false-positive rates, or after-hours monitoring that determine whether buyers include GoGuardian Beacon on their shortlist. The page has no visible publication or last-updated date (per L1 finding undated_comparison_pages), meaning AI platforms cannot assign freshness credit to competitive claims — Securly and Linewize pages with dated comparison content win by default on freshness signals. The page does not address the human-moderated vs. AI-only detection distinction that gg_078 and gg_084 explicitly ask about, leaving Linewize's human-moderation positioning unanswered on GoGuardian's own comparison page.
Queries affected: gg_076, gg_091, gg_071, gg_074, gg_078, gg_084, gg_057, gg_068, gg_117, gg_121, gg_104
The /classroom-management page has no 'Resources' section linking to evaluation tools — curriculum directors designing teacher pilot programs (gg_144) have no GoGuardian-authored rubric to work from.
Queries affected: gg_144
The /classroom-management page lacks a 'Measuring Instructional Time Impact' section that gg_129 requires — curriculum directors building business cases need specific measurement approaches (minutes recovered per class, engagement proxy metrics) to present to school boards. The page does not address teacher buy-in strategies (gg_131) — how successful districts win teacher adoption of classroom management software — which is a Consensus Creation need that the buying committee member facing teacher resistance would consult GoGuardian to answer.
Queries affected: gg_129, gg_131
The /admin page describes filtering capabilities but does not explicitly list the supported policy hierarchy (per-student, per-class, per-grade, per-building, per-OU) that gg_039 and gg_043 require — buyers at the Requirements Building stage need explicit policy granularity specs, not feature descriptions. The page lacks a 'Role-Based Filtering Policies' section explaining how GoGuardian Admin applies different rules by grade level, building, or teacher-assigned group — the specific Solution Exploration framing in gg_026. Policy_customization has 100% visibility (5/5 queries) but only a 20% conditional win rate (1/5 visible queries) — this is the clearest positioning gap in the dataset: GoGuardian appears but doesn't answer the buyer's specific question.
Queries affected: gg_026, gg_039, gg_043
The /classroom-management page does not address the 'half the teachers stopped using it' adoption failure scenario in gg_010 — a problem-identification framing that buyers use when they've already had a bad implementation experience. The page lacks a 'Must-Have vs. Nice-to-Have for 1:1 Chromebook Districts' feature taxonomy that gg_030 explicitly requires — curriculum directors building requirements lists need a structured framework, not a feature list. The /classroom-management page does not address the 'students losing class time to games and social media' problem framing in gg_003, meaning buyers diagnosing this specific pain point cannot connect it to GoGuardian's solution.
Queries affected: gg_003, gg_010, gg_030, gg_035
The /teacher/vs-competitors page shares the generic H1 'GoGuardian beats the competition' with two other Comparison pages (per L1 finding duplicate_h1_comparison_pages) — no Chromebook-specific or LanSchool-specific heading captures the Comparison framing buyers use in gg_079 and gg_098. The page does not address the 'LanSchool reliability on 30+ student devices' concern in gg_118 — buyers at Validation stage who have LanSchool on their shortlist are looking for reliability comparisons. The /teacher/vs-competitors page lacks a head-to-head LanSchool Air vs. GoGuardian Teacher section addressing Chromebook-native vs. cross-OS classroom management, which is the specific Comparison gg_079 and gg_090 ask about.
Queries affected: gg_079, gg_090, gg_098, gg_107, gg_118, gg_123
The /discover page does not present its reporting capabilities in the 'board presentation' framing that gg_024 uses — superintendents asking about board-ready reporting need to see report format examples and output types, not data analytics product descriptions. Neither the /discover nor the /tech-data page describes what reporting capabilities are required for board compliance reviews (gg_034), missing an opportunity to position GoGuardian's reporting as a compliance tool rather than a monitoring tool.
Queries affected: gg_024, gg_034
The /blog/5-problems-with-youtube-in-the-classroom post is likely over 1 year old (per L1 finding stale_content_marketing noting multiple posts with confirmed dates older than 365 days) and does not include the granularity Comparison data that gg_033 and gg_077 require. The blog post does not address the channel-level vs. video-level vs. comment-blocking granularity question in gg_033, which is a specific Requirements Building question that GoGuardian's YouTube filtering capability should be positioned to answer. The post does not compare GoGuardian's YouTube filtering against Lightspeed or Blocksi (gg_077, gg_099), missing competitive differentiation at a query type where Lightspeed and Blocksi win.
Queries affected: gg_019, gg_033, gg_077, gg_099
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
District IT Directors — GoGuardian's highest-volume persona at 43 queries overall — ask about cross-platform and BYOD device filtering at every buying stage in this cluster, from problem identification through artifact creation, and GoGuardian is absent from 10 of the 11 queries. Lightspeed Systems wins the majority because it publishes dedicated cross-platform capability pages and Comparison data that AI platforms can extract. GoGuardian's DNS product technically covers this capability but has no dedicated content hub for multi-device environments, making the product invisible to buyers who lead their evaluation with 'does it work on Windows and iPads too?' This void is critical: the Director of Technology is a decision_maker role type whose unresolved cross-platform questions can eliminate vendors before Shortlisting begins.
ChatGPT (high): ChatGPT rewards structured feature Comparison tables with explicit device-type rows; Lightspeed wins Comparison queries (gg_075, gg_047, gg_083) in ChatGPT responses by presenting named device compatibility data. Claude (high): Claude needs factual, well-structured content with named capability claims; agent-based vs. DNS architecture comparisons (gg_022) show Claude cites content that explicitly addresses the architectural tradeoff, which GoGuardian currently lacks.
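The table format being rewarded can be sketched in miniature. The rows below show structure only; ellipses stand in for capability claims that would need verification before publication:

```markdown
| Device type     | GoGuardian | Lightspeed Filter |
| --------------- | ---------- | ----------------- |
| Chromebook      | …          | …                 |
| Windows         | …          | …                 |
| iPad / iOS      | …          | …                 |
| BYOD / personal | …          | …                 |
```

Explicit device-type rows like these give an extraction model named, cell-level claims to quote, which is what the Lightspeed pages winning gg_075, gg_047, and gg_083 provide.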
Buyers evaluating K-12 safety platforms increasingly face pressure from parents who want visibility into their child's device activity at home — a post-COVID expectation that has become a procurement criterion. Securly and Linewize win these 6 queries because they publish dedicated parent app and parent communication feature pages that AI platforms extract as direct answers. GoGuardian has parent visibility capabilities (likely through Beacon or Admin), but no content exists to serve the buyer who asks 'which platform gives parents the best home visibility?' This gap affects three personas — Superintendent, High School Principal, and Director of Student Services & Safety — who each weigh parent communication features differently in the buying process.
ChatGPT (high): ChatGPT Comparison queries (gg_080, gg_097) are won by Securly and Linewize — both publish explicit parent app feature pages that ChatGPT extracts for structured Comparison answers. Claude (medium): Claude's consensus and requirements queries (gg_038, gg_137) need factual outcome data — parent adoption rates, complaint reduction metrics — not feature descriptions; GoGuardian has no content providing this type of evidence.
School principals face a concrete, daily operational problem — students out of class with no accountability system — that digital hall pass technology solves. GoGuardian has a Hall Pass product module, yet all 7 queries from problem identification through artifact creation are answered by competitors because no GoGuardian content explains how this product works, what problems it solves, or why it outperforms alternatives like Securly's hall pass integration. Securly wins the Shortlisting query (gg_051) by default. This cluster spans the full buying funnel for a single persona (High School Principal), meaning a dedicated hall pass content program would capture visibility at every stage of the principal's evaluation journey.
ChatGPT (high): ChatGPT Shortlisting queries (gg_051) currently return Securly as the answer because Securly has hall pass feature documentation; ChatGPT's citation model rewards pages that explicitly name the product category and list SIS integrations. Claude (high): Claude's problem identification and solution exploration queries (gg_008, gg_018) return general guidance without naming GoGuardian — Claude cites vendors that have educational content explaining the category, not just product pages.
Curriculum Directors shortlisting classroom management tools increasingly require built-in interactive lesson and formative assessment capabilities — not just device monitoring. Nearpod is cited by name in gg_094 as the category benchmark, and 'No Vendor Mentioned' wins most queries, indicating AI platforms lack a strong authority source for this buying job. GoGuardian's Teacher product has interactive instruction features, but no content explains them in the 'formative assessment' or 'interactive lesson tool' framing that curriculum directors use. This cluster covers the full buying journey for Director of Curriculum & Instruction — a persona who influences but does not own the final purchase decision, placing this at medium commercial priority despite a severe content gap.
ChatGPT (medium): ChatGPT's Shortlisting query (gg_054) returns 'No Clear Winner' — no vendor has sufficient structured content for ChatGPT to rank a clear leader; publishing a dedicated page with feature specifics would be sufficient to capture this query. Claude (high): Claude's Consensus Creation query (gg_138) asks for evidence that classroom management software improves engagement and test scores — Claude requires citeable outcome data, not feature descriptions; GoGuardian needs a dedicated evidence/outcomes page.
Policy_customization has 100% visibility (5/5 queries) yet only a 20% conditional win rate (1/5 visible queries) — GoGuardian appears in every policy customization query but loses four of five because its content describes features rather than comparing them. The same pattern holds for Usage Reporting & Analytics Dashboard (62.5% visible, 20% win rate) and SIS & LMS Integration Ecosystem (71.4% visible, 20% win rate). The root cause is a missing content type: buyers asking 'how does GoGuardian Admin compare to Lightspeed Filter on CIPA reporting?' need a dedicated Comparison page with side-by-side claims, not a product page that mentions CIPA compliance in passing. Superintendent and Director of Technology decision-makers dominate this cluster — the buyers with final authority are the ones GoGuardian is failing to convert at the Comparison stage.
ChatGPT (high): All 5 affinity-override queries require Comparison page types; ChatGPT extracts structured Comparison data from pages that use explicit 'GoGuardian vs. [Competitor]' headings and Comparison tables — GoGuardian's current product pages lack this format. Claude (high): Claude's Comparison queries (gg_081, gg_085, gg_093) need factual depth with named competitive claims — Claude prioritizes pages that state specific capability differences with verifiable specifics rather than generic 'GoGuardian beats the competition' H1 headings (currently duplicated across 3 Comparison pages per L1 finding duplicate_h1_comparison_pages).
All recommendations across all three layers, ranked by commercial impact × implementation speed.
The four most commercially valuable pages — /competitor-Comparison, /admin/vs-competitors, /teacher/vs-competitors, and /beacon/vs-competitors — display no visible publication or last-updated dates. These pages are classified as content_marketing and cannot receive freshness credit from AI crawlers.
7 of 12 commercially relevant blog posts analyzed have confirmed publication dates older than 365 days. The oldest dates to January 2015. Several high-value posts covering web filtering bypass methods (2019), Chromebook monitoring (2020), education software comparisons (2020), and internet safety (2019) are over 5 years old.
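To make the freshness stakes concrete, here is a toy scoring sketch. The exponential-decay formula and the one-year half-life are illustrative assumptions, not the scoring model used in this audit:

```python
from datetime import date

def freshness_score(published: date, today: date, half_life_days: int = 365) -> float:
    """Toy exponential-decay freshness: 1.0 on publication day, 0.5 after one
    half-life (a year here), trending toward 0 for posts many years old."""
    age_days = (today - published).days
    return 0.5 ** (age_days / half_life_days)

today = date(2025, 1, 1)
print(round(freshness_score(date(2024, 7, 1), today), 2))  # a six-month-old post scores ~0.71
print(round(freshness_score(date(2015, 1, 1), today), 3))  # a 2015-era post scores near zero
```

Under any decay curve of this shape, the confirmed 2015–2020 publication dates contribute essentially nothing, which is consistent with the low site-wide freshness average.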
The sitemap at /sitemap.xml contains over 1,200 URLs but none include lastmod dates. The sitemap is a flat file (not a sitemap index) with no priority values.
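The lastmod gap can be audited mechanically. The sketch below assumes a standard sitemaps.org-namespace file and uses placeholder example.com URLs; it flags every <url> entry that lacks a <lastmod> child:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_missing_lastmod(sitemap_xml: str) -> list[str]:
    """Return <loc> values for sitemap entries that have no <lastmod> child."""
    root = ET.fromstring(sitemap_xml)
    missing = []
    for url in root.findall(f"{SITEMAP_NS}url"):
        loc = url.findtext(f"{SITEMAP_NS}loc", default="")
        if url.find(f"{SITEMAP_NS}lastmod") is None:
            missing.append(loc)
    return missing

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/admin</loc></url>
  <url>
    <loc>https://example.com/beacon</loc>
    <lastmod>2024-05-01</lastmod>
  </url>
</urlset>"""

print(urls_missing_lastmod(sample))  # → ['https://example.com/admin']
```

Run against the live /sitemap.xml, a report like this would enumerate all 1,200+ URLs today, and then serve as the regression check once lastmod values ship.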
GoGuardian's Cross-Platform & Multi-OS Device Support feature generates 10 buyer queries yet earns only 1 unconditional win (10%, 1/10 queries) and a 16.7% conditional win rate (1/6 visible queries) — a structural content void: no dedicated cross-platform hub exists to serve IT Directors managing mixed Chromebook/Windows/iOS environments.
Six Comparison-stage and Validation-stage queries are routed to L3 via affinity override — GoGuardian's content covers the feature category (SIS & LMS Integration Ecosystem, Usage Reporting & Analytics Dashboard, Granular Policy & Role-Based Access Controls) but the page type is wrong: buyers asking Comparison questions get product pages and blog posts rather than dedicated feature Comparison pages, causing GoGuardian to lose to Lightspeed Systems on 5 of 6 queries.
The /competitor-Comparison page has no financial modeling resources — Superintendent buyers building 3-year TCO models (gg_141) need downloadable tools with named cost components, not feature Comparison tables.
The /beacon page has no 'Resources' or 'Evaluation Tools' section linking to downloadable assets that Artifact Creation buyers need — gg_142 and gg_146 represent buyers in active procurement who are ready to consume structured evaluation materials.
The /admin page has no 'Resources for Procurement Teams' section linking to artifact-level content — district IT directors writing RFPs (gg_139) and designing pilot plans (gg_148) have no GoGuardian-authored tools to work from.
The Digital Hall Pass & Campus Movement Tracking feature has the second-lowest visibility rate of any feature at 28.6% (2/7 queries) and a 0% conditional win rate (0/2 visible queries) — GoGuardian has no dedicated hallway safety or digital hall pass content, leaving all 7 NIO queries entirely unwinnable.
The Parent Visibility & At-Home Controls feature has a 55.6% visibility rate (5/9 queries) overall but thin coverage across all 6 NIO queries — no dedicated parent portal content page exists — allowing Securly and Linewize to win parent-focused Comparison and Shortlisting queries by default.
The /beacon page lacks a 'What Happens Without Proactive Monitoring' section that gg_130 requires — buyers building board justification for AI safety monitoring need risk-of-inaction language, not feature descriptions.
The /admin/vs-competitors page does not include sections addressing common complaints about Lightspeed Systems (gg_103), Blocksi (gg_105), or Linewize (gg_106) — buyers at the Validation stage have already identified these vendors and are checking their risks before finalizing a shortlist.
The /admin page describes GoGuardian Admin's capabilities but does not address the 'our filter blocks educational sites teachers need' problem framing in gg_004 — a buyer at this stage needs to see that GoGuardian Admin solves the overblocking problem, not just that it filters content.
The /beacon page answers 'what does Beacon do?' but not 'what are the main approaches to keeping students safe online?' (gg_001) — early-funnel buyers at the Problem Identification stage are looking for category education, not a product pitch.
The /dns page does not explicitly address BYOD and guest network filtering — a capability that gg_020 and gg_041 specifically ask about; buyers cannot confirm from /dns that GoGuardian covers personal devices and guest WiFi.
The /beacon page does not include a dedicated privacy and compliance section that lists GoGuardian's specific FERPA, COPPA, and state privacy law compliance status — buyers asking gg_032 (security/privacy requirements for monitoring platforms) cannot find this information in a structured format.
The /competitor-Comparison page addresses product feature comparisons but not the 'is a single platform realistic?' consolidation framing in gg_007 — buyers at the Problem Identification stage asking about tool consolidation are not yet thinking vendor-specifically.
The /discover page does not explicitly address Google Workspace and Microsoft 365 integration capabilities in the structured format gg_056 requires — Gaggle wins this Shortlisting query because it publishes explicit Microsoft 365 integration documentation.
The /admin/vs-competitors page shares the H1 'GoGuardian beats the competition' with two other Comparison pages (per L1 finding duplicate_h1_comparison_pages) — this generic heading does not target CIPA compliance or E-rate audit queries that determine Shortlisting for district IT procurement.
The /beacon/vs-competitors page uses the H1 'GoGuardian beats the competition' (per L1 finding duplicate_h1_comparison_pages) — this generic heading provides no specificity for queries about alert accuracy, false positive rates, or after-hours monitoring that determine whether buyers include GoGuardian Beacon on their shortlist.
Our analysis method returns rendered page content as markdown text, so JSON-LD structured data markup is not visible. We could not determine whether product pages use Product schema, blog posts use Article schema, or FAQ sections use FAQPage schema.
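If Article markup is in fact missing, JSON-LD is the usual remedy. A hypothetical sketch for one blog post follows; the dates and organization details are placeholders, not GoGuardian's actual publication data:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "5 Problems with YouTube in the Classroom",
  "datePublished": "2020-01-01",
  "dateModified": "2025-01-01",
  "author": { "@type": "Organization", "name": "GoGuardian" }
}
</script>
```

A dateModified value here would also give AI crawlers a machine-readable freshness signal on pages that currently show no visible date, pending confirmation of what markup the site actually renders.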
The pages /admin/vs-competitors, /teacher/vs-competitors, and /beacon/vs-competitors all use the same H1: 'GoGuardian beats the competition.' This generic heading provides no differentiation signal for AI models trying to match these pages to specific product Comparison queries.
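Duplicate H1s of this kind are easy to detect at crawl time. A minimal sketch, assuming page H1s have already been extracted into a URL-to-heading map:

```python
from collections import defaultdict

def duplicate_h1s(pages: dict[str, str]) -> dict[str, list[str]]:
    """Map each H1 text to the pages using it, keeping only H1s shared by 2+ pages."""
    by_h1 = defaultdict(list)
    for url, h1 in pages.items():
        by_h1[h1.strip()].append(url)
    return {h1: urls for h1, urls in by_h1.items() if len(urls) > 1}

# Headings as reported in the L1 finding; the fourth entry is a hypothetical control.
pages = {
    "/admin/vs-competitors": "GoGuardian beats the competition",
    "/teacher/vs-competitors": "GoGuardian beats the competition",
    "/beacon/vs-competitors": "GoGuardian beats the competition",
    "/classroom-management": "Classroom management software",
}
print(duplicate_h1s(pages))  # the shared H1 maps to all three vs-competitors pages
```

The same check, run post-fix, should return an empty dict once each Comparison page carries a product- and competitor-specific heading.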
The /classroom-management page has no 'Resources' section linking to evaluation tools — curriculum directors designing teacher pilot programs (gg_144) have no GoGuardian-authored rubric to work from.
The Interactive Lesson & Assessment Tools feature has the lowest visibility rate of any tracked feature at 16.7% (1/6 queries) with a 0% conditional win rate (0/1 visible query) — GoGuardian has no content serving curriculum directors evaluating classroom management platforms with built-in formative assessment tools.
The /classroom-management page lacks a 'Measuring Instructional Time Impact' section that gg_129 requires — curriculum directors building business cases need specific measurement approaches (minutes recovered per class, engagement proxy metrics) to present to school boards.
The /admin page describes filtering capabilities but does not explicitly list the supported policy hierarchy (per-student, per-class, per-grade, per-building, per-OU) that gg_039 and gg_043 require — buyers at Requirements Building stage need explicit policy granularity specs, not feature descriptions.
The /classroom-management page does not address the 'half the teachers stopped using it' adoption failure scenario in gg_010 — a problem-identification framing that buyers use when they've already had a bad implementation experience.
The /teacher/vs-competitors page shares the generic H1 'GoGuardian beats the competition' with two other Comparison pages (per L1 finding duplicate_h1_comparison_pages) — no Chromebook-specific or LanSchool-specific heading captures the Comparison framing buyers use in gg_079 and gg_098.
The /discover page does not present its reporting capabilities in the 'board presentation' framing that gg_024 uses — superintendents asking about board-ready reporting need to see report format examples and output types, not data analytics product descriptions.
The /blog/5-problems-with-youtube-in-the-classroom post is likely over 1 year old (per L1 finding stale_content_marketing noting multiple posts with confirmed dates older than 365 days) and does not include the granularity Comparison data that gg_033 and gg_077 require.
All pages returned substantial text content via our analysis method, suggesting no widespread client-side rendering (CSR) failure. However, we cannot definitively confirm from the rendered output alone whether content is server-rendered or client-rendered.
Meta descriptions, Open Graph tags, and Twitter Card markup are not visible in rendered markdown output. We could not verify whether pages have optimized meta descriptions or social sharing metadata.
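For reference, the metadata in question would sit in each page's <head> roughly as follows; the copy and values are illustrative placeholders, not recommended wording:

```html
<meta name="description" content="Side-by-side GoGuardian Admin vs. Lightspeed Filter comparison: CIPA reporting, policy granularity, and device coverage.">
<meta property="og:title" content="GoGuardian Admin vs. Lightspeed Filter">
<meta property="og:description" content="Feature-by-feature comparison for district IT procurement.">
<meta property="og:type" content="website">
<meta name="twitter:card" content="summary">
```

Verifying the presence (or absence) of tags like these would require inspecting raw page HTML rather than rendered markdown output.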
All three workstreams can start this week.
[Synthesis] L1 fixes execute first in dependency order because the sitemap lastmod fix unblocks freshness credit for all subsequently updated and created pages — without it, L2 and L3 improvements won't be credited as fresh by AI crawlers regardless of publication date. L2 optimizations are the largest category (80 recommendations) because GoGuardian's content infrastructure is broadly in place but systematically misaligned with the specific questions buyers ask at each stage. The five L3 NIOs represent complete content voids where GoGuardian has product capability but no AI-citable content — these require creation, not optimization, and no L1 or L2 investment changes that outcome.