Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about NeuroGuard+'s market — your job is to tell us what we got right, what we got wrong, and what we missed.
Before we measure citation visibility in the concussion prevention equipment space, three signals tell us whether AI crawlers can access and trust NeuroGuard+'s site. They anchor every section that follows.
AI search is transforming how concussion prevention equipment buyers discover and evaluate solutions. The category's reliance on clinical credibility and safety certification creates a unique GEO dynamic — buyers querying AI platforms expect evidence-backed recommendations, and the platforms reward domains with structured, fresh, authoritative content. NeuroGuard+ is entering this measurement at a stage where establishing visibility compounds: early citations build domain trust that reinforces future citations.
This document validates three inputs before the audit runs. The competitive landscape establishes which head-to-head matchups shape the query set. The buyer personas determine which search intent patterns we test across the purchase funnel — from parent research through institutional procurement. And the technical baseline reveals whether AI platforms can access and trust the site's content at all. Each section includes targeted validation questions whose answers directly calibrate the audit architecture.
The validation call is a 45–60 minute decision-making session with two tracks. Track one: input validation — confirming that the entities, tiers, and ratings in this document reflect actual deal dynamics. Corrections here reshape query prioritization across the selected AI platforms. Track two: engineering triage — reviewing which technical findings your team can resolve before audit results arrive, improving baseline visibility independent of the call outcome.
What this document is This Engagement Foundation Review presents our research into the concussion prevention mouthguard market — the competitors, buyer personas, product features, pain points, and technical baseline that will drive the GEO audit. Every section maps to a component of the buyer query set we'll construct.
What we need from you Look for the purple boxes throughout this document. Each one asks a specific question about your business that we can't answer from outside-in research alone. Your answers at the validation call will calibrate the audit — which queries we prioritize, which competitors we test head-to-head, and which personas drive the intent architecture.
How to read confidence badges Every data point carries a confidence badge. High = sourced directly from your site or verified third-party data. Medium = inferred from category patterns or partial data. Low = best-guess that needs validation. Medium and low badges are where your input matters most.
Validate NeuroGuard+ positions as both concussion prevention and athletic performance enhancement (15–25% strength gains from jaw alignment). Does the performance angle drive a separate buying conversation — e.g., coaches seeking a competitive edge versus administrators seeking liability protection — or is it a secondary selling point? If separate, we split the query set into safety-motivated and performance-motivated clusters targeting different buyer motivations.
5 personas: 3 decision-makers, 1 evaluator, 1 influencer. These personas drive the buyer query set — each role searches differently across the concussion prevention mouthguard purchase funnel.
Critical Review Area Personas have the highest impact on query architecture. If a persona is missing, misclassified, or irrelevant, entire query clusters will target the wrong intent. Review each role below and flag corrections at the validation call.
Data Sourcing Note Role, department, seniority, influence level, and veto power are sourced from the knowledge graph. Buying jobs and query focus areas are synthesized from the persona's role, technical level, and position in the purchase funnel — they represent our best inference of how this buyer would search, not direct observational data.
→ Does the Athletic Director own the concussion equipment budget directly, or does final approval sit with a superintendent or school board? If budget authority is higher, we add an administrative decision-maker persona and shift approval-stage queries upward.
→ Do coaches in other contact sports (soccer, lacrosse, hockey) make independent equipment purchasing decisions, or does the AD centralize all safety gear procurement? If sport-specific coaches buy independently, we need separate coaching personas with sport-specific query language.
→ Does the Head Athletic Trainer have formal sign-off authority on safety equipment purchases, or is the role advisory to the AD? If advisory, we reclassify as evaluator and redistribute decision-stage queries to the Athletic Director.
→ Does this persona represent competitive travel clubs, recreational leagues, or both? If rec leagues have fundamentally different budgets and decision cycles than travel clubs, we may need separate institutional personas with different query patterns.
→ Does the sports parent buy independently after personal research, or primarily in response to coach or team recommendations? If purchases are coach-driven, query focus shifts from discovery-stage to validation-stage queries and the parent persona's weight in the query set decreases.
Missing Personas? Three roles not in the current set that may appear in NeuroGuard+'s deals: School District Risk Manager / Insurance Coordinator (if liability concerns and insurance premium reduction drive institutional mandates), Pediatric Sports Medicine Physician (if doctor recommendations drive parent purchases — particularly relevant given NeuroGuard+'s neuromuscular mechanism claims), Equipment Manager for large programs (if D1/D2 programs have dedicated procurement staff separate from coaching). Who else shows up in your deals?
5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head matchups the audit tests across concussion prevention comparison queries.
Competitive GEO Context Getting these tiers right determines which queries test direct competitive differentiation — queries like "best concussion prevention mouthguard" or "NeuroGuard+ vs Q-Collar" — versus broader category awareness. Each primary competitor generates 6–8 head-to-head queries, totaling approximately 30–40 direct comparison queries for the 5 primary competitors. We're less certain about Rezon Wear's tier — if they rarely appear in North American deals, moving them to secondary would shift approximately 8 queries out of the head-to-head set.
Validate Three questions: (1) Is Rezon Wear (medium confidence) actually appearing in North American competitive deals, or is their limited distribution making them irrelevant to NeuroGuard+'s market? If irrelevant, we move them to secondary and reallocate ~8 head-to-head queries. (2) Should Prevent Biometrics be promoted to primary given the direct mouthguard form-factor overlap — even though they're detection vs. prevention, do they compete for the same budget line? (3) Are there DTC consumer mouthguard brands (SISU, Shock Doctor) that compete for parent purchases but aren't in this competitive set?
10 buyer-level capabilities mapped. These features determine which capability queries the audit tests — each feature generates queries in the buyer's language, not marketing copy.
A mouthguard that actually reduces my players' risk of getting a concussion during games and practice through proven biomechanical protection
Equipment that improves athlete strength, balance, and endurance beyond just safety — a competitive edge from jaw alignment
Third-party tested and certified concussion prevention with peer-reviewed studies and recognized safety ratings backing the claims
A protective mouthguard that athletes can wear all game without discomfort, overheating, or wanting to rip it out
Multiple levels of customization from self-fit to dentist-molded so every athlete gets the best possible fit for their mouth
One concussion prevention product that works across football, hockey, soccer, lacrosse, and combat sports without buying separate gear
A mouthguard that lets athletes breathe freely and communicate with teammates during play without removing it
A simple process to order concussion protection for an entire team with bulk pricing and easy fitting for every player
Available at major sporting goods stores where I can see it, touch it, and get it same-day instead of ordering online and waiting
Concussion protection that works alongside helmets, face cages, and other mandatory equipment without interference or extra bulk
Validate Two items need particular scrutiny: (1) Clinical Validation is rated weak — NeuroGuard+ has no FDA clearance, no Virginia Tech Helmet Lab rating, and no published RCTs. Is evidence acquisition actively underway (FDA submission, Virginia Tech testing, university partnerships)? If so, we adjust strength to moderate and add evidence-based differentiation queries. If not, the audit strategy pivots to comfort and performance positioning where NeuroGuard+ is stronger. (2) Performance Enhancement claims (15–25% strength gains) are rated moderate due to limited independent verification — are there unpublished studies or partnerships that would strengthen this? Are any features missing or candidates for merging?
9 pain points: 4 high, 5 medium severity. Pain point buyer language is how queries will be phrased — the audit tests whether AI platforms cite NeuroGuard+ when buyers express these frustrations.
Validate Three checks: (1) Is Liability Exposure (high severity, medium confidence) actually driving institutional purchases — do deals frequently cite legal risk as the trigger, or is it more of a background concern? If lower severity in practice, we de-weight liability-framed queries. (2) Is Team Budget Constraints (medium severity) actually higher — do deals frequently stall on cost justification for a $50/player mouthguard with debated science? (3) Are there regulatory or insurance-driven pain points missing — e.g., insurance premium reductions for organizations that mandate concussion prevention equipment, or state athletic association mandates?
6 findings from Layer 1 analysis. These are technical and structural issues — not content recommendations. Content gap analysis requires query response data and will be delivered in the full audit.
Engineering — Verify Now The top finding is a possible client-side rendering issue affecting 25 of 32 pages. Engineering should test /pages/data-research, /pages/how-it-works, and /blogs/neuroguard-blog with JavaScript disabled or using Google's Rich Results Test to determine if content is delivered in the initial HTML response. If confirmed, implementing server-side rendering is the highest-leverage fix before the audit. Additionally, audit schema markup on key pages — JSON-LD structured data could not be assessed and may be missing on non-product pages.
What we found: Automated content extraction returned only Shopify configuration code and JavaScript for 25 of 32 analyzed pages — all /pages/* routes (13 pages) and all /blogs/* routes (12 pages) failed to return rendered body content. Only /products/* routes (6 pages) and the collection page returned readable product descriptions. The homepage also failed to return rendered content.
Why it matters: If AI crawlers (GPTBot, ClaudeBot, PerplexityBot) face similar rendering challenges, the majority of the site's content — including the critical data-research, how-it-works, FAQ, and all blog posts — would be invisible for AI citation. AI platforms cannot cite content they cannot extract.
Recommended fix: Verify rendering behavior by testing key pages with JavaScript disabled in a browser, or use Google's Rich Results Test / URL Inspection tool. If content depends on client-side JavaScript, implement server-side rendering (SSR) for all page templates. If using a Shopify theme with heavy JavaScript, ensure Liquid templates include content in the initial HTML response.
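The JavaScript-disabled check can be approximated with a short script. This is a minimal sketch, not the audit tooling: it strips scripts and markup from a page's raw HTML and checks whether phrases you would expect on the rendered page survive; if none do, the content likely depends on client-side JavaScript. The sample HTML snippets below are hypothetical stand-ins for a Shopify shell page and a server-rendered page.

```python
import re

def looks_client_rendered(html: str, expected_phrases: list[str]) -> bool:
    """Heuristic: a page whose initial HTML lacks its own body copy
    is likely rendered client-side by JavaScript."""
    # Strip script/style blocks, then all remaining tags, leaving visible text.
    stripped = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", stripped)
    text = " ".join(text.split())
    # If none of the phrases we expect on the rendered page appear in the
    # initial response, flag the page for SSR review.
    return not any(p.lower() in text.lower() for p in expected_phrases)

# Hypothetical Shopify-style shell page: only config JS, no body copy.
shell = "<html><head><script>window.Shopify = {};</script></head><body><div id='app'></div></body></html>"
# Hypothetical server-rendered page: body copy present in the initial HTML.
ssr = "<html><body><h1>How It Works</h1><p>Jaw alignment and concussion prevention.</p></body></html>"

print(looks_client_rendered(shell, ["concussion prevention"]))  # True -> needs SSR review
print(looks_client_rendered(ssr, ["concussion prevention"]))    # False -> content in initial HTML
```

A crawler-facing check would fetch each URL with a plain HTTP client (no JS execution) and run this heuristic against the response body.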
What we found: All 12 blog posts report a lastmod date of February 21, 2024 — over 24 months ago. This likely reflects a platform migration rather than actual content creation dates, but regardless, AI crawlers see these pages as 24+ months stale. No blog post has been published or updated since.
Why it matters: 76.4% of AI-cited pages were updated within the previous 30 days (Ahrefs, 1.9M citation study). Content marketing pages older than 365 days are functionally invisible to freshness-weighted citation algorithms. NeuroGuard+'s blog posts covering concussion science and mouthguard comparisons compete against fresher competitor content for the same queries.
Recommended fix: Audit all 12 blog posts for accuracy and relevance. Republish updated versions with current dates for the highest-value posts (concussions-and-mouthguards, custom-vs-over-the-counter-mouthguards, kids-and-concussions-data). Establish a content refresh cadence — updating 2–3 posts per month to maintain a rolling 90-day freshness window.
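That cadence can be tracked with a small script that surfaces posts outside the rolling freshness window, oldest first, so refreshes are scheduled in order. A minimal sketch: the post handles and dates below are illustrative (the two migration-dated entries mirror the finding above; "recently-refreshed-post" is hypothetical).

```python
from datetime import date

def stale_posts(lastmod: dict[str, date], today: date, window_days: int = 90) -> list[str]:
    """Return post handles whose last update falls outside the rolling
    freshness window, most-overdue first."""
    overdue = {h: (today - d).days for h, d in lastmod.items()
               if (today - d).days > window_days}
    return sorted(overdue, key=overdue.get, reverse=True)

posts = {
    "concussions-and-mouthguards": date(2024, 2, 21),            # migration date
    "custom-vs-over-the-counter-mouthguards": date(2024, 2, 21),  # migration date
    "recently-refreshed-post": date(2026, 2, 1),                  # hypothetical
}
print(stale_posts(posts, today=date(2026, 3, 5)))
```

Run monthly, the top 2–3 handles become that month's refresh queue.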
What we found: Four commercially important pages have sitemap lastmod dates older than 12 months: how-it-works (November 2024, 15 months), sports (February 2025, 12 months), cheerleading (February 2025, 12 months), and custom-fit-mouthguards (February 2024, 24 months). Two additional pages — testimonials and testimonial-video — are 21+ months stale.
Why it matters: These pages cover core KG features: performance enhancement (how-it-works), multi-sport versatility (sports, cheerleading), and custom fit tiers (custom-fit-mouthguards). Stale product-commercial pages signal to AI platforms that the information may be outdated, reducing citation priority relative to competitors with fresher content on the same topics.
Recommended fix: Update these four pages with current product information, recent customer data, and refreshed claims. Even modest updates will reset the freshness signal, provided they include substantive additions rather than cosmetic date changes. Prioritize how-it-works and custom-fit-mouthguards as they map to core differentiating features.
What we found: JSON-LD schema markup is not visible in rendered output and could not be assessed for any of the 32 analyzed pages. Shopify product pages typically include Product schema automatically, but custom pages (/pages/*) and blog posts (/blogs/*) may lack appropriate structured data types (FAQPage for FAQ, Article for blog posts, HowTo for fitting guides).
Why it matters: Schema markup helps AI platforms understand page purpose and extract structured data for citations. Pages with appropriate schema types are more likely to be correctly interpreted and cited in AI-generated responses. Missing or generic schema means the site relies entirely on unstructured content signals.
Recommended fix: Audit schema markup using Google's Rich Results Test on key pages: /pages/faq (FAQPage), /pages/data-research (Article), /pages/how-it-works (HowTo), all blog posts (Article), and product pages (verify Product schema includes reviews, pricing, availability). Add missing schema types through Shopify theme code or a structured data app.
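A first-pass schema inventory can be scripted before reaching for Rich Results Test. This sketch pulls the @type values out of any JSON-LD blocks in a page's HTML so they can be compared against the expected types per template; the sample page is hypothetical.

```python
import json
import re

def jsonld_types(html: str) -> list[str]:
    """Collect @type values from every JSON-LD block on a page, so the
    audit can compare present schema types against expected ones."""
    blocks = re.findall(
        r'(?is)<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        html,
    )
    types = []
    for raw in blocks:
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is itself an audit finding
        items = data if isinstance(data, list) else [data]
        types.extend(i.get("@type", "?") for i in items if isinstance(i, dict))
    return types

# Hypothetical FAQ page carrying the expected FAQPage schema.
page = """<html><head>
<script type="application/ld+json">{"@context": "https://schema.org", "@type": "FAQPage"}</script>
</head></html>"""

expected = {"FAQPage"}
print("missing:", expected - set(jsonld_types(page)))
```

Looping this over the 32 analyzed URLs (once rendering is fixed) gives a page-by-page map of present vs. missing schema types.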
What we found: Meta descriptions and Open Graph tags are not accessible from rendered output and could not be assessed for any page. Shopify auto-generates basic meta descriptions from product/page content, but these auto-generated descriptions may be truncated or suboptimal for AI citation contexts.
Why it matters: Meta descriptions influence how AI platforms summarize pages in search results and citation contexts. Well-crafted meta descriptions with specific claims and differentiators improve the likelihood of accurate AI citations.
Recommended fix: Verify meta descriptions using browser developer tools or Screaming Frog on all commercial pages. Ensure each product page and key content page has a custom meta description that includes specific differentiating claims rather than generic marketing language.
What we found: All 18 product URLs share an identical lastmod timestamp of 2026-03-05, suggesting Shopify auto-updates when inventory or pricing changes — not when content is modified. Blog sitemap dates (all 2024-02-21) reflect a migration event rather than individual updates.
Why it matters: Unreliable lastmod timestamps reduce the sitemap's value as a freshness signal. When all products show the same date, crawlers cannot prioritize recently updated content. When blog dates all match, crawlers cannot distinguish recently refreshed content from genuinely stale content.
Recommended fix: This is a known Shopify platform limitation. For blog posts, ensure any content updates trigger a proper lastmod update. Consider adding visible "Last Updated" dates to blog posts and key pages to provide an additional freshness signal that both readers and AI crawlers can use.
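The unreliable-lastmod pattern is straightforward to detect programmatically. This sketch flags a sitemap when a single timestamp dominates its lastmod values, a sign the dates track platform events (migration, inventory sync) rather than real edits; the sample sitemap and URLs are illustrative.

```python
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def suspicious_lastmods(sitemap_xml: str, threshold: float = 0.8) -> bool:
    """Flag a sitemap whose lastmod values are dominated by one timestamp,
    suggesting they do not reflect individual content updates."""
    root = ET.fromstring(sitemap_xml)
    dates = [el.text for el in root.findall(".//sm:lastmod", NS)]
    if not dates:
        return False
    top_count = Counter(dates).most_common(1)[0][1]
    return top_count / len(dates) >= threshold

sitemap = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/blogs/a</loc><lastmod>2024-02-21</lastmod></url>
  <url><loc>https://example.com/blogs/b</loc><lastmod>2024-02-21</lastmod></url>
  <url><loc>https://example.com/blogs/c</loc><lastmod>2024-02-21</lastmod></url>
</urlset>"""
print(suspicious_lastmods(sitemap))  # True -> lastmod is not a usable freshness signal
```

When this fires, the visible "Last Updated" dates recommended above become the primary freshness signal instead of the sitemap.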
Note The low heading hierarchy (0.49), content depth (0.44), and passage extractability (0.47) scores are likely influenced by the rendering issue — if 25 of 32 pages returned only JavaScript configuration code, automated scoring could only evaluate the surface layer. Resolving the rendering issue and re-analyzing would likely improve these metrics significantly for pages that do have well-structured content behind the JavaScript layer.
Why Now
• AI search adoption is accelerating — buyer discovery patterns in the concussion prevention space are shifting quarter over quarter as parents and coaches increasingly ask AI platforms for equipment recommendations.
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates.
• Competitors who establish GEO visibility first create a structural disadvantage for late movers — Q30 Innovations and Storelli are already investing in content that AI platforms can extract and cite.
• Concussion prevention equipment is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies.
The full audit will measure NeuroGuard+'s citation visibility across buyer queries in the concussion prevention space — queries like "best mouthguard for concussion prevention," "how to protect youth athletes from concussions," and "concussion prevention equipment comparison." You'll see exactly which queries return results that include competitors like Q30 Innovations and Storelli but not NeuroGuard+ — and what it would take to appear in them. Resolving the technical rendering issues now improves the baseline before we measure it, giving the audit cleaner data to work with.
45–60 minutes to walk through this document together. We'll confirm personas, competitors, feature strengths, and pain point severity — every correction directly calibrates the query set.
Buyer queries constructed from the validated KG, executed across selected AI platforms (ChatGPT, Perplexity, Claude, Gemini). Each query tests whether NeuroGuard+ appears in the response.
Complete visibility analysis, competitive positioning across every query, and a three-layer action plan — technical fixes, content priorities, and strategic positioning moves ranked by citation impact.
Start Now — Engineering These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:
1. Verify page rendering: Test /pages/data-research, /pages/how-it-works, /pages/faq, and 2–3 blog posts with JavaScript disabled or using Google's Rich Results Test. If content depends on client-side JS, implement server-side rendering for all Shopify page templates.
2. Audit schema markup: Check JSON-LD structured data on /pages/faq (should have FAQPage), /pages/data-research (Article), blog posts (Article), and product pages (verify Product schema includes reviews, pricing, availability). Add missing schema types.
3. Add visible "Last Updated" dates: Add last-updated timestamps to blog posts and key commercial pages to provide an additional freshness signal that both readers and AI crawlers can use, working around Shopify's unreliable sitemap timestamps.
Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.