Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Vitally's market — your job is to tell us what we got right, what we got wrong, and what we missed.
Before we measure citation visibility in the customer success platform space, these three signals tell us whether AI crawlers can access and trust Vitally's site content.
AI search is reshaping how B2B SaaS companies discover and evaluate customer success platforms. Buyers who once relied on G2 grids and analyst reports are increasingly turning to ChatGPT, Perplexity, and Claude to shortlist vendors — and the platforms that AI cites consistently will capture disproportionate mindshare. Vitally is positioned in a mid-market segment where this shift is accelerating fastest, and establishing GEO visibility now creates a compounding advantage before enterprise-focused competitors like Gainsight and Totango recognize the opportunity.
This Foundation Review covers three inputs that drive the audit: the competitive landscape that shapes which head-to-head queries we test, the buyer personas that determine search intent patterns across the CS purchase decision, and the technical baseline that determines whether AI platforms can access Vitally's content at all. Each section presents our outside-in research for your validation — confirming or correcting these inputs before the audit runs ensures we're measuring what actually matters to your buyers.
The validation call is a decision-making session with two jobs. First, input validation: are the right competitors in the right tiers, do the personas reflect who actually signs off on CS platform purchases, and are feature strength ratings honest? Second, engineering triage: which Layer 1 technical fixes can start before results come back, and which require further investigation? Your answers on both fronts directly shape the query set and audit architecture.
Three things to know before you dive in.
What this is: This document presents the foundation we've built for Vitally's GEO audit — the competitive landscape, buyer personas, feature taxonomy, and technical baseline specific to the customer success platform category. Every element here drives the buyer query set that the audit will test. Getting these inputs right means the audit measures what actually matters to your buyers.
What we need from you: Purple boxes throughout the document contain specific validation questions. These aren't rhetorical — each one identifies a point where your answer changes the audit's query architecture. Review them before the validation call. The more precise your corrections, the sharper the audit output.
Confidence badges: Every data point carries a confidence badge. High means sourced from multiple verified references, Med means sourced from a single reference or inferred with supporting evidence, and Low means limited data or significant uncertainty. Focus your review time on the medium- and low-confidence items — those are where your corrections change the audit most.
The client profile anchors every query we build. Confirm these details match how buyers actually find and describe Vitally.
Validate: Vitally competes against enterprise platforms (Gainsight, Totango) but is classified as mid-market — does Vitally actively pursue enterprise deals with 1,000+ customer accounts, or does the ICP stop at mid-market? If Vitally is moving upmarket, we add enterprise-evaluation queries and adjust the Enterprise Scalability feature weighting from "weak" to a contested capability.
5 personas: 2 decision-makers, 2 evaluators, 1 influencer. Each persona drives a distinct query cluster in the audit.
Critical review area: Personas are the highest-leverage input in the audit — getting a persona wrong means an entire query cluster targets the wrong buyer. Review each persona's influence level and veto power carefully. A misclassified decision-maker generates queries for a buying stage that doesn't exist.
Data sourcing note: Persona names, roles, departments, influence levels, and veto power are sourced from the knowledge graph. Buying jobs and query focus areas are synthesized from role context and category patterns — these represent the search behaviors we expect from each role, not confirmed behavior.
→ Does the VP of CS hold full budget authority for the CS platform, or does the CRO co-sign? If David Chen (CRO) approves spend, we need to weight executive-approval queries differently than CS-leader-owned queries.
→ Does the CS Ops Director own the vendor shortlist and technical evaluation, or does a RevOps or IT team run procurement? If evaluation happens outside CS Ops, we shift integration-depth queries to target that team's search patterns instead.
→ Does a CRO typically participate in CS platform purchase decisions at Vitally's target companies, or does the VP of CS have full budget authority? If CROs aren't in the room, we remove this persona entirely and reallocate 15–20 executive-revenue queries to CS-specific decision stages.
→ Does the CS Team Lead's feedback carry weight during vendor evaluation, or is their input limited to post-purchase adoption? If they influence the shortlist, we add hands-on evaluation queries targeting usability and daily workflow comparisons.
→ Is "Head of CS" a distinct role from "VP of CS" at Vitally's target companies, or do these titles represent the same buyer? If they overlap, we merge Marcus and Maya into a single persona and eliminate duplicate query patterns across their clusters.
Missing personas? Three roles we didn't include but might be relevant: VP of Sales / CRO (if CS and Sales share renewal ownership and co-evaluate tooling), RevOps / Sales Ops Director (if the CS platform purchase runs through a centralized operations team rather than CS directly), or IT / Security Lead (if enterprise deals require IT procurement sign-off on data handling and SSO). Who else shows up in your deals?
5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head matchups the audit tests.
Why tiers matter: Primary competitors generate head-to-head comparison queries — "Vitally vs Gainsight," "best customer success platform for mid-market SaaS," and category evaluation queries where tier-1 vendors appear together. Getting these tiers right determines which ~30–40 queries test direct competitive differentiation vs. category awareness. Catalyst is listed as primary with Med confidence — the Totango merger may mean buyers no longer evaluate Catalyst independently, which would shift approximately 8 head-to-head queries out of the primary comparison set.
Validate: Three questions for the call: (1) Does Catalyst still appear as a separate option in your deals post-Totango merger, or should we consolidate it into the Totango matchup? (2) Velaris is listed with Low confidence — is this a vendor you encounter in competitive evaluations, or can we drop them? (3) Are there vendors we missed entirely — particularly any CRM-native CS modules (HubSpot Service Hub, Salesforce CS Cloud) that buyers compare against dedicated CS platforms?
12 capabilities mapped. Strength ratings determine whether the audit tests differentiation queries (strong features) or defensive positioning (moderate/weak features).
• Accurately predict which customers are at risk of churning with multi-factor health scores based on usage, engagement, and support data
• Automate repetitive CS motions like onboarding sequences, renewal outreach, and risk escalations so CSMs focus on high-value work
• Bidirectional sync with Salesforce, HubSpot, and product analytics tools so customer data stays consistent across all systems
• Track retention, expansion revenue, NRR, and team performance with dashboards that leadership can actually use for board reporting
• Get AI-generated account summaries, meeting prep, and next-best-action recommendations without manually reviewing every account
• Manage onboarding timelines, milestones, and task assignments to get customers to time-to-value faster
• Collect NPS and CSAT feedback directly within the customer journey and automatically trigger follow-up actions based on scores
• Give the whole revenue team a shared workspace with collaborative notes, internal docs, and account timelines instead of scattered Slack threads
• See exactly which features customers are using and where adoption is dropping off so CSMs can intervene before churn happens
• Proactively manage renewals and identify expansion opportunities with automated alerts and pipeline tracking tied to customer health
• Segment customers by ARR, plan, lifecycle stage, or health score to run targeted playbooks for each tier instead of one-size-fits-all
• Handle thousands of accounts with complex hierarchies, custom objects, and enterprise-grade permissions without the platform slowing down
Validate: Three areas to check: (1) Enterprise Scalability is rated Weak based on G2 feedback about complexity limitations with large account hierarchies — is this accurate, or has Vitally shipped improvements that change this rating? (2) AI-Powered CS Copilot is rated Moderate because Vitally AI is relatively new — where does it stand against Gainsight's AI capabilities today? (3) Are any features missing entirely — for example, digital-touch / tech-touch automation as a distinct capability from workflow automation?
9 pain points: 6 high, 3 medium severity. The buyer language in each pain point is how we'll phrase queries — accuracy here directly shapes what the audit measures.
Validate: Three checks: (1) "Missed expansion revenue" was sourced via LLM inference (Med confidence) — does this resonate with how Vitally's buyers actually describe this problem, or is expansion tracking more of a nice-to-have than a purchase driver? (2) Is "scaling CS without headcount" the right framing, or do buyers more commonly describe this as "digital-touch vs. high-touch" segmentation? The query language shifts depending on how buyers frame it. (3) Are there category-specific pains we missed — for example, multi-product / multi-entity complexity (managing CS across multiple product lines), CS-to-sales handoff friction (renewal ownership disputes between CS and AE teams), or proving CS ROI to finance (justifying CS headcount with revenue attribution)?
7 findings from the technical baseline analysis. These are engineering-actionable items that affect AI crawler access and content extractability.
Engineering action needed: The top finding is a high-severity issue: 2 broken URLs in the footer navigation (the competitive hub page and a comparison page both return 404 errors). Engineering should restore or redirect these immediately — broken internal links degrade site-level trust signals for AI crawlers. Additionally, the sitemap lacks lastmod timestamps on all 1,000+ URLs, which eliminates the freshness signal AI platforms use to prioritize content for citation. Both fixes can start before the validation call.
What we found: Two URLs linked from the site footer navigation return 404 errors: the 'Why Vitally' competitive hub page at /customer-success-platforms and the 'vs. CRM' comparison page at /customer-success-platforms/vitally-vs-crm. These are publicly indexed URLs that AI crawlers will encounter and fail to process.
Why it matters: Broken pages in the primary navigation create dead ends for AI crawlers indexing the site. The competitive hub page is the parent page for all five competitor comparison pages — its absence means AI platforms cannot discover the comparison section through hierarchical crawling. Broken internal links also degrade site-level trust signals used by AI citation algorithms.
Recommended fix: Either restore the /customer-success-platforms hub page and /customer-success-platforms/vitally-vs-crm comparison page, or update footer navigation links to point to live URLs. If these pages were intentionally removed, implement 301 redirects to the most relevant live page.
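A quick way to verify the fix once it ships: the sketch below (Python, standard library only) requests both footer URLs and reports whether each now resolves or redirects to a live page. The www.vitally.io host is our assumption; adjust if the canonical domain differs.

```python
# Verify the two broken footer URLs now resolve (or 301 to a live page).
# Assumes www.vitally.io is the production domain; adjust if needed.
import urllib.error
import urllib.request

PATHS = [
    "/customer-success-platforms",                 # 'Why Vitally' competitive hub
    "/customer-success-platforms/vitally-vs-crm",  # 'vs. CRM' comparison page
]

for path in PATHS:
    url = "https://www.vitally.io" + path
    try:
        # urlopen follows redirects, so a 301 to a live page also counts as fixed
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"OK   {resp.status} {url} -> {resp.url}")
    except urllib.error.HTTPError as err:
        print(f"FAIL {err.code} {url}")
```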
What we found: The sitemap.xml contains over 1,000 URLs but none include lastmod, changefreq, or priority attributes. Every entry contains only the <loc> element.
Why it matters: AI crawlers and search engines use sitemap lastmod dates as a primary signal for content freshness. Without these timestamps, crawlers cannot prioritize recently updated content over stale pages. This is especially costly for competitor comparison pages and blog content, where freshness directly impacts citation likelihood — 76.4% of AI-cited pages were updated within 30 days.
Recommended fix: Add accurate lastmod timestamps to all sitemap entries, particularly for competitor comparison pages, product pages, and recent blog posts. Ensure lastmod reflects actual content changes, not automated regeneration timestamps.
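Once lastmod values are in place, a check like the following sketch confirms coverage. It assumes the sitemap lives at /sitemap.xml and is a flat URL set rather than a sitemap index; both assumptions are worth confirming against the live site.

```python
# Count sitemap <url> entries missing <lastmod>.
# Assumes /sitemap.xml is a flat urlset, not a sitemap index.
import urllib.request
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen("https://www.vitally.io/sitemap.xml", timeout=30) as resp:
    root = ET.fromstring(resp.read())

urls = root.findall("sm:url", NS)
missing = [u.findtext("sm:loc", namespaces=NS)
           for u in urls if u.find("sm:lastmod", NS) is None]

print(f"{len(missing)} of {len(urls)} entries lack <lastmod>")
for loc in missing[:10]:  # show a sample of offenders
    print("  ", loc)
```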
What we found: The majority of product, feature, and comparison pages use multiple H1 tags — ranging from 4 to 14 H1 elements per page. The CSM solution page has 14 H1 tags, comparison pages average 8, and product pillar pages each have 6–7.
Why it matters: AI models use heading hierarchy to identify page topics and extract structured passages. Multiple H1 tags dilute the primary topic signal and make it harder for LLMs to determine which heading represents the page's main subject.
Recommended fix: Restructure pages to use a single H1 for the primary page topic, demoting secondary sections to H2. Focus first on the 5 competitor comparison pages and 4 solution pages.
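Progress on this fix is easy to track with a script. The sketch below counts H1 elements per page using only the standard library; the comparison-page paths are illustrative placeholders, not confirmed slugs.

```python
# Count <h1> tags per page so the single-H1 fix can be verified.
# Page paths are illustrative; confirm real slugs against the live sitemap.
import urllib.request
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":  # HTMLParser lowercases tag names
            self.count += 1

PAGES = [
    "/customer-success-platforms/vitally-vs-gainsight",
    "/customer-success-platforms/vitally-vs-churnzero",
]

for path in PAGES:
    html = urllib.request.urlopen("https://www.vitally.io" + path, timeout=15).read()
    counter = H1Counter()
    counter.feed(html.decode("utf-8", errors="replace"))
    flag = "OK " if counter.count == 1 else "FIX"
    print(f"{flag} {counter.count} x <h1>  {path}")
```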
What we found: All 5 competitor comparison pages (vs. Gainsight, ChurnZero, Totango, Planhat, Catalyst) display no visible publication or last-updated date. The only temporal references are G2 badge descriptions mentioning 'Summer 2025'.
Why it matters: Competitor comparison content is among the most frequently cited content types in AI vendor evaluation responses. AI platforms heavily weight freshness — undated comparison content is deprioritized relative to competitors' dated alternatives.
Recommended fix: Add a visible 'Last updated: [date]' element to each comparison page and commit to updating these pages quarterly. Also add lastmod to these URLs in sitemap.xml.
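A crude but useful spot-check, since a plain-text scan approximates what an LLM sees in extracted page content: the sketch below looks for a visible 'Last updated' string on each comparison page. Paths are illustrative.

```python
# Spot-check that each comparison page carries a visible freshness cue.
# A substring scan is crude but mirrors what text extraction surfaces.
import urllib.request

COMPARISON_PATHS = [  # illustrative; confirm real slugs before running
    "/customer-success-platforms/vitally-vs-gainsight",
    "/customer-success-platforms/vitally-vs-totango",
]

for path in COMPARISON_PATHS:
    html = (urllib.request.urlopen("https://www.vitally.io" + path, timeout=15)
            .read().decode("utf-8", errors="replace"))
    has_date = "Last updated" in html
    print(f"{'OK ' if has_date else 'ADD'} {path}")
```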
What we found: Our analysis method returns rendered page content as markdown text, which does not include JSON-LD schema markup, meta descriptions, or Open Graph tags. We cannot determine whether appropriate schema types are present on any page.
Why it matters: Schema markup provides explicit structured data that AI crawlers use to classify page content and extract key facts. FAQPage schema on feature pages, Product schema on product pages, and Article schema on case studies all improve AI extractability.
Recommended fix: Audit schema markup using Google's Rich Results Test or Schema.org validator on key page types. Prioritize the 5 comparison pages and 7 feature pages.
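As a first pass before running the Rich Results Test, the sketch below pulls any JSON-LD blocks from a page's raw HTML and lists their @type values. The URL is illustrative, and nested @graph structures would need slightly more handling than shown.

```python
# List the JSON-LD @type values on a page as a quick schema inventory.
# URL is illustrative; @graph wrappers would need extra unwrapping.
import json
import re
import urllib.request

url = "https://www.vitally.io/customer-success-platforms/vitally-vs-gainsight"
html = (urllib.request.urlopen(url, timeout=15)
        .read().decode("utf-8", errors="replace"))

# Pull every <script type="application/ld+json"> block from the raw HTML
blocks = re.findall(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    html, re.DOTALL | re.IGNORECASE)

if not blocks:
    print("No JSON-LD found -- schema markup is likely missing")
for raw in blocks:
    try:
        data = json.loads(raw)
        items = data if isinstance(data, list) else [data]
        print([item.get("@type") for item in items])
    except json.JSONDecodeError:
        print("Malformed JSON-LD block")
```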
What we found: Meta descriptions and Open Graph tags are not visible in rendered markdown output. We cannot verify whether commercial pages have unique, keyword-optimized meta descriptions or proper OG tags.
Why it matters: Meta descriptions provide AI crawlers with a concise summary of page content and are used by some AI platforms as a primary signal for page relevance. Missing or duplicate meta descriptions reduce the specificity of AI citations.
Recommended fix: Verify meta descriptions and OG tags using browser developer tools or Screaming Frog. Ensure each commercial page has a unique meta description (150–160 characters) that includes the primary topic and Vitally's name.
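The same verification can be scripted. This sketch extracts the meta description and core Open Graph tags with the standard-library parser and flags descriptions outside the 150–160 character target; the URL is illustrative.

```python
# Extract meta description and og: tags, flagging off-target lengths.
# URL is illustrative; run across all commercial pages in practice.
import urllib.request
from html.parser import HTMLParser

class MetaScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")
        if key in ("description", "og:title", "og:description", "og:image"):
            self.tags[key] = a.get("content", "")

url = "https://www.vitally.io/customer-success-platforms/vitally-vs-gainsight"
html = (urllib.request.urlopen(url, timeout=15)
        .read().decode("utf-8", errors="replace"))
scanner = MetaScanner()
scanner.feed(html)

desc = scanner.tags.get("description", "")
print(f"description ({len(desc)} chars): {desc!r}")
if not 150 <= len(desc) <= 160:
    print("  -> outside the 150-160 character target")
for og in ("og:title", "og:description", "og:image"):
    print(f"{og}: {'present' if og in scanner.tags else 'MISSING'}")
```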
What we found: Our analysis method cannot detect whether pages rely on client-side rendering frameworks that may block AI crawlers. All pages returned substantive text content, suggesting server-side rendering is likely in place (consistent with Webflow), but this cannot be confirmed without testing with JavaScript disabled.
Why it matters: AI crawlers that do not execute JavaScript will see empty or partial content on CSR-dependent pages. While Vitally's pages appear to render content successfully, confirming this eliminates a potential category of AI visibility issues.
Recommended fix: Verify the rendering method by loading key pages with JavaScript disabled, focusing on product and feature pages. Based on the substantive content we observed in rendered output (consistent with Webflow's server-side delivery), this is likely not an issue.
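One way to run this verification without a browser: a plain HTTP fetch executes no JavaScript, so the visible text it returns approximates what a non-rendering AI crawler sees. In the sketch below, the word-count threshold and page paths are illustrative assumptions.

```python
# Approximate a no-JavaScript crawler: fetch raw HTML, strip tags, and
# measure remaining visible text. Threshold and paths are illustrative.
import re
import urllib.request

PAGES = ["/", "/customer-success-platforms/vitally-vs-gainsight"]

for path in PAGES:
    html = (urllib.request.urlopen("https://www.vitally.io" + path, timeout=15)
            .read().decode("utf-8", errors="replace"))
    # Remove script/style blocks first, then all remaining tags
    text = re.sub(r"<script.*?</script>|<style.*?</style>|<[^>]+>", " ",
                  html, flags=re.DOTALL | re.IGNORECASE)
    words = len(text.split())
    verdict = "SSR likely" if words > 200 else "check CSR"
    print(f"{verdict}: {words} words of no-JS text at {path}")
```

Substantive word counts here would confirm the Webflow server-side rendering hypothesis and close out this finding.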
Note on scoring: 25 of 35 pages have no detectable publication date, making freshness scores unreliable for the majority of the site. All 23 product/commercial pages and both structural pages returned null freshness — the 0.28 weighted average is driven entirely by the 10 blog/content marketing pages. Schema coverage could not be assessed for any page. These gaps mean the baseline scores understate or miss key dimensions of AI readiness.
Why now
• AI search adoption is accelerating — buyer discovery patterns in B2B SaaS are shifting quarter over quarter as more CS leaders use ChatGPT and Perplexity to evaluate vendors
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates
• Competitors who establish GEO visibility first create a structural disadvantage for late movers — Gainsight and ChurnZero are already producing comparison content at scale
• The customer success platform category is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies
The full audit will measure Vitally's citation visibility across buyer queries in the customer success platform space — including queries like "best CS platform for mid-market SaaS," "how to reduce churn with automation," and "Vitally vs Gainsight for customer health scoring." You'll see exactly which queries return results that include your competitors but not Vitally — and what it would take to appear in them. Fixing the technical baseline now (broken navigation, sitemap timestamps, heading hierarchy) improves the foundation before we measure it.
• Validation call (45–60 minutes): Walk through this document together, confirm or correct every input, and lock in the query architecture for the audit.
• Audit run: Build buyer queries from validated inputs and run them across selected AI platforms to measure citation visibility and competitive positioning.
• Deliverables: Visibility analysis, competitive positioning, content gap prioritization, and a three-layer action plan — tactical, structural, and strategic.
Start now — no call required: These technical fixes don't depend on the validation call and will improve Vitally's AI crawler baseline before we measure it:
• Restore or redirect broken navigation URLs — the /customer-success-platforms hub and /vitally-vs-crm page are returning 404s from every page's footer. Engineering can fix with 301 redirects in under a day.
• Add lastmod timestamps to sitemap.xml — all 1,000+ URLs lack freshness signals. Configure Webflow to output accurate lastmod dates.
• Verify schema markup on key page types — run Google's Rich Results Test on the 5 comparison pages and 7 feature pages to identify structured data gaps.
Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.