Engagement Foundation Review

Vitally Audit Foundation

Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Vitally's market — your job is to tell us what we got right, what we got wrong, and what we missed.

Prepared March 2026
vitally.io
Customer Success Platform
GEO Readiness

Where You Stand Today

Before we measure citation visibility in the customer success platform space, these three signals tell us whether AI crawlers can access and trust Vitally's site content.

Technical Readiness
Needs Attention
1 high-severity finding: 2 broken URLs in footer navigation (the competitive hub page and the vs. CRM comparison page both return 404s). 5 medium-severity and 1 low-severity findings cover sitemap timestamps, heading hierarchy, and unverified schema markup.
Content Freshness
At Risk
Weighted freshness: 0.28, driven by content marketing pages: 9 of 10 blog/content pages are older than 6 months, outside the 2–3 month window where AI platforms concentrate 76% of citations. 23 product/commercial pages have no detectable publication date (verify manually), and only 1 page has been updated within the last 90 days.
Crawl Coverage
Good
robots.txt exists and does not block any AI crawlers (GPTBot, ClaudeBot, PerplexityBot all permitted). Sitemap accessible with 1,000+ URLs indexed. No blocked crawler paths detected.
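The crawler-permission check above can be reproduced with Python's standard-library robots.txt parser. The file content below is an illustrative stand-in mirroring what we observed (no AI crawlers blocked); to re-check production, point the parser at the live https://vitally.io/robots.txt with set_url() and read() instead.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content -- a wildcard record with an empty
# Disallow, i.e. nothing is blocked for any user agent.
robots_lines = [
    "User-agent: *",
    "Disallow:",
    "",
    "Sitemap: https://vitally.io/sitemap.xml",
]

parser = RobotFileParser()
parser.parse(robots_lines)

# The three AI crawlers the audit tracks -- all should come back allowed.
for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    status = "allowed" if parser.can_fetch(bot, "https://vitally.io/") else "BLOCKED"
    print(f"{bot}: {status}")  # prints "allowed" for each bot
```

Re-running this quarterly catches accidental crawler blocks introduced by site changes before they cost citation visibility.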
Executive Summary

What You Need to Know

AI search is reshaping how B2B SaaS companies discover and evaluate customer success platforms. Buyers who once relied on G2 grids and analyst reports are increasingly turning to ChatGPT, Perplexity, and Claude to shortlist vendors — and the platforms that AI cites consistently will capture disproportionate mindshare. Vitally is positioned in a mid-market segment where this shift is accelerating fastest, and establishing GEO visibility now creates a compounding advantage before enterprise-focused competitors like Gainsight and Totango recognize the opportunity.

This Foundation Review covers three inputs that drive the audit: the competitive landscape that shapes which head-to-head queries we test, the buyer personas that determine search intent patterns across the CS purchase decision, and the technical baseline that determines whether AI platforms can access Vitally's content at all. Each section presents our outside-in research for your validation — confirming or correcting these inputs before the audit runs ensures we're measuring what actually matters to your buyers.

The validation call is a decision-making session with two jobs. First, input validation: are the right competitors in the right tiers, do the personas reflect who actually signs off on CS platform purchases, and are feature strength ratings honest? Second, engineering triage: which Layer 1 technical fixes can start before results come back, and which require further investigation? Your answers on both fronts directly shape the query set and audit architecture.

TL;DR — Action Items
  • 🟡 High: Broken Pages Linked from Site Navigation — Engineering should restore or 301-redirect the /customer-success-platforms hub page and /customer-success-platforms/vitally-vs-crm, both returning 404s from footer nav across all pages.
  • 🟣 Validate at the Call: David Chen (CRO) persona — This persona was inferred from category patterns, not sourced from reviews. If CROs don't participate in CS platform purchase decisions at Vitally's target companies, we remove 15–20 executive-revenue queries and reallocate to CS-specific decision stages.
  • 🟣 Validate at the Call: Catalyst as primary competitor — Post-merger with Totango, Catalyst's product independence is uncertain. If buyers no longer evaluate Catalyst separately, we consolidate ~8 head-to-head queries into the Totango matchup.
  • ✅ Start Now: Add lastmod timestamps to sitemap.xml — All 1,000+ URLs lack lastmod dates, eliminating the freshness signal AI crawlers use to prioritize content. Engineering can add these without waiting for the validation call.
  • 📋 Validation Call: Feature strength distribution across 12 capabilities (7 rated strong, 4 moderate, 1 weak) — the accuracy of these ratings determines whether capability queries emphasize Vitally's differentiation or test defensive positioning against Gainsight and ChurnZero.
How This Works

Reading This Document

Three things to know before you dive in.

What this is: This document presents the foundation we've built for Vitally's GEO audit — the competitive landscape, buyer personas, feature taxonomy, and technical baseline specific to the customer success platform category. Every element here drives the buyer query set that the audit will test. Getting these inputs right means the audit measures what actually matters to your buyers.

What we need from you: Purple boxes throughout the document contain specific validation questions. These aren't rhetorical — each one identifies a point where your answer changes the audit's query architecture. Review them before the validation call. The more precise your corrections, the sharper the audit output.

Confidence badges: Every data point carries one. High means sourced from multiple verified references, Med means sourced from a single reference or inferred with supporting evidence, and Low means limited data or significant uncertainty. Focus your review time on medium- and low-confidence items — those are where your input changes the audit most.

Company Profile

Vitally

The client profile anchors every query we build. Confirm these details match how buyers actually find and describe Vitally.

Company Overview

Company Name: Vitally (High confidence)
Domain: vitally.io
Name Variants: Vitally.io, Vitally Inc, Vitally CS, Vitally Customer Success
Category: AI-powered customer success platform for B2B SaaS
Segment: Mid-market
Key Products: Vitally Customer Success Platform, Vitally AI
Positioning: Unifies customer health scoring, workflow automation, and team collaboration to reduce churn and drive net revenue retention

Validate: Vitally competes against enterprise platforms (Gainsight, Totango) but is classified as mid-market — does Vitally actively pursue enterprise deals with 1,000+ customer accounts, or does the ICP stop at mid-market? If Vitally is moving upmarket, we add enterprise-evaluation queries and adjust the Enterprise Scalability feature weighting from "weak" to a contested capability.

Buyer Personas

Who Buys Customer Success Platforms

5 personas: 2 decision-makers, 2 evaluators, 1 influencer. Each persona drives a distinct query cluster in the audit.

Critical review area: Personas are the highest-leverage input in the audit — getting a persona wrong means an entire query cluster targets the wrong buyer. Review each persona's influence level and veto power carefully. A misclassified decision-maker generates queries for a buying stage that doesn't exist.

Data sourcing note: Persona names, roles, departments, influence levels, and veto power are sourced from the knowledge graph. Buying jobs and query focus areas are synthesized from role context and category patterns — these represent the search behaviors we expect from each role, not confirmed behavior.

Maya Richardson
VP of Customer Success
Decision-maker High
Senior CS leader responsible for retention strategy, team performance, and NRR targets. Owns the CS platform purchase decision and reports directly to the CRO or CEO at mid-market B2B SaaS companies.
Veto power: Yes — final sign-off on CS platform selection and budget allocation
Technical level: Medium
Primary buying jobs: Evaluate platform ROI against retention/expansion KPIs, justify budget to executive team, ensure platform supports scaling CS without proportional headcount growth
Query focus areas: "Best customer success platform for SaaS," "CS platform ROI," "how to reduce churn with automation," "Vitally vs Gainsight for mid-market"
Source: Review mining — G2 reviewer titles, case studies

Does the VP of CS hold full budget authority for the CS platform, or does the CRO co-sign? If David Chen (CRO) approves spend, we need to weight executive-approval queries differently than CS-leader-owned queries.

Jordan Okafor
Director of CS Operations
Evaluator High
Owns the operational infrastructure of the CS org — data pipelines, health score models, workflow automation rules, and integration architecture. Builds the technical requirements doc and runs the vendor evaluation process.
Veto power: No — but technical rejection effectively kills a deal
Technical level: High
Primary buying jobs: Define integration requirements (Salesforce, HubSpot, product analytics), evaluate API quality and data model flexibility, stress-test automation capabilities against current workflow complexity
Query focus areas: "Customer success platform Salesforce integration," "CS automation workflows," "health score customization," "Vitally API documentation"
Source: Review mining — G2 reviewer titles, operational role patterns

Does the CS Ops Director own the vendor shortlist and technical evaluation, or does a RevOps or IT team run procurement? If evaluation happens outside CS Ops, we shift integration-depth queries to target that team's search patterns instead.

David Chen
Chief Revenue Officer
Decision-maker Med
C-Suite executive responsible for the full revenue lifecycle — sales, CS, and expansion. Evaluates CS platforms through a revenue-impact lens: NRR, expansion pipeline, and board-level retention metrics. Inferred from category patterns — not directly sourced from Vitally-specific buyer data.
Veto power: Yes — budget authority over revenue-org tooling
Technical level: Low
Primary buying jobs: Approve CS platform spend against revenue targets, evaluate impact on NRR and gross retention, align CS tooling with sales-to-CS handoff strategy
Query focus areas: "Customer success platform ROI," "how CS drives NRR," "CS platform for revenue teams," "reduce churn SaaS"
Source: LLM inference — inferred from B2B SaaS purchase patterns, not directly observed

Does a CRO typically participate in CS platform purchase decisions at Vitally's target companies, or does the VP of CS have full budget authority? If CROs aren't in the room, we remove this persona entirely and reallocate 15–20 executive-revenue queries to CS-specific decision stages.

Aisha Patel
Customer Success Team Lead
Influencer High
Senior individual contributor who manages a book of accounts and mentors junior CSMs. Provides bottom-up feedback on tool usability, daily workflow friction, and feature gaps. Their advocacy or resistance during a trial period heavily influences adoption success.
Veto power: No
Technical level: Medium
Primary buying jobs: Evaluate day-to-day usability during trial, identify workflow friction points, champion or block adoption based on team experience
Query focus areas: "Best CS tools for managing accounts," "customer success workflow tips," "how to track customer health," "Vitally reviews"
Source: Review mining — G2 reviewer titles, team-level role patterns

Does the CS Team Lead's feedback carry weight during vendor evaluation, or is their input limited to post-purchase adoption? If they influence the shortlist, we add hands-on evaluation queries targeting usability and daily workflow comparisons.

Marcus Lindgren
Head of Customer Success
Evaluator High
Director-level CS leader who owns team execution, playbook design, and customer outcomes. Drives the requirements-gathering process and evaluates platforms against operational needs — onboarding efficiency, health score accuracy, and team collaboration.
Veto power: No — but owns the requirements doc and vendor recommendation
Technical level: Medium
Primary buying jobs: Define platform requirements from CS team needs, evaluate onboarding and implementation timelines, assess feature depth against current pain points
Query focus areas: "Customer success platform comparison," "best CS tool for onboarding," "how to improve customer health scores," "CS platform implementation timeline"
Source: Review mining — G2 reviewer titles, CS leadership patterns

Is "Head of CS" a distinct role from "VP of CS" at Vitally's target companies, or do these titles represent the same buyer? If they overlap, we merge Marcus and Maya into a single persona and eliminate duplicate query patterns across their clusters.

Missing personas? Three roles we didn't include but might be relevant: VP of Sales / CRO (if CS and Sales share renewal ownership and co-evaluate tooling), RevOps / Sales Ops Director (if the CS platform purchase runs through a centralized operations team rather than CS directly), or IT / Security Lead (if enterprise deals require IT procurement sign-off on data handling and SSO). Who else shows up in your deals?

Competitive Landscape

Who Vitally Competes Against

5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head matchups the audit tests.

Why tiers matter: Primary competitors generate head-to-head comparison queries — "Vitally vs Gainsight," "best customer success platform for mid-market SaaS," and category evaluation queries where tier-1 vendors appear together. Getting these tiers right determines which ~30–40 queries test direct competitive differentiation vs. category awareness. Catalyst is listed as primary with Med confidence — the Totango merger may mean buyers no longer evaluate Catalyst independently, which would shift approximately 8 head-to-head queries out of the primary comparison set.

Primary Competitors

Gainsight

Primary High
gainsight.com
The dominant enterprise customer success platform with deep product adoption analytics and extensive integrations; extremely powerful for large organizations but expensive ($30K–$100K+/year), slow to implement (8–12 weeks), and overkill for mid-market teams.
Source: Review mining — G2, Capterra, competitive pages

ChurnZero

Primary High
churnzero.com
Strong mid-market customer success platform with robust reporting dashboards and churn analytics; however, the UI is widely criticized as difficult to use, and implementation timelines are longer than Vitally's.
Source: Review mining — G2, Capterra, competitive pages

Totango

Primary High
totango.com
Enterprise-focused composable customer success platform with a schemaless data model; merged with Catalyst in 2024. Expensive (~$50K/year + 20% setup fee), limited customization flexibility, and skews toward larger organizations.
Source: Review mining — G2, Capterra, competitive pages

Planhat

Primary High
planhat.com
Flexible European-headquartered customer success platform known for strong data modeling and extensibility with data warehouses; well-regarded for customization but less established in the US market and can require more setup effort than Vitally.
Source: Review mining — G2, Capterra, competitive pages

Catalyst

Primary Med
catalyst.io
Revenue-focused customer success platform popular with CS teams that prioritize CRM-like workflows; merged with Totango in 2024, creating uncertainty about product direction and roadmap independence.
Source: Category listing — G2 category grid

Secondary Competitors

Custify

Secondary Med
custify.com
Lightweight customer success platform targeting small and mid-market SaaS companies; easier to adopt but lacks the depth of automation, AI features, and integrations of more established platforms like Vitally.
Source: Category listing — G2 category grid

ClientSuccess

Secondary Med
clientsuccess.com
Relationship-focused customer success platform with strong health scoring and a user-friendly interface; positioned for simpler use cases and lacks the workflow automation and AI capabilities of more modern platforms.
Source: Category listing — G2 category grid

SmartKarrot

Secondary Med
smartkarrot.com
AI-enabled customer success platform with product adoption features and account intelligence; smaller market presence and fewer enterprise integrations compared to Vitally and top-tier competitors.
Source: Category listing — G2 category grid

Velaris

Secondary Low
velaris.io
Newer AI-first customer success platform positioning as an alternative to established players; gaining traction with content marketing but limited market share and integration ecosystem compared to Vitally.
Source: Category listing — limited market data

Validate: Three questions for the call. (1) Does Catalyst still appear as a separate option in your deals post-Totango merger, or should we consolidate it into the Totango matchup? (2) Velaris is listed with Low confidence — is this a vendor you encounter in competitive evaluations, or can we drop them? (3) Are there vendors we missed entirely — particularly any CRM-native CS modules (HubSpot Service Hub, Salesforce CS Cloud) that buyers compare against dedicated CS platforms?

Feature Taxonomy

12 Buyer-Level Capabilities

12 capabilities mapped. Strength ratings determine whether the audit tests differentiation queries (strong features) or defensive positioning (moderate/weak features).

Customer Health Scoring Strong High

Accurately predict which customers are at risk of churning with multi-factor health scores based on usage, engagement, and support data

Playbook & Workflow Automation Strong High

Automate repetitive CS motions like onboarding sequences, renewal outreach, and risk escalations so CSMs focus on high-value work

CRM & Data Integration Strong High

Bidirectional sync with Salesforce, HubSpot, and product analytics tools so customer data stays consistent across all systems

Reporting & Analytics Dashboards Moderate High

Track retention, expansion revenue, NRR, and team performance with dashboards that leadership can actually use for board reporting

AI-Powered CS Copilot Moderate Med

Get AI-generated account summaries, meeting prep, and next-best-action recommendations without manually reviewing every account

Customer Onboarding & Project Management Strong High

Manage onboarding timelines, milestones, and task assignments to get customers to time-to-value faster

In-App NPS & Customer Surveys Moderate Med

Collect NPS and CSAT feedback directly within the customer journey and automatically trigger follow-up actions based on scores

Team Collaboration & Shared Docs Moderate Med

Give the whole revenue team a shared workspace with collaborative notes, internal docs, and account timelines instead of scattered Slack threads

Product Usage & Adoption Tracking Strong High

See exactly which features customers are using and where adoption is dropping off so CSMs can intervene before churn happens

Renewal & Expansion Management Strong High

Proactively manage renewals and identify expansion opportunities with automated alerts and pipeline tracking tied to customer health

Customer Segmentation & Lifecycle Management Strong High

Segment customers by ARR, plan, lifecycle stage, or health score to run targeted playbooks for each tier instead of one-size-fits-all

Enterprise Scalability & Advanced Customization Weak Med

Handle thousands of accounts with complex hierarchies, custom objects, and enterprise-grade permissions without the platform slowing down

Validate: Three areas to check. (1) Enterprise Scalability is rated Weak based on G2 feedback about complexity limitations with large account hierarchies — is this accurate, or has Vitally shipped improvements that change this rating? (2) AI-Powered CS Copilot is rated Moderate because Vitally AI is relatively new — where does it stand against Gainsight's AI capabilities today? (3) Are any features missing entirely — for example, digital-touch / tech-touch automation as a distinct capability from workflow automation?

Pain Point Taxonomy

What Buyers Are Trying to Solve

9 pain points: 6 high, 3 medium severity. The buyer language in each pain point is how we'll phrase queries — accuracy here directly shapes what the audit measures.

Reactive churn detection — teams learn about at-risk accounts too late (Severity: High, Confidence: High)

"By the time we find out a customer is unhappy, they've already made up their mind to leave — we're always too late"
Personas: VP of Customer Success, CS Team Lead, Head of Customer Success

CSMs drowning in manual administrative tasks instead of customer work (Severity: High, Confidence: High)

"My CSMs are drowning in busywork and spreadsheets instead of actually talking to customers"
Personas: VP of Customer Success, Director of CS Operations, CS Team Lead

Customer data fragmented across 5+ tools with no single source of truth (Severity: High, Confidence: High)

"I have to check five different tools just to understand what's going on with one account"
Personas: Director of CS Operations, CS Team Lead, Head of Customer Success

Leadership lacks real-time visibility into retention trends and revenue impact (Severity: High, Confidence: High)

"I can't tell the board our NRR story because I don't have clean data on what's driving retention and expansion"
Personas: VP of Customer Success, Chief Revenue Officer

Onboarding varies by CSM, leading to slow time-to-value (Severity: Medium, Confidence: High)

"Every CSM does onboarding differently and some customers slip through the cracks — it takes months to get them live"
Personas: VP of Customer Success, CS Team Lead, Head of Customer Success

Scaling CS without proportional headcount growth (Severity: High, Confidence: High)

"We doubled our customer count but can't hire fast enough — I need to scale CS without adding headcount"
Personas: VP of Customer Success, Chief Revenue Officer, Director of CS Operations

Missed upsell and expansion opportunities due to lack of systematic signals (Severity: High, Confidence: Med)

"We're leaving money on the table because nobody is tracking which customers are ready for an upsell conversation"
Personas: Chief Revenue Officer, VP of Customer Success, Head of Customer Success

Customer feedback collected but never systematically acted on (Severity: Medium, Confidence: Med)

"We send NPS surveys but the results just sit in a spreadsheet — nobody actually acts on the feedback"
Personas: Director of CS Operations, VP of Customer Success

Critical customer context trapped in individual CSM knowledge silos (Severity: Medium, Confidence: High)

"When a CSM quits, all the customer context walks out the door — the new CSM starts from zero"
Personas: VP of Customer Success, CS Team Lead, Head of Customer Success

Validate: Three checks. (1) "Missed expansion revenue" was sourced via LLM inference (Med confidence) — does this resonate with how Vitally's buyers actually describe this problem, or is expansion tracking more of a nice-to-have than a purchase driver? (2) Is "scaling CS without headcount" the right framing, or do buyers more commonly describe this as "digital-touch vs. high-touch" segmentation? The query language shifts depending on how buyers frame it. (3) Are there category-specific pains we missed — for example, multi-product / multi-entity complexity (managing CS across multiple product lines), CS-to-sales handoff friction (renewal ownership disputes between CS and AE teams), or proving CS ROI to finance (justifying CS headcount with revenue attribution)?

Site Analysis

Layer 1 Technical Findings

7 findings from the technical baseline analysis. These are engineering-actionable items that affect AI crawler access and content extractability.

Engineering action needed: The top finding is a high-severity issue — 2 broken URLs in the footer navigation (the competitive hub page and a comparison page both return 404 errors). Engineering should restore or redirect these immediately; broken internal links degrade site-level trust signals for AI crawlers. Additionally, the sitemap lacks lastmod timestamps on all 1,000+ URLs, which eliminates the freshness signal AI platforms use to prioritize content for citation. Both fixes can start before the validation call.

🟡 Broken Pages Linked from Site Navigation

What we found: Two URLs linked from the site footer navigation return 404 errors: the 'Why Vitally' competitive hub page at /customer-success-platforms and the 'vs. CRM' comparison page at /customer-success-platforms/vitally-vs-crm. These are publicly indexed URLs that AI crawlers will encounter and fail to process.

Why it matters: Broken pages in the primary navigation create dead ends for AI crawlers indexing the site. The competitive hub page is the parent page for all five competitor comparison pages — its absence means AI platforms cannot discover the comparison section through hierarchical crawling. Broken internal links also degrade site-level trust signals used by AI citation algorithms.

Business consequence: Queries like "Vitally vs CRM for customer success" or "best customer success platform comparison" may fail to surface Vitally's competitive content when AI crawlers cannot discover comparison pages through the broken hub page hierarchy.

Recommended fix: Either restore the /customer-success-platforms hub page and /customer-success-platforms/vitally-vs-crm comparison page, or update footer navigation links to point to live URLs. If these pages were intentionally removed, implement 301 redirects to the most relevant live page.

Impact: High | Effort: < 1 day | Owner: Engineering | Affected: 2 URLs linked from footer navigation across all pages site-wide
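If the pages were intentionally removed, the 301 mapping can be sketched as a simple lookup. The "/why-vitally" target below is a hypothetical placeholder, not a confirmed live URL — substitute whichever pages Engineering picks as the closest replacements. (Webflow sites typically configure this as a 301 redirect list in project settings rather than in code.)

```python
# Legacy path -> replacement target. Both targets are hypothetical
# placeholders; the legacy paths are the two dead footer URLs we found.
LEGACY_REDIRECTS = {
    "/customer-success-platforms": "/why-vitally",
    "/customer-success-platforms/vitally-vs-crm": "/why-vitally",
}

def resolve(path: str):
    """Return (status, location): 301 for mapped legacy paths, 200 otherwise."""
    # Normalize a trailing slash so "/foo/" matches the "/foo" key
    target = LEGACY_REDIRECTS.get(path.rstrip("/") or "/")
    return (301, target) if target else (200, path)

print(resolve("/customer-success-platforms"))  # -> (301, '/why-vitally')
print(resolve("/pricing"))                     # -> (200, '/pricing')
```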

🔵 Sitemap Lacks lastmod Timestamps on All 1,000+ URLs

What we found: The sitemap.xml contains over 1,000 URLs but none include lastmod, changefreq, or priority attributes. Every entry contains only the <loc> element.

Why it matters: AI crawlers and search engines use sitemap lastmod dates as a primary signal for content freshness. Without these timestamps, crawlers cannot prioritize recently updated content over stale pages. This is especially costly for competitor comparison pages and blog content, where freshness directly impacts citation likelihood — 76.4% of AI-cited pages were updated within 30 days.

Business consequence: When a buyer asks "what is the best customer success platform in 2026," AI platforms cannot distinguish Vitally's recently updated content from years-old pages, deprioritizing all of Vitally's content relative to competitors whose sitemaps carry accurate timestamps.

Recommended fix: Add accurate lastmod timestamps to all sitemap entries, particularly for competitor comparison pages, product pages, and recent blog posts. Ensure lastmod reflects actual content changes, not automated regeneration timestamps.

Impact: Medium | Effort: 1–3 days | Owner: Engineering | Affected: All 1,000+ URLs in sitemap.xml
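A minimal sketch of the fix with Python's standard XML library, assuming the sitemap can be regenerated from a per-page record of real content-change dates. The URL and date below are illustrative; lastmod must reflect actual content changes, as noted above.

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # keep the default sitemap namespace unprefixed

# One <loc>-only entry, as every URL in the live sitemap currently looks.
# URL and date are illustrative stand-ins.
sitemap_xml = f"""<urlset xmlns="{NS}">
  <url><loc>https://vitally.io/customer-success-platforms/vitally-vs-gainsight</loc></url>
</urlset>"""

# Map of URL -> date of last real content change (from the CMS, not a
# regeneration timestamp)
last_changed = {
    "https://vitally.io/customer-success-platforms/vitally-vs-gainsight": "2026-02-14",
}

root = ET.fromstring(sitemap_xml)
for url in root.findall(f"{{{NS}}}url"):
    loc = url.find(f"{{{NS}}}loc").text
    if loc in last_changed and url.find(f"{{{NS}}}lastmod") is None:
        # W3C date format (YYYY-MM-DD), per the sitemap protocol
        ET.SubElement(url, f"{{{NS}}}lastmod").text = last_changed[loc]

patched = ET.tostring(root, encoding="unicode")
print(patched)
```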

🔵 Multiple H1 Tags on Most Commercial Pages

What we found: The majority of product, feature, and comparison pages use multiple H1 tags — ranging from 4 to 14 H1 elements per page. The CSM solution page has 14 H1 tags, comparison pages average 8, and product pillar pages each have 6–7.

Why it matters: AI models use heading hierarchy to identify page topics and extract structured passages. Multiple H1 tags dilute the primary topic signal and make it harder for LLMs to determine which heading represents the page's main subject.

Business consequence: When an AI platform processes a query like "customer success workflow automation tools," pages with 14 competing H1 tags make it harder for the model to extract a clear, citable passage about Vitally's automation capabilities, reducing citation probability vs. competitors with clean heading hierarchies.

Recommended fix: Restructure pages to use a single H1 for the primary page topic, demoting secondary sections to H2. Focus first on the 5 competitor comparison pages and 4 solution pages.

Impact: Medium | Effort: 1–2 weeks | Owner: Engineering | Affected: ~20 of 35 commercial pages
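A quick way to build the fix list is to count H1 elements per page with the standard-library HTML parser. The markup below is an illustrative reduction of the pattern described above, not Vitally's actual page source.

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Count <h1> elements on a page -- a check for the single-H1 rule."""
    def __init__(self):
        super().__init__()
        self.h1_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.h1_count += 1

# Illustrative markup: section headings marked up as H1s instead of H2s
page = """
<h1>Customer Success for CSMs</h1>
<h1>Health Scores</h1>
<h1>Playbooks</h1>
"""

counter = H1Counter()
counter.feed(page)
print(counter.h1_count)  # -> 3; anything above 1 goes on the restructure list
```

Running this over the 35 commercial pages yields the exact remediation queue for Engineering.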

🔵 Competitor Comparison Pages Lack Visible Publication Dates

What we found: All 5 competitor comparison pages (vs. Gainsight, ChurnZero, Totango, Planhat, Catalyst) display no visible publication or last-updated date. The only temporal references are G2 badge descriptions mentioning 'Summer 2025'.

Why it matters: Competitor comparison content is among the most frequently cited content types in AI vendor evaluation responses. AI platforms heavily weight freshness — undated comparison content is deprioritized relative to competitors' dated alternatives.

Business consequence: For queries like "Vitally vs Gainsight 2026" or "ChurnZero vs Vitally comparison," AI platforms may prefer a competitor's dated comparison page over Vitally's undated version, even if Vitally's content is more recent or more accurate.

Recommended fix: Add a visible 'Last updated: [date]' element to each comparison page and commit to updating these pages quarterly. Also add lastmod to these URLs in sitemap.xml.

Impact: Medium | Effort: < 1 day | Owner: Content | Affected: 5 competitor comparison pages

🔵 Schema Markup Cannot Be Verified — Manual Check Recommended

What we found: Our analysis method returns rendered page content as markdown text, which does not include JSON-LD schema markup, meta descriptions, or Open Graph tags. We cannot determine whether appropriate schema types are present on any page.

Why it matters: Schema markup provides explicit structured data that AI crawlers use to classify page content and extract key facts. FAQPage schema on feature pages, Product schema on product pages, and Article schema on case studies all improve AI extractability.

Business consequence: Missing FAQPage schema on Vitally's feature pages (which contain FAQ sections) means AI platforms cannot efficiently extract structured answers to queries like "does Vitally integrate with Salesforce" or "how does Vitally health scoring work."

Recommended fix: Audit schema markup using Google's Rich Results Test or Schema.org validator on key page types. Prioritize the 5 comparison pages and 7 feature pages.

Impact: Medium | Effort: 1–3 days | Owner: Engineering | Affected: All 35 analyzed pages — verification needed
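If the manual audit confirms FAQPage schema is missing, the JSON-LD shape to add looks like the sketch below. The question and answer text is illustrative, not taken from Vitally's pages — the real copy should be lifted verbatim from each page's visible FAQ block.

```python
import json

# Illustrative FAQ content for a feature page (hypothetical copy)
faq_items = [
    ("Does Vitally integrate with Salesforce?",
     "Vitally supports bidirectional Salesforce sync."),
]

# Schema.org FAQPage structure: one Question per FAQ entry, each with an
# acceptedAnswer of type Answer
schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faq_items
    ],
}

# Embed the serialized object in a <script type="application/ld+json"> tag
# in the page <head>
print(json.dumps(schema, indent=2))
```

Validate the output with Google's Rich Results Test before shipping, since the visible FAQ text must match the markup.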

🔵 Meta Descriptions and OG Tags Cannot Be Verified

What we found: Meta descriptions and Open Graph tags are not visible in rendered markdown output. We cannot verify whether commercial pages have unique, keyword-optimized meta descriptions or proper OG tags.

Why it matters: Meta descriptions provide AI crawlers with a concise summary of page content and are used by some AI platforms as a primary signal for page relevance. Missing or duplicate meta descriptions reduce the specificity of AI citations.

Business consequence: If meta descriptions are missing or duplicated across Vitally's product pages, AI platforms may struggle to distinguish between pages for queries like "customer success health scoring tool" vs. "CS workflow automation platform," potentially citing the wrong page or no page at all.

Recommended fix: Verify meta descriptions and OG tags using browser developer tools or Screaming Frog. Ensure each commercial page has a unique meta description (150–160 characters) that includes the primary topic and Vitally's name.

Impact: Low | Effort: < 1 day | Owner: Marketing | Affected: All 35 analyzed pages — verification needed
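The verification pass can also be scripted with the standard-library HTML parser. This sketch collects the meta description and Open Graph tags from page markup; the sample head fragment is illustrative, and for the site-wide pass you would feed it each page's fetched HTML (or use Screaming Frog as recommended above).

```python
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    """Collect the meta description and Open Graph tags from page markup."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = dict(attrs)
        # <meta name="..."> for descriptions, <meta property="..."> for OG
        key = attr.get("name") or attr.get("property")
        if key in ("description", "og:title", "og:description"):
            self.tags[key] = attr.get("content", "")

# Illustrative <head> fragment
head = '<meta name="description" content="Vitally is a customer success platform.">'

audit = MetaAudit()
audit.feed(head)
print(audit.tags)
```

Pages whose description is missing, duplicated, or outside the 150–160 character range go on the Marketing fix list.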

🔵 Client-Side Rendering Status Cannot Be Verified

What we found: Our analysis method cannot detect whether pages rely on client-side rendering frameworks whose content non-JavaScript AI crawlers would miss. All pages returned substantive text content, suggesting server-side rendering is likely in place (consistent with Webflow), but this cannot be confirmed without testing with JavaScript disabled.

Why it matters: AI crawlers that do not execute JavaScript will see empty or partial content on CSR-dependent pages. While Vitally's pages appear to render content successfully, confirming this eliminates a potential category of AI visibility issues.

Business consequence: If any product pages rely on client-side rendering, AI crawlers may see empty content for queries like "best customer success platform features," giving competitors with server-rendered pages a structural advantage in every feature comparison query.

Recommended fix: Verify rendering method by loading key pages with JavaScript disabled. Focus verification on product and feature pages. Likely not an issue based on observed content rendering via Webflow.

Impact: Low · Effort: < 1 day · Owner: Engineering · Affected: All pages — verification needed, likely not an issue
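One way to approximate the JavaScript-disabled check programmatically: a plain HTTP fetch executes no scripts, just like many AI crawlers, so measuring how much visible text survives in the raw HTML flags CSR-dependent pages. A hedged sketch — the 500-character threshold is an illustrative assumption, not a calibrated value:

```python
# Estimates the text a non-JavaScript crawler would see in raw HTML.
# The risk threshold below is an illustrative assumption.
import re


def visible_text_length(html: str) -> int:
    """Characters of visible text after stripping scripts, styles, and tags."""
    html = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", html)
    text = re.sub(r"<[^>]+>", " ", html)
    return len(" ".join(text.split()))


def csr_risk(html: str, threshold: int = 500) -> bool:
    """Flags pages whose raw HTML carries almost no visible text."""
    return visible_text_length(html) < threshold


# Usage sketch -- a plain fetch is what a non-JS crawler effectively does:
#   import urllib.request
#   html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
#   print(url, "CSR risk" if csr_risk(html) else "OK")
```

Pages that pass this check render their content server-side; any page that fails warrants the manual JavaScript-disabled browser test the finding recommends.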

Site Analysis Summary

Total Pages Analyzed 35
Commercially Relevant Pages 35
Heading Hierarchy 0.61
Content Depth 0.59
Freshness 0.28 weighted — blog: 0.28; product and structural: unable to assess (25 pages unscored)
Passage Extractability 0.60
Schema Coverage Unable to assess (35 pages unscored)

Note on scoring: 25 of 35 pages have no detectable publication date, making freshness scores unreliable for the majority of the site. All 23 product/commercial pages and both structural pages returned null freshness — the 0.28 weighted average is driven entirely by the 10 blog/content marketing pages. Schema coverage could not be assessed for any page. These gaps mean the baseline scores understate or miss key dimensions of AI readiness.

Next Steps

What Happens Next

Why now

• AI search adoption is accelerating — buyer discovery patterns in B2B SaaS are shifting quarter over quarter as more CS leaders use ChatGPT and Perplexity to evaluate vendors

• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates

• Competitors who establish GEO visibility first create a structural disadvantage for late movers — Gainsight and ChurnZero are already producing comparison content at scale

• The customer success platform category is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies

The full audit will measure Vitally's citation visibility across buyer queries in the customer success platform space — including queries like "best CS platform for mid-market SaaS," "how to reduce churn with automation," and "Vitally vs Gainsight for customer health scoring." You'll see exactly which queries return results that include your competitors but not Vitally — and what it would take to appear in them. Fixing the technical baseline now (broken navigation, sitemap timestamps, heading hierarchy) improves the foundation before we measure it.

01

Validation Call

45–60 minutes. Walk through this document together, confirm or correct every input, and lock in the query architecture for the audit.

02

Query Generation & Execution

Build buyer queries from validated inputs and run them across selected AI platforms to measure citation visibility and competitive positioning.

03

Full Audit Delivery

Visibility analysis, competitive positioning, content gap prioritization, and a three-layer action plan — tactical, structural, and strategic.

Start now — no call required. These technical fixes don't depend on the validation call and will improve Vitally's AI crawler baseline before we measure it:

Restore or redirect broken navigation URLs — the /customer-success-platforms hub and /vitally-vs-crm page are returning 404s from every page's footer. Engineering can fix with 301 redirects in under a day.

Add lastmod timestamps to sitemap.xml — all 1,000+ URLs lack freshness signals. Configure Webflow to output accurate lastmod dates.

Verify schema markup on key page types — run Google's Rich Results Test on the 5 comparison pages and 7 feature pages to identify structured data gaps.

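To confirm the sitemap finding now — and to measure progress after the Webflow configuration change — a small standard-library script can count `<url>` entries that lack a `<lastmod>` child. The namespace is the standard sitemaps.org protocol; the sitemap URL in the comment is an assumed conventional path, not a verified one:

```python
# Counts sitemap entries carrying a <lastmod> freshness signal (stdlib only).
# Fetch the XML first, e.g. urllib.request.urlopen("https://vitally.io/sitemap.xml")
# (assumed conventional path -- verify against robots.txt).
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def lastmod_coverage(sitemap_xml: str) -> tuple:
    """Returns (entries_with_lastmod, total_entries)."""
    root = ET.fromstring(sitemap_xml)
    urls = root.findall("sm:url", SITEMAP_NS)
    with_lastmod = sum(
        1 for u in urls if u.find("sm:lastmod", SITEMAP_NS) is not None
    )
    return with_lastmod, len(urls)
```

A result of `(0, 1000+)` reproduces the current finding; after the fix, the first number should equal the second.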

Before the Call

Your Pre-Call Checklist

Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.

Questions for You
Does a CRO participate in CS platform purchase decisions at your target companies, or does the VP of CS have full budget authority?
If wrong: We remove the CRO persona and reallocate 15–20 executive-revenue queries to CS-specific decision stages
Does Catalyst still appear as a separate vendor in competitive evaluations post-Totango merger?
If wrong: We consolidate ~8 head-to-head queries into the Totango matchup and free up query slots
Is Vitally actively pursuing enterprise deals (1,000+ accounts), or does the ICP stop at mid-market?
If wrong: We add enterprise-evaluation queries and adjust Enterprise Scalability from "weak" to contested
Is "Head of CS" a distinct buyer role from "VP of CS" at your target companies?
If wrong: We merge two personas and eliminate duplicate query patterns across their clusters
Does the VP of CS hold full budget authority, or does the CRO co-sign CS platform purchases?
If wrong: We reweight executive-approval queries vs. CS-leader-owned queries
Does the CS Ops Director own the vendor shortlist, or does RevOps/IT run procurement?
If wrong: We shift integration-depth queries to target that team's search patterns instead
Does the CS Team Lead's feedback carry weight during vendor evaluation or only post-purchase?
If wrong: We add hands-on evaluation queries targeting usability comparisons
Is Enterprise Scalability accurately rated "weak," or has Vitally shipped improvements?
If wrong: We reclassify from defensive to differentiation queries for enterprise capability
Where does Vitally AI stand today vs. Gainsight's AI — still "moderate" or catching up?
If wrong: We adjust AI capability query weighting from defensive to competitive
Is "missed expansion revenue" a real purchase driver, or more of a nice-to-have pain point?
If wrong: We deprioritize expansion-revenue queries and reallocate to higher-severity pains
Is Velaris a vendor you encounter in competitive evaluations, or can we drop them?
If wrong: We remove secondary competitor and free up category awareness query slots
Are there missing competitors — particularly CRM-native CS modules like HubSpot Service Hub or Salesforce CS Cloud?
If wrong: We add cross-category comparison queries that test a different competitive axis
Are there missing personas — VP of Sales, RevOps Director, or IT/Security Lead?
If wrong: We add entirely new query clusters for buyer roles not currently represented
Are there missing features (e.g., digital-touch automation) or pain points (e.g., CS-to-sales handoff friction)?
If wrong: We add capability and pain-based queries the current taxonomy doesn't cover
Is "scaling CS without headcount" the right framing, or do buyers say "digital-touch vs. high-touch"?
If wrong: We adjust the query language to match how buyers actually describe this pain
For Engineering — Start Now
Restore or 301-redirect the 2 broken footer navigation URLs (/customer-success-platforms and /vitally-vs-crm)
Broken hub page blocks AI crawler discovery of all 5 comparison pages — < 1 day effort
Add lastmod timestamps to all sitemap.xml entries
1,000+ URLs with no freshness signal — AI crawlers can't prioritize recent content
Verify schema markup on 5 comparison pages and 7 feature pages using Rich Results Test
Structured data gaps may be reducing AI extractability — verification takes < 1 day
Verify CSR status by loading key pages with JavaScript disabled
Likely not an issue (Webflow serves server-rendered HTML), but confirmation eliminates a risk category
Alignment

We're Aligned On

This isn't a contract — it's a shared understanding. The audit runs against what's below. If something changes between now and the call, we adjust. The goal is to make sure we're asking the right questions for the right buyers against the right competitors.
Already Confirmed
Competitive set — 5 primary + 4 secondary competitors identified and tiered
Persona set — 5 personas: 2 decision-makers, 2 evaluators, 1 influencer
Feature taxonomy — 12 buyer-level capabilities with outside-in strength ratings (7 strong, 4 moderate, 1 weak)
Pain point set — 9 buyer frustrations with severity ratings (6 high, 3 medium)
Layer 1 technical audit — 7 findings logged (1 high, 5 medium, 1 low), engineering notified
Decided at the Call
CRO persona validity — whether David Chen (CRO) participates in CS platform purchases or should be removed, shifting ~15–20 executive-revenue queries
Catalyst tier — whether Catalyst is still evaluated independently post-Totango merger or should be consolidated
Feature overweighting — top 3 capabilities to emphasize in capability queries (candidates: Customer Health Scoring, Workflow Automation, Product Usage Tracking based on strong ratings × high-severity pain point linkage)
Pain point prioritization — top 3 buyer problems to test first (candidates: reactive churn detection, manual CSM workflows, fragmented customer data based on high severity × broadest persona impact)
Enterprise Scalability and AI Copilot strength ratings — accuracy determines defensive vs. differentiation query strategy
Any persona corrections, competitor additions/removals, or missing feature/pain point gaps surfaced at the call
Client
Date