Engagement Foundation Review

Brightspot Audit Foundation

Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Brightspot's market — your job is to tell us what we got right, what we got wrong, and what we missed.

May 2026
brightspot.com
Enterprise CMS
GEO Readiness

Where You Stand Today

Before we measure citation visibility in the enterprise CMS space, these three signals tell us whether AI crawlers can access and trust Brightspot's content. They set the baseline the audit measures against.

Technical Readiness
Needs Attention
2 high-severity findings identified. Top issue: critical competitor comparison pages are significantly outdated — 8 comparison pages carry stale or missing dates, reducing AI citation likelihood for head-to-head evaluation queries.
Content Freshness
At Risk
Weighted freshness: 0.35 — well below the 0.45 threshold. Content marketing pages average 0.35 freshness; 10 of 14 scored pages are older than 180 days. 4 pages updated within 90 days. Additionally, 28 product/commercial pages have no detectable publication date — verify manually whether dates are rendered in page body.
Crawl Coverage
Good
All major AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bytespider) are explicitly allowed. The robots.txt file blocks only internal and utility paths (_debug, _plugins, cms, ajax). The sitemap is accessible, with 45 commercially relevant pages indexed.
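The allow/block behavior described above can be sanity-checked with Python's standard-library robots.txt parser. The rules below are a simplified sketch of what the finding describes — not Brightspot's actual robots.txt, which allows the AI crawlers by name:

```python
import urllib.robotparser

# Simplified sketch of the rules described above -- NOT the actual
# brightspot.com robots.txt (which names AI crawlers explicitly).
ROBOTS_TXT = """\
User-agent: *
Disallow: /_debug/
Disallow: /_plugins/
Disallow: /cms/
Disallow: /ajax/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    # Commercial pages should be fetchable; utility paths should not.
    print(bot,
          rp.can_fetch(bot, "https://brightspot.com/cms-resources/"),
          rp.can_fetch(bot, "https://brightspot.com/cms/login"))
```

Running the same check against the live file before and after any robots.txt change is a cheap regression guard against accidentally blocking an AI crawler.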
Executive Summary

What You Need to Know

AI search is reshaping how enterprise content management system buyers discover and evaluate platforms. Organizations searching for CMS alternatives, headless architecture options, and content operations solutions are increasingly getting answers from AI-generated responses rather than traditional search results. Brightspot has a structural opportunity to establish GEO visibility now — before competitors recognize this channel — because the enterprise CMS evaluation cycle is long and early citation authority compounds over time.

This Foundation Review covers three domains that drive the audit architecture: the competitive landscape (5 primary + 4 secondary competitors) that shapes which head-to-head queries we construct, the 5 buyer personas that determine search intent patterns across the enterprise CMS purchase cycle, and the Layer 1 technical baseline that determines whether AI platforms can access Brightspot's content effectively. Your role is to validate or correct these inputs before the audit runs.

The validation call is a decision-making session with two types of decisions: (1) input validation — are the Enterprise Architect and CMO personas real roles in your deal cycles, and are the feature strength ratings accurate against named competitors like Adobe AEM and Contentful? (2) engineering triage — which of the 2 high-severity technical findings can your team start fixing before results come back? The answers determine what queries we build and what baseline we measure against.

TL;DR — Action Items
  • 🟡 High: Critical competitor comparison pages are significantly outdated — Content team should update Contentful and Contentstack comparison pages and add visible dates to all 8 CMS Selection Guide pages to restore freshness signals for AI citation.
  • 🟡 High: Multiple content marketing pages lack visible publication dates — Engineering should add rendered date stamps to comparison and case study pages so AI crawlers can assess recency.
  • 🟣 Validate at the Call: Enterprise Architect persona (James Whitfield) — This persona was inferred from category patterns, not sourced from deal data. If Enterprise Architects don't appear in Brightspot evaluations, we remove 15-20 integration and architecture queries from the audit set.
  • ✅ Start Now: Fix multiple H1 headings on industry and developer pages — the For Developers page alone has 12 H1 tags diluting its topic signal. Engineering can fix the heading hierarchy immediately — no validation call needed.
  • 📋 Validation Call: Feature strength accuracy for AI Content Creation and Personalization — Both rated "moderate" based on outside-in assessment. If Brightspot has shipped significant updates, upgrading these to "strong" adds capability-comparison queries that test differentiation against AEM and Sitecore.
Context

How This Works

Three things to know before reviewing this document.

WHAT THIS IS This document presents the research foundation for your GEO visibility audit in the enterprise CMS space. Every persona, competitor, feature, and pain point here drives the buyer query set we'll test across AI platforms. The more accurate these inputs are, the more precisely the audit measures your real competitive position.

WHAT WE NEED FROM YOU Purple boxes throughout this document contain specific questions. These aren't optional — each one identifies where your answer changes what queries we build or how we weight the results. Come to the validation call with answers to each purple question. If you're unsure, that's fine — "I don't know" is better than a guess.

CONFIDENCE BADGES Every data point carries a confidence badge. High means sourced directly from your site, reviews, or public filings. Medium means inferred from category patterns or limited data. Low means estimated and likely needs correction. Focus your review time on medium and low confidence items — the high-confidence data rarely surprises.

Company Profile

Brightspot

The client profile that anchors every query we construct.

Company Overview

Company Name Brightspot High
Domain brightspot.com
Name Variants Brightspot CMS, Brightspot Content Management, Perfect Sense, Perfect Sense Digital, Brightspot by Perfect Sense
Category Enterprise content management system — hybrid and headless CMS
Segment Enterprise
Key Products Brightspot CMS, Brightspot AI Suite, Brightspot Front-end Design System
Positioning Hybrid and headless CMS for creating, managing, and publishing digital content at scale across multiple channels and brands

→ VALIDATE Brightspot spans both the "headless CMS" and "enterprise DXP" buying conversations — does your sales team encounter different buyer personas in each, or do the same decision-makers evaluate Brightspot whether they're replacing AEM or adopting headless for the first time? If these are two distinct buying motions, we may need separate query clusters for each.

Personas

Who Buys Enterprise CMS

5 personas: 2 decision-makers, 2 evaluators, 1 influencer. Each persona generates a distinct query cluster based on how they search during the enterprise CMS purchase cycle.

CRITICAL REVIEW AREA Personas have the highest downstream impact of any input in this document. Each persona generates 15-25 unique buyer queries. A wrong persona means wasted queries; a missing persona means blind spots in the audit. Review each card carefully — especially the medium-confidence personas sourced from inference rather than direct deal data.

DATA SOURCING Role, department, seniority, influence level, and veto power are sourced from the knowledge graph (review mining and category inference). Buying jobs and query focus areas are synthesized from persona attributes and category patterns — they represent how we expect this persona to search, not confirmed behavior.

Karen Liu
VP of Digital Experience
Decision-maker High
Digital/Marketing leadership responsible for the organization's web presence, content delivery infrastructure, and digital transformation initiatives. Owns the CMS decision as the platform that underpins all digital channels.
Veto power: Yes — controls the digital experience budget line and signs off on platform selections that affect web, mobile, and omnichannel delivery.
Technical level: Medium — understands architecture at a strategic level but relies on engineering for implementation decisions.
Primary buying jobs: Evaluating platform capabilities against multi-brand requirements, building the business case for CMS migration, comparing TCO across vendors, aligning editorial and engineering stakeholders.
Query focus areas: CMS comparison queries ("best enterprise CMS for multi-brand"), migration queries ("AEM alternative for large organizations"), capability queries ("headless CMS with editorial tools").
Source: Review mining — G2 and Gartner reviewer titles

Does the VP Digital typically own the CMS budget, or does this report up to the CTO in your deal cycles? If budget authority sits with engineering, Karen becomes an evaluator and David's query set expands to include business-case queries.

David Okonkwo
CTO / VP of Engineering
Decision-maker High
Engineering leadership responsible for platform architecture, security, scalability, and integration decisions. Evaluates CMS choices on technical merit — API quality, performance under load, developer experience, and fit within the existing technology stack.
Veto power: Yes — can block any CMS that doesn't meet architecture standards, security requirements, or integration constraints.
Technical level: High — evaluates API documentation, performance benchmarks, deployment architecture, and custom development requirements.
Primary buying jobs: Validating architecture fit (headless vs. hybrid), assessing developer productivity, evaluating integration complexity with existing stack, ensuring scalability and performance SLAs.
Query focus areas: Architecture queries ("headless CMS API performance"), integration queries ("CMS integration with Salesforce"), developer experience queries ("CMS developer experience comparison"), scalability queries ("CMS for millions of pages").
Source: Review mining — G2 and Gartner reviewer titles

In deals where both a VP Digital and a CTO are present, who drives the initial shortlist vs. who holds final sign-off? If the CTO only validates technical fit after the shortlist is set, we should shift David's queries toward late-stage validation rather than early discovery.

Maria Santos
Director of Content Strategy
Evaluator High
Content/Editorial leadership responsible for content operations, editorial workflows, and publishing efficiency. Evaluates CMS platforms on authoring experience, workflow flexibility, and ability to empower content teams without developer dependency.
Veto power: No — but a strong negative signal from the content team can derail adoption after purchase.
Technical level: Low — evaluates user experience and workflow capabilities rather than underlying architecture.
Primary buying jobs: Assessing editorial workflow quality, evaluating content authoring UX, determining whether the CMS reduces developer dependency, validating multi-site governance capabilities.
Query focus areas: Workflow queries ("CMS with content approval workflows"), authoring queries ("best CMS for content teams"), governance queries ("multi-brand content governance CMS"), editorial experience queries ("CMS that doesn't need developers").
Source: Review mining — G2 reviewer titles in content/editorial roles

Does the Director of Content typically get involved during evaluation (demo stage) or only post-purchase for adoption? If post-purchase only, we should deprioritize Maria's discovery queries and focus on comparison-stage queries where editorial experience is a deciding factor.

James Whitfield
Enterprise Architect / Director of IT
Influencer Medium
IT/Infrastructure leadership responsible for enterprise architecture standards, system integrations, and technology governance. Evaluates CMS platforms for compliance with architecture principles, integration patterns, and security posture.
Veto power: No — but can block implementations that violate enterprise architecture standards or create integration debt.
Technical level: High — evaluates platforms against enterprise architecture frameworks, API standards, and integration patterns.
Primary buying jobs: Validating that the CMS fits within enterprise architecture standards, assessing integration complexity with existing systems (CRM, DAM, analytics, SSO), ensuring compliance and security requirements are met.
Query focus areas: Integration queries ("enterprise CMS integration with existing martech"), architecture queries ("composable DXP architecture"), compliance queries ("CMS SOC 2 compliance"), vendor assessment queries ("CMS vendor evaluation criteria enterprise").
Source: LLM inference — inferred from enterprise CMS category patterns

Does an Enterprise Architect or Director of IT actually appear in Brightspot deal cycles, or does the CTO absorb this role? If this persona doesn't exist independently, we remove the 15-20 integration and architecture-governance queries and fold the relevant ones into David Okonkwo's set.

Priya Anand
CMO / VP of Marketing
Evaluator Medium
Marketing leadership responsible for brand presence, demand generation, and marketing operations. Evaluates CMS platforms on personalization capabilities, campaign agility, and ability to launch marketing initiatives without engineering bottlenecks.
Veto power: No — but controls marketing budget that may fund the CMS or influence the business case.
Technical level: Low — evaluates based on marketing outcomes, personalization capabilities, and speed to launch campaigns.
Primary buying jobs: Assessing personalization and A/B testing capabilities, evaluating campaign launch velocity, determining whether the CMS supports marketing agility without developer dependency.
Query focus areas: Personalization queries ("CMS with built-in personalization"), marketing agility queries ("fastest CMS for campaign launches"), DXP queries ("digital experience platform for marketing teams"), ROI queries ("CMS ROI for marketing").
Source: LLM inference — enterprise CMS decisions typically involve marketing leadership

Does the CMO/VP Marketing typically sit in Brightspot evaluations as a stakeholder, or is this a digital/IT-led purchase where marketing is consulted but not at the table? If marketing isn't a distinct buying voice, we should merge Priya's personalization queries into Karen Liu's set and remove the marketing-specific discovery queries.

MISSING PERSONAS? Consider: Chief Digital Officer (if digital transformation is positioned as a C-suite initiative separate from IT), Head of Web Development / Engineering Manager (the person who actually implements and maintains the CMS daily — distinct from the CTO's architecture-level evaluation), or Procurement / Vendor Management (if enterprise CMS deals go through formal procurement with RFP processes). Who else shows up in your deals?

Competitive Landscape

Who You're Measured Against

5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head queries test direct competitive differentiation in the enterprise CMS space.

WHY TIERS MATTER Primary competitors generate head-to-head comparison queries — "Brightspot vs. AEM," "best enterprise CMS for multi-brand publishing," "Contentful alternative with editorial tools." Each primary competitor produces approximately 6-8 direct matchup queries. Getting these tiers right determines which ~30-40 queries test direct competitive differentiation vs. broader category awareness. Three secondary competitors — Acquia, Sanity, and Storyblok — carry medium confidence on tier assignment; if any regularly appear in deals, promoting them to primary adds another 6-8 comparison queries each.

Primary Competitors

Adobe Experience Manager

Primary High
adobe.com
Dominant enterprise DXP incumbent with deep personalization and DAM suite; notoriously expensive ($250K–$1M+), slow to implement (9–18 months), requires specialized developers — Brightspot actively positions as the faster, cheaper alternative.
Source: Automated scrape — Brightspot comparison pages

Sitecore

Primary High
sitecore.com
Enterprise DXP with deep personalization and marketing automation; complex back-end requires specialized developers and long implementation cycles, making it a high-TCO choice Brightspot positions against on usability and speed.
Source: Automated scrape — Brightspot comparison pages

Contentful

Primary High
contentful.com
Leading API-first headless CMS with strong developer experience; excels at pure headless use cases but lacks built-in editorial workflows and hybrid architecture that Brightspot offers for teams wanting both flexibility and rich authoring.
Source: Category listing — G2, Gartner

Contentstack

Primary High
contentstack.com
Composable headless CMS targeting enterprises with strong automation and marketplace integrations; competes on headless side but lacks Brightspot's hybrid architecture for organizations needing traditional editorial alongside API-first delivery.
Source: Category listing — G2, Gartner

WordPress VIP

Primary High
wpvip.com
Enterprise-managed WordPress with massive ecosystem and lower barrier to entry; struggles with performance at high-volume publishing scale and lacks governance and multi-site management depth Brightspot offers for complex enterprise operations.
Source: Automated scrape — Brightspot comparison pages

Secondary Competitors

Drupal

Secondary High
drupal.org
Open-source CMS with extreme customizability and no licensing fees; strong in government and higher-ed but requires significant developer investment, and total cost of ownership can rival commercial platforms.
Source: Category listing — G2, Gartner

Acquia

Secondary Medium
acquia.com
Enterprise Drupal-based DXP with cloud hosting, personalization, and marketing tools layered on open source; appeals to Drupal-invested organizations but carries higher cost and complexity than standalone Drupal.
Source: Category listing — G2, Gartner

Sanity

Secondary Medium
sanity.io
Developer-centric content platform treating content as structured data with fully customizable studio; strong developer experience and real-time collaboration, but requires significant front-end engineering and lacks out-of-the-box editorial features.
Source: Category listing — G2, Gartner

Storyblok

Secondary Medium
storyblok.com
Headless CMS with visual editor bridging developer flexibility and marketer usability; growing in mid-market and enterprise but less proven at the scale and complexity of Brightspot's largest publishing clients.
Source: Category listing — G2, Gartner

→ VALIDATE Three questions: (1) Do Acquia, Sanity, or Storyblok regularly appear in Brightspot evaluations? If any shows up in 3+ recent deals, we should promote to primary and add head-to-head queries. (2) Is any listed competitor irrelevant — a vendor Brightspot never actually encounters in competitive situations? (3) Are there vendors missing entirely — particularly newer composable DXP players like Uniform, Amplience, or Hygraph that may be entering the same deals?

Feature Taxonomy

What Buyers Evaluate

12 buyer-level capabilities mapped. Each feature generates capability-comparison queries that test how AI platforms describe Brightspot's strengths relative to competitors.

Headless & Hybrid Architecture Strong High

A CMS that supports headless, decoupled, and hybrid delivery from a single platform so we don't have to choose between editorial tools and API flexibility

AI-Powered Content Creation & Productivity Moderate Med

Built-in AI tools that help editors draft, optimize, and tag content faster without leaving the CMS

Editorial Workflow & Collaboration Strong High

Content creation workflows with approval chains, version control, real-time collaboration, and role-based permissions for large editorial teams

Multi-Site & Multi-Brand Management Strong High

Manage content across dozens of websites, brands, and regions from a single CMS instance with shared content and independent governance

Content Modeling & Taxonomy Strong High

Flexible content types and taxonomy management that lets us structure content once and reuse it across channels without duplicating effort

Integration Ecosystem & APIs Strong High

Pre-built integrations with our existing martech stack — CRM, DAM, analytics, marketing automation — plus robust APIs for custom connections

Personalization & A/B Testing Moderate Med

Built-in personalization and A/B testing that lets us optimize content experiences without a separate experimentation platform

Performance & Scalability at Volume Strong High

A CMS that handles millions of pages and traffic spikes without performance degradation or requiring constant infrastructure tuning

Content Authoring & Rich Text Editing UX Moderate High

A modern, intuitive editing experience that content teams can use without developer help — something that feels like Google Docs, not a database form

Search & Content Discovery Strong Med

Powerful search across all our content, with federated search capabilities and faceted navigation for both internal editorial use and front-end site search

Implementation Speed & Time to Value Strong High

Go live in weeks or months, not the 12-18 month implementation cycle our current CMS required

Managed Hosting & Support Quality Strong High

Fully managed cloud hosting with 99.9% uptime SLA and a support team that actually understands our platform and content operations

→ VALIDATE Three features rated "moderate" — AI-Powered Content Creation, Personalization & A/B Testing, and Content Authoring UX. (1) Has the AI Suite shipped significant updates since early 2025 that would move this to "strong" against Contentful and Contentstack? (2) Is Personalization still an add-on module, or has it been integrated into the core platform? (3) The "moderate" rating on Content Authoring UX is sourced from consistent G2 feedback about a dated rich text editor — has this been redesigned in recent releases? Upgrading any of these changes which capability queries we test offensively vs. defensively.

Pain Point Taxonomy

What Drives the Purchase

10 pain points: 6 high, 4 medium severity. Buyer language from these pain points determines how queries will be phrased — AI platforms surface solutions for the problems buyers actually describe.

Legacy CMS lock-in and escalating costs High High

"We're paying over $500K a year for our CMS and it takes 6 months to launch a new microsite — we need to get off this platform before the next renewal"
Personas: VP Digital Experience, CTO, CMO

Content team developer dependency High High

"Every time we need to update a landing page or launch a campaign, we have to file a ticket with engineering and wait in the sprint queue"
Personas: Director of Content Strategy, VP Digital Experience, CMO

Multichannel publishing friction High High

"We're copying and pasting the same content into three different systems every time we publish — it's error-prone and our teams are burning out"
Personas: Director of Content Strategy, VP Digital Experience

High total cost of ownership High High

"The license fee was just the start — between the SI, the developers we had to hire, and the ongoing customization costs, we've spent three times what we budgeted"
Personas: CMO, CTO, VP Digital Experience

Headless vs. editorial tradeoff High High

"We went headless for the flexibility but now our editors hate the CMS — they can't preview anything and the authoring experience is terrible"
Personas: Director of Content Strategy, CTO, VP Digital Experience

Slow implementation cycles High High

"We kicked off the CMS replatforming project 14 months ago and we're still not live — leadership is losing patience and the budget is gone"
Personas: VP Digital Experience, CTO, CMO

Content governance at scale Medium Med

"Every brand team is doing their own thing in their own CMS — we have no visibility into what's being published and no way to enforce brand standards"
Personas: VP Digital Experience, Director of Content Strategy, CMO

Integration complexity Medium High

"Our CMS barely talks to Salesforce and every time we upgrade something breaks — integrating with our stack feels like a full-time job"
Personas: Enterprise Architect, CTO

Poor search experience Medium Med

"Our editors can't find anything in the CMS — we have 50,000 articles and the search is basically useless unless you know the exact title"
Personas: Director of Content Strategy, Enterprise Architect

AI content operations gap Medium Med

"My editors are using ChatGPT in a separate tab and then pasting into the CMS — we need AI built into our actual workflow, not bolted on"
Personas: Director of Content Strategy, VP Digital Experience

→ VALIDATE (1) All 6 high-severity pain points relate to legacy CMS migration frustrations — is this the dominant buying trigger, or do some buyers come to Brightspot from a greenfield/first-CMS context where migration pain doesn't apply? (2) Is the "AI content operations gap" pain point resonating in current sales conversations, or is it still aspirational for most buyers? (3) Missing pain points to consider: compliance and accessibility requirements (if regulated industries are a significant segment), content localization at scale (if global multi-language is a common requirement), or vendor consolidation pressure (if buyers are trying to reduce their martech stack). What buyer frustrations are we missing?

Site Analysis

Layer 1 Technical Findings

Technical baseline assessment of brightspot.com for AI crawler accessibility and citation readiness.

ENGINEERING ACTION Two high-severity findings require attention: outdated competitor comparison pages and missing publication dates on content marketing pages. These are freshness-signal issues that directly affect whether AI platforms cite Brightspot's comparison content over competitors' fresher alternatives. Content team should update comparison pages; engineering should add visible date rendering to all content marketing page templates. Additionally, verify schema markup presence across commercial pages — this was unassessable in our analysis.

🟡 Critical competitor comparison pages are significantly outdated

What we found: Two primary competitor comparison pages carry publication dates over 12 months old: Brightspot vs. Contentful (June 22, 2022 — nearly 4 years old) and Brightspot vs. Contentstack (August 12, 2024 — 21 months old). Additionally, six of eight comparison pages in the CMS Selection Guide have no visible publication or last-updated date at all.

Why it matters: AI platforms heavily weight freshness when selecting content to cite. Research shows 76.4% of AI-cited pages were updated within 30 days. Comparison pages are among the highest-value content for vendor evaluation queries — when these pages appear stale or undated, AI systems will prefer competitors' fresher content over Brightspot's.

Business consequence: Queries like "Brightspot vs. Contentful 2026" or "best enterprise CMS for headless and editorial" may return Contentful's own comparison content instead of Brightspot's when AI crawlers see a 4-year-old publication date on Brightspot's comparison page.

Recommended fix: Update the Contentful and Contentstack comparison pages with current product capabilities, pricing context, and 2025/2026 customer results. Add visible 'Last updated' dates to all comparison pages in the CMS Selection Guide. Establish a quarterly review cadence for comparison content to keep publication dates within the 90-day AI citation window.

Impact: High Effort: 1-2 weeks Owner: Content Affected: 8 comparison pages in /cms-resources/
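The quarterly review cadence recommended above is easy to enforce with an automated staleness check. A minimal sketch — the first two dates come from this finding, the third page and its date are hypothetical, and the 90-day window follows the recommendation above:

```python
from datetime import date

AUDIT_DATE = date(2026, 5, 1)   # approximate report date (May 2026)
CITATION_WINDOW_DAYS = 90       # per the recommendation above

# Last-updated dates: two from the finding, one hypothetical fresh page.
pages = {
    "/cms-resources/brightspot-vs-contentful": date(2022, 6, 22),
    "/cms-resources/brightspot-vs-contentstack": date(2024, 8, 12),
    "/cms-resources/brightspot-vs-aem": date(2026, 3, 15),  # hypothetical
}

# Flag every page whose age exceeds the citation window.
stale = {
    path: (AUDIT_DATE - updated).days
    for path, updated in pages.items()
    if (AUDIT_DATE - updated).days > CITATION_WINDOW_DAYS
}

for path, age in sorted(stale.items(), key=lambda kv: -kv[1]):
    print(f"{path}: {age} days since update -- refresh needed")
```

Wired into CI or a scheduled job against the real sitemap, this turns the quarterly cadence from a calendar reminder into an enforced check.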

🟡 Multiple content marketing pages lack visible publication dates

What we found: Six of eight competitor comparison pages and one case study (AP News) display no visible publication or last-updated date. These are content marketing pages where date visibility directly affects AI citation likelihood.

Why it matters: For content marketing pages (blogs, comparisons, case studies), the absence of a visible date is a negative signal. AI crawlers cannot determine recency and will not give these pages freshness credit, defaulting to treating them as stale. This is distinct from product pages where missing dates are neutral.

Business consequence: Enterprise CMS evaluation queries like "AEM alternative 2026" or "best headless CMS with editorial tools" will deprioritize Brightspot's undated comparison pages, allowing competitors whose comparison content carries recent, visible dates to capture citations that should be Brightspot's.

Recommended fix: Add a visible 'Published' and 'Last updated' date to all comparison pages, case studies, and blog posts. Ensure dates are rendered in the page body text (not just in meta tags or schema markup) so they are accessible to all crawlers including those that process rendered content.

Impact: High Effort: < 1 day Owner: Engineering Affected: ~14 content marketing pages, likely more site-wide
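Whether a date actually lands in the rendered body text — not just in meta tags — can be spot-checked by stripping tags and searching the remaining text. A crude sketch (the real heuristics AI crawlers use are not public; this regex only matches "Month D, YYYY" dates):

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects body text only; attribute values (e.g. meta content=) are ignored."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

# Matches human-readable dates like "August 12, 2024".
DATE_RE = re.compile(
    r"\b(January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},\s+\d{4}\b"
)

def has_visible_date(html: str) -> bool:
    parser = TextExtractor()
    parser.feed(html)
    return bool(DATE_RE.search(" ".join(parser.chunks)))
```

Note that a date present only in a meta tag correctly fails this check, because `handle_data` never sees attribute values — which is exactly the distinction the fix above draws.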

🔵 Multiple H1 headings on commercially important pages

What we found: Three commercially important pages use multiple H1 tags: Technology & Software has at least 5 H1-level headings, Media & Publishing has at least 5, and For Developers has approximately 12 H1-level headings that should be H2s.

Why it matters: A single H1 per page signals the primary topic to AI crawlers. Multiple H1s dilute the topic signal and make it harder for AI systems to determine what the page is primarily about, reducing citation likelihood for specific queries.

Business consequence: Queries like "best CMS for media publishers" or "CMS for developers" may not surface Brightspot's industry pages because the diluted H1 structure makes it unclear to AI crawlers that these pages are primary authorities on those topics.

Recommended fix: Retain the first H1 as the page's primary heading and convert all subsequent H1 tags to H2. For Developers: keep "Brightspot CMS benefits for development teams" as the sole H1. For Technology & Software: keep "A composable solution for leading tech companies."

Impact: Medium Effort: < 1 day Owner: Engineering Affected: 3 pages — /industries/technology-software, /industries/media-publishing, /brightspot-cms/for-developers
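Pages with more than one H1 are straightforward to detect programmatically, so this check can run site-wide rather than only on the three pages named above. A minimal sketch using the standard-library HTML parser:

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Counts <h1> start tags in a page."""
    def __init__(self):
        super().__init__()
        self.h1_count = 0
    def handle_starttag(self, tag, attrs):
        if tag == "h1":  # tag names arrive lowercased
            self.h1_count += 1

def count_h1(html: str) -> int:
    counter = H1Counter()
    counter.feed(html)
    return counter.h1_count

# Hypothetical fragment illustrating the multiple-H1 pattern:
sample = "<h1>For Developers</h1><h1>APIs</h1><h2>Docs</h2>"
print(count_h1(sample))
```

Any page where `count_h1` returns more than 1 is a candidate for the H1-to-H2 demotion described in the fix.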

🔵 Schema markup presence could not be verified — manual check recommended

What we found: JSON-LD structured data could not be assessed through our analysis method. We were unable to determine whether appropriate schema types (Product, FAQPage, Article, Organization, SoftwareApplication) are present on any of the 45 pages analyzed.

Why it matters: Structured data helps AI systems understand page purpose, extract specific claims, and match pages to user intent. Product schema on feature pages, FAQPage schema on architecture guides, and Article schema on blog posts improve citation likelihood.

Business consequence: Without confirmed schema markup, Brightspot may be missing a structural signal that helps AI platforms match enterprise CMS capability queries to the correct product pages — a gap competitors with proper schema implementation would exploit.

Recommended fix: Audit all commercial pages using Google's Rich Results Test or Schema.org Validator. Ensure Product or SoftwareApplication schema on product/feature pages, Article schema with datePublished/dateModified on blog and comparison content, FAQPage schema on architecture guide pages.

Impact: Medium Effort: 1-3 days Owner: Engineering Affected: All 45 commercially relevant pages
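Alongside the validator-based audit recommended above, JSON-LD presence can be checked in bulk by extracting `application/ld+json` script blocks and reading their `@type` values. A sketch:

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self.blocks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_ld = True
    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ld = False
    def handle_data(self, data):
        if self._in_ld:
            self.blocks.append(data)

def schema_types(html: str) -> list:
    extractor = JsonLdExtractor()
    extractor.feed(html)
    types = []
    for block in extractor.blocks:
        try:
            doc = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD gives no schema credit anyway
        items = doc if isinstance(doc, list) else [doc]
        types += [i.get("@type") for i in items if isinstance(i, dict)]
    return types
```

Run over all 45 pages, an empty result list flags exactly the pages missing the Product, Article, or FAQPage markup the fix calls for.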

🔵 Meta descriptions and Open Graph tags could not be verified

What we found: Meta descriptions, Open Graph tags, canonical URLs, and meta robots directives could not be assessed from rendered markdown output. These HTML head elements are stripped during content rendering.

Why it matters: While meta descriptions and OG tags have less direct impact on AI citation than content quality, canonical URLs are important for preventing duplicate content signals across the site's multiple URL patterns (e.g., /brightspot-cms vs. /platform).

Business consequence: If canonical URLs are misconfigured across Brightspot's multiple URL paths, AI crawlers may fragment the domain's authority signal, reducing overall citation likelihood for enterprise CMS queries where consolidated domain authority matters.

Recommended fix: Verify meta descriptions, OG tags, and canonical URLs using browser developer tools or Screaming Frog. Ensure each commercial page has a unique meta description and correct canonical URL. Pay attention to pages accessible through multiple URL paths.

Impact: Low Effort: 1-3 days Owner: Engineering Affected: All commercially relevant pages
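The canonical check in particular is easy to script. The sketch below is illustrative only (real pages should still go through Screaming Frog or browser dev tools as recommended above): it pulls the canonical href so that pages reachable via multiple paths can be confirmed to declare a single canonical URL. The page snippets are hypothetical.

```python
import re

def canonical_url(html):
    """Return the href of the page's <link rel="canonical"> tag, or None.
    Tries both attribute orders, since either is valid HTML."""
    patterns = (
        r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
        r'<link[^>]*href=["\']([^"\']+)["\'][^>]*rel=["\']canonical["\']',
    )
    for pat in patterns:
        m = re.search(pat, html, re.IGNORECASE)
        if m:
            return m.group(1)
    return None

# Two hypothetical pages that should consolidate to one canonical URL:
page_a = '<head><link rel="canonical" href="https://www.brightspot.com/platform"></head>'
page_b = '<head><link href="https://www.brightspot.com/platform" rel="canonical"></head>'

print(canonical_url(page_a))
print(canonical_url(page_b))
```

If two URL paths serving the same content return different (or missing) canonical values, that is exactly the authority-fragmentation risk described above.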

🔵 Client-side rendering status should be verified for AI crawler compatibility

What we found: Client-side rendering detection was not possible through our analysis method. All 45 pages returned substantial text content, suggesting server-side or pre-rendered delivery is in place. However, we could not verify whether certain interactive elements render differently for AI crawlers that do not execute JavaScript.

Why it matters: AI crawlers vary in JavaScript execution capabilities. GPTBot and ClaudeBot typically do not execute JavaScript. If any page sections rely on client-side rendering, those sections would be invisible to these crawlers.

Business consequence: If dynamic content sections (pricing calculators, interactive feature comparisons, or customer testimonials loaded via JS) are invisible to AI crawlers, those data points cannot be cited in enterprise CMS evaluation responses — even though they exist on the page for human visitors.

Recommended fix: Test key commercial pages with JavaScript disabled to confirm all critical content renders server-side. Use Google Search Console's URL Inspection tool or Screaming Frog in JavaScript-off mode to verify content accessibility.

Impact: Low Effort: < 1 day Owner: Engineering Affected: All pages — verification is precautionary
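One way to run that verification is to compare the raw server response (what a non-JS crawler like GPTBot receives, i.e. a plain curl or urllib fetch with no JavaScript execution) against a checklist of phrases that must be citable. A minimal sketch, with hypothetical page content:

```python
def missing_without_js(raw_html, critical_phrases):
    """Return the phrases absent from the raw (pre-JavaScript) HTML.
    Anything listed is invisible to crawlers that don't execute JS."""
    return [p for p in critical_phrases if p not in raw_html]

# Hypothetical raw response: the testimonial is injected client-side,
# so it never appears in the server-rendered markup.
raw = """<html><body>
<h1>Enterprise CMS Platform</h1>
<p>Headless and decoupled delivery options.</p>
<div id="testimonials"></div>  <!-- populated by JavaScript -->
</body></html>"""

phrases = [
    "Headless and decoupled delivery",
    "rated 4.8/5 by enterprise editorial teams",  # hypothetical JS-loaded quote
]

print(missing_without_js(raw, phrases))
```

An empty result for every key commercial page confirms the precautionary finding; any non-empty result pinpoints exactly which claims AI crawlers cannot see.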

Site Analysis Summary

Total pages analyzed: 45
Commercially relevant pages: 45
Heading hierarchy: 0.72
Content depth: 0.67
Freshness: 0.35 weighted (content marketing: 0.35; product: unable to assess; structural: unable to assess)
Passage extractability: 0.69
Schema coverage: unable to assess (45 pages unscored)

NOTE 31 of 45 pages (28 product/commercial + 3 structural) have no detectable freshness score due to absent publication dates. The 0.35 weighted freshness is calculated from the 14 content marketing pages that had visible dates. The actual site-wide freshness may be better or worse — engineering should verify whether product pages render dates in a crawler-accessible format.
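To see why a handful of recent pages cannot offset a long stale tail, here is an illustrative decay model applied to an age distribution shaped like the one above (4 pages under roughly 90 days old, 10 pages past 180 days). Both the formula and the ages are assumptions for explanation only, not the audit's actual scoring model.

```python
def page_freshness(age_days, half_life_days=180.0):
    """Illustrative exponential decay: freshness halves every
    half_life_days. An assumed formula, not the audit's model."""
    return 0.5 ** (age_days / half_life_days)

# Hypothetical page ages mirroring the report's distribution:
ages = [30, 60, 75, 90] + [200, 240, 270, 300, 330, 365, 400, 450, 500, 600]
site_freshness = sum(page_freshness(a) for a in ages) / len(ages)
print(round(site_freshness, 2))  # lands well below 0.5 despite 4 fresh pages
```

The arithmetic makes the remediation logic concrete: refreshing the stale comparison pages moves the weighted score far more than publishing a few new posts would.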

Next Steps

What Happens Next

WHY NOW

• AI search adoption is accelerating — buyer discovery patterns for enterprise CMS are shifting quarter over quarter as more evaluators use AI assistants for vendor shortlisting
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates
• Competitors who establish GEO visibility first create a structural disadvantage for late movers — once AEM or Contentful own the citation position for "best enterprise CMS" queries, displacement requires significantly more effort
• Enterprise CMS is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies

The full audit will measure citation visibility across buyer queries in the enterprise CMS space — including "best headless CMS with editorial tools," "AEM alternative for multi-brand enterprise," and "CMS that reduces developer dependency." You'll see exactly which queries return results that include Adobe AEM, Contentful, or Sitecore but not Brightspot — and what it would take to appear in them. Fixing the freshness and heading hierarchy issues now improves the technical baseline before we even measure it.

01

Validation Call

45-60 minutes walking through this document. Confirm personas, competitor tiers, feature strengths, and pain point severity. Every correction sharpens the query set.

02

Query Generation & Execution

Buyer queries constructed from validated inputs, executed across selected AI platforms (ChatGPT, Claude, Perplexity, Gemini). Results captured and scored.

03

Full Audit Delivery

Visibility analysis, competitive positioning across AI platforms, content gap prioritization, and three-layer action plan (technical, content, strategic).

START NOW — ENGINEERING. These fixes don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:

Add visible publication dates to content marketing pages — engineering template change to render dates in page body on comparison and case study pages (effort: less than 1 day)
Fix multiple H1 headings — convert extra H1 tags to H2 on /industries/technology-software, /industries/media-publishing, and /brightspot-cms/for-developers (effort: less than 1 day)
Audit schema markup — verify whether JSON-LD structured data exists on commercial pages using Rich Results Test; if absent, implement Product/Article/FAQPage schema (effort: 1-3 days)

Before the Call

Your Pre-Call Checklist

Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.

Questions for You
Does an Enterprise Architect appear in Brightspot deal cycles independently from the CTO?
If wrong: we remove ~15 integration/architecture queries and fold relevant ones into CTO's set
Does the CMO/VP Marketing sit at the evaluation table, or is this a digital/IT-led purchase?
If wrong: we merge personalization queries into VP Digital's set and remove marketing-specific discovery queries
Are the "headless CMS" and "enterprise DXP replacement" buying conversations distinct motions with different personas?
If wrong: we may need separate query clusters for each buying motion rather than one unified set
Has AI Content Creation shipped major updates that move it from "moderate" to "strong" vs. Contentful/Contentstack?
If wrong: we upgrade AI capability queries from defensive to offensive positioning
Do Acquia, Sanity, or Storyblok appear in 3+ recent competitive deals?
If wrong: promoting any to primary adds 6-8 head-to-head comparison queries each
Does the VP Digital own CMS budget, or does budget authority sit with engineering/CTO?
If wrong: Karen becomes an evaluator and David's query set expands to include business-case queries
Does the CTO drive the initial shortlist or only validate technical fit after shortlist is set?
If wrong: we shift CTO queries toward late-stage validation rather than early discovery
Does the Director of Content get involved during evaluation or only post-purchase for adoption?
If wrong: we deprioritize content discovery queries and focus on comparison-stage editorial experience queries
Are all 6 high-severity pain points migration-driven, or do greenfield buyers have different triggers?
If wrong: we add non-migration query clusters for first-time CMS buyers
Are there missing competitors (Uniform, Amplience, Hygraph) or listed ones that never appear in deals?
If wrong: adding/removing a primary competitor shifts ~6-8 queries in the head-to-head set
For Engineering — Start Now
Add visible publication dates to comparison and case study page templates
Restores freshness signals for AI citation on highest-value content marketing pages
Fix multiple H1 headings on industry and developer pages
3 pages with diluted heading structure — convert extra H1s to H2 to strengthen topic signals
Audit schema markup across commercial pages
Verify JSON-LD presence; implement Product/Article/FAQPage schema if absent
Verify CSR status with JavaScript disabled
Precautionary — initial signals are positive but confirmation ensures no crawler blind spots
Alignment

We're Aligned On

This isn't a contract — it's a shared understanding. The audit runs against what's below. If something changes between now and the call, we adjust. The goal is to make sure we're asking the right questions for the right buyers against the right competitors.
Already Confirmed
Competitive set — 5 primary + 4 secondary competitors named and tiered
Persona set — 5 personas: 2 decision-makers, 2 evaluators, 1 influencer
Feature taxonomy — 12 capabilities with outside-in strength ratings (8 strong, 3 moderate)
Pain point set — 10 buyer frustrations with severity ratings (6 high, 4 medium)
Layer 1 technical audit — 6 findings logged (2 high, 4 medium/low), engineering notified
Decided at the Call
Enterprise Architect and CMO persona validity — determines whether ~30 queries stay in or get redistributed across other personas
Feature strength accuracy for AI Content Creation, Personalization, and Content Authoring UX — moderate vs. strong changes offensive/defensive query framing
Headless vs. DXP replacement buying motion split — may require separate query clusters
Secondary competitor promotions (Acquia, Sanity, Storyblok) — each promotion adds 6-8 head-to-head queries
Pain point prioritization — top 3 buyer frustrations to emphasize in problem-aware queries
Client
Date