Engagement Foundation Review

D2L Audit Foundation

Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about D2L's market — your job is to tell us what we got right, what we got wrong, and what we missed.

Prepared March 2026
www.d2l.com
Learning Management System (LMS)
GEO Readiness

Where You Stand Today

Before we measure citation visibility in the learning management system space, these three signals tell us whether AI crawlers can access and trust D2L's site content.

Technical Readiness
Needs Attention
2 high-severity findings identified. Top issue: 4 of 6 competitor comparison pages have not been updated in 18-38 months, deprioritizing them as citation sources for head-to-head evaluation queries. No critical blockers found.
Content Freshness
Needs Attention
Weighted freshness: 0.53. 19 pages updated within 90 days. 10 pages older than 6 months. Blog content averages 0.46 freshness (6 of 14 posts older than 12 months). Product pages average 0.67 freshness but 8 product pages have no detectable date — verify manually.
Crawl Coverage
Good
All 7 AI crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, Googlebot, Bytespider) are allowed. robots.txt confirmed accessible with only administrative paths blocked. Sitemap accessible via WordPress/Yoast SEO.
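The crawl-coverage claim above is straightforward to spot-check. A minimal sketch using Python's stdlib robots.txt parser is below; the robots.txt content is an illustrative sample (allow-all with an administrative path blocked), not D2L's actual file — point the same check at the live file to reproduce the finding.

```python
# Sketch: verify the listed AI crawlers are allowed by a robots.txt.
# SAMPLE_ROBOTS is an illustrative stand-in for the live file.
from urllib.robotparser import RobotFileParser

SAMPLE_ROBOTS = """\
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
"""

AI_CRAWLERS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot",
               "Google-Extended", "Googlebot", "Bytespider"]

def check_crawler_access(robots_txt: str, url_path: str) -> dict:
    """Return {crawler: allowed?} for a given path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url_path) for bot in AI_CRAWLERS}

if __name__ == "__main__":
    access = check_crawler_access(SAMPLE_ROBOTS, "/brightspace/")
    blocked = [bot for bot, ok in access.items() if not ok]
    print("blocked:", blocked or "none")
```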
Executive Summary

What You Need to Know

AI search is reshaping how learning management system buyers discover, evaluate, and shortlist platforms. D2L operates in a category where institutional procurement cycles are long and switching costs are high — which means the first LMS vendor an AI platform cites in an evaluation query has a disproportionate influence on the shortlist. Companies that establish GEO visibility now lock in a structural advantage before competitors recognize the opportunity.

This Foundation Review presents three layers of audit preparation: the competitive landscape that shapes which head-to-head queries we construct, the buyer personas whose search intent patterns determine what questions we ask, and the technical baseline that determines whether AI platforms can access D2L's content at all. Each section exists so we can validate inputs together before the audit runs — the accuracy of these inputs directly determines whether the audit measures the right things.

The validation call is a decision-making session with real stakes. It covers two types of decisions. First, input validation: are the right competitors in the right tiers, the right personas driving query intent, and the right features rated at the right strengths? Second, engineering triage: which technical fixes should start before results come back? The specific items for each are in the Pre-Call Checklist at the end of this document.

TL;DR — Action Items
  • 🟡 High: Stale Competitor Comparison Pages — Marketing should update 4 comparison pages (Moodle, Schoology, Google Classroom, Sakai) with current G2 data and feature comparisons; these are primary citation targets for head-to-head evaluation queries.
  • 🟡 High: Key Product Pages Show Stale Modification Dates — Marketing should refresh the Brightspace, Performance+, and Achievement+ product pages; stale timestamps deprioritize them against competitors with recently updated pages.
  • 🟣 Validate at the Call: VP of Learning & Development persona — Brian Torres was inferred from corporate training buyer patterns, not sourced from D2L deal data. If this buyer doesn't exist in D2L's pipeline, we remove ~15-20 corporate training queries and reallocate to higher-ed-specific intent patterns.
  • ✅ Start Now: Schema markup audit — Engineering can verify JSON-LD structured data across all 42 pages using Google's Rich Results Test without waiting for the call; this improves how AI platforms categorize D2L's content.
  • 📋 Validation Call: Feature strength distribution — The KG shows 6 strong, 4 moderate, 2 weak capabilities. Confirming these ratings determines which queries test D2L's strengths vs. defend its weaknesses — particularly whether Mobile Learning (weak) and Collaboration Tools (weak) are accurately rated.
How This Works

Reading This Document

Three things to know before you start.

What this is: This document presents what we've learned about D2L's competitive position in the learning management system market through outside-in research. It covers the personas who buy LMS platforms, the competitors you face in deals, the features buyers evaluate, and the technical signals that determine whether AI platforms can access your content. Every section feeds directly into the query set we'll build for the audit.

What you need to do: Look for the purple question boxes throughout the document. These are the specific items where your input changes the audit. We need you to confirm what's right, correct what's wrong, and flag what's missing. Your answers directly shape which queries we run and which competitors we test against.

Confidence badges: Every data point carries a confidence badge: High means sourced from multiple reliable signals, Medium means inferred or single-source, Low means best guess. Focus your review time on medium and low confidence items — those are where your corrections have the most impact.

Company Profile

D2L

The foundation the audit builds on. Every field here shapes how we construct queries and interpret results.

Company Overview

Company Name D2L High
Domain www.d2l.com
Name Variants D2L, Desire2Learn, Desire 2 Learn, D2L Inc, D2L Brightspace, Brightspace, D2L Inc., DTOL
Category Learning Management System (LMS)
Segment Enterprise
Key Products D2L Brightspace, D2L Lumi, Creator+, Performance+, Achievement+
Positioning LMS for higher education, K-12, corporate training, and government — online, hybrid, and in-person learning at scale

D2L's positioning spans four distinct segments: higher education, K-12, corporate training, and government. Does each segment generate its own buyer conversations and deal cycles, or does higher education dominate pipeline? If corporate training and government are meaningful revenue segments, we need separate query clusters for each — which roughly doubles the query surface area.

Buyer Personas

Who Buys LMS Platforms

5 personas: 2 decision-makers, 2 evaluators, 1 influencer. These personas drive the query set — each one searches differently based on their role in the LMS purchase decision.

Critical Review Area: Personas are the highest-leverage input for the audit. Getting a persona wrong doesn't just miss queries — it builds an entire query cluster around the wrong search intent. Review each persona's role, influence level, and veto power carefully.

Data Sourcing Note: Role, department, seniority, influence level, veto power, and technical level are sourced directly from the knowledge graph. Buying jobs and query focus areas are synthesized from the persona's role context to illustrate how they would search — they are our best inference, not sourced data.

David Chen
Chief Information Officer
Decision-maker High
C-Suite IT leader responsible for enterprise technology strategy, infrastructure decisions, and vendor selection for institution-wide platforms including the LMS.
Veto power: Yes — controls technology budget and final vendor approval for enterprise software.
Technical level: High
Primary buying jobs: Evaluate total cost of ownership, assess integration requirements with existing campus systems (SIS, SSO, video platforms), validate security and compliance posture, approve final vendor selection.
Query focus areas: LMS integration capabilities, enterprise LMS security compliance, LMS total cost of ownership comparison, best LMS for large university IT infrastructure.
Source: Review mining (G2, Capterra reviewer titles and institutional profiles)

At D2L's target institutions, does the CIO or a CTO/VP of IT typically hold final budget authority for LMS contracts? If a CTO is the real budget holder, we need to split the IT decision-maker into two personas with distinct query intent.

Margaret Williams
Provost / Chief Academic Officer
Decision-maker High
Senior academic leader who owns the pedagogical vision and learning outcomes strategy. Champions LMS adoption across faculties and ensures the platform aligns with institutional academic goals.
Veto power: Yes — can block any LMS that doesn't meet academic requirements or faculty adoption standards.
Technical level: Low
Primary buying jobs: Ensure LMS supports accreditation requirements, evaluate learning outcomes tracking, assess faculty adoption likelihood, validate accessibility and equity mandates.
Query focus areas: LMS for accreditation and outcomes assessment, best learning management system for faculty adoption, LMS accessibility compliance for higher education, competency-based education platforms.
Source: Review mining (G2, Capterra institutional decision-maker profiles)

Does the Provost drive LMS selection at D2L's target institutions, or does IT lead with Academic Affairs as a stakeholder? If the Provost is advisory rather than a decision-maker, we'd deprioritize accreditation and pedagogy queries in favor of IT-driven evaluation queries.

Aisha Jackson
Director of Online Learning & Instructional Design
Evaluator High
Mid-level academic technology leader who evaluates LMS platforms hands-on, builds the shortlist, and runs pilot programs. Translates faculty needs into technical requirements for the selection committee.
Veto power: No — but heavily influences the shortlist through hands-on evaluation and pilot results.
Technical level: Medium
Primary buying jobs: Run LMS demos and pilots, evaluate content authoring and course design tools, assess instructional design workflow fit, compare learning analytics capabilities across vendors.
Query focus areas: Best LMS for online course design, LMS with adaptive learning paths, learning management system comparison for instructional designers, LMS content authoring tools review.
Source: Review mining (G2, Capterra reviewer titles in academic technology roles)

Does this role function as the primary shortlist builder who filters vendors before the CIO and Provost see options, or does IT assemble the initial list? If this persona builds the shortlist, we weight early-stage discovery and comparison queries more heavily for this role.

Brian Torres
VP of Learning & Development
Evaluator Medium
Corporate L&D leader evaluating LMS platforms for employee training, compliance learning, and professional development programs. Represents the Brightspace for Business buyer segment.
Veto power: No — evaluates and recommends, but procurement and IT make the final call in corporate settings.
Technical level: Low
Primary buying jobs: Evaluate corporate training LMS options, assess compliance training automation, compare employee onboarding platforms, measure training ROI and completion tracking.
Query focus areas: Best corporate training LMS, compliance training platform comparison, LMS for employee development, corporate learning management system vs alternatives.
Source: LLM inference (inferred from corporate training buyer patterns, not sourced from D2L deal data)

Does a VP of Learning & Development actually appear in D2L's corporate training pipeline, or does Brightspace for Business sell through IT/procurement channels? If this persona doesn't exist in real deals, we remove the corporate L&D query cluster (~15-20 queries) and reallocate to higher-ed intent patterns.

Carlos Rivera
LMS Administrator / Educational Technologist
Influencer High
Hands-on technical operator who manages the LMS day-to-day — user provisioning, course setup, integrations, and troubleshooting. Provides critical input on platform usability and administration complexity during evaluations.
Veto power: No — advisory role, but negative feedback on administrative complexity can derail a selection.
Technical level: High
Primary buying jobs: Evaluate administrative ease-of-use, test integration setup with campus systems, assess user management and permission structures, compare migration difficulty from incumbent LMS.
Query focus areas: LMS administration best practices, LMS migration from Blackboard to Brightspace, easiest LMS to administer, LMS with best API and integration support.
Source: Review mining (G2, Capterra reviewer titles in LMS admin and edtech roles)

Does the LMS Admin have informal veto power through technical evaluation reports, or is their input genuinely advisory? If admins can block selections by flagging migration or administration risks, we'd reclassify as evaluator and add administration-complexity queries to their cluster.

Missing Personas? We identified 5 personas, but the LMS buying process at large institutions often involves additional stakeholders. Consider: Dean / Department Chair (if individual schools have LMS selection influence separate from the Provost), Director of Student Success / Retention (if student outcomes data drives the LMS decision), or Procurement / Purchasing Officer (if RFP-stage procurement runs a parallel evaluation process). Who else shows up in D2L's deals?

Competitive Landscape

Who You Compete Against

5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head queries we construct for the audit.

Why Tiers Matter: Getting these tiers right determines which queries test direct competitive differentiation. Each primary competitor generates 6-8 head-to-head queries like "D2L Brightspace vs Canvas LMS" or "best LMS for higher education — Brightspace or Moodle." That's roughly 30-40 head-to-head queries across 5 primary competitors. We're less certain about Absorb LMS, Sakai, and TalentLMS as secondary competitors — if any of these actually appears in D2L deals regularly, promoting them to primary would add another 6-8 queries to the head-to-head set.

Primary Competitors

Canvas LMS

Primary High
instructure.com
Modern cloud-native LMS dominant in US higher education with an intuitive UI and strong third-party integrations; stronger ease-of-use reputation than Brightspace but weaker in competency-based education and analytics depth.
Source: Automated scrape (D2L comparison page, G2 category, search results)

Blackboard Learn

Primary High
anthology.com
Legacy enterprise LMS with deep penetration in large universities and government; comprehensive feature set but widely criticized for outdated UX, slow innovation, and complex administration compared to Brightspace.
Source: Automated scrape (D2L comparison page, G2 category, search results)

Moodle

Primary High
moodle.org
Open-source LMS with massive global adoption and deep customizability; zero licensing cost appeals to budget-constrained institutions but requires significant IT resources to host, maintain, and customize.
Source: Automated scrape (D2L comparison page, G2 category, search results)

Docebo

Primary High
docebo.com
AI-powered enterprise learning platform strong in corporate training and extended enterprise use cases; better AI personalization and e-commerce features than Brightspace but limited presence in higher education and K-12.
Source: Category listing (G2 LMS category grid, Gartner Peer Insights)

Schoology

Primary High
powerschool.com
K-12-focused LMS with strong assessment and gradebook integration into PowerSchool's SIS ecosystem; weaker in higher education and corporate training but a direct threat to Brightspace in the K-12 segment.
Source: Automated scrape (D2L comparison page, search results)

Secondary Competitors

Absorb LMS

Secondary Med
absorblms.com
Corporate training LMS known for clean UX and strong compliance training features; competes with Brightspace for Business but lacks academic features for higher education and K-12.
Source: Category listing (G2 corporate LMS category)

Google Classroom

Secondary High
classroom.google.com
Free, lightweight classroom tool deeply integrated with Google Workspace; dominates K-12 for simple use cases but lacks the assessment rigor, analytics, and enterprise features of Brightspace.
Source: Automated scrape (D2L comparison page, search results)

Sakai

Secondary Med
sakailms.org
Open-source LMS maintained by the Apereo Foundation with a loyal following at research universities; no licensing cost but declining market share and slower feature development compared to commercial platforms.
Source: Automated scrape (search results, academic LMS comparisons)

TalentLMS

Secondary Med
talentlms.com
Lightweight, affordable corporate training LMS popular with SMBs; fastest time-to-deploy in category but lacks the depth, analytics, and multi-audience architecture Brightspace offers at enterprise scale.
Source: Category listing (G2 corporate training LMS category)

Three secondary competitors — Absorb LMS, Sakai, and TalentLMS — have medium confidence on tier assignment. Do any of these actually appear in D2L's competitive deals? If Absorb shows up regularly in Brightspace for Business evaluations, it should be primary. Conversely, is Sakai still relevant given its declining market share, or should it be dropped entirely? Are there vendors we missed — particularly in government LMS procurement (e.g., Cornerstone, SAP SuccessFactors Learning)?

Feature Taxonomy

Capabilities That Drive Buying Decisions

12 buyer-level capabilities mapped. Feature strengths determine which capability queries test D2L's advantages vs. expose weaknesses.

Course Creation & Content Authoring Strong High

Tools for faculty and instructional designers to build engaging online courses with multimedia, interactive content, and reusable learning objects

Assessment & Grading Strong High

Flexible quiz types, rubrics, competency-based grading, and an integrated gradebook that handles complex weighting schemes

Learning Analytics & Predictive Insights Strong High

Dashboards and predictive models that identify at-risk students and measure learning outcomes across programs

Adaptive & Personalized Learning Paths Strong High

Automatically adjust course content and pacing based on individual learner performance and mastery levels

Accessibility & Compliance Strong High

WCAG 2.1 AA compliance, screen reader support, and built-in accessibility checking so all learners can participate regardless of ability

Competency-Based Education & Outcomes Tracking Strong High

Map learning outcomes to competencies, track student mastery across programs, and generate accreditation reports

Third-Party Integrations & LTI Ecosystem Moderate High

Connect the LMS to our SIS, video platforms, plagiarism tools, publisher content, and other edtech through LTI and open APIs

AI-Powered Learning Tools Moderate Med

AI tutoring, automated feedback generation, intelligent content recommendations, and AI-assisted course design

Administration & Role Management Moderate High

Manage thousands of users, courses, and organizational units with granular role-based permissions and bulk operations

Corporate Training & Compliance Learning Moderate Med

Deploy mandatory compliance training, track completions across the workforce, and manage certifications with automated reminders

Mobile Learning Experience Weak High

A mobile app that lets students access courses, submit assignments, participate in discussions, and view grades from any device

Collaboration & Communication Tools Weak High

Discussion boards, group workspaces, peer review, video conferencing integration, and real-time messaging for student-faculty interaction

Two features are rated weak (Mobile Learning, Collaboration Tools) based on consistent negative G2/Capterra reviews. Is this accurate relative to Canvas and Blackboard, or has Brightspace improved in recent releases? Also: AI-Powered Learning Tools and Corporate Training are rated moderate with medium confidence — D2L Lumi is new and Brightspace for Business is less established. Should AI tools be upgraded given Lumi's capabilities, or does Docebo's and Instructure's AI investment keep D2L at moderate? Are any features missing — particularly around video conferencing or virtual classroom capabilities?

Pain Point Taxonomy

What Buyers Struggle With

9 pain points: 5 high, 4 medium severity. Buyer language here is how queries will be phrased — if the words don't match how your buyers actually talk, the queries won't match real search intent.

Steep learning curve for faculty and administrators High High

"Our faculty spend weeks just figuring out how to set up a course in the new LMS — we need something that doesn't require a PhD in instructional technology"
Personas: Director of Online Learning, LMS Administrator, Provost

Frustrating mobile app experience High High

"Students complain the mobile app is clunky and half the features don't work — they can't even submit assignments reliably from their phones"
Personas: Director of Online Learning, Provost, LMS Administrator

System slowdowns during peak usage High Med

"The LMS crashes every finals week when 20,000 students are submitting at once — we can't have an unreliable platform during the most critical times"
Personas: Chief Information Officer, LMS Administrator, Provost

Faculty resist adopting the LMS High High

"Half our faculty still email PDFs because they find the LMS too complicated — we need a platform that professors will actually use"
Personas: Provost, Director of Online Learning

LMS migration is extremely costly and risky High Med

"We have 10 years of courses in our current LMS — switching platforms means potentially losing content and retraining 500 faculty members"
Personas: Chief Information Officer, Provost, Director of Online Learning

Overly complex platform administration Medium High

"Managing permissions and settings feels like navigating a maze — our admins spend hours on configuration that should take minutes"
Personas: LMS Administrator, Chief Information Officer

Integration with campus systems requires significant IT effort Medium High

"Every time we try to integrate a new tool with our LMS it becomes a multi-month IT project — we need plug-and-play connections"
Personas: Chief Information Officer, LMS Administrator

Clunky collaboration and discussion tools Medium High

"Our students are used to Slack and Google Docs — the LMS discussion boards and group tools feel like they're from 2010"
Personas: Director of Online Learning, Provost

Reporting and analytics are powerful but overly complex Medium High

"The analytics dashboards have great data buried in them but it takes our team hours to build a report the dean can actually understand"
Personas: Director of Online Learning, Provost, VP of Learning & Development

Two pain points have medium confidence: "System slowdowns during peak usage" (review-sourced but limited data) and "LMS migration risk" (inferred from general LMS market patterns, not D2L-specific reviews). Is peak performance a real issue for Brightspace specifically, or are those reviews about prior-generation LMS platforms? For migration risk — does D2L offer migration tooling that makes this less painful than competitors? Also consider: lack of built-in proctoring (if remote exam integrity is a top concern), limited multi-language support (if international institutions are a target), or vendor lock-in anxiety (if open standards matter to buyers). What's missing?

Site Analysis

Layer 1 Technical Findings

Technical signals that determine whether AI platforms can access, parse, and trust D2L's content.

Action Required: Two high-severity findings identified: Stale Competitor Comparison Pages and Key Product Pages Show Stale Modification Dates. Marketing should prioritize updating these pages — stale timestamps directly reduce citation priority in AI responses. Additionally, Engineering should run a schema markup audit and meta description verification across all 42 analyzed pages — these could not be assessed through our analysis method and need manual confirmation.

🟡 Stale Competitor Comparison Pages

What we found: 4 of 6 dedicated comparison pages have not been updated in over 12 months. D2L Brightspace vs. Moodle was last modified August 2024 (~18 months ago). D2L Brightspace vs. Schoology was published February 2024 (~25 months). D2L Brightspace vs. Google Classroom was published January 2023 (~38 months). D2L Brightspace vs. Sakai was published March 2024 (~24 months). Only the Canvas and Blackboard comparison pages have been updated within the last 90 days.

Why it matters: Competitor comparison pages are among the most frequently cited content by AI platforms in vendor evaluation queries. Research shows 76.4% of AI-cited pages were updated within 30 days. Stale comparison pages with outdated G2 data will be deprioritized in favor of fresher competitor content or third-party reviews.

Business consequence: Queries like "D2L Brightspace vs Moodle" or "best LMS for higher education comparison" may cite third-party reviews or competitor-authored content instead of D2L's own comparison pages, ceding control of the competitive narrative to sources D2L doesn't control.

Recommended fix: Update all 4 stale comparison pages with current G2 data (Spring 2026 or latest available), refresh feature comparison tables, and add recent customer migration stories. Prioritize the Moodle page given Moodle's large market share and D2L's existing blog content (moodle-alternatives) that could cross-link.

Impact: High Effort: 1-2 weeks Owner: Marketing Affected: 4 comparison pages (Moodle, Schoology, Google Classroom, Sakai)

🟡 Key Product Pages Show Stale Modification Dates

What we found: The main Brightspace product page (/brightspace/) shows a last modification date of May 7, 2025 — approximately 10 months ago. The Achievement+ page (/brightspace/achievement/) shows a publication date of July 9, 2024 with no visible update — over 20 months old. The Performance+ page (/brightspace/performance/) shows only a September 2022 publication date with no visible recent modification.

Why it matters: Product pages are primary citation sources for AI platforms answering "what does this product do" queries. AI crawlers use modification timestamps as freshness signals when selecting content to cite. Product pages older than 6 months are disadvantaged against competitors with recently updated pages.

Business consequence: Queries like "Brightspace LMS features" or "D2L Performance+ capabilities" may deprioritize D2L's own product pages in favor of third-party reviews or competitor content with more recent timestamps, reducing D2L's ability to control its own product narrative in AI responses.

Recommended fix: Review and update the main /brightspace/ product page, /brightspace/achievement/, and /brightspace/performance/ pages with current product capabilities, recent customer metrics, and updated award recognitions. Ensure visible dates on the page reflect the update. Verify that sitemap lastmod timestamps are being set correctly for these pages.

Impact: High Effort: 1-3 days Owner: Marketing Affected: /brightspace/, /brightspace/achievement/, /brightspace/performance/
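Verifying that sitemap lastmod timestamps are set correctly (the last step in the recommended fix) can be scripted. A minimal sketch is below; the sitemap snippet and dates are illustrative samples mirroring the finding, not D2L's live sitemap, and the 180-day threshold is an assumption based on the "older than 6 months" criterion above.

```python
# Sketch: pull <lastmod> values for pages out of a sitemap and flag any
# older than a threshold. SAMPLE_SITEMAP is an illustrative stand-in.
import xml.etree.ElementTree as ET
from datetime import date

SAMPLE_SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.d2l.com/brightspace/</loc><lastmod>2025-05-07</lastmod></url>
  <url><loc>https://www.d2l.com/brightspace/performance/</loc><lastmod>2022-09-01</lastmod></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def stale_pages(sitemap_xml: str, today: date, max_age_days: int = 180) -> list:
    """Return URLs whose <lastmod> is older than max_age_days."""
    root = ET.fromstring(sitemap_xml)
    stale = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod is None:
            continue  # no freshness signal at all -- flag separately
        age = (today - date.fromisoformat(lastmod[:10])).days
        if age > max_age_days:
            stale.append(loc)
    return stale

print(stale_pages(SAMPLE_SITEMAP, date(2026, 3, 1)))
```

Pages with no lastmod element (like the 8 undated product pages noted in the freshness card) are skipped here and should be reviewed separately.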

🔵 Schema Markup Cannot Be Verified — Manual Check Recommended

What we found: JSON-LD structured data (schema.org markup) could not be assessed across any of the 42 analyzed pages. Our analysis method returns rendered page content as markdown, which strips HTML-embedded schema blocks. We cannot confirm whether appropriate schema types (Product, Article, FAQ, HowTo, Organization) are implemented on commercially relevant pages.

Why it matters: Schema markup helps AI platforms understand page type, content structure, and entity relationships. Missing or incorrect schema reduces the likelihood of content being correctly categorized and cited in AI responses.

Business consequence: Queries like "best LMS with competency-based education" may fail to associate D2L's content with the correct capability category if schema markup doesn't explicitly define the content type, slightly reducing citation precision for capability-specific queries.

Recommended fix: Audit schema markup across all commercially relevant pages using Google's Rich Results Test or Schema.org Validator. Verify that product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with author and dateModified, and the Organization schema is present on the homepage. This is a WordPress site (Yoast SEO detected in sitemap), which likely provides some baseline schema — verify it is correctly configured and sufficiently detailed.

Impact: Medium Effort: 1-3 days Owner: Engineering Affected: All 42 analyzed pages
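Because the rendered-markdown pipeline strips schema blocks, the audit has to run against raw HTML. A minimal stdlib sketch of the JSON-LD extraction step is below; the sample HTML is illustrative, and Google's Rich Results Test remains the authoritative validator — this only inventories which @type values each page carries.

```python
# Sketch: extract JSON-LD blocks from raw HTML and list schema.org @type
# values. SAMPLE_HTML is an illustrative stand-in for a fetched page.
import json
from html.parser import HTMLParser

SAMPLE_HTML = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "SoftwareApplication", "name": "D2L Brightspace"}
</script>
</head><body>...</body></html>"""

class JsonLdExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

def schema_types(html: str) -> list:
    """Return the @type of each JSON-LD block found in the page."""
    extractor = JsonLdExtractor()
    extractor.feed(html)
    return [block.get("@type", "?") for block in extractor.blocks]

print(schema_types(SAMPLE_HTML))
```

Running this across all 42 pages and diffing the results against the expected types (SoftwareApplication on product pages, Article on posts, Organization on the homepage) would turn the "cannot be verified" finding into a concrete gap list.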

🔵 Meta Descriptions and Open Graph Tags Cannot Be Verified

What we found: Meta descriptions and Open Graph (OG) tags could not be assessed from the rendered page output. These HTML head elements are not visible in the rendered markdown content returned by our analysis method.

Why it matters: Meta descriptions influence how AI platforms summarize page content in search results and citations. Missing or generic meta descriptions mean AI platforms must infer page purpose from body content alone. As a WordPress site with Yoast SEO, meta descriptions are likely configured but should be verified for quality and keyword alignment.

Business consequence: Queries like "LMS platform comparison for universities" may generate less precise summaries of D2L's pages if meta descriptions are generic, potentially causing AI responses to undersell Brightspace's specific advantages in evaluation-stage queries.

Recommended fix: Verify meta descriptions and OG tags across all key pages using browser developer tools or a tool like Screaming Frog. Ensure each commercially relevant page has a unique, descriptive meta description (150-160 characters) that includes the page's primary value proposition. Confirm OG title, description, and image tags are set.

Impact: Medium Effort: 1-3 days Owner: Marketing Affected: All 42 analyzed pages
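The same raw-HTML approach covers the meta description and OG checks. A minimal sketch is below; the sample head and the 50-160 character window are assumptions for illustration (the recommendation above targets 150-160 characters; the lower bound here just catches placeholder-length descriptions).

```python
# Sketch: check a page head for a meta description of sensible length
# plus OG title/description/image. SAMPLE_HTML is an illustrative sample.
from html.parser import HTMLParser

SAMPLE_HTML = """<html><head>
<meta name="description" content="D2L Brightspace is a flexible learning platform for higher education, K-12, and corporate training programs.">
<meta property="og:title" content="D2L Brightspace">
<meta property="og:description" content="The Brightspace learning platform from D2L.">
<meta property="og:image" content="https://www.d2l.com/og.png">
</head><body></body></html>"""

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")
        if key:
            self.meta[key] = a.get("content", "")

def audit_meta(html: str) -> list:
    """Return a list of issues found in the page head."""
    collector = MetaCollector()
    collector.feed(html)
    issues = []
    desc = collector.meta.get("description", "")
    if not desc:
        issues.append("missing meta description")
    elif not (50 <= len(desc) <= 160):
        issues.append("meta description length %d outside 50-160" % len(desc))
    for og in ("og:title", "og:description", "og:image"):
        if og not in collector.meta:
            issues.append("missing " + og)
    return issues

print(audit_meta(SAMPLE_HTML))
```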

🔵 Client-Side Rendering Status Should Be Verified

What we found: All 42 fetched pages returned substantial rendered content, suggesting the site is primarily server-rendered (consistent with WordPress). However, client-side rendering detection signals are not available through our analysis method. We cannot definitively confirm that all page content is accessible without JavaScript execution.

Why it matters: AI crawlers vary in their JavaScript rendering capabilities. Googlebot renders JavaScript, but most dedicated AI crawlers (GPTBot, ClaudeBot, PerplexityBot) do not reliably execute it, so content injected client-side may be invisible to them. WordPress sites are generally server-rendered, making this a low-risk concern for D2L.

Business consequence: If any interactive product demos or comparison tables on D2L's site rely on JavaScript rendering, queries about specific Brightspace capabilities may miss that content on platforms whose crawlers don't execute JavaScript.

Recommended fix: Verify by loading key product and comparison pages in a browser with JavaScript disabled. If all primary content, headings, and navigation are visible without JavaScript, no action is needed. Pay particular attention to interactive content sections, comparison tables, and dynamically loaded testimonials.

Impact: Low Effort: < 1 day Owner: Engineering Affected: All pages — particularly interactive product demos and comparison tables
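The browser-with-JavaScript-disabled check above can be approximated in code: fetch the raw HTML (what a non-rendering crawler receives) and confirm key headings and phrases appear in it. The sketch below uses an illustrative HTML sample with a JavaScript-populated comparison table; the phrase list is an assumption standing in for each page's critical content.

```python
# Sketch: approximate a "JavaScript disabled" check by confirming key
# phrases appear in the raw HTML. SAMPLE_RAW_HTML is illustrative; the
# empty comparison-table div simulates client-side-only content.
def content_visible_without_js(raw_html: str, expected_phrases: list) -> dict:
    """Return {phrase: present_in_raw_html} for each expected phrase."""
    lowered = raw_html.lower()
    return {p: p.lower() in lowered for p in expected_phrases}

SAMPLE_RAW_HTML = """<html><body>
<h1>D2L Brightspace</h1>
<h2>Learning Analytics</h2>
<div id="comparison-table"></div>  <!-- filled in by JavaScript -->
</body></html>"""

result = content_visible_without_js(
    SAMPLE_RAW_HTML,
    ["D2L Brightspace", "Learning Analytics", "Brightspace vs Canvas comparison table"],
)
missing = [phrase for phrase, present in result.items() if not present]
print("missing from raw HTML:", missing)
```

Anything that lands in the missing list only exists after JavaScript execution and is at risk of being skipped by non-rendering crawlers.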

Site Analysis Summary

Total Pages Analyzed 42
Commercially Relevant Pages 42
Heading Hierarchy 0.70
Content Depth 0.64
Freshness 0.53 weighted (blog: 0.46, product: 0.67, structural: n/a)
Passage Extractability 0.64
Schema Coverage Unable to assess (42 pages unscored)

Note: Schema coverage could not be assessed for any of the 42 pages due to analysis method limitations. Additionally, 11 pages had no detectable freshness date (8 product pages, 3 structural pages). These metrics should be verified manually by Engineering to complete the technical baseline.
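For readers who want to sanity-check the weighted freshness figure, the sketch below shows one plausible scoring scheme: exponential decay by page age, weighted by commercial importance. The half-life, weights, and sample dates are all assumptions for illustration — the audit's actual scoring method may differ.

```python
# Sketch of one plausible weighted-freshness computation. The decay
# formula, half-life, and weights here are assumptions, not the audit's
# actual methodology.
from datetime import date

def page_freshness(last_modified: date, today: date, half_life_days: int = 180) -> float:
    """Score 1.0 for a page updated today, halving every half_life_days."""
    age = (today - last_modified).days
    return 0.5 ** (age / half_life_days)

def weighted_freshness(pages: list, today: date) -> float:
    """pages: (last_modified, weight) pairs; weight = commercial importance."""
    total_weight = sum(w for _, w in pages)
    return sum(page_freshness(d, today) * w for d, w in pages) / total_weight

today = date(2026, 3, 1)
sample = [
    (date(2026, 1, 15), 2.0),  # recently updated product page, weighted higher
    (date(2024, 8, 1), 1.0),   # stale comparison page
]
print(round(weighted_freshness(sample, today), 2))
```

Under a scheme like this, a handful of stale high-weight pages drags the aggregate down quickly, which is why refreshing the comparison and product pages moves the 0.53 figure more than updating low-traffic blog posts would.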

Next Steps

What Happens Next

Why Now

• AI search adoption is accelerating — buyer discovery patterns are shifting quarter over quarter

• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates

• Competitors who establish GEO visibility first create a structural disadvantage for late movers

• The learning management system category is still in its early innings of GEO optimization — acting now means competing against inaction, not against entrenched strategies

The full audit will measure D2L's citation visibility across buyer queries in the LMS space, including queries like "best LMS for competency-based education," "D2L Brightspace vs Canvas for large universities," and "learning management system with adaptive learning paths." You'll see exactly which queries return results that include Canvas, Blackboard, or Moodle but not D2L — and what it would take to appear in them. Fixing the stale comparison pages and product page timestamps now improves the technical baseline before we even measure it.

01

Validation Call

45-60 minutes to walk through this document together. We'll confirm personas, competitor tiers, feature strengths, and pain point severity — every correction directly shapes the query set.

02

Query Generation & Execution

Buyer queries constructed from validated KG inputs, executed across selected AI platforms to measure citation visibility, competitive positioning, and response quality.

03

Full Audit Delivery

Visibility analysis, competitive positioning data, and a three-layer action plan — prioritized by which gaps actually cost D2L citations, not by intuition.

Start Now — Don't Wait for the Call

These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:

1. Schema markup audit: Engineering should verify JSON-LD structured data across all commercially relevant pages using Google's Rich Results Test. Check that product pages carry SoftwareApplication schema, blog posts carry Article schema with dateModified, and Organization schema is on the homepage.

2. Meta description and OG tag verification: Marketing should verify meta descriptions across key pages using Screaming Frog or browser dev tools. Ensure each page has a unique, descriptive meta description aligned with buyer search intent.

3. Client-side rendering check: Engineering should load key product and comparison pages with JavaScript disabled in Chrome DevTools. If all content is visible, no further action needed.
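Steps 1 and 2 can be spot-checked in bulk before handing pages to the Rich Results Test. The sketch below, using only the standard library, pulls JSON-LD `@type` values and the meta description out of a page's HTML; the 150-160 character band is the checklist's rule of thumb, and the expected schema type per page is an input you supply.

```python
import json
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Collects JSON-LD @type values and the meta description from raw HTML."""
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.jsonld_types = []
        self.meta_description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "script" and a.get("type") == "application/ld+json":
            self.in_jsonld = True
        elif tag == "meta" and a.get("name") == "description":
            self.meta_description = a.get("content")

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        if not self.in_jsonld:
            return
        try:
            doc = json.loads(data)
        except json.JSONDecodeError:
            return
        # Yoast typically nests entities under "@graph".
        if isinstance(doc, dict) and "@graph" in doc:
            doc = doc["@graph"]
        docs = doc if isinstance(doc, list) else [doc]
        self.jsonld_types += [d.get("@type") for d in docs if isinstance(d, dict)]

def audit_html(html: str, expected_type: str) -> dict:
    p = AuditParser()
    p.feed(html)
    desc = p.meta_description or ""
    return {
        "has_expected_schema": expected_type in p.jsonld_types,
        "meta_description_ok": 150 <= len(desc) <= 160,  # checklist's length band
    }
```

Run `audit_html` with `"SoftwareApplication"` for product pages and `"Article"` for blog posts; any page returning `False` on either check goes on Engineering's fix list.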

Before the Call

Your Pre-Call Checklist

Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.

Questions for You
Does a VP of Learning & Development actually appear in D2L's corporate training pipeline?
If wrong: we remove ~15-20 corporate L&D queries and reallocate to higher-ed intent patterns.
Does D2L's corporate training segment generate distinct buyer conversations, or does higher ed dominate pipeline?
If wrong: we may need separate query clusters per segment, roughly doubling query surface area.
Does the CIO or a CTO/VP of IT typically hold final LMS budget authority at target institutions?
If wrong: we need to split the IT decision-maker persona and adjust IT-driven evaluation queries.
Does the Provost drive LMS selection or does IT lead with Academic Affairs as stakeholder?
If wrong: we'd deprioritize accreditation and pedagogy queries in favor of IT-driven evaluation queries.
Does the Director of Online Learning build the vendor shortlist, or does IT assemble the initial list?
If wrong: we'd reweight early-stage discovery queries between these two personas.
Does the LMS Admin have informal veto power through technical evaluation reports?
If wrong: we'd reclassify as evaluator and add administration-complexity queries to their cluster.
Are there missing personas — Dean/Department Chair, Student Success Director, or Procurement Officer?
If wrong: missing persona means an entire buyer query cluster is absent from the audit.
Do Absorb LMS, Sakai, or TalentLMS actually appear in D2L's competitive deals? Any missing vendors?
If wrong: promoting or dropping a competitor shifts 6-8 head-to-head queries per adjustment.
Are Mobile Learning (weak) and Collaboration Tools (weak) accurately rated? Has D2L Lumi changed the AI tools rating?
If wrong: strength ratings determine which capability queries test advantages vs. expose weaknesses.
Is peak performance a real Brightspace issue? Does D2L offer migration tooling that reduces switching risk? Missing pain points?
If wrong: pain point severity and buyer language directly shape how queries are phrased.
For Engineering — Start Now
Run schema markup audit across all commercially relevant pages
Use Google's Rich Results Test or Schema.org Validator. Verify Product/SoftwareApplication, Article, and Organization schema types are correctly configured.
Verify meta descriptions and Open Graph tags across key pages
Use Screaming Frog or browser dev tools. Ensure each page has a unique, descriptive meta description (150-160 chars) with buyer-relevant language.
Verify client-side rendering — load key pages with JavaScript disabled
Chrome DevTools → Settings → Disable JavaScript. If all content is visible without JS, no further action needed. Focus on interactive content sections and comparison tables.
Verify sitemap lastmod timestamps are updating correctly for product pages
8 product pages have no detectable date. Confirm Yoast SEO is setting lastmod timestamps and that they reflect actual content updates.
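The lastmod verification in the last checklist item can be scripted against the sitemap directly. This sketch parses sitemap XML and flags URLs whose `<lastmod>` is missing or older than a threshold; the 180-day threshold is an assumption for illustration, and timezone offsets are dropped for simplicity.

```python
import xml.etree.ElementTree as ET
from datetime import datetime

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def flag_stale_urls(sitemap_xml: str, today: datetime, max_age_days: int = 180):
    """Return (url, reason) pairs for entries with missing or stale lastmod."""
    root = ET.fromstring(sitemap_xml)
    flagged = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod is None:
            flagged.append((loc, "missing lastmod"))
            continue
        # Normalize to a naive datetime for a simple age comparison.
        mod = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
        mod = mod.replace(tzinfo=None)
        if (today - mod).days > max_age_days:
            flagged.append((loc, f"stale: {lastmod}"))
    return flagged
```

Pointing this at the Yoast-generated sitemap should surface the same 8 undated product pages noted above, plus anything whose timestamp stopped updating.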
Alignment

We're Aligned On

This isn't a contract — it's a shared understanding. The audit runs against what's below. If something changes between now and the call, we adjust. The goal is to make sure we're asking the right questions for the right buyers against the right competitors.
Already Confirmed
Competitive set — 5 primary + 4 secondary competitors named (Canvas, Blackboard, Moodle, Docebo, Schoology as primary)
Persona set — 5 personas: 2 decision-makers (CIO, Provost), 2 evaluators (Dir. Online Learning, VP L&D), 1 influencer (LMS Admin)
Feature taxonomy — 12 capabilities with outside-in strength ratings (6 strong, 4 moderate, 2 weak)
Pain point set — 9 buyer frustrations with severity ratings (5 high, 4 medium)
Layer 1 technical audit — 5 findings logged (2 high, 2 medium, 1 low), engineering notified
Decided at the Call
VP of Learning & Development persona validation — confirm this buyer exists in D2L's corporate training pipeline or remove and reallocate queries
Segment query strategy — whether higher ed, K-12, corporate, and government each need distinct query clusters
Feature overweighting — top 3 capabilities to emphasize in buyer queries (recommended: Course Creation & Authoring, Assessment & Grading, Learning Analytics based on pain point linkage)
Pain point prioritization — top 3 buyer problems to test first (recommended: steep learning curve, mobile experience, peak performance based on severity × persona breadth)
Competitor tier adjustments — confirm Absorb LMS, Sakai, and TalentLMS secondary tier assignments; identify any missing vendors