Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about D2L's market — your job is to tell us what we got right, what we got wrong, and what we missed.
AI search is reshaping how learning management system buyers discover, evaluate, and shortlist platforms. D2L operates in a category where institutional procurement cycles are long and switching costs are high — which means the first LMS vendor an AI platform cites in an evaluation query has a disproportionate influence on the shortlist. Companies that establish GEO visibility now lock in a structural advantage before competitors recognize the opportunity.
This Foundation Review presents three layers of audit preparation: the competitive landscape that shapes which head-to-head queries we construct, the buyer personas whose search intent patterns determine what questions we ask, and the technical baseline that determines whether AI platforms can access D2L's content at all. Each section exists so we can validate inputs together before the audit runs — the accuracy of these inputs directly determines whether the audit measures the right things.
The validation call is a decision-making session with real stakes. It covers two types of decisions: (1) input validation (are the right competitors in the right tiers, the right personas driving query intent, and the right features rated at the right strengths?) and (2) engineering triage (which technical fixes should start before results come back?). The specific items for each are in the Pre-Call Checklist at the end of this document.
Three things to know before you start.
What this is: This document presents what we've learned about D2L's competitive position in the learning management system market through outside-in research. It covers the personas who buy LMS platforms, the competitors you face in deals, the features buyers evaluate, and the technical signals that determine whether AI platforms can access your content. Every section feeds directly into the query set we'll build for the audit.
What you need to do: Look for the purple question boxes throughout the document. These are the specific items where your input changes the audit. We need you to confirm what's right, correct what's wrong, and flag what's missing. Your answers directly shape which queries we run and which competitors we test against.
Confidence badges: Every data point carries a confidence badge: High means sourced from multiple reliable signals, Medium means inferred or single-source, Low means best guess. Focus your review time on medium and low confidence items — those are where your corrections have the most impact.
The foundation the audit builds on. Every field here shapes how we construct queries and interpret results.
→ D2L's positioning spans four distinct segments: higher education, K-12, corporate training, and government. Does each segment generate its own buyer conversations and deal cycles, or does higher education dominate pipeline? If corporate training and government are meaningful revenue segments, we need separate query clusters for each — which roughly doubles the query surface area.
5 personas: 2 decision-makers, 2 evaluators, 1 influencer. These personas drive the query set — each one searches differently based on their role in the LMS purchase decision.
Critical Review Area: Personas are the highest-leverage input for the audit. Getting a persona wrong doesn't just miss queries — it builds an entire query cluster around the wrong search intent. Review each persona's role, influence level, and veto power carefully.
Data Sourcing Note: Role, department, seniority, influence level, veto power, and technical level are sourced directly from the knowledge graph. Buying jobs and query focus areas are synthesized from the persona's role context to illustrate how they would search — these are our best inference, not sourced data.
→ At D2L's target institutions, does the CIO or a CTO/VP of IT typically hold final budget authority for LMS contracts? If a CTO is the real budget holder, we need to split the IT decision-maker into two personas with distinct query intent.
→ Does the Provost drive LMS selection at D2L's target institutions, or does IT lead with Academic Affairs as a stakeholder? If the Provost is advisory rather than a decision-maker, we'd deprioritize accreditation and pedagogy queries in favor of IT-driven evaluation queries.
→ Does this role function as the primary shortlist builder who filters vendors before the CIO and Provost see options, or does IT assemble the initial list? If this persona builds the shortlist, we weight early-stage discovery and comparison queries more heavily for this role.
→ Does a VP of Learning & Development actually appear in D2L's corporate training pipeline, or does Brightspace for Business sell through IT/procurement channels? If this persona doesn't exist in real deals, we remove the corporate L&D query cluster (~15-20 queries) and reallocate to higher-ed intent patterns.
→ Does the LMS Admin have informal veto power through technical evaluation reports, or is their input genuinely advisory? If admins can block selections by flagging migration or administration risks, we'd reclassify as evaluator and add administration-complexity queries to their cluster.
Missing Personas? We identified 5 personas but the LMS buying process at large institutions often involves additional stakeholders. Consider: Dean / Department Chair (if individual schools have LMS selection influence separate from the Provost), Director of Student Success / Retention (if student outcomes data drives the LMS decision), or Procurement / Purchasing Officer (if RFP-stage procurement runs a parallel evaluation process). Who else shows up in D2L's deals?
5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head queries we construct for the audit.
Why Tiers Matter: Getting these tiers right determines which queries test direct competitive differentiation. Each primary competitor generates 6-8 head-to-head queries like "D2L Brightspace vs Canvas LMS" or "best LMS for higher education — Brightspace or Moodle." That's roughly 30-40 head-to-head queries across 5 primary competitors. We're less certain about Absorb LMS, Sakai, and TalentLMS as secondary competitors — if any of these actually appears in D2L deals regularly, promoting them to primary would add another 6-8 queries to the head-to-head set.
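To make the tier-to-query relationship concrete, here is a minimal sketch of how head-to-head queries could be expanded from tier assignments. The competitor names and query templates are placeholders, not the final query set; the real set will be built from the tiers validated on the call.

```python
# Illustrative sketch: how primary-tier competitors expand into head-to-head queries.
# Names and templates are examples only, not the audit's actual query set.

HEAD_TO_HEAD_TEMPLATES = [
    "D2L Brightspace vs {competitor}",
    "{competitor} vs Brightspace for higher education",
    "best LMS for large universities: Brightspace or {competitor}",
    "{competitor} alternatives for online learning",
    "switching from {competitor} to Brightspace",
    "Brightspace or {competitor} for K-12 districts",
]

def build_head_to_head_queries(primary_competitors: list[str]) -> list[str]:
    """Expand each primary-tier competitor into one query per template."""
    return [
        template.format(competitor=name)
        for name in primary_competitors
        for template in HEAD_TO_HEAD_TEMPLATES
    ]

if __name__ == "__main__":
    # Placeholder list: replace with the validated primary tier.
    primary = ["Canvas LMS", "Moodle", "Blackboard Learn", "Google Classroom", "Schoology"]
    queries = build_head_to_head_queries(primary)
    print(f"{len(queries)} head-to-head queries")  # 5 competitors x 6 templates = 30
```

Promoting a secondary competitor to primary simply adds one more name to that list, which is where the "another 6-8 queries" estimate comes from.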
→ Three secondary competitors — Absorb LMS, Sakai, and TalentLMS — have medium confidence on tier assignment. Do any of these actually appear in D2L's competitive deals? If Absorb shows up regularly in Brightspace for Business evaluations, it should be primary. Conversely, is Sakai still relevant given its declining market share, or should it be dropped entirely? Are there vendors we missed — particularly in government LMS procurement (e.g., Cornerstone, SAP SuccessFactors Learning)?
12 buyer-level capabilities mapped. Feature strengths determine which capability queries test D2L's advantages and which expose its weaknesses.
Tools for faculty and instructional designers to build engaging online courses with multimedia, interactive content, and reusable learning objects
Flexible quiz types, rubrics, competency-based grading, and an integrated gradebook that handles complex weighting schemes
Dashboards and predictive models that identify at-risk students and measure learning outcomes across programs
Automatically adjust course content and pacing based on individual learner performance and mastery levels
WCAG 2.1 AA compliance, screen reader support, and built-in accessibility checking so all learners can participate regardless of ability
Map learning outcomes to competencies, track student mastery across programs, and generate accreditation reports
Connect the LMS to our SIS, video platforms, plagiarism tools, publisher content, and other edtech through LTI and open APIs
AI tutoring, automated feedback generation, intelligent content recommendations, and AI-assisted course design
Manage thousands of users, courses, and organizational units with granular role-based permissions and bulk operations
Deploy mandatory compliance training, track completions across the workforce, and manage certifications with automated reminders
A mobile app that lets students access courses, submit assignments, participate in discussions, and view grades from any device
Discussion boards, group workspaces, peer review, video conferencing integration, and real-time messaging for student-faculty interaction
→ Two features are rated weak (Mobile Learning, Collaboration Tools) based on consistent negative G2/Capterra reviews. Is this accurate relative to Canvas and Blackboard, or has Brightspace improved in recent releases? Also: AI-Powered Learning Tools and Corporate Training are rated moderate with medium confidence — D2L Lumi is new and Brightspace for Business is less established. Should AI tools be upgraded given Lumi's capabilities, or do Docebo's and Instructure's AI investments keep D2L at moderate? Are any features missing — particularly around video conferencing or virtual classroom capabilities?
9 pain points: 5 high, 4 medium severity. Buyer language here is how queries will be phrased — if the words don't match how your buyers actually talk, the queries won't match real search intent.
→ Two pain points have medium confidence: "System slowdowns during peak usage" (review-sourced but limited data) and "LMS migration risk" (inferred from general LMS market patterns, not D2L-specific reviews). Is peak performance a real issue for Brightspace specifically, or are those reviews about prior-generation LMS platforms? For migration risk — does D2L offer migration tooling that makes this less painful than competitors? Also consider: lack of built-in proctoring (if remote exam integrity is a top concern), limited multi-language support (if international institutions are a target), or vendor lock-in anxiety (if open standards matter to buyers). What's missing?
Technical signals that determine whether AI platforms can access, parse, and trust D2L's content.
Action Required: Two high-severity findings identified: Stale Competitor Comparison Pages and Key Product Pages Show Stale Modification Dates. Marketing should prioritize updating these pages — stale timestamps directly reduce citation priority in AI responses. Additionally, Engineering should run a schema markup audit and meta description verification across all 42 analyzed pages — these could not be assessed through our analysis method and need manual confirmation.
What we found: 4 of 6 dedicated comparison pages have not been updated in over 12 months. D2L Brightspace vs. Moodle was last modified August 2024 (~18 months ago). D2L Brightspace vs. Schoology was published February 2024 (~25 months). D2L Brightspace vs. Google Classroom was published January 2023 (~38 months). D2L Brightspace vs. Sakai was published March 2024 (~24 months). Only the Canvas and Blackboard comparison pages have been updated within the last 90 days.
Why it matters: Competitor comparison pages are among the most frequently cited content by AI platforms in vendor evaluation queries. Research shows 76.4% of AI-cited pages were updated within 30 days. Stale comparison pages with outdated G2 data will be deprioritized in favor of fresher competitor content or third-party reviews.
Recommended fix: Update all 4 stale comparison pages with current G2 data (Spring 2026 or latest available), refresh feature comparison tables, and add recent customer migration stories. Prioritize the Moodle page given Moodle's large market share and D2L's existing blog content (moodle-alternatives) that could cross-link.
What we found: The main Brightspace product page (/brightspace/) shows a last modification date of May 7, 2025 — approximately 10 months ago. The Achievement+ page (/brightspace/achievement/) shows a publication date of July 9, 2024 with no visible update — over 20 months old. The Performance+ page (/brightspace/performance/) shows only a September 2022 publication date with no visible recent modification.
Why it matters: Product pages are primary citation sources for AI platforms answering "what does this product do" queries. AI crawlers use modification timestamps as freshness signals when selecting content to cite. Product pages older than 6 months are disadvantaged against competitors with recently updated pages.
Recommended fix: Review and update the main /brightspace/ product page, /brightspace/achievement/, and /brightspace/performance/ pages with current product capabilities, recent customer metrics, and updated award recognitions. Ensure visible dates on the page reflect the update. Verify that sitemap lastmod timestamps are being set correctly for these pages.
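For the sitemap lastmod check, a short script along these lines could flag entries older than a chosen threshold. The sitemap URL and the 180-day threshold are assumptions (Yoast typically exposes a sitemap index, so the script may need to be pointed at the child sitemaps), and it only reads what the sitemap reports, not the dates visible on the page.

```python
# Rough sketch: flag sitemap URLs whose <lastmod> is older than a threshold.
# The sitemap URL and 180-day threshold are illustrative; adjust both.
from datetime import datetime, timezone
from urllib.request import urlopen
from xml.etree import ElementTree

SITEMAP_URL = "https://www.d2l.com/sitemap.xml"  # assumption: may be a sitemap index in practice
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
MAX_AGE_DAYS = 180

def stale_urls(sitemap_url: str, max_age_days: int = MAX_AGE_DAYS) -> list[tuple[str, str]]:
    """Return (url, lastmod) pairs with no lastmod or a lastmod older than max_age_days."""
    tree = ElementTree.parse(urlopen(sitemap_url))
    now = datetime.now(timezone.utc)
    stale = []
    for url_el in tree.findall("sm:url", NS):
        loc = url_el.findtext("sm:loc", default="", namespaces=NS)
        lastmod = url_el.findtext("sm:lastmod", default="", namespaces=NS)
        if not lastmod:
            stale.append((loc, "no lastmod"))
            continue
        modified = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
        if modified.tzinfo is None:
            modified = modified.replace(tzinfo=timezone.utc)
        if (now - modified).days > max_age_days:
            stale.append((loc, lastmod))
    return stale

if __name__ == "__main__":
    for loc, lastmod in stale_urls(SITEMAP_URL):
        print(f"{lastmod:>25}  {loc}")
```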
What we found: JSON-LD structured data (schema.org markup) could not be assessed across any of the 42 analyzed pages. Our analysis method returns rendered page content as markdown, which strips HTML-embedded schema blocks. We cannot confirm whether appropriate schema types (Product, Article, FAQ, HowTo, Organization) are implemented on commercially relevant pages.
Why it matters: Schema markup helps AI platforms understand page type, content structure, and entity relationships. Missing or incorrect schema reduces the likelihood of content being correctly categorized and cited in AI responses.
Recommended fix: Audit schema markup across all commercially relevant pages using Google's Rich Results Test or Schema.org Validator. Verify that product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with author and dateModified, and the Organization schema is present on the homepage. This is a WordPress site (Yoast SEO detected in sitemap), which likely provides some baseline schema — verify it is correctly configured and sufficiently detailed.
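Because our rendered-markdown method strips JSON-LD, a quick spot check of a few pages could look roughly like the sketch below; it simply lists the schema @type values present in each page's raw HTML. The page URLs are examples, and the validators named above remain the authoritative check.

```python
# Sketch: pull JSON-LD blocks from a page's raw HTML and list the schema @type values.
# URLs are examples; run against the commercially relevant pages.
import json
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class JSONLDCollector(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True
            self.blocks.append("")

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks[-1] += data

def schema_types(url: str) -> list:
    """Return the @type values declared in the page's JSON-LD blocks."""
    html = urlopen(Request(url, headers={"User-Agent": "Mozilla/5.0"})).read().decode("utf-8", "replace")
    parser = JSONLDCollector()
    parser.feed(html)
    types = []
    for block in parser.blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        items = data.get("@graph", [data]) if isinstance(data, dict) else data
        types.extend(item.get("@type", "?") for item in items if isinstance(item, dict))
    return types

if __name__ == "__main__":
    for page in ["https://www.d2l.com/brightspace/", "https://www.d2l.com/"]:  # example URLs
        print(page, schema_types(page) or "no JSON-LD found")
```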
What we found: Meta descriptions and Open Graph (OG) tags could not be assessed from the rendered page output. These HTML head elements are not visible in the rendered markdown content returned by our analysis method.
Why it matters: Meta descriptions influence how AI platforms summarize page content in search results and citations. Missing or generic meta descriptions mean AI platforms must infer page purpose from body content alone. As a WordPress site with Yoast SEO, meta descriptions are likely configured but should be verified for quality and keyword alignment.
Recommended fix: Verify meta descriptions and OG tags across all key pages using browser developer tools or a tool like Screaming Frog. Ensure each commercially relevant page has a unique, descriptive meta description (150-160 characters) that includes the page's primary value proposition. Confirm OG title, description, and image tags are set.
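Screaming Frog covers this at scale; for a handful of key pages, a spot check could look roughly like this sketch, which reports the meta description length and flags missing OG tags. The URLs are examples, and the 150-160 character guideline is taken from the recommendation above.

```python
# Sketch: report the meta description and Open Graph tags for a page, with a
# length check against the 150-160 character guideline. URLs are examples.
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class HeadTagCollector(HTMLParser):
    """Collects <meta name="description"> and <meta property="og:*"> tags."""
    def __init__(self):
        super().__init__()
        self.description = None
        self.og: dict[str, str] = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        if a.get("name") == "description":
            self.description = a.get("content") or ""
        elif (a.get("property") or "").startswith("og:"):
            self.og[a["property"]] = a.get("content") or ""

def check_page(url: str) -> None:
    html = urlopen(Request(url, headers={"User-Agent": "Mozilla/5.0"})).read().decode("utf-8", "replace")
    collector = HeadTagCollector()
    collector.feed(html)
    if collector.description is None:
        print(f"{url}: MISSING meta description")
    else:
        length = len(collector.description)
        flag = "length ok" if 150 <= length <= 160 else f"{length} chars"
        print(f"{url}: meta description present ({flag})")
    for required in ("og:title", "og:description", "og:image"):
        if required not in collector.og:
            print(f"{url}: missing {required}")

if __name__ == "__main__":
    for page in ["https://www.d2l.com/brightspace/", "https://www.d2l.com/brightspace/achievement/"]:
        check_page(page)  # example pages from the findings above
```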
What we found: All 42 fetched pages returned substantial rendered content, suggesting the site is primarily server-rendered (consistent with WordPress). However, client-side rendering detection signals are not available through our analysis method. We cannot definitively confirm that all page content is accessible without JavaScript execution.
Why it matters: AI crawlers vary in their JavaScript rendering capabilities. GPTBot and Googlebot render JavaScript, but some crawlers (PerplexityBot, ClaudeBot) may have limited rendering. WordPress sites are generally server-rendered, making this a low-risk concern for D2L.
Recommended fix: Verify by loading key product and comparison pages in a browser with JavaScript disabled. If all primary content, headings, and navigation are visible without JavaScript, no action is needed. Pay particular attention to interactive content sections, comparison tables, and dynamically loaded testimonials.
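Beyond the manual browser check, a rough proxy is to fetch each page's raw HTML (no JavaScript executed) and confirm that key phrases appear in it. The pages and phrases in this sketch are illustrative; substitute real headings and comparison-table text.

```python
# Sketch: fetch raw HTML without executing JavaScript and confirm key phrases are present.
# Pages and phrases are illustrative placeholders.
from urllib.request import Request, urlopen

CHECKS = {
    "https://www.d2l.com/brightspace/": ["Brightspace"],               # example phrase
    "https://www.d2l.com/brightspace/achievement/": ["Achievement"],   # example phrase
}

def raw_html(url: str) -> str:
    """Return the server-delivered HTML, i.e. what a non-rendering crawler sees."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    return urlopen(req).read().decode("utf-8", "replace")

if __name__ == "__main__":
    for url, phrases in CHECKS.items():
        html = raw_html(url)
        missing = [p for p in phrases if p not in html]
        status = "OK (content in raw HTML)" if not missing else f"CHECK: missing {missing}"
        print(f"{status}  {url}")
```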
Note: Schema coverage could not be assessed for any of the 42 pages due to analysis method limitations. Additionally, 11 pages had no detectable freshness date (8 product pages, 3 structural pages). These metrics should be verified manually by Engineering to complete the technical baseline.
Why Now
• AI search adoption is accelerating — buyer discovery patterns are shifting quarter over quarter
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates
• Competitors who establish GEO visibility first create a structural disadvantage for late movers
• The learning management system category is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies
The full audit will measure D2L's citation visibility across buyer queries in the LMS space, including queries like "best LMS for competency-based education," "D2L Brightspace vs Canvas for large universities," and "learning management system with adaptive learning paths." You'll see exactly which queries return results that include Canvas, Blackboard, or Moodle but not D2L — and what it would take to appear in them. Fixing the stale comparison pages and product page timestamps now improves the technical baseline before we even measure it.
45-60 minutes to walk through this document together. We'll confirm personas, competitor tiers, feature strengths, and pain point severity — every correction directly shapes the query set.
Buyer queries constructed from validated knowledge graph inputs, executed across selected AI platforms to measure citation visibility, competitive positioning, and response quality.
Visibility analysis, competitive positioning data, and a three-layer action plan — prioritized by which gaps actually cost D2L citations, not by intuition.
Start Now — Don't Wait for the Call: These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:
1. Schema markup audit: Engineering should verify JSON-LD structured data across all commercially relevant pages using Google's Rich Results Test. Check that product pages carry SoftwareApplication schema, blog posts carry Article schema with dateModified, and Organization schema is on the homepage.
2. Meta description and OG tag verification: Marketing should verify meta descriptions across key pages using Screaming Frog or browser dev tools. Ensure each page has a unique, descriptive meta description aligned with buyer search intent.
3. Client-side rendering check: Engineering should load key product and comparison pages with JavaScript disabled in Chrome DevTools. If all content is visible, no further action needed.
Two jobs before we meet. The validation questions require your judgment — no one knows your business better than you. The engineering tasks in the checklist above don't require the call at all and can start immediately.