AI Visibility Audit

Graylog
Visibility Report

Competitive intelligence for AI-mediated buying decisions. Where Graylog wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.

150 Buyer Queries
5 Personas
8 Buying Jobs
ChatGPT + Perplexity
March 7, 2026

TL;DR

17.3%
Visibility
26 of 150 queries
4%
Win Rate
6 wins of 150 queries
124
Invisible
queries where Graylog absent
19
Recommendations
targeting 124 gap queries (+ 5 near-rebuild optimizations)
Three things to know
The CISO sees Graylog; the evaluators building the shortlist don't
Decision-makers (CISO, VP IT Ops) show a combined win rate of 31.25% (5/16 visible decision-maker queries), while evaluators — security engineers at 0.0% (0/5 wins) and compliance directors at 0.0% (0/2 wins) — win nothing even in responses where Graylog is visible. The 21pp role-type gap means the people who own requirements building and shortlisting complete their vendor evaluation frameworks before Graylog appears, so CISOs often receive a shortlist that doesn't include it.
21pp role gap · evaluator queries
Client-side rendering may be hiding Graylog's product content from AI crawlers across ~28 pages
Multiple product pages (/products/enterprise/, /products/source-available/) and feature pages returned minimal visible body text through the audit's rendering pipeline, with content appearing to load dynamically via JavaScript. AI crawlers (GPTBot, ClaudeBot, PerplexityBot) have limited JavaScript execution — if body content relies on client-side rendering, models may index only brief meta descriptions rather than the full feature specifications, comparison claims, and use-case content that would make these pages citable. This technical risk affects approximately 28 commercially critical pages — the same pages targeted by L2 content optimization recommendations.
Technical fix · ~28 product pages
Graylog's API Security product wins when cited but has almost no content for AI to find
The API Threat Detection & PII Monitoring feature shows 50.0% visibility (3/6 queries, small sample) where thin content does exist — Graylog wins when AI models can find something to cite. Yet all 5 L3 queries targeting API security are routed to net-new content because inventory is 'thin.' Queries like 'What risks do companies face when they have zero visibility into API traffic?' (grl_009) go to competitors by default despite Graylog having a dedicated API Security module — a product-content mismatch competitors do not face in this niche.
Content void · 5 API security queries
Section 1
The Invisible Contender: Graylog's GEO Visibility Audit

Graylog's 17.3% overall visibility (26/150 queries) is not a positioning problem — it is a content architecture problem. The data reveals three gaps that compound across the buying journey.

Early Funnel — Where Graylog is visible but not winning
Problem Identification
0%
Requirements Building
6.7%
Solution Exploration
18.8%
Late Funnel — Where Graylog competes
Consensus Creation
30.8%
Validation
25%
Shortlisting
24%
Comparison
15.6%
Artifact Creation
8.3%

[Mechanism] Three compounding gaps drive early-funnel invisibility. First, Graylog publishes no educational content for the problem-identification and solution-exploration stages, so buyers form initial shortlists without encountering the brand. Second, the absence of a comparison page library leaves Graylog missing from 26 of 32 comparison-stage queries where competitors like Splunk and Elastic maintain dedicated comparison pages. Third, five capability areas — SOAR automation, UEBA, dashboards, API security, and cloud log ingestion — have thin or absent content, eliminating Graylog from entire topic areas regardless of product strength. Technical risk compounds all three: possible client-side rendering on approximately 28 product pages may mean AI crawlers index only brief meta descriptions rather than full feature content, reducing the citable material available to AI models even for pages that exist.

Layer 1
Technical Foundation Fixes
Resolve CSR rendering risk, correct the sitemap index typo, and add explicit AI crawler directives to ensure AI models can fully index the content that L2 and L3 improvements will add to existing and new pages.
3 fixes + 3 checks · Days to 2 weeks
Layer 2
Content Depth Optimization
Add extractable benchmarks, comparison claims, use-case framing, and framework-specific compliance data to 89 existing pages so AI models can cite Graylog for shortlisting, validation, and consensus-creation queries where it currently appears but fails to win.
8 recommendations · 2–6 weeks
Layer 3
Net-New Capability Content
Create 55 new content assets across five NIO themes — SOAR automation, UEBA, dashboard usability, API security, and a comparison page library — to establish Graylog's presence in topic areas where it currently has zero content and zero visibility.
5 recommendations · 1–3 months

[Synthesis] L1 technical fixes must execute before L2 and L3 content work because possible CSR rendering means newly optimized page content may not be indexed by AI crawlers if the body loads via JavaScript — editing a page that crawlers cannot read produces no GEO benefit. The sitemap typo fix ensures new L3 pages created in the content_type sitemap are correctly indexed rather than orphaned.

Section 2
Visibility Analysis

Where Graylog appears and where it doesn't — across personas, buying jobs, and platforms.

[TL;DR] Graylog is visible in 17% of buyer queries but wins only 4%.

Graylog's 17.3% (26/150) overall visibility is driven almost entirely by late-funnel exposure — the 90.9% early-funnel invisibility rate means buyers are making their initial shortlist decisions without encountering the brand, and the 21pp evaluator gap means the people who own those shortlists see Graylog least.

Platform Visibility

+10pp
Senior Security Engineer — widest persona swing
−8pp
Artifact Creation — widest stage swing
Dimension · Combined · Platform Delta
All Queries · 17.3% · Even
By Persona
Chief Information Security Officer · 34.3% · Perplexity +6pp
Director of Compliance & Risk · 9.1% · Even
Senior Security Engineer · 17.2% · ChatGPT +10pp
SOC Manager · 9.7% · Perplexity +6pp
VP of IT Operations · 12.1% · Even
By Buying Job
Artifact Creation · 8.3% · Perplexity +8pp
Comparison · 15.6% · ChatGPT +3pp
Consensus Creation · 30.8% · Perplexity +8pp
Problem Identification · 0% · Even
Requirements Building · 6.7% · ChatGPT +7pp
Shortlisting · 24% · Perplexity +4pp
Solution Exploration · 18.8% · Perplexity +6pp
Validation · 25% · ChatGPT +4pp
Per-platform breakdown (ChatGPT vs Perplexity raw %)
Dimension · ChatGPT · Perplexity
All Queries · 12% · 12.7%
By Persona
Chief Information Security Officer · 22.9% · 28.6%
Director of Compliance & Risk · 4.5% · 4.5%
Senior Security Engineer · 17.2% · 6.9%
SOC Manager · 3.2% · 9.7%
VP of IT Operations · 9.1% · 9.1%
By Buying Job
Artifact Creation · 0% · 8.3%
Comparison · 15.6% · 12.5%
Consensus Creation · 15.4% · 23.1%
Problem Identification · 0% · 0%
Requirements Building · 6.7% · 0%
Shortlisting · 12% · 16%
Solution Exploration · 6.2% · 12.5%
Validation · 25% · 20.8%

Visibility by Buying Job

Artifact Creation · 8.3% (1/12)
Comparison · 15.6% (5/32)
Consensus Creation · 30.8% (4/13)
Problem Identification · 0% (0/13)
Requirements Building · 6.7% (1/15)
Shortlisting · 24% (6/25)
Solution Exploration · 18.8% (3/16)
Validation · 25% (6/24)
High-intent visibility (Shortlist + Compare + Validate) · 21% (17/81)
High-intent win rate · 35.3% (6/17)
Appearance → win conversion · 35.3% (6/17)

Visibility & Win Rate by Persona

Chief Information Security Officer · 34.3% vis · 41.7% win (5/12)
Director of Compliance & Risk · 9.1% vis · 0% win (0/2)
Senior Security Engineer · 17.2% vis · 0% win (0/5)
SOC Manager · 9.7% vis · 33.3% win (1/3)
VP of IT Operations · 12.1% vis · 0% win (0/4)
Decision-maker win rate (Chief Information Security Officer + VP of IT Operations) · 31.2% (5/16 visible)
Evaluator win rate (Director of Compliance & Risk + Senior Security Engineer + SOC Manager) · 10% (1/10 visible)
Role type gap · 21pp

Visibility by Feature Focus

Alerting Notification · 16.7% vis (1/6) · 100% win (1/1)
API Security · 50% vis (3/6) · 33.3% win (1/3)
Compliance Reporting · 11.8% vis (2/17) · 0% win (0/2)
Cost Predictability · 27.8% vis (5/18) · 20% win (1/5)
Dashboards Visualization · 0% vis (0/5) · 0% win (0)
Data Ingestion Parsing · 0% vis (0/11) · 0% win (0)
Deployment Flexibility · 21.1% vis (4/19) · 0% win (0/4)
Log Management · 21.4% vis (3/14) · 33.3% win (1/3)
Scalability Performance · 18.2% vis (2/11) · 0% win (0/2)
SIEM Threat Detection · 20.8% vis (5/24) · 40% win (2/5)
SOAR Automation · 9.1% vis (1/11) · 0% win (0/1)
UEBA Behavioral Analytics · 0% vis (0/8) · 0% win (0)

Visibility by Pain Point

Alert Fatigue · 9.1% vis (2/22) · 50% win (1/2)
API Visibility Gap · 50% vis (3/6) · 33.3% win (1/3)
Blind Spots Log Gaps · 9.1% vis (1/11) · 100% win (1/1)
Compliance Audit Scramble · 11.1% vis (2/18) · 0% win (0/2)
SIEM Complexity · 5.6% vis (1/18) · 0% win (0/1)
SIEM Cost Explosion · 25% vis (6/24) · 16.7% win (1/6)
Slow Investigation · 26.7% vis (4/15) · 0% win (0/4)
Staffing Shortage · 14.3% vis (2/14) · 0% win (0/2)
Tool Sprawl · 25% vis (1/4) · 100% win (1/1)
Vendor Lock In · 8.3% vis (1/12) · 0% win (0/1)

[Data] Overall visibility: 17.3% (26/150 queries). Early-funnel invisibility: 90.9% (40/44 queries across Problem Identification, Solution Exploration, Requirements Building). CISO visibility: 34.3% (12/35). Compliance director: 9.1% (2/22). Security engineer: 17.2% (5/29). SOC manager: 9.7% (3/31). Decision-maker win rate: 31.25% (5/16 visible). Evaluator win rate: 10.0% (1/10 visible). Role-type gap: 21pp. [Synthesis] The 21pp gap between decision-maker and evaluator win rates reveals the funnel's structural weakness: CISOs encounter Graylog and respond well, but the evaluators who build their shortlists — security engineers, compliance directors, and SOC managers — largely do not. Evaluators own the requirements-building and shortlisting stages where vendor lists are constructed; their 10.0% win rate (1/10 visible) means Graylog is being filtered out before it reaches the CISO desk. The fix is content that reaches evaluators at the problem-identification and solution-exploration stages where they form category mental models — not just product pages that serve buyers who already know Graylog exists.

Invisibility Gaps — 124 Queries Where Graylog Doesn’t Appear

42 queries won by named competitors · 68 no clear winner · 14 no vendor mentioned

Sorted by competitive damage — competitor-winning queries first.

ID · Query · Persona · Stage · Winner
⚑ Competitor Wins — 42 queries where a named competitor captures the buyer
grl_029 · "What data residency and sovereignty considerations matter when choosing between cloud and on-prem SIEM?" · Director of Compliance & Risk · Solution Exp. · Splunk
grl_044 · "How to evaluate whether a SIEM can scale with our company without needing constant infrastructure upgrades" · VP of IT Operations · Req. Building · Elastic Security
grl_045 · "Best SIEM platforms for mid-market companies with high alert volumes and small security teams" · Chief Information Security Officer · Shortlisting · Sumo Logic
grl_052 · "SIEM tools with pre-built MITRE ATT&CK detection rules that work out of the box" · Senior Security Engineer · Shortlisting · Elastic Security
grl_053 · "Most user-friendly SIEM platforms for IT ops teams that aren't security specialists" · VP of IT Operations · Shortlisting · Sumo Logic
grl_055 · "Best SIEM solutions with UEBA for detecting insider threats and compromised credentials" · Senior Security Engineer · Shortlisting · Splunk
grl_057 · "SIEM platforms with the best alert tuning and noise reduction for security operations centers" · SOC Manager · Shortlisting · Exabeam
grl_058 · "SIEM solutions with automated compliance reporting for SOX and GDPR audits" · Director of Compliance & Risk · Shortlisting · Splunk
grl_060 · "fastest SIEM platforms for forensic log search across terabytes of retained data" · Senior Security Engineer · Shortlisting · CrowdStrike Falcon Next-Gen SIEM
grl_062 · "Top SIEM platforms for a 3-5 analyst SOC that needs strong out-of-box threat detections" · SOC Manager · Shortlisting · Splunk

Remaining competitor wins: Datadog ×8, Sumo Logic ×6, Splunk ×5, Elastic Security ×5, LogRhythm ×3, Exabeam ×3, Wazuh ×1, CrowdStrike Falcon Next-Gen SIEM ×1. 68 queries with no clear winner. 14 queries with no vendor mentioned. Full query-level data available in the analysis export.

Positioning Gaps — 20 Queries Where Graylog Appears But Loses

Queries where Graylog is mentioned but a competitor is positioned more favorably.

ID · Query · Persona · Buying Job · Winner · Graylog Position
grl_017 · "Cloud SIEM vs. on-prem SIEM vs. hybrid — what are the real differences for a 500-person company?" · VP of IT Operations · Solution Exp. · No Clear Winner · Mentioned In List
grl_021 · "How do API security tools differ from traditional SIEM for detecting data exfiltration through APIs?" · Chief Information Security Officer · Solution Exp. · No Vendor Mentioned · Mentioned In List
grl_027 · "How do SIEM platforms integrate MITRE ATT&CK mappings into detection and investigation workflows?" · Senior Security Engineer · Solution Exp. · No Clear Winner · Brief Mention
grl_035 · "What pricing questions should I ask SIEM vendors to avoid surprise costs as log volumes grow?" · Chief Information Security Officer · Req. Building · No Clear Winner · Brief Mention
grl_047 · "Top SIEM tools with fast log search for incident investigations processing 200+ GB/day" · SOC Manager · Shortlisting · Splunk · Mentioned In List
grl_050 · "SIEM platforms that support both cloud and on-prem deployment for hybrid environments" · Chief Information Security Officer · Shortlisting · No Clear Winner · Mentioned In List
grl_059 · "Which SIEM vendors offer flat-rate or node-based pricing instead of charging per GB of ingestion?" · Chief Information Security Officer · Shortlisting · No Clear Winner · Mentioned In List
grl_066 · "mid-market SIEM alternatives that don't charge by data volume — we need to ingest everything" · VP of IT Operations · Shortlisting · Exabeam · Mentioned In List
grl_079 · "LogRhythm vs Splunk vs Graylog — which SIEM has the best out-of-box detection content?" · Chief Information Security Officer · Comparison · LogRhythm · Mentioned In List
grl_098 · "ManageEngine Log360 vs LogRhythm for compliance and log management at a budget-conscious mid-market company" · Director of Compliance & Risk · Comparison · ManageEngine Log360 · Mentioned In List
grl_109 · "Graylog performance at high volume — what do users say about search speed past 200 GB/day?" · Senior Security Engineer · Validation · No Clear Winner · Mentioned In List
grl_110 · "How complex is Graylog deployment for a mid-size IT team without dedicated SIEM engineers?" · VP of IT Operations · Validation · No Clear Winner · Mentioned In List
grl_111 · "Graylog API Security — is it mature enough for production use or still early-stage?" · Chief Information Security Officer · Validation · No Clear Winner · Mentioned In List
grl_119 · "Graylog Open vs Graylog Enterprise — what are the real limitations of the free version?" · Senior Security Engineer · Validation · No Clear Winner · Mentioned In List
grl_124 · "LogRhythm investigation workflow — is it actually faster than manual log correlation?" · Senior Security Engineer · Validation · No Clear Winner · Brief Mention
grl_126 · "ROI of switching to a lower-cost SIEM — how do you calculate savings vs. migration risk?" · Chief Information Security Officer · Consensus · No Clear Winner · Mentioned In List
grl_128 · "Case studies of mid-market companies that improved threat detection after switching SIEMs" · SOC Manager · Consensus · No Clear Winner · Brief Mention
grl_132 · "How to make the business case for SIEM automation to non-technical executives" · Senior Security Engineer · Consensus · No Clear Winner · Mentioned In List
grl_135 · "Total cost comparison of running Elastic Stack in-house vs. a managed SIEM like Graylog Cloud or Sumo Logic" · VP of IT Operations · Consensus · No Clear Winner · Mentioned In List
grl_143 · "Create a compliance requirements matrix for evaluating SIEM platforms against PCI DSS, HIPAA, SOX, and GDPR" · Director of Compliance & Risk · Artifact · No Vendor Mentioned · Mentioned In List
Section 3
Competitive Position

Who’s winning when Graylog isn’t — and who controls the narrative at each buying stage.

[TL;DR] Graylog wins 4% of queries (6/150), ranks #7 in SOV — H2H record: 15W–4L across 8 competitors.

Graylog holds a winning or even record against every named competitor it faces in H2H matchups (5–2 vs. Splunk, 3–0 vs. Elastic) yet ranks #7 in share of voice with 6.4% of competitive mentions — a paradox explained by visibility, not positioning: Graylog cannot win matchups it is not present for.

Share of Voice

Company · Mentions · Share
Splunk · 107 · 26.2%
Elastic Security · 84 · 20.6%
Exabeam · 54 · 13.2%
Sumo Logic · 49 · 12%
Datadog · 35 · 8.6%
LogRhythm · 28 · 6.9%
Graylog · 26 · 6.4%
CrowdStrike Falcon Next-Gen SIEM · 11 · 2.7%
Wazuh · 8 · 2%
ManageEngine Log360 · 6 · 1.5%

Head-to-Head Records

When Graylog and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.

Win = Graylog was the primary recommendation (cross-platform majority). Loss = the competitor was. Tie = neither, or a third party won.

vs. Splunk · 5W – 2L – 9T (16 co-appear)
vs. Elastic Security · 3W – 0L – 13T (16 co-appear)
vs. Datadog · 1W – 0L – 3T (4 co-appear)
vs. Sumo Logic · 1W – 0L – 5T (6 co-appear)
vs. LogRhythm · 0W – 0L – 6T (6 co-appear)
vs. Wazuh · 2W – 0L (2 co-appear)
vs. Exabeam · 2W – 1L – 6T (9 co-appear)
vs. ManageEngine Log360 · 1W – 1L (2 co-appear)

Invisible Query Winners

For the 124 queries where Graylog is completely absent:

Splunk · 11 wins (8.9%)
Datadog · 7 wins (5.7%)
Sumo Logic · 7 wins (5.7%)
Elastic Security · 6 wins (4.8%)
Exabeam · 4 wins (3.2%)
LogRhythm · 3 wins (2.4%)
CrowdStrike Falcon Next-Gen SIEM · 2 wins (1.6%)
Wazuh · 1 win (0.8%)
ManageEngine Log360 · 1 win (0.8%)
Uncontested (no winner) · 82 queries (66.1%)

Surprise Competitors

Vendors appearing in responses not in Graylog’s defined competitive set.

Microsoft Sentinel — 18.4% SOV
QRadar — 6.9% SOV
IBM QRadar — 6.1% SOV
Securonix — 5.4% SOV
OpenSearch — 3.9% SOV
Chronicle — 3.2% SOV
NetWitness — 2.7% SOV
netwitness — 2.5% SOV
XDR — 2.5% SOV
Devo — 1.7% SOV
syslog — 1.7% SOV
Palo Alto Networks — 1.5% SOV
Google Chronicle — 1.5% SOV
Panther — 1.5% SOV
Syslog — 1.5% SOV
Cribl — 1.2% SOV
Stellar Cyber — 1.2% SOV

[Synthesis] The competitive picture is internally contradictory in an instructive way. Win rate (the query-level metric measuring how often Graylog wins across all buyers) is low — 4% overall — yet H2H records show Graylog holding a winning or even record against every named competitor it faces, including Exabeam (2W–1L). The divergence is explained by visibility: Graylog's H2H wins can only occur in the 17.3% of queries where it appears at all. SOV rank #7 means Graylog is out-mentioned by six competitors on the AI platforms buyers use for research. Critically, Microsoft Sentinel appeared 75 times as an unlisted competitor — a signal that AI models are routing regulated-industry security queries toward Microsoft's platform in ways that Graylog's current content does not intercept.

Section 4
Citation & Content Landscape

What AI reads and trusts in this category.

[TL;DR] Graylog had 32 unique pages cited across buyer queries, ranking #9 among all cited domains. 10 high-authority domains cite competitors but not Graylog.

32 unique Graylog pages were cited across the full audit, and the domain ranked #9 by citation volume — two places below its share-of-voice rank — indicating AI models reference Graylog by name from third-party sources rather than Graylog-authored content; a deeper on-domain content library is required to shift this.

Top Cited Domains (citation instances)

sumologic.com · 65
exabeam.com · 53
splunk.com · 41
reddit.com · 28
elastic.co · 27
linkedin.com · 27
searchinform.com · 26
underdefense.com · 23
graylog.org · 23 (#9)
netwitness.com · 22
sentinelone.com · 17
manageengine.com · 16
signoz.io · 16
learn.microsoft.com · 15
crowdstrike.com · 14
en.wikipedia.org · 14
datadoghq.com · 14
huntress.com · 13
g2.com · 13
paloaltonetworks.com · 12

Graylog URL Citations by Page

graylog.org/post/calculating-a-siems-total-cost... · 3
graylog.org/post/cloud-vs-on-premised-siem-one-... · 2
graylog.org/post/siem-automation-to-improve-thr... · 2
graylog.org/post/apis-the-silent-highway-for-se... · 1
graylog.org/post/using-mitre-attck-for-incident... · 1
graylog.org · 1
graylog.org/pricing · 1
graylog.org/post/7-siem-configurations-to-impro... · 1
go2docs.graylog.org/current/interacting_with_yo... · 1
graylog.org/feature/events-and-alerts · 1
graylog.org/post/graylog-security-the-affordabl... · 1
go2docs.graylog.org/current/interacting_with_yo... · 1
graylog.org/graylog-vs-elastic-siem · 1
graylog.org/graylog-vs-splunk-siem · 1
community.graylog.org/t/performance-optimizing/... · 1
go2docs.graylog.org/current/planning_your_deplo... · 1
community.graylog.org/t/graylog-migration-from-... · 1
go2docs.graylog.org/current/planning_your_deplo... · 1
go2docs.graylog.org/1380099/planning_your_deplo... · 1
go2docs.graylog.org/current/downloading_and_ins... · 1
community.graylog.org/t/best-hardware-setup-for... · 1
graylog.org/news/graylog-announces-free-api-sec... · 1
go2docs.graylog.org/apisecurity-current/what_is... · 1
go2.graylog.org/api-security-basics · 1
go2.graylog.org/api-security-free · 1
graylog.org/post/what-is-an-api-gateway · 1
graylog.org/products/api-security · 1
graylog.org/open-see-whats-missing · 1
graylog.org/open-vs-paid · 1
graylog.org/products/source-available · 1
go2docs.graylog.org/current/setting_up_graylog/... · 1
graylog.org/graylog-vs-logrhythm-siem · 1
Total Graylog unique pages cited · 32
Graylog domain rank · #9

Competitor URL Citations

Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.

Sumo Logic · 65 URL citations
Exabeam · 55 URL citations
Splunk · 50 URL citations
Elastic Security · 17 URL citations
CrowdStrike Falcon Next-Gen SIEM · 12 URL citations
ManageEngine Log360 · 12 URL citations
Datadog · 9 URL citations
LogRhythm · 9 URL citations
Wazuh · 3 URL citations

Third-Party Citation Gaps

Non-competitor domains citing other vendors but not Graylog — off-domain authority opportunities.

These domains cited competitors but did not cite Graylog pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.

reddit.com · 28 citations · Graylog not cited
linkedin.com · 27 citations · Graylog not cited
searchinform.com · 26 citations · Graylog not cited
underdefense.com · 23 citations · Graylog not cited
netwitness.com · 22 citations · Graylog not cited

[Synthesis] 32 unique Graylog pages were cited across the full audit — a thin content footprint given the breadth of queries. Graylog's domain ranked #9 by citation volume, below its SOV rank (#7 by mentions), indicating that even when AI models mention Graylog, they frequently cite competitor or third-party sources rather than Graylog-authored content. The ten-domain third-party citation gap is where editorial sites, analyst content, and review platforms are filling the authority void. Building citation weight requires both producing citable content (L2/L3) and generating third-party authority through analyst coverage, G2 reviews, and community-sourced documentation that AI models treat as independent confirmation.

Section 5
Prioritized Action Plan

Three layers of recommendations ranked by commercial impact and implementation speed.

[TL;DR] 19 priority recommendations (plus 5 near-rebuild optimizations) targeting the 124 queries where Graylog is currently invisible. 3 L1 technical fixes + 3 verification checks, 8 content optimizations (L2), 5 new content initiatives (L3).

The 19 recommendations are dependency-ordered: L1 technical fixes first (to ensure AI crawlers can access all content), then the 8 L2 optimizations adding extractable claims to 89 existing pages, then the 5 L3 initiatives creating 55 net-new assets (to fill the capability and comparison gaps where Graylog currently has zero visibility) — the comparison library NIO alone covers 26 high-intent queries where Graylog's conditional win rate is 60.0% (3/5 visible).

Reading the priority numbers: Recommendations are ranked 1–19 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 shows 1, 2, then 12) mean higher-priority items belong to a different layer.

Layer 1 Technical Fixes

Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.

Priority · Finding · Impact · Timeline
#1 · Possible Client-Side Rendering on Product and Feature Pages · Medium · 1-2 weeks

Issue: Multiple product pages (/products/enterprise/, /products/source-available/) and feature pages returned minimal visible body text through our rendering pipeline, with page content appearing to load dynamically via JavaScript. The rendered output consisted primarily of metadata, analytics scripts, and brief schema.org descriptions rather than the full page body content visible in a browser.

Fix: Test key product and feature pages with JavaScript disabled to determine whether body content is server-rendered. If CSR is confirmed, implement server-side rendering (SSR) or static site generation for all commercially important pages. WordPress sites using Elementor can enable server-side rendering through caching plugins (WP Rocket, LiteSpeed Cache) that serve pre-rendered HTML to crawlers.
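Before committing to an SSR migration, the at-risk pages can be pre-screened by fetching raw HTML (e.g., with curl, no JavaScript) and measuring how much visible body text survives. A minimal stdlib sketch of that check — the 500-character threshold is an illustrative assumption, not an audit parameter:

```python
from html.parser import HTMLParser

class VisibleTextParser(HTMLParser):
    """Collects the text a non-JS crawler would see, skipping script/style."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.depth_skipped = 0   # nesting count inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth_skipped += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth_skipped:
            self.depth_skipped -= 1

    def handle_data(self, data):
        if not self.depth_skipped and data.strip():
            self.chunks.append(data.strip())

def visible_text_length(html: str) -> int:
    parser = VisibleTextParser()
    parser.feed(html)
    return len(" ".join(parser.chunks))

def looks_client_rendered(html: str, min_chars: int = 500) -> bool:
    # Heuristic: a product page whose server-rendered body carries fewer
    # than ~min_chars of visible text is likely hydrating via JavaScript.
    return visible_text_length(html) < min_chars
```

Pages flagged by this heuristic should then be compared against browser-rendered output to confirm the gap before scheduling SSR work.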

#2 · Stale Competitor Comparison Pages · High · 1-3 days

Issue: Two of five competitor comparison pages have not been updated in over 8 months: Graylog vs. LogRhythm (last modified 2025-07-14) and Graylog vs. Microsoft Sentinel (last modified 2025-07-14). These are high-value pages that AI models reference heavily when answering vendor evaluation queries.

Fix: Update both comparison pages with current product capabilities, recent feature releases, and 2025-2026 pricing/packaging changes. Ensure each page includes a visible last-updated date. Establish a quarterly review cadence for all comparison pages.

#12 · Schema Markup Cannot Be Assessed — Manual Verification Recommended · Medium · 1-3 days

Issue: Our analysis method processes rendered page content rather than raw HTML source, so JSON-LD structured data blocks are not visible. We detected basic WebPage and BreadcrumbList schema from page metadata on several pages, but cannot determine whether product pages carry Product schema, comparison pages carry appropriate schema, or blog posts carry Article schema with required fields populated.

Fix: Audit all commercially important pages using Google's Rich Results Test or Schema.org Validator. Ensure product pages carry Product schema, blog posts carry Article schema with datePublished/dateModified, comparison pages carry appropriate schema, and FAQ sections carry FAQPage schema. The Yoast SEO plugin (already installed) can automate much of this.
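For reference, the Article markup being verified is a small JSON-LD block in the page head. A hypothetical example for a Graylog blog post — the headline, dates, and URL below are placeholders, not actual page data:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Calculating a SIEM's Total Cost of Ownership",
  "datePublished": "2025-01-15",
  "dateModified": "2026-01-10",
  "author": { "@type": "Organization", "name": "Graylog" },
  "publisher": { "@type": "Organization", "name": "Graylog" },
  "mainEntityOfPage": "https://graylog.org/post/example-post/"
}
```

The datePublished/dateModified pair is the field set most relevant to the freshness signals discussed in fix #2.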

Verification Checks

Items requiring manual review before determining if action is needed.

Priority · Finding · Impact · Timeline
#17 · Meta Descriptions and OG Tags Cannot Be Assessed — Manual Verification Recommended · Low · < 1 day

Issue: Meta descriptions, Open Graph tags, and Twitter Card markup are embedded in raw HTML and are not visible through rendered content analysis. While some meta descriptions were captured from schema.org data (e.g., the pricing page description mentioning plan comparison), we cannot confirm whether all pages have unique, descriptive meta tags and properly configured social preview tags.

Fix: Verify all commercial pages have unique meta descriptions under 160 characters using Screaming Frog or a similar crawler. Check OG tags with a social preview tool. Yoast SEO (installed) should auto-generate these but manual review is recommended for key pages.

#18 · No Explicit AI Crawler Directives in robots.txt · Low · < 1 day

Issue: The robots.txt file contains only a wildcard user-agent rule with an empty Disallow directive. There are no explicit rules for AI-specific crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, Bytespider). All crawlers are implicitly allowed, which is the desired state for AI visibility — but the absence of explicit directives means Graylog has not made a deliberate policy decision about AI crawler access.

Fix: Add explicit User-agent directives for key AI crawlers with Allow: / to document the intentional policy. This provides protection against future accidental blocking and signals to AI platforms that Graylog actively welcomes their crawlers.
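The directives in question might look like the fragment below — a sketch of an explicit allow-all policy for the AI crawlers named above, retaining the existing wildcard rule; the exact user-agent list should track each platform's current crawler documentation:

```text
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Disallow:
```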

#19 · Sitemap Index Contains Probable Typo in Child Sitemap URL · Low · < 1 day

Issue: The sitemap index at /sitemap_index.xml references a child sitemap named 'conent_type-sitemap.xml' (missing the 't' in 'content'). This appears to be a typo. The child sitemap's lastmod date is 2024-09-13, suggesting it has not been updated in over 17 months.

Fix: Verify whether the typo URL resolves correctly. If the content type is still used, rename the sitemap to 'content_type-sitemap.xml' and update the sitemap index reference. If the content type is deprecated, remove the child sitemap from the index. Review all child sitemaps for stale entries.
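Stale child sitemaps like this one can be surfaced automatically during the review. A minimal stdlib sketch, assuming the standard sitemap-index schema; the 365-day staleness threshold is an arbitrary illustration:

```python
import xml.etree.ElementTree as ET
from datetime import date

# Default namespace used by standard sitemap index files.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def stale_child_sitemaps(index_xml: str, today: date, max_age_days: int = 365):
    """Return (loc, lastmod) pairs for child sitemaps older than max_age_days."""
    root = ET.fromstring(index_xml)
    stale = []
    for sitemap in root.findall("sm:sitemap", NS):
        loc = sitemap.findtext("sm:loc", namespaces=NS)
        lastmod = sitemap.findtext("sm:lastmod", namespaces=NS)
        if lastmod:
            age = (today - date.fromisoformat(lastmod[:10])).days
            if age > max_age_days:
                stale.append((loc, lastmod))
    return stale
```

Run against the live /sitemap_index.xml, a check like this would flag the 'conent_type-sitemap.xml' entry (lastmod 2024-09-13) immediately; the typo itself still needs the manual rename described above.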


Layer 2 Existing Content Optimization

Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.

Build Compliance Framework Resource Pages Linked From /use-cases/audit-and-regulatory-compliance/

Priority 6
Currently: partial. The compliance use case page establishes that Graylog supports compliance reporting but does not: (1) map specific Graylog capabilities to PCI DSS, HIPAA, SOX, GDPR, and NIS2 requirements by control (grl_034, grl_058, grl_069, grl_143), (2) quantify audit preparation time savings with named benchmarks (grl_013, grl_134), (3) address how Graylog's approach differs from GRC tools (grl_019), or (4) include enough content to answer competitive compliance queries about LogRhythm (grl_106) or Sumo Logic (grl_115) limitations.

The /use-cases/audit-and-regulatory-compliance/ page does not map Graylog capabilities to specific compliance frameworks by control number — queries grl_034 (PCI DSS and HIPAA security requirements checklist) and grl_069 (NIS2 compliance reporting) find a general compliance overview page with no framework-specific capability mapping that AI models can extract and cite. The /use-cases/audit-and-regulatory-compliance/ page does not quantify audit preparation time savings — queries grl_013 ('How much time do compliance teams spend on log-based audit prep?') and grl_134 ('How much time and money can automated compliance reporting save per audit cycle?') find no benchmarked savings data on this page. The /use-cases/audit-and-regulatory-compliance/ page does not address the SIEM vs. GRC tool positioning question — query grl_019 ('Difference between SIEM compliance reporting and dedicated GRC tools') requires an explanation Graylog should own but currently does not provide.

Queries affected: grl_004, grl_013, grl_019, grl_034, grl_049, grl_058, grl_069, grl_106, grl_115, grl_129, grl_134, grl_143, grl_148

Create Migration & Deployment Architecture Resource Hub Linked From /products/cloud/

Priority 7
Currently: partial. The /products/cloud/ page establishes that Graylog offers cloud, on-prem, and hybrid deployment but does not: (1) provide a structured decision framework for choosing between deployment modes for different regulated environments (grl_029, grl_065), (2) document a SIEM migration methodology with phases and risk mitigation (grl_036, grl_144), (3) explain Graylog Open vs. Graylog Enterprise capability differences in depth (grl_119), or (4) address data residency and sovereignty specifics for regulated industries (grl_122).

The /products/cloud/ page does not provide a deployment decision framework — queries like 'Cloud SIEM vs. on-prem SIEM vs. hybrid — what are the real differences for a 500-person company?' (grl_017) and 'SIEM platforms with data residency options for companies in regulated industries' (grl_065) find no structured comparison on this page. The /products/cloud/ page makes no reference to SIEM migration complexity or methodology — queries grl_036 (migration evaluation criteria), grl_010 (how hard is legacy SIEM migration), and grl_144 (migration plan template) find only a product description page with no migration guidance content. The /products/cloud/ page does not address data residency requirements by jurisdiction or industry — query grl_029 (data residency and sovereignty for cloud vs. on-prem) and grl_122 (data residency risks for regulated companies) find no named compliance specifics (GDPR, CCPA, FedRAMP) on this page despite deployment flexibility being a key Graylog differentiator.

Queries affected: grl_007, grl_010, grl_014, grl_017, grl_029, grl_036, grl_050, grl_061, grl_065, grl_110, grl_119, grl_122, grl_138, grl_144, grl_145

Create SIEM Pricing Intelligence Resource Linked From /pricing/ to Address Cost Comparison Queries

Priority 8
Currently: partial
The /pricing/ page describes Graylog's pricing model and plan tiers. It does not: (1) explain per-GB vs. per-device vs. flat-rate SIEM pricing models in a vendor-agnostic way (grl_024), (2) document specific hidden costs of Splunk or Datadog that buyers should investigate (grl_103, grl_118), (3) provide a structured 3-year TCO comparison template (grl_140), or (4) give an ROI calculation methodology for switching decisions (grl_126, grl_131). The why-security-teams-are-switching page gestures at cost savings but lacks the numerical specificity buyers need for consensus creation.

The /pricing/ page shows Graylog plan tiers but contains no explanation of how per-GB ingestion pricing (Splunk, Datadog) compares structurally to Graylog's model — buyers asking 'how do SIEM pricing models work' (grl_024) cannot extract a comparative framework from this page. The /pricing/ page does not surface specific hidden-cost scenarios for legacy SIEMs — queries like 'Hidden costs of Splunk Enterprise that IT teams don't expect until year two' (grl_103) and 'Datadog pricing surprises' (grl_118) require named, specific cost trap examples that the /pricing/ page does not provide. The /pricing/ page lacks a structured ROI or payback period calculation that buyers can use to justify migration — queries grl_126 (ROI of switching), grl_131 (payback period), and grl_140 (TCO model) are consensus-creation queries that need calculators or worked examples, not a plan comparison table.

Queries affected: grl_002, grl_024, grl_035, grl_046, grl_059, grl_066, grl_103, grl_118, grl_120, grl_126, grl_131, grl_140, grl_150

Deepen SIEM Threat Detection & SOC Use-Case Framing on /products/security/

Priority 11
Currently: covered
The /products/security/ page covers general SIEM capabilities but does not: (1) call out specific MITRE ATT&CK coverage by technique count or tactic, (2) address 3–5 analyst SOC team sizing explicitly, (3) cite specific competitor weaknesses that match competitor-validation queries (grl_102, grl_104, grl_112, grl_116, grl_121), or (4) include extractable case study language for AI citation on outcome-based queries (grl_128). Marketing prose dominates where structured, quotable claims are needed.

The /products/security/ page makes no reference to MITRE ATT&CK framework coverage by technique count, tactic area, or percentage — buyers asking about out-of-box MITRE coverage (grl_042, grl_052) cannot find a citable answer from this page. The /products/security/ page does not address team-size use cases — queries about '3–5 analyst SOCs' (grl_062) and 'mid-market companies with small security teams' (grl_045) find no content that explicitly names this buyer segment or explains why Graylog's detection content load is appropriate for lean teams. The /products/security/ page contains no named competitor comparison claims — validation queries about Splunk problems (grl_102), Elastic frustrations (grl_104), and Datadog security gaps (grl_112) find no Graylog-authored positioning that AI models can extract and cite.

Queries affected: grl_001, grl_012, grl_015, grl_027, grl_030, grl_042, grl_045, grl_052, grl_062, grl_068, grl_102, grl_104, grl_112, grl_116, grl_121, grl_128, grl_139

Add Search Performance Benchmarks and Scalability Comparisons to /feature/scalable-architecture/

Priority 13
Currently: covered
The /feature/scalable-architecture/ page claims Graylog scales to enterprise volumes but does not: (1) publish specific query latency benchmarks at defined ingestion volumes (grl_033, grl_109), (2) document how Graylog handles scaling past 200 GB/day without infrastructure team intervention (grl_022, grl_056), (3) compare Graylog total cost at scale vs. self-managed Elastic Stack (grl_135), or (4) provide a vendor comparison scorecard that buyers can use for formal evaluation (grl_149).

The /feature/scalable-architecture/ page contains no quantified search performance benchmarks at specific ingestion volumes — buyers asking 'What search performance benchmarks should I request from SIEM vendors for environments pushing 300+ GB/day?' (grl_033) and 'Graylog performance at high volume — what do users say about search speed past 200 GB/day?' (grl_109) find assertions without data. The /feature/scalable-architecture/ page does not document Graylog's operational overhead curve at scale — queries grl_044 ('Can Graylog scale without constant infrastructure upgrades?') and grl_056 ('looking for a SIEM that handles 500 GB/day without a dedicated infrastructure team') find no specific evidence of autonomous scaling capability. The /feature/scalable-architecture/ page makes no reference to a self-managed Elastic Stack vs. Graylog Cloud total cost comparison — query grl_135 ('Total cost of running Elastic Stack in-house vs. managed SIEM like Graylog Cloud') requires quantified comparison data not present on this page.

Queries affected: grl_022, grl_033, grl_044, grl_056, grl_109, grl_113, grl_135, grl_149

Create Cloud Log Ingestion Technical Reference and Evaluation Framework on /feature/data-collection/

Priority 14
Currently: partial
The /feature/data-collection/ page asserts broad log source support but does not: (1) name specific cloud log sources by integration method (Graylog Sidecar for Linux, Windows Event Log forwarding, Kubernetes DaemonSet, AWS CloudWatch integration — grl_031, grl_142), (2) address Datadog SIEM's specific ingestion gaps that buyers should weigh when evaluating it as a SIEM vs. Graylog (grl_105), (3) explain the log coverage vs. cost tradeoff that forces teams into blind spots (grl_006, grl_130), or (4) document what went wrong in Splunk migrations and how to avoid those errors (grl_125).

The /feature/data-collection/ page names log sources at a category level ('cloud, on-prem, containers') but does not name specific integrations by cloud provider and platform — queries grl_031 ('What questions should I ask SIEM vendors about log ingestion for cloud-native environments?') and grl_018 ('How do modern SIEMs handle log ingestion from Kubernetes and cloud services?') find no vendor-specific, named-integration content on this page. The /feature/data-collection/ page does not address the log coverage vs. budget tradeoff that creates security blind spots — queries grl_006 ('How do security teams handle log blind spots when they can't afford to ingest everything?') and grl_130 ('Risk argument for investing in full log coverage vs. cutting SIEM costs by dropping log sources') require a cost-tiering or selective ingestion strategy discussion not present on this page. The /feature/data-collection/ page contains no documentation on Splunk migration ingestion considerations — query grl_125 ('Common mistakes companies make when migrating from Splunk to a new SIEM platform') is a validation-stage query that Graylog should answer authoritatively given that its primary competitive displacement target is Splunk, but no migration-specific ingestion guidance exists.

Queries affected: grl_006, grl_018, grl_031, grl_105, grl_125, grl_130, grl_142

Add Alert Tuning Benchmark Data and Comparative Claims to /feature/events-and-alerts/

Priority 15
Currently: covered
The /feature/events-and-alerts/ page describes Graylog's alerting system but provides no: (1) false positive reduction benchmarks or customer-reported alert volume reduction metrics, (2) specific comparison against Splunk's alert tuning complexity for queries like grl_026, (3) SOC productivity metrics for leadership justification (grl_133), or (4) named alert correlation methodology that distinguishes Graylog from rule-based legacy SIEM alerting.

The /feature/events-and-alerts/ page describes alert functionality (create alerts, set thresholds) but provides no false positive reduction benchmarks — responses to query grl_057 ('SIEM platforms with the best alert tuning and noise reduction for SOCs') cannot cite Graylog because no quantitative alerting outcome data appears on this page. The /feature/events-and-alerts/ page does not position Graylog's alert tuning approach against Splunk or ArcSight — query grl_026 ('How do modern SIEMs reduce alert noise compared to older platforms like Splunk or ArcSight?') finds no comparison framing on this page despite it being the primary alerting feature page. The /feature/events-and-alerts/ page does not address alert quality metrics for leadership (MTTR, analyst hours per alert, false positive rate trends) — query grl_133 ('What metrics prove to leadership that a new SIEM actually reduced alert fatigue?') gets no citable evidence from this page.

Queries affected: grl_026, grl_038, grl_057, grl_108, grl_133

Add Investigation Speed Benchmarks and Log Management Use-Case Framing to /feature/search/

Priority 16
Currently: covered
The /feature/search/ and /use-cases/centralized-log-management/ pages describe Graylog's search and log management capabilities at a feature level but lack: (1) specific investigation speed benchmarks vs. manual log correlation (grl_137: MTTI benchmarks), (2) chain-of-custody and log retention specifics for compliance teams (grl_043), (3) forensic log search benchmarks at terabyte scale (grl_060), and (4) business consolidation ROI framing for CFO justification (grl_127). Content depth is rated at 0.5 by the routing engine — below the 0.6 threshold needed to answer these queries.

The /feature/search/ page describes Graylog's search interface and query language but provides no MTTI (Mean Time to Investigate) benchmarks — queries grl_137 ('Mean time to investigate benchmarks — how do modern SIEMs compare to manual log searching?') and grl_047 ('fastest SIEM platforms for log search for incident investigations') find no citable performance evidence. The /feature/search/ page does not address forensic log search requirements at terabyte-scale retained data — query grl_060 ('fastest SIEM platforms for forensic log search across terabytes of retained data') finds no retention architecture or search performance data on this page despite it being the primary search feature page. The /feature/search/ page does not surface log consolidation business value for non-security stakeholders — query grl_127 ('How to justify consolidating log management and SIEM to a CFO who thinks the current setup works fine') needs cost-per-log-event and tool-sprawl reduction data that does not appear on this page.

Queries affected: grl_003, grl_005, grl_016, grl_041, grl_043, grl_047, grl_060, grl_124, grl_127, grl_137, grl_141

Layer 3 Narrative Intelligence Opportunities

Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.

NIO #1: SOAR & Automation Content Void Across All Buying Stages
Gap Type: Content Type Deficit — Graylog publishes no substantive content on SOAR, incident response automation, or automated playbooks. The SOAR & Incident Response Automation feature shows 9.1% visibility (1/11 queries) and 0.0% win rate across all 11 L3 queries, with all matched inventory assessed as 'thin.' Lean SOC teams make automation capability a category-qualifying criterion — absence from this topic eliminates Graylog before shortlisting begins.
Critical

Staffing shortage is the defining pressure for the SOC Manager persona, who controls 31 queries in this audit — yet Graylog has no content articulating how its automation reduces analyst workload. Competitors like Splunk and Datadog publish dedicated SOAR and playbook pages that AI models cite repeatedly for queries about automation-driven SOC efficiency. Because Graylog is absent from all 11 queries covering SOAR across problem identification, solution exploration, requirements building, shortlisting, and consensus creation, buyers who prioritize automation (a deal-qualifying criterion for understaffed teams) complete their evaluation frameworks without ever encountering Graylog's incident response capabilities.

Query Cluster
IDs: grl_008, grl_020, grl_028, grl_032, grl_051, grl_067, grl_076, grl_094, grl_117, grl_132, grl_146
“How are small SOC teams automating incident response to make up for staffing shortages?”
“Which SIEMs have the best built-in automation for understaffed SOC teams running 24/7?”
“SIEM with automated playbooks for common alert triage — need to free up analyst time”
“LogRhythm SOAR vs Splunk SOAR — which provides better automation for common incident response?”
Blueprint
  • On-Domain: Create a dedicated SOAR & Incident Response Automation landing page (graylog.org/feature/incident-response-automation/) covering Graylog's built-in playbook engine, alert escalation workflows, and native integrations with ticketing systems (Jira, ServiceNow, PagerDuty).
  • On-Domain: Publish a 'SOC Automation Playbook Guide' blog post series targeting small teams (2–5 analysts), demonstrating specific use cases: credential stuffing auto-containment, failed login threshold escalation, and phishing triage automation.
  • On-Domain: Add an 'Automation ROI Calculator' or benchmarked case study to the use-cases section showing analyst hours saved per week in a 3–5 analyst SOC running Graylog automation.
  • On-Domain: Create a requirements-building resource: 'Must-Have vs. Nice-to-Have SOAR Features for Mid-Market SOCs' — a structured checklist that positions Graylog's native automation alongside integration-based SOAR options.
  • On-Domain: Produce a Graylog vs. Splunk SOAR comparison page specifically addressing operational overhead for lean security teams.
  • Off-Domain: Submit a guest post to a SOC practitioner publication (SecurityWeek, Dark Reading, CISO Series) on 'How Mid-Market SOCs Can Close the Staffing Gap With SIEM Automation' — authored by a Graylog practitioner or customer.
  • Off-Domain: Develop a G2 review campaign specifically soliciting SOC Manager feedback on alert automation and playbook workflows to build third-party citation weight for this topic.
  • Off-Domain: Pursue co-marketing with SOAR integration partners (PagerDuty, Jira, Cortex XSOAR) to produce joint content that names Graylog as the SIEM layer in their automation stacks.
  • Off-Domain: Submit Graylog to MITRE ATT&CK evaluation coverage reports and reference those results in all automation content to build third-party authority.
Platform Acuity

ChatGPT (medium): ChatGPT cites vendor-produced automation guides and product pages for SOAR queries. Graylog needs a named, crawlable SOAR page with specific capability claims (e.g., 'automated playbooks for credential stuffing response') that ChatGPT can extract as a concrete product differentiator. Perplexity (high): Perplexity's live search surfaces pages with structured automation workflow descriptions, numbered steps, and third-party community citations. A blog post with a 'Step-by-step: automated alert triage in Graylog' format is likely to be cited directly for staffing-shortage queries.

NIO #2: UEBA & Behavioral Analytics Content Void Cedes Insider Threat Queries to Exabeam
Gap Type: Content Type Deficit — Graylog has no content articulating UEBA or behavioral analytics capabilities. The User & Entity Behavior Analytics (UEBA) feature shows 0.0% visibility (0/8 queries) and 0.0% win rate. Exabeam wins 3 of the 8 queries outright; Splunk wins 2. All 8 queries are routed to L3 because content inventory is rated 'thin' — no substantive UEBA page exists.
High

User and entity behavior analytics is a standard shortlisting criterion for security engineers evaluating SIEMs to detect insider threats and compromised credentials — yet Graylog has no content surface where AI models can find and extract UEBA claims. Exabeam has built its entire brand around behavioral analytics, and Splunk's UEBA product generates extensive third-party reviews that AI models cite by default. Because security engineers (who own requirements building and technical shortlisting) see 0.0% Graylog visibility across 8 UEBA queries, Graylog's behavioral analytics capabilities — whether native or partner-integrated — are functionally invisible during the stage where evaluation criteria are written.

Query Cluster
IDs: grl_011, grl_023, grl_037, grl_055, grl_084, grl_099, grl_114, grl_147
“What role does behavioral analytics play in reducing false positive security alerts?”
“Best SIEM solutions with UEBA for detecting insider threats and compromised credentials”
“What UEBA capabilities should I require in a SIEM for detecting compromised accounts and lateral movement?”
“Splunk UEBA vs Exabeam behavioral analytics — which catches more real insider threats?”
Blueprint
  • On-Domain: Create a UEBA & Behavioral Analytics capability page (graylog.org/feature/behavioral-analytics/ or similar) articulating how Graylog's risk scoring and anomaly detection identify compromised credentials and insider threats, with specific detection scenario examples.
  • On-Domain: Publish an explainer blog post: 'Do You Need Standalone UEBA or Is SIEM-Integrated Behavioral Analytics Enough?' — positioning Graylog's integrated approach as the mid-market answer to the standalone UEBA cost and complexity.
  • On-Domain: Create a UEBA requirements checklist resource: 'What Behavioral Analytics Capabilities Should a Mid-Market SIEM Include?' — structured as a downloadable or inline evaluation framework that specifies detections Graylog natively provides.
  • On-Domain: Produce a Graylog vs. Exabeam comparison page specifically on behavioral analytics for insider threat detection, with honest coverage of where Graylog's approach differs from Exabeam's dedicated UEBA product.
  • Off-Domain: Pursue G2 review solicitation specifically asking customers to describe Graylog's behavioral anomaly detection use cases to build third-party citation volume on this topic.
  • Off-Domain: Submit practitioner content to security blogs or podcasts on 'Insider Threat Detection Without a Dedicated UEBA Platform' — authored by a security engineer with Graylog production experience.
  • Off-Domain: Engage Forrester or ESG for a brief on integrated UEBA in SIEM to establish third-party analytical authority for queries like grl_147 (UEBA comparison matrix) and grl_084 (Splunk vs Exabeam).
Platform Acuity

ChatGPT (medium): ChatGPT's comparison responses for UEBA queries consistently named Exabeam and Splunk by brand. Graylog needs a named UEBA page with specific detection capability claims (e.g., 'detects lateral movement via user behavior baselines') that ChatGPT can attribute to Graylog specifically. Perplexity (high): Perplexity surfaces structured capability pages and review-platform comparison data for UEBA queries. A Graylog UEBA capability page with self-contained claim paragraphs (not requiring JavaScript to render) would be immediately citable for shortlisting and requirements-building queries.
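The "not requiring JavaScript to render" caveat is checkable before any new capability page ships. Below is a minimal, stdlib-only sketch of the kind of pre-publish gate a content team could run against a page's raw HTML (i.e., what a non-JS crawler receives); the 150-word threshold and the heuristic itself are assumptions for illustration, not numbers from this audit:

```python
from html.parser import HTMLParser


class VisibleTextExtractor(HTMLParser):
    """Collects the text a non-JS crawler would see, skipping script/style/noscript."""

    SKIP = {"script", "style", "noscript", "template"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # >0 while inside a tag whose text is not rendered
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())


def static_word_count(html: str) -> int:
    """Words present in the static HTML, without executing any JavaScript."""
    parser = VisibleTextExtractor()
    parser.feed(html)
    return sum(len(chunk.split()) for chunk in parser.chunks)


def looks_client_rendered(html: str, min_words: int = 150) -> bool:
    """Heuristic: a commercial page whose static HTML carries fewer than
    min_words of body text likely depends on client-side rendering."""
    return static_word_count(html) < min_words
```

Run against each /feature/ and /products/ URL's raw HTML (fetched without a headless browser), this approximates what GPTBot, ClaudeBot, or PerplexityBot can index; pages that trip the threshold are candidates for server-side rendering or static fallbacks.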

NIO #3: Dashboard & Usability Content Gap Surrenders IT Ops Persona Entirely
Gap Type: Content Type Deficit — Graylog has no buyer-facing content demonstrating dashboard usability for non-specialist IT operations teams. The Dashboards & Data Visualization feature shows 0.0% visibility (0/5 queries) and 0.0% win rate. Datadog and Sumo Logic win by default on all 5 queries, leveraging their dashboard-centric marketing and user experience content.
High

The VP of IT Operations persona (33 queries in this audit, 12.1% visibility) evaluates SIEMs primarily on operational usability — whether the platform requires dedicated SIEM expertise or can be operated by generalist IT staff. Graylog has no content addressing this dimension: no 'time to useful dashboard' benchmarks, no IT ops persona case studies, and no visual or descriptive content showing non-specialist users operating the platform. Datadog wins comparison queries for dashboard usability because its entire brand leads with visualization. Until Graylog publishes content that names and addresses the IT ops audience's specific usability concerns, it cedes these 5 queries and the VP IT Ops discovery-stage attention to Datadog and Sumo Logic.

Query Cluster
IDs: grl_025, grl_040, grl_053, grl_082, grl_123
“What should I look for in SIEM dashboards if my ops team isn't deeply technical?”
“What makes a SIEM dashboard actually useful for IT ops teams that aren't security specialists?”
“Most user-friendly SIEM platforms for IT ops teams that aren't security specialists”
“Datadog vs Sumo Logic dashboards — which SIEM has more intuitive visualization for IT operations?”
Blueprint
  • On-Domain: Expand the graylog.org/feature/reports-and-dashboards/ page with IT ops persona-specific content: pre-built dashboard gallery, setup time benchmarks ('useful dashboards in under 2 hours'), and explicit callouts for non-security-specialist operators.
  • On-Domain: Create a use case page: 'Graylog for IT Operations Teams: Security Visibility Without Security Expertise' — covering dashboard templates, alert prioritization views, and common IT ops monitoring use cases.
  • On-Domain: Publish a blog post: 'How Long Does It Take to Get Useful SIEM Dashboards? Graylog vs. Splunk vs. Datadog' with honest setup benchmarks and a screenshot-driven walkthrough of Graylog's dashboard creation experience.
  • On-Domain: Add a 'Dashboard Quick-Start Guide' documentation page linked from the marketing site, demonstrating a non-specialist IT ops user standing up a threat monitoring dashboard in under 30 minutes.
  • Off-Domain: Commission an independent SIEM usability study or analyst brief covering 'Time-to-Dashboard' benchmarks across mid-market SIEMs — position Graylog as fastest to operational visibility for IT generalists.
  • Off-Domain: Pursue Gartner Peer Insights and G2 review campaigns specifically requesting IT ops manager feedback on dashboard usability to build review-based citation authority.
Platform Acuity

ChatGPT (medium): For grl_082 (Datadog vs Sumo Logic dashboards), ChatGPT named Datadog as the winner citing its analytics-first design. Graylog needs content with named usability claims extractable by ChatGPT — specific dashboard templates, setup time data, and persona-specific framing for IT generalists. Perplexity (high): Perplexity surfaces visual product pages and review-based comparisons for dashboard queries. Screenshot-rich pages and G2 reviews specifically mentioning dashboard usability would be immediately receptive. Self-contained paragraphs with benchmark data (e.g., 'Graylog users report X hours to first useful dashboard') are highly citable.

NIO #4: API Security Product Invisible Despite Competitive Capability
Gap Type: Content Type Deficit — Graylog has a dedicated API Security product with 50.0% visibility (3/6 queries; small sample, sample_size_flag=true), one of the few features where thin content does exist — yet all 5 L3 queries on API security are routed to net-new content because the content inventory is too thin to close these gaps. The CISO (decision_maker with veto power) is the primary persona for 3 of the 5 queries.
Critical

Graylog's API Security module is a genuine product differentiator in the SIEM market — most SIEM competitors do not offer native API threat detection. Yet Graylog has almost no content establishing this capability for buyers who are just beginning to understand API visibility risk. Queries like 'What risks do companies face when they have zero visibility into API traffic?' (grl_009) and 'Graylog API Security — is it mature enough for production use or still early-stage?' (grl_111) go unanswered by Graylog content, with the latter representing a CISO-level validation query where Graylog's own product credibility is in question. The commercial impact is high: the pain point is acute (security teams have no visibility into API traffic, leaving data exfiltration undetected), CISOs control the API security budget, and the 5 uncontested queries span the entire buying journey from problem identification through consensus creation.

Query Cluster
IDs: grl_009, grl_021, grl_039, grl_111, grl_136
“What risks do companies face when they have zero visibility into their API traffic?”
“How do API security tools differ from traditional SIEM for detecting data exfiltration through APIs?”
“How should I evaluate API security capabilities when they're bundled into a SIEM platform?”
“Graylog API Security — is it mature enough for production use or still early-stage?”
Blueprint
  • On-Domain: Substantially expand the Graylog API Security product page with: specific threat scenarios detected (credential stuffing via API, shadow API discovery, PII data exfiltration), detection methodology, and customer deployment examples with named outcomes.
  • On-Domain: Publish a buyer's guide: 'Evaluating API Security in a SIEM Platform: What to Require and What to Watch Out For' — positioning Graylog's native API monitoring against bolt-on API security tools.
  • On-Domain: Create a problem-identification blog post: 'What Happens When You Have Zero Visibility Into Your API Traffic: Real Attack Scenarios' — a vendor-neutral educational piece that Graylog can legitimately own because it links to a strong product.
  • On-Domain: Publish an ROI and risk justification resource: 'Making the Business Case for API Security Monitoring' — structured for CISO audiences presenting to boards that don't yet see the risk.
  • On-Domain: Create a validation-stage FAQ: 'Is Graylog API Security Production-Ready? Common Questions Answered' — directly addressing the maturity concerns surfaced in grl_111.
  • Off-Domain: Submit Graylog API Security to industry analyst (Forrester Wave, Gartner MQ) coverage for API security and SIEM bundled offerings to establish third-party validation.
  • Off-Domain: Pursue co-authored content with API security practitioners or red team firms documenting real API attack scenarios that Graylog's module detected — provides third-party citation authority for grl_009 and grl_021 type queries.
Platform Acuity

ChatGPT (high): ChatGPT cites product-specific capability pages for feature evaluation queries. grl_111 ('Is Graylog API Security mature enough?') is a direct product query — a well-structured Graylog API Security FAQ or product page with explicit maturity claims and customer references would be directly citable. Perplexity (high): Perplexity uses live search to surface current product pages and third-party reviews. An expanded Graylog API Security page with production customer references, specific detection scenarios, and third-party analyst mentions would rank highly for API-visibility pain-point queries given the low competition in this content niche.

NIO #5: No Comparison Page Library Leaves 26 High-Intent Queries Without Graylog
Gap Type: Structural Gap — Graylog has no comparison pages targeting competitor-versus-competitor queries and insufficient landing page or blog content for Shortlisting buying_job queries. Across 26 L3 queries, the routing rationale is 'AFFINITY OVERRIDE: buying_job=Comparison requires page types [Comparison] but found [feature, landing_page, product]' — the structural absence of comparison-format content means Graylog cannot appear even when it has relevant product capabilities.
Critical

The Comparison buying job represents the highest commercial intent in the buying journey — buyers actively naming specific vendors and asking AI to help them choose. Graylog has 15.6% visibility (5/32 comparison queries visible) and a 60.0% conditional win rate (3/5 visible) — but the 26 queries it cannot appear in are lost entirely to Splunk, Elastic Security, Datadog, and Sumo Logic, which each maintain extensive comparison page libraries. The root cause is structural: without dedicated comparison pages or buyer-oriented landing pages that use the comparison format, AI models cannot place Graylog into responses to questions like 'Splunk vs Elastic Security for threat detection' — even though Graylog is a direct alternative. Creating a comparison content library would unlock the buying stage where Graylog's win rate is highest when it does appear.

Query Cluster
IDs: grl_048, grl_064, grl_071, grl_072, grl_073, grl_074, grl_075, grl_077, grl_078, grl_079, grl_080, grl_081, grl_083, grl_086, grl_087, grl_088, grl_089, grl_090, grl_091, grl_093, grl_095, grl_096, grl_097, grl_098, grl_100, grl_101
“Elastic Security vs Splunk for threat detection correlation rules and MITRE ATT&CK coverage”
“Splunk vs Datadog SIEM total cost of ownership for a 400-person company ingesting 250 GB/day”
“Compare SIEM pricing across Splunk, Datadog, Elastic, and Sumo Logic for a 500-person company”
“LogRhythm vs Splunk vs Graylog — which SIEM has the best out-of-box detection content?”
Blueprint
  • On-Domain: Build a /compare/ or /vs/ hub page with links to all Graylog comparison pages — signals to AI models that Graylog has a systematic comparison content strategy.
  • On-Domain: Refresh and date-stamp the existing stale comparison pages (graylog-vs-logrhythm-siem, graylog-vs-microsoft-sentinel-siem) and add 2025–2026 pricing/capability updates.
  • On-Domain: Create net-new Graylog vs. [Competitor] pages for: Splunk, Elastic Security, Datadog, Sumo Logic, and Exabeam — each structured with feature comparison tables, pricing comparison, ideal use case, and migration guidance.
  • On-Domain: Create 'intercept' comparison pages for competitor-vs-competitor queries where Graylog belongs in the conversation (e.g., 'Splunk vs. Elastic Security — and Why Mid-Market Teams Choose Graylog Instead') targeting grl_071, grl_075, grl_086, grl_087 type queries.
  • On-Domain: Produce landing pages for cloud log ingestion shortlisting queries (grl_048, grl_064): 'Graylog Cloud Log Ingestion: AWS CloudTrail, Kubernetes, Azure Monitor Without Custom Parsers' — with specific integration names and zero-config capabilities listed.
  • On-Domain: Create a SIEM pricing comparison resource: 'How Splunk, Datadog, Elastic, and Sumo Logic Price vs. Graylog — Side-by-Side' targeting grl_087, grl_095, grl_096 queries.
  • Off-Domain: Submit Graylog to G2, Capterra, and Gartner Peer Insights category grids that include the 'alternatives' tab — AI models heavily cite review-platform alternatives sections for comparison queries.
  • Off-Domain: Pursue analyst report inclusions (Forrester Wave for Security Analytics, Gartner Critical Capabilities for SIEM) to generate third-party comparison citations that AI models draw on for grl_079 and grl_096 type queries.
  • Off-Domain: Develop partnerships with SIEM migration services or consultancies who can produce independent 'Splunk vs. Graylog migration' case content that Graylog does not need to author directly.
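The refresh-and-date-stamp work in the blueprint above can be kept honest with a recurring staleness check against the site's sitemap. A minimal stdlib-only sketch; the 180-day threshold and the example URLs are assumptions for illustration:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta

# Standard sitemap namespace per the Sitemaps protocol.
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def stale_urls(sitemap_xml: str, today: datetime, max_age_days: int = 180):
    """Return (url, lastmod) pairs whose <lastmod> is older than max_age_days."""
    root = ET.fromstring(sitemap_xml)
    stale = []
    for url in root.findall("sm:url", SITEMAP_NS):
        loc = url.findtext("sm:loc", namespaces=SITEMAP_NS)
        lastmod = url.findtext("sm:lastmod", namespaces=SITEMAP_NS)
        if loc and lastmod:
            # <lastmod> may carry a full timestamp; the date prefix is enough here.
            modified = datetime.fromisoformat(lastmod[:10])
            if today - modified > timedelta(days=max_age_days):
                stale.append((loc, lastmod))
    return stale
```

Wired into CI or a monthly cron against the live sitemap, this surfaces comparison pages drifting past the freshness window before AI models start treating their claims as outdated.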
Platform Acuity

ChatGPT (medium): ChatGPT defaults to citing vendors with well-known comparison page libraries (Splunk, Datadog) for competitor-vs-competitor queries. For grl_071 (Elastic vs. Splunk), ChatGPT did not cite any Graylog content. ChatGPT responds to authoritative third-party comparisons (analyst reports, G2 category grids) more than to vendor-written comparison pages — the off-domain strategy is important here. Perplexity (high): Perplexity's live search consistently surfaces structured comparison pages with feature tables, pricing rows, and verdict summaries. A Graylog comparison page with a clear H2 structure ('Graylog vs. Splunk: Pricing', 'Graylog vs. Splunk: Threat Detection', etc.) and a self-contained comparison table would be immediately indexable and citable for the 26 queries in this cluster.

Unified Priority Ranking

All recommendations across all three layers, ranked by commercial impact × implementation speed.

  • 1

    Possible Client-Side Rendering on Product and Feature Pages

    Multiple product pages (/products/enterprise/, /products/source-available/) and feature pages returned minimal visible body text through our rendering pipeline, with page content appearing to load dynamically via JavaScript. The rendered output consisted primarily of metadata, analytics scripts, and brief schema.org descriptions rather than the full page body content visible in a browser.

    Technical Fix · Engineering · Potentially all product pages (/products/*), feature pages (/feature/*), and use case pages (/use-cases/*) — approximately 28 pages
  • 2

    Stale Competitor Comparison Pages

    Two of five competitor Comparison pages have not been updated in over 8 months: Graylog vs. LogRhythm (last modified 2025-07-14) and Graylog vs. Microsoft Sentinel (last modified 2025-07-14). These are high-value pages that AI models reference heavily when answering vendor evaluation queries.

    Technical Fix · Content · 2 Comparison pages: /graylog-vs-LogRhythm-siem/ and /graylog-vs-microsoft-sentinel-siem/
  • 3

    API Security Product Invisible Despite Competitive Capability

    Graylog has a dedicated API Security product with 50.0% visibility (3/6 queries; small sample, sample_size_flag=true) where thin content does exist. Even so, all 5 L3 queries on API security are routed to net-new content because the existing inventory is too thin to close these gaps. The CISO (decision_maker with veto power) is the primary persona for 3 of the 5 queries.

    New Content · Content · 5 queries affecting personas: Chief Information Security Officer, Senior Security Engineer
  • 4

    No Comparison Page Library Leaves 26 High-Intent Queries Without Graylog

    Graylog has no Comparison pages targeting competitor-versus-competitor queries and insufficient landing page or blog content for Shortlisting buying_job queries. Across 26 L3 queries, the routing rationale is 'AFFINITY OVERRIDE: buying_job=Comparison requires page types [Comparison] but found [feature, landing_page, product]' — the structural absence of Comparison-format content means Graylog cannot appear even when it has relevant product capabilities.

    New Content · Content · 26 queries affecting personas: Chief Information Security Officer, Senior Security Engineer, SOC Manager, VP of IT Operations, Director of Compliance & Risk
  • 5

    SOAR & Automation Content Void Across All Buying Stages

    Graylog publishes no substantive content on SOAR, incident response automation, or automated playbooks. The SOAR & Incident Response Automation feature shows 9.1% visibility (1/11 queries) and 0.0% win rate across all 11 L3 queries, with all matched inventory assessed as 'thin.' Lean SOC teams make automation capability a category-qualifying criterion — absence from this topic eliminates Graylog before Shortlisting begins.

    New Content · Content · 11 queries affecting personas: SOC Manager, Chief Information Security Officer, Senior Security Engineer
  • 6

    Build Compliance Framework Resource Pages Linked From /use-cases/audit-and-regulatory-compliance/

    The /use-cases/audit-and-regulatory-compliance/ page does not map Graylog capabilities to specific compliance frameworks by control number — queries grl_034 (PCI DSS and HIPAA security requirements checklist) and grl_069 (NIS2 compliance reporting) find a general compliance overview page with no framework-specific capability mapping that AI models can extract and cite.

    Content Optimization → New Content · Content · 13 queries, personas: Director of Compliance & Risk, Chief Information Security Officer
  • 7

    Create Migration & Deployment Architecture Resource Hub Linked From /products/cloud/

    The /products/cloud/ page does not provide a deployment decision framework — queries like 'Cloud SIEM vs. on-prem SIEM vs. hybrid — what are the real differences for a 500-person company?' (grl_017) and 'SIEM platforms with data residency options for companies in regulated industries' (grl_065) find no structured Comparison on this page.

    Content Optimization → New Content · Content · 15 queries, personas: VP of IT Operations, Director of Compliance & Risk, Chief Information Security Officer
  • 8

    Create SIEM Pricing Intelligence Resource Linked From /pricing/ to Address Cost Comparison Queries

    The /pricing/ page shows Graylog plan tiers but contains no explanation of how per-GB ingestion pricing (Splunk, Datadog) compares structurally to Graylog's model — buyers asking 'how do SIEM pricing models work' (grl_024) cannot extract a comparative framework from this page.

    Content Optimization → New Content · Content · 13 queries, personas: Chief Information Security Officer, VP of IT Operations
  • 9

    Dashboard & Usability Content Gap Surrenders IT Ops Persona Entirely

    Graylog has no buyer-facing content demonstrating dashboard usability for non-specialist IT operations teams. The Dashboards & Data Visualization feature shows 0.0% visibility (0/5 queries) and 0.0% win rate. Datadog and Sumo Logic win by default on all 5 queries, leveraging their dashboard-centric marketing and user experience content.

    New Content · Content · 5 queries affecting personas: VP of IT Operations, SOC Manager
  • 10

    UEBA & Behavioral Analytics Content Void Cedes Insider Threat Queries to Exabeam

    Graylog has no content articulating UEBA or behavioral analytics capabilities. The User & Entity Behavior Analytics (UEBA) feature shows 0.0% visibility (0/8 queries) and 0.0% win rate. Exabeam wins 3 of the 8 queries outright; Splunk wins 2. All 8 queries are routed to L3 because content inventory is rated 'thin' — no substantive UEBA page exists.

    New Content · Content · 8 queries affecting personas: Senior Security Engineer, SOC Manager, Chief Information Security Officer
  • 11

    Deepen SIEM Threat Detection & SOC Use-Case Framing on /products/security/

    The /products/security/ page makes no reference to MITRE ATT&CK framework coverage by technique count, tactic area, or percentage — buyers asking about out-of-box MITRE coverage (grl_042, grl_052) cannot find a citable answer from this page.

    Content Optimization · Content · 17 queries, personas: Chief Information Security Officer, SOC Manager, Senior Security Engineer
  • 12

    Schema Markup Cannot Be Assessed — Manual Verification Recommended

    Our analysis method processes rendered page content rather than raw HTML source, so JSON-LD structured data blocks are not visible. We detected basic WebPage and BreadcrumbList schema from page metadata on several pages, but cannot determine whether product pages carry Product schema, Comparison pages carry appropriate schema, or blog posts carry Article schema with required fields populated.

    Technical Fix · Engineering · All 42 inventoried pages — schema coverage could not be assessed for any page
  • 13

    Add Search Performance Benchmarks and Scalability Comparisons to /feature/scalable-architecture/

    The /feature/scalable-architecture/ page contains no quantified search performance benchmarks at specific ingestion volumes — buyers asking 'What search performance benchmarks should I request from SIEM vendors for environments pushing 300+ GB/day?' (grl_033) and 'Graylog performance at high volume — what do users say about search speed past 200 GB/day?' (grl_109) find assertions without data.

    Content Optimization → New Content · Content · 8 queries, personas: VP of IT Operations, Senior Security Engineer
  • 14

    Create Cloud Log Ingestion Technical Reference and Evaluation Framework on /feature/data-collection/

    The /feature/data-collection/ page names log sources at a category level ('cloud, on-prem, containers') but does not name specific integrations by cloud provider and platform — queries grl_031 ('What questions should I ask SIEM vendors about log ingestion for cloud-native environments?') and grl_018 ('How do modern SIEMs handle log ingestion from Kubernetes and cloud services?') find no vendor-specific, named-integration content on this page.

    Content Optimization → New Content · Content · 7 queries, personas: Senior Security Engineer, VP of IT Operations
  • 15

    Add Alert Tuning Benchmark Data and Comparative Claims to /feature/events-and-alerts/

    The /feature/events-and-alerts/ page describes alert functionality (create alerts, set thresholds) but provides no false positive reduction benchmarks — query grl_057 ('SIEM platforms with the best alert tuning and noise reduction for SOCs') cannot cite Graylog because no quantitative alerting outcome data appears on this page.

    Content Optimization · Content · 5 queries, personas: SOC Manager, Chief Information Security Officer
  • 16

    Add Investigation Speed Benchmarks and Log Management Use-Case Framing to /feature/search/

    The /feature/search/ page describes Graylog's search interface and query language but provides no MTTI (Mean Time to Investigate) benchmarks — query grl_137 ('Mean time to investigate benchmarks — how do modern SIEMs compare to manual log searching?') and grl_047 ('fastest SIEM platforms for log search for incident investigations') find no citable performance evidence.

    Content Optimization · Content · 11 queries, personas: SOC Manager, Senior Security Engineer, VP of IT Operations, Director of Compliance & Risk
  • 17

    Meta Descriptions and OG Tags Cannot Be Assessed — Manual Verification Recommended

    Meta descriptions, Open Graph tags, and Twitter Card markup are embedded in raw HTML and are not visible through rendered content analysis. While some meta descriptions were captured from schema.org data (e.g., the pricing page description mentioning plan Comparison), we cannot confirm whether all pages have unique, descriptive meta tags and properly configured social preview tags.

    Technical Fix · Content · All pages — meta descriptions and OG tags not assessable from rendered output
  • 18

    No Explicit AI Crawler Directives in robots.txt

    The robots.txt file contains only a wildcard user-agent rule with an empty Disallow directive. There are no explicit rules for AI-specific crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, Bytespider). All crawlers are implicitly allowed, which is the desired state for AI visibility — but the absence of explicit directives means Graylog has not made a deliberate policy decision about AI crawler access.

    Technical Fix · Engineering · Site-wide crawler access policy (robots.txt)
  • 19

    Sitemap Index Contains Probable Typo in Child Sitemap URL

    The sitemap index at /sitemap_index.xml references a child sitemap named 'conent_type-sitemap.xml' (missing the 't' in 'content'). This appears to be a typo. The child sitemap's lastmod date is 2024-09-13, suggesting it has not been updated in over 17 months.

    Technical Fix · Engineering · Sitemap index (/sitemap_index.xml) — 1 child sitemap affected
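The deliberate-policy version of recommendation 18 can be sketched directly in robots.txt. The crawler names are the ones listed in the finding; the allow-all stance is an assumption about Graylog's intended policy, not a requirement:

```text
# Explicit AI-crawler policy (illustrative).
# Behavior matches the current implicit allow-all, but listing each agent
# records a deliberate decision rather than a default.
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Bytespider
Allow: /

# Existing wildcard rule for all other crawlers
User-agent: *
Allow: /
```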

Workstream Mapping

All three workstreams can start this week.

Engineering / DevOps

Layer 1 — Technical Fixes
Timeline: Days to 2 weeks
  • Stale Competitor Comparison Pages
  • Possible Client-Side Rendering on Product and Feature Pages
  • Schema Markup Cannot Be Assessed — Manual Verification…
  • Meta Descriptions and OG Tags Cannot Be Assessed — Manual…

Content Team

Layer 2 — Content Optimization
Timeline: 2–6 weeks
  • Deepen SIEM Threat Detection & SOC Use-Case Framing on…
  • Add Alert Tuning Benchmark Data and Comparative Claims to…
  • Create SIEM Pricing Intelligence Resource Linked From…
  • Create Migration & Deployment Architecture Resource Hub…

Content Strategy

Layer 3 — NIOs + Off-Domain
Timeline: 1–3 months
  • Create a dedicated SOAR & Incident Response Automation…
  • Create a UEBA & Behavioral Analytics capability page…
  • Expand the graylog.org/feature/reports-and-dashboards/ page…
  • Substantially expand the Graylog API Security product page…
  • Build a /compare/ or /vs/ hub page with links to all…

[Synthesis] The 19 recommendations are sequenced by dependency: L1 technical fixes execute first because CSR rendering and sitemap issues may prevent AI crawlers from indexing the content that L2 and L3 improvements will add — fixing a page that crawlers cannot read produces no GEO benefit. L2 content optimizations follow, adding the extractable claims, benchmarks, and Comparison framing that existing pages currently lack. L3 net-new content — 5 themed NIO clusters covering SOAR, UEBA, dashboards, API security, and a Comparison page library — addresses structural gaps where no content exists. The L3 Comparison library (26 queries, NIO 005) has the highest single-NIO commercial impact because it targets the buying stage where Graylog's conditional win rate is strongest.

Methodology
Audit Methodology

Query Construction

150 queries constructed from a persona × buying job × feature focus × pain point matrix, spanning the full buying journey
Every query carries four metadata fields assigned at creation time
High-intent jobs (Shortlisting + Comparison + Validation): 54% of queries (81 of 150)

Personas

Chief Information Security Officer · Decision Maker
VP of IT Operations · Decision Maker
SOC Manager · Evaluator
Senior Security Engineer · Evaluator
Director of Compliance & Risk · Evaluator

Buying Jobs Framework

8 non-linear buying jobs: Artifact Creation, Comparison, Consensus Creation, Problem Identification, Requirements Building, Shortlisting, Solution Exploration, Validation

Competitive Set

Primary: Splunk, Elastic Security, Datadog, Sumo Logic, LogRhythm
Secondary: Wazuh, Exabeam, CrowdStrike Falcon Next-Gen SIEM, ManageEngine Log360
Surprise: Microsoft Sentinel, IBM QRadar, Securonix, OpenSearch, Chronicle, NetWitness, XDR — flagged for review

Platforms & Scoring

Platforms: ChatGPT + Perplexity
Visibility: Binary — does the client appear in the response?
Win rate: Of visible queries, is the client the primary recommendation?

Cross-Platform Counting (Union Method)

When a query is run on multiple platforms, union logic is applied: a query counts as “visible” if the client appears on any platform, not each platform separately.
Winner resolution: When platforms disagree on the winner, majority vote is used. Vendor names are preferred over meta-values (e.g. “no clear winner”). True ties resolve to “no clear winner.”
Share of Voice: Each entity is counted once per query across platforms (union dedup), preventing double-counting when both platforms mention the same company.
This approach ensures headline metrics reflect real buyer-query outcomes rather than inflated per-platform counts.
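The union logic above can be sketched in a few lines of Python. The input shape and field names (`query_id`, `client_visible`, `winner`, `mentions`) are hypothetical stand-ins for the audit's per-platform results:

```python
from collections import Counter

def union_tally(results):
    """results: list of per-platform run dicts (field names are hypothetical), e.g.
    {"query_id": "grl_001", "client_visible": True,
     "winner": "Splunk", "mentions": ["Splunk", "Graylog"]}.
    Returns (visible query ids, winner per query, share-of-voice mention counts)."""
    by_query = {}
    for r in results:
        by_query.setdefault(r["query_id"], []).append(r)

    visible, winners, mentions = set(), {}, Counter()
    for qid, runs in by_query.items():
        # Visibility: the client counts as visible if it appears on ANY platform.
        if any(r["client_visible"] for r in runs):
            visible.add(qid)

        # Winner: majority vote; vendor names beat the "no clear winner"
        # meta-value; true ties between vendors resolve to "no clear winner".
        votes = Counter(r["winner"] for r in runs)
        top_count = max(votes.values())
        leaders = [name for name, c in votes.items() if c == top_count]
        vendor_leaders = [n for n in leaders if n != "no clear winner"]
        winners[qid] = vendor_leaders[0] if len(vendor_leaders) == 1 else "no clear winner"

        # Share of voice: each entity counted once per query across platforms.
        seen = set()
        for r in runs:
            seen.update(r["mentions"])
        for name in seen:
            mentions[name] += 1
    return visible, winners, mentions
```

Because each query contributes at most one visibility event and one mention per entity, headline metrics cannot be inflated by running the same query on more platforms.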

Terminology

Mentions: Query-level visibility count. A company receives one mention per query where it appears in any platform response (union-deduped). This is the numerator for Share of Voice.
Unique Pages Cited: Count of distinct client page URLs cited across all platform responses, after URL normalization (stripping tracking parameters). The footer total in the Citation section uses this measure.
Citation Instances (Top Cited Domains): Raw count of citation occurrences per domain across all responses. A single domain can accumulate multiple citation instances from different queries and platforms. The Top Cited Domains table uses this measure.
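The URL normalization behind "Unique Pages Cited" can be sketched as follows. The specific tracking parameters stripped, and the trailing-slash handling, are assumptions — the report only says tracking parameters are removed:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumed tracking parameters; the report does not enumerate the actual list.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "gclid", "fbclid", "ref"}

def normalize_url(url):
    """Collapse citation variants of the same page into one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS]
    # Assumption: /pricing and /pricing/ are treated as the same page.
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, urlencode(kept), ""))  # fragment dropped

def unique_pages_cited(cited_urls):
    """Count distinct pages after normalization (the footer total's measure)."""
    return len({normalize_url(u) for u in cited_urls})
```

By contrast, a Citation Instances count would simply tally every raw occurrence per domain without this dedup step.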