AI Visibility Audit

15Five
Visibility Report

Competitive intelligence for AI-mediated buying decisions. Where 15Five wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.

150 Buyer Queries
5 Personas
8 Buying Jobs
ChatGPT + Perplexity
March 2, 2026

TL;DR

46% Visibility · 69 of 150 queries
12% Win Rate · 18 wins of 150 queries
81 Invisible · queries where 15Five absent
138 Actions · 6 L1 + 74 L2 + 58 L3
Section 1
The Shortlist Paradox: Present at the Decision, Absent From the Discovery

Section 1 explains why 15Five's visibility collapses at the discovery stage — before buyers know what they need — while spiking at shortlisting, and identifies the three structural factors holding this pattern in place.

Early Funnel — Where 15Five is absent
Problem Identification · 8%
Requirements Building · 40%
Solution Exploration · 40%
Late Funnel — Where 15Five competes
Shortlisting · 81%
Validation · 52%
Comparison · 47%
Artifact Creation · 33%
Consensus Creation · 23%

[Mechanism] Three compounding gaps produce the early-funnel invisibility pattern. First, 15Five's XML sitemap contains only 19 blog URLs with all commercial product and solution pages excluded, so AI crawlers cannot reliably index the content that would answer discovery-stage queries about performance management, manager coaching, or engagement solutions — the pages exist but are structurally invisible to AI systems. Second, competitor comparison page URLs — the highest-intent entry points for buyers evaluating alternatives — redirect to a generic brand page with no competitor-specific content, eliminating 15Five from the solution-exploration and comparison conversations those URLs should serve. Third, 15Five lacks content for 58 buying queries entirely, with the gap concentrated in problem identification, solution exploration, and requirements building — the exact stages that drive the 69.0% early-funnel invisibility rate (metrics.funnel_metrics.early_funnel_invisibility_rate). The late-funnel visibility spike to 81% at shortlisting is not a content success; it is evidence that brand recognition built before AI mediated this category is carrying the load — a load that will erode as AI increasingly mediates the pre-shortlist journey.
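The first gap is mechanically checkable. Below is a minimal sketch of that check; the sitemap snippet and URL list are illustrative stand-ins, not the live 15five.com sitemap:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def missing_from_sitemap(sitemap_xml: str, required_urls: list[str]) -> list[str]:
    """Return the required URLs that do not appear in the sitemap's <loc> entries."""
    root = ET.fromstring(sitemap_xml)
    listed = {loc.text.strip() for loc in root.findall(".//sm:loc", SITEMAP_NS)}
    return [url for url in required_urls if url not in listed]

# Toy sitemap mirroring the finding: blog posts only, no commercial pages.
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.15five.com/blog/continuous-employee-feedback</loc></url>
</urlset>"""

commercial = [
    "https://www.15five.com/products/perform",
    "https://www.15five.com/pricing",
]
print(missing_from_sitemap(sitemap, commercial))  # both commercial URLs are missing
```

Run against the real sitemap and the real commercial URL inventory, this becomes a regression check that the Layer 1 sitemap fix stays fixed.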

Layer 1
Unblock Crawler Access
The 6 L1 fixes resolve the sitemap exclusion, comparison page redirects, freshness signal gaps, and indexability issues that currently prevent AI systems from accessing 15Five's commercial content and producing accurate discovery-stage responses.
6 actions · Days to 2 weeks
Layer 2
Sharpen Existing Pages
The 74 L2 optimizations reframe existing product, solution, and blog pages toward the buyer language and problem-framing angle that early-funnel and positioning queries require, closing gaps on pages that will benefit immediately from restored crawler access.
74 actions · 2–6 weeks
Layer 3
Build Discovery Content
The 58 L3 net-new pages create the discovery-stage content 15Five entirely lacks for problem identification, solution exploration, and requirements building queries — directly targeting the buying stages driving the 69.0% early-funnel invisibility rate.
58 actions · 1–3 months

[Synthesis] L1 technical fixes must execute before L2 or L3 because the sitemap fix determines whether any new or optimized content will be indexed by AI systems — publishing 74 updated pages or 58 new pages into a site whose commercial content is excluded from the sitemap leaves all of it invisible to AI crawlers for the same structural reason today's product pages are invisible. The comparison page redirect fix (L1) must also precede any L2 positioning work on comparison content, because no content improvement on a page that immediately redirects elsewhere can improve AI response quality.

Section 2
Visibility Analysis

Where 15Five appears and where it doesn't — across personas, buying jobs, and platforms.

[TL;DR] 15Five appears in 46% of buyer queries and wins 26.1% of those. Converting visibility into wins is the primary challenge (a 20pp gap between visibility and win rate among visible queries). High-intent queries run higher, at 59.0% visibility.

15Five's 46% overall visibility rate masks a severe funnel imbalance: 8% at problem identification (lowest of all buying jobs) versus 81% at shortlisting means the entire discovery-stage journey happens without 15Five present to shape buyer thinking.

Platform Visibility

−12pp · Perplexity leads ChatGPT overall
−33pp · VP of Talent Management — widest persona swing
−23pp · Consensus Creation — widest stage swing
Dimension | Combined | Platform Delta
All Queries | 46% | Perplexity +12pp
By Persona
Chief Financial Officer | 52.2% | Perplexity +13pp
Chief People Officer | 38.2% | Perplexity +12pp
Director of HR Technology & People Analytics | 46.9% | Perplexity +6pp
VP of People Operations | 46.9% | Even
VP of Talent Management | 48.3% | Perplexity +33pp
By Buying Job
Artifact Creation | 33.3% | Perplexity +20pp
Comparison | 47.1% | Even
Consensus Creation | 23.1% | Perplexity +23pp
Problem Identification | 8.3% | Perplexity +8pp
Requirements Building | 40% | Perplexity +20pp
Shortlisting | 80.8% | Perplexity +23pp
Solution Exploration | 40% | Perplexity +7pp
Validation | 52.2% | Perplexity +13pp
Per-platform breakdown (ChatGPT vs Perplexity raw %):
Dimension | ChatGPT | Perplexity
All Queries | 28.7% | 40.9%
By Persona
Chief Financial Officer | 34.8% | 47.8%
Chief People Officer | 26.5% | 38.2%
Director of HR Technology & People Analytics | 31.2% | 37.5%
VP of People Operations | 37.5% | 37.5%
VP of Talent Management | 13.8% | 46.4%
By Buying Job
Artifact Creation | 16.7% | 36.4%
Comparison | 41.2% | 38.2%
Consensus Creation | 0% | 23.1%
Problem Identification | 0% | 8.3%
Requirements Building | 13.3% | 33.3%
Shortlisting | 53.8% | 76.9%
Solution Exploration | 20% | 26.7%
Validation | 34.8% | 47.8%

Visibility by Buying Job

Artifact Creation · 33.3% (4/12)
Comparison · 47.1% (16/34)
Consensus Creation · 23.1% (3/13)
Problem Identification · 8.3% (1/12)
Requirements Building · 40% (6/15)
Shortlisting · 80.8% (21/26)
Solution Exploration · 40% (6/15)
Validation · 52.2% (12/23)
High-intent visibility (Shortlist + Compare + Validate) · 59.0% (49/83)
High-intent win rate · 34.7% (17/49)
Visibility-to-win gap · −24pp

Visibility & Win Rate by Persona

Chief Financial Officer · 52.2% vis · 41.7% win (12/23)
Chief People Officer · 38.2% vis · 46.2% win (13/34)
Director of HR Technology & People Analytics · 46.9% vis · 20% win (15/32)
VP of People Operations · 46.9% vis · 20% win (15/32)
VP of Talent Management · 48.3% vis · 7.1% win (14/29)
Decision-maker win rate (cfo + chro) · 44% (11/25 visible)
Evaluator win rate (hr_technology_director + vp_people_ops + vp_talent) · 15.9% (7/44 visible)
Role type gap · 28pp

Visibility by Feature Focus

Compensation Management · 27.3% vis (3/11) · 33.3% win (1/3)
Continuous Check-ins · 57.1% vis (8/14) · 25% win (2/8)
Employee Engagement Surveys · 47.6% vis · 20% win (N=21)
HRIS Integrations · 63.6% vis (7/11) · 14.3% win (1/7)
Manager Coaching · 20% vis (3/15) · 66.7% win (2/3)
OKR Goal Tracking · 60% vis (6/10) · 16.7% win (1/6)
People Analytics · 12.5% vis (2/16) · 50% win (1/2)
Performance Reviews · 61.9% vis · 15.4% win (N=21)
Recognition Feedback · 44.4% vis (4/9) · 25% win (1/4)
Talent Calibration · 50% vis (4/8) · 25% win (1/4)

Visibility by Pain Point

Annual Review Burden · 69.2% vis (9/13) · 11.1% win (1/9)
Goal Misalignment · 66.7% vis (4/6) · 25% win (1/4)
HR ROI Proof · 45.8% vis · 45.5% win (N=24)
Ineffective Managers · 38.9% vis (7/18) · 28.6% win (2/7)
Low Engagement No Action · 25% vis (2/8) · 50% win (1/2)
Regrettable Turnover · 57.9% vis · 36.4% win (N=19)
Siloed HR Data · 33.3% vis (2/6) · 50% win (1/2)
Top Talent Flight Risk · 27.3% vis (3/11) · 33.3% win (1/3)

[Data] Overall: 46% visibility (69/150 queries). High-intent: 59% visible (49/83), 34.7% win rate among visible (17/49), 24pp vis-to-win gap. Shortlisting: 81% (21/26) — highest of all buying jobs (metrics.visibility.by_buying_job[shortlisting].rate = 0.8077). Problem identification: 8% (1/12) — lowest of all buying jobs (metrics.visibility.by_buying_job[problem_identification].rate = 0.0833). Early-funnel invisibility: 69.0% across 42 queries. ChatGPT runs 12pp below Perplexity (metrics.visibility.platform_delta.value_pp = 12).

[Synthesis] The 46% overall rate understates the funnel imbalance: 15Five over-indexes at shortlisting (81%) while being nearly absent at problem identification (8% — lowest of all buying jobs), the stage where buyers first articulate their pain and begin forming vendor associations. This back-loading means 15Five enters most buyer journeys as a name on a comparison list rather than the vendor that helped the buyer understand their problem — a dependency on prior brand awareness that becomes more fragile as AI mediates more pre-shortlist discovery. The 12pp platform gap between ChatGPT and Perplexity compounds the risk: gains on one platform are not transferring to the other, and optimization must address both channels.
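The headline rates are simple ratios over query-level outcomes. The sketch below shows the arithmetic; the toy data is shaped to match the report's topline (150 queries, 69 visible, 18 wins), not the underlying query set:

```python
from dataclasses import dataclass

@dataclass
class QueryResult:
    visible: bool  # brand appeared anywhere in the AI response
    won: bool      # brand was the primary recommendation

def funnel_metrics(results):
    """Compute the report's headline rates from per-query outcomes."""
    total = len(results)
    visible = [r for r in results if r.visible]
    wins = [r for r in visible if r.won]
    win_of_visible = len(wins) / len(visible) if visible else 0.0
    vis_rate = len(visible) / total
    return {
        "visibility": vis_rate,                 # e.g. 69/150 = 46%
        "win_rate_overall": len(wins) / total,  # e.g. 18/150 = 12%
        "win_rate_of_visible": win_of_visible,  # e.g. 18/69 = 26.1%
        "vis_to_win_gap_pp": (vis_rate - win_of_visible) * 100,
    }

# Toy data matching the topline: 18 wins, 51 visible-but-lost, 81 invisible.
toy = ([QueryResult(True, True)] * 18
       + [QueryResult(True, False)] * 51
       + [QueryResult(False, False)] * 81)
m = funnel_metrics(toy)
print(round(m["visibility"], 2), round(m["win_rate_of_visible"], 3))  # 0.46 0.261
```

Note the gap works out to roughly 20pp (46% visibility minus 26.1% win-of-visible), which is the "visibility-to-wins" conversion problem the TL;DR names.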

Invisibility Gaps — 81 Queries Where 15Five Doesn’t Appear

21 queries won by named competitors · 7 no clear winner · 53 no vendor mentioned

Sorted by competitive damage — competitor-winning queries first.

ID | Query | Persona | Stage | Winner
⚑ Competitor Wins — 21 queries where a named competitor captures the buyer
15f_052 | "switching from annual engagement surveys to a platform with real-time pulse and stronger benchmarking for predicting turnover" | vp_people_ops | Shortlisting | PerformYard
15f_056 | "Top people analytics platforms with AI-powered flight risk detection for mid-market companies" | hr_technology_director | Shortlisting | Lattice
15f_072 | "How does Leapsome's manager development compare to platforms with dedicated AI coaching features?" | chro | Comparison | Leapsome
15f_079 | "How does Culture Amp's analytics compare to platforms with AI-powered people analytics for workforce insights?" | hr_technology_director | Comparison | Culture Amp
15f_080 | "Lattice vs Culture Amp — which has more flexible performance review workflows for complex org structures?" | hr_technology_director | Comparison | Culture Amp
15f_088 | "We're replacing our current engagement tool — Culture Amp vs Lattice, which is better for mid-market retention strategies?" | chro | Comparison | Culture Amp
15f_089 | "Lattice vs Leapsome for manager coaching and development features at a mid-market company" | vp_people_ops | Comparison | Lattice
15f_090 | "Culture Amp vs Leapsome for continuous check-ins and pulse surveys — which drives better manager habits?" | vp_people_ops | Comparison | Leapsome
15f_091 | "Betterworks vs Lattice analytics — switching from a platform with limited reporting, which has stronger people insights?" | hr_technology_director | Comparison | Lattice
15f_092 | "Culture Amp vs Workleap for engagement surveys — analytics depth vs. simplicity for smaller HR teams" | hr_technology_director | Comparison | Workleap

Remaining competitor wins: Lattice ×4, Leapsome ×3, Culture Amp ×3, Workleap ×1. 7 queries with no clear winner. 53 queries with no vendor mentioned. Full query-level data available in the analysis export.

Positioning Gaps — 51 Queries Where 15Five Appears But Loses

Queries where 15Five is mentioned but a competitor is positioned more favorably.

ID | Query | Persona | Buying Job | Winner | 15Five Position
15f_011 | "How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?" | vp_talent | Problem ID | No Vendor Mentioned | Brief Mention
15f_016 | "We're replacing our ad-hoc 1:1 process — what's the real difference between dedicated check-in platforms and just using meeting agenda templates?" | vp_people_ops | Solution Exp. | No Clear Winner | Mentioned In List
15f_019 | "How do performance management platforms typically integrate with HRIS systems like Workday, BambooHR, and ADP?" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List
15f_021 | "Open source vs. commercial OKR tools — real tradeoffs for a company with 200-500 employees" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List
15f_022 | "We've outgrown SurveyMonkey for employee engagement — what does a modern performance management tech stack look like for 300+ employees?" | hr_technology_director | Solution Exp. | Culture Amp | Mentioned In List
15f_024 | "Our current review process doesn't connect to any business outcomes — how do companies move from annual reviews to something measurable?" | cfo | Solution Exp. | No Vendor Mentioned | Mentioned In List
15f_026 | "What types of HR technology actually move the needle on reducing voluntary turnover at mid-market companies?" | vp_talent | Solution Exp. | No Vendor Mentioned | Mentioned In List
15f_031 | "Key requirements for evaluating performance review platforms for a 400-person company moving away from annual reviews" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention
15f_033 | "We want continuous feedback between review cycles — what capabilities actually matter in a recognition and feedback tool?" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention
15f_034 | "Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support" | hr_technology_director | Req. Building | No Vendor Mentioned | Mentioned In List
15f_037 | "We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?" | hr_technology_director | Req. Building | No Vendor Mentioned | Brief Mention
15f_039 | "Evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value" | cfo | Req. Building | No Vendor Mentioned | Brief Mention
15f_042 | "We need structured 1:1 tools that connect manager check-ins to company goals — what capabilities should we prioritize?" | vp_talent | Req. Building | No Vendor Mentioned | Mentioned In List
15f_044 | "Which employee engagement platforms actually help you act on survey results, not just collect engagement scores?" | chro | Shortlisting | Culture Amp | Mentioned In List
15f_045 | "We've outgrown our current performance tool — best platforms for continuous check-ins and manager coaching at a 350-person company" | chro | Shortlisting | Betterworks | Mentioned In List
15f_048 | "Best compensation management tools for mid-market companies trying to connect pay to performance data" | chro | Shortlisting | No Vendor Mentioned | Mentioned In List
15f_049 | "Top continuous performance review platforms for replacing spreadsheet-based annual reviews at a 200-500 person company" | vp_people_ops | Shortlisting | Lattice | Strong 2nd
15f_050 | "alternatives to our current performance management tool for a 350-person SaaS company focused on reducing regrettable turnover" | vp_people_ops | Shortlisting | Lattice | Mentioned In List
15f_054 | "performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly" | vp_people_ops | Shortlisting | Lattice | Mentioned In List
15f_055 | "Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware" | hr_technology_director | Shortlisting | Lattice | Mentioned In List
15f_057 | "looking to replace our current review tool with a continuous performance platform that supports 360-degree feedback and custom review cycles" | hr_technology_director | Shortlisting | Lattice | Mentioned In List
15f_058 | "replacing our standalone engagement survey tool — need a platform with real-time dashboards, API access, and data export for an analytics-driven HR team" | hr_technology_director | Shortlisting | Culture Amp | Mentioned In List
15f_062 | "OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments" | cfo | Shortlisting | No Vendor Mentioned | Strong 2nd
15f_065 | "Best OKR tools for companies where goal cascading has never worked — switching from spreadsheets to a dedicated platform" | vp_talent | Shortlisting | No Clear Winner | Mentioned In List
15f_066 | "Which engagement platforms are best at connecting survey data to retention outcomes for mid-market companies?" | vp_talent | Shortlisting | Lattice | Mentioned In List
15f_067 | "Top tools for developing managers who've never had formal leadership training — practical coaching, not just theory" | vp_talent | Shortlisting | Culture Amp | Mentioned In List
15f_070 | "We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?" | chro | Comparison | Lattice | Strong 2nd
15f_074 | "How does Culture Amp handle continuous check-ins and manager enablement compared to dedicated check-in platforms?" | vp_people_ops | Comparison | Culture Amp | Strong 2nd
15f_075 | "Switching from our current review tool — how does Lattice compare for making performance reviews less painful?" | vp_people_ops | Comparison | Lattice | Strong 2nd
15f_076 | "How does Workleap's recognition and feedback functionality compare to more comprehensive performance management platforms?" | vp_people_ops | Comparison | Workleap | Mentioned In List
15f_077 | "We're considering switching our engagement tool — how does Culture Amp's benchmarking compare to other platforms' action-planning features?" | vp_people_ops | Comparison | Culture Amp | Mentioned In List
15f_078 | "How does Lattice's integration architecture compare to other performance platforms for HRIS sync, APIs, and webhooks?" | hr_technology_director | Comparison | Lattice | Mentioned In List
15f_082 | "We're replacing spreadsheet-based comp decisions — how does Lattice's compensation module compare for linking pay to performance?" | cfo | Comparison | Lattice | Brief Mention
15f_084 | "How does Betterworks' total cost compare to mid-market alternatives — implementation, training, and per-seat pricing?" | cfo | Comparison | Betterworks | Strong 2nd
15f_085 | "How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?" | vp_talent | Comparison | Lattice | Strong 2nd
15f_086 | "How does Leapsome's continuous feedback compare to other 1:1 tools — which one do managers actually adopt?" | vp_talent | Comparison | Leapsome | Mentioned In List
15f_087 | "How does Workleap's engagement surveys compare to more analytics-heavy platforms for a 200-person company?" | vp_talent | Comparison | Workleap | Strong 2nd
15f_103 | "Lattice implementation problems when migrating from another performance management tool at a mid-market company" | chro | Validation | No Vendor Mentioned | Brief Mention
15f_106 | "We're evaluating Culture Amp as a replacement — what are the biggest downsides of their performance review features?" | vp_people_ops | Validation | No Clear Winner | Brief Mention
15f_109 | "Betterworks analytics and reporting limitations — what can't it do that other platforms handle?" | hr_technology_director | Validation | No Clear Winner | Brief Mention
15f_111 | "Betterworks reviews from mid-market companies — is it worth the enterprise-level pricing?" | cfo | Validation | No Clear Winner | Mentioned In List
15f_113 | "Is Workleap too basic for a growing mid-market company — will we outgrow it in two years?" | cfo | Validation | No Clear Winner | Brief Mention
15f_114 | "Workleap Officevibe limitations — what are the biggest feature gaps compared to more comprehensive platforms?" | vp_talent | Validation | No Clear Winner | Mentioned In List
15f_119 | "15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?" | vp_talent | Validation | No Clear Winner | Primary Recommendation
15f_121 | "Biggest risks of switching to continuous performance management from annual reviews at a mid-market company" | hr_technology_director | Validation | No Vendor Mentioned | Mentioned In List
15f_127 | "Case studies of mid-market companies that improved manager effectiveness after switching to continuous performance management" | chro | Consensus | Lattice | Mentioned In List
15f_137 | "Case studies of companies that reduced regrettable turnover after switching from annual reviews to continuous performance management" | vp_talent | Consensus | No Vendor Mentioned | Mentioned In List
15f_140 | "Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp, Betterworks, and Leapsome focused on integration capabilities and data architecture" | hr_technology_director | Artifact | Lattice | Strong 2nd
15f_141 | "Build an evaluation template for comparing continuous performance management platforms — weighted scoring for reviews, check-ins, engagement, and analytics" | vp_people_ops | Artifact | No Vendor Mentioned | Mentioned In List
15f_147 | "Create a comparison matrix for OKR and goal tracking features across 15Five, Betterworks, Lattice, and Leapsome" | chro | Artifact | No Clear Winner | Mentioned In List
15f_149 | "Draft an executive summary comparing recognition and continuous feedback platforms for a leadership team — focus on retention impact" | vp_talent | Artifact | No Vendor Mentioned | Mentioned In List
Section 3
Competitive Position

Who’s winning when 15Five isn’t — and who controls the narrative at each buying stage.

[TL;DR] 15Five ranks #3 in Share of Voice with a 30W–28L head-to-head record across 9 competitors.

15Five's #3 SOV rank is stable but not earned — Culture Amp and Betterworks are winning head-to-head matchups in queries 15Five should contest, and 60 of the 81 invisible queries have no AI winner at all, making the unclaimed early funnel the largest single competitive opportunity.

Share of Voice

Company | Mentions | Share
Lattice | 90 | 21.3%
Culture Amp | 73 | 17.3%
15Five | 69 | 16.4%
Leapsome | 50 | 11.8%
Betterworks | 41 | 9.7%
Quantum Workplace | 30 | 7.1%
PerformYard | 28 | 6.6%
Workleap | 24 | 5.7%
Engagedly | 15 | 3.5%
Reflektive | 2 | 0.5%
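The Share column is each vendor's mention count divided by total mentions across all responses. The table's figures can be reproduced directly from the counts:

```python
mentions = {
    "Lattice": 90, "Culture Amp": 73, "15Five": 69, "Leapsome": 50,
    "Betterworks": 41, "Quantum Workplace": 30, "PerformYard": 28,
    "Workleap": 24, "Engagedly": 15, "Reflektive": 2,
}
total = sum(mentions.values())  # 422 brand mentions across all responses
share = {brand: round(100 * n / total, 1) for brand, n in mentions.items()}
print(share["15Five"], share["Lattice"])  # 16.4 21.3
```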

Head-to-Head Records

Counts only queries where both brands appear. Win = client was the primary recommendation (across platforms, by majority vote). Loss = competitor was the primary recommendation. Tie = neither was (including when a third party took the primary slot).

vs. Lattice | 8W – 8L – 44T (60 co-appear)
vs. Culture Amp | 3W – 6L – 30T (39 co-appear)
vs. Betterworks | 2W – 6L – 16T (24 co-appear)
vs. Leapsome | 6W – 2L – 24T (32 co-appear)
vs. Workleap | 2W – 2L – 16T (20 co-appear)
vs. Quantum Workplace | 1W – 1L – 15T (17 co-appear)
vs. Engagedly | 4W – 1L – 5T (10 co-appear)
vs. PerformYard | 3W – 2L – 14T (19 co-appear)
vs. Reflektive | 1W – 0L – 1T (2 co-appear)
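The W–L–T rule above can be sketched as a tally over co-appearance queries. The records below are hypothetical examples, not the audit's raw data:

```python
from collections import Counter

def head_to_head(records, client, competitor):
    """Tally wins/losses/ties on queries where both brands appear.

    Each record is (set_of_brands_mentioned, primary_recommendation).
    Win = client is primary; Loss = competitor is primary; Tie = anything
    else (neither brand, or a third party, took the primary slot).
    """
    tally = Counter()
    for mentioned, primary in records:
        if client in mentioned and competitor in mentioned:
            if primary == client:
                tally["W"] += 1
            elif primary == competitor:
                tally["L"] += 1
            else:
                tally["T"] += 1
    return tally["W"], tally["L"], tally["T"]

records = [
    ({"15Five", "Lattice"}, "15Five"),
    ({"15Five", "Lattice"}, "Lattice"),
    ({"15Five", "Lattice"}, "Culture Amp"),  # third party wins -> tie
    ({"15Five", "Lattice"}, None),           # no primary -> tie
    ({"15Five"}, "15Five"),                  # competitor absent -> not counted
]
print(head_to_head(records, "15Five", "Lattice"))  # (1, 1, 2)
```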

Invisible Query Winners

For the 81 queries where 15Five is completely absent:

Culture Amp · 7 wins (9%)
Lattice · 5 wins (6%)
Betterworks · 4 wins (5%)
Leapsome · 3 wins (4%)
Workleap · 1 win (1%)
PerformYard · 1 win (1%)
Uncontested (no winner) · 60 queries (74%)

Surprise Competitors

Vendors appearing in responses not in 15Five’s defined competitive set.

BambooHR · 4.7% SOV
Perceptyx · 1.7% SOV
beqom · 1.4% SOV
HiBob · 1.4% SOV
WorkTango · 1.4% SOV
Deel · 1.4% SOV
Workday · 1.2% SOV
Workhuman · 1.2% SOV
Visier · 1.2% SOV
Paycor · 1.2% SOV

[Synthesis] 15Five's #3 SOV rank reflects inherited brand recognition, not content dominance — Culture Amp and Betterworks are winning head-to-head matchups in queries where 15Five should compete, while 15Five leads Leapsome (6W–2L) where its content is more directly comparative. More importantly, 60 of 81 invisible queries have no AI winner at all: the largest opportunity is not displacing a specific competitor but becoming the first authoritative voice on early-funnel questions that no vendor currently answers. Creating that discovery-stage content is the mechanism for converting 15Five's #3 SOV position into one that earns consideration before buyers have formed competitor preferences.

Section 4
Citation & Content Landscape

What AI reads and trusts in this category.

[TL;DR] 15Five had 71 unique pages cited across buyer queries, ranking #3 among all cited domains. 10 high-authority domains cite competitors but not 15Five.

71 unique pages cited ranks 15Five #3 by domain, but the citation mix skews toward the homepage and support docs rather than buyer-facing capability pages — fixing the sitemap and comparison redirects will shift citation composition toward higher-converting content before new pages are needed.

Top Cited Domains (citation instances)

lattice.com | 165
cultureamp.com | 126
15five.com | 112 (#3)
betterworks.com | 99
leapsome.com | 77
g2.com | 75
success.15five.com | 68 (#7)
peoplemanagingpeople.com | 54
linkedin.com | 53
quantumworkplace.com | 51
workleap.com | 51
support.cultureamp.com | 50
performyard.com | 36
reddit.com | 35
aihr.com | 32
capterra.com | 32
en.wikipedia.org | 30
bamboohr.com | 26
youtube.com | 24
gallup.com | 23

15Five URL Citations by Page

www.15five.com | 16
success.15five.com/hc/en-us/articles/1997683054... | 3
www.15five.com/blog/pendo-reduces-turnover-by-2... | 3
www.15five.com/solutions/reduce-regrettable-tur... | 2
www.15five.com/blog/guide-to-performance-manage... | 2
www.15five.com/products/perform/ai-assisted-rev... | 2
success.15five.com/hc/en-us/articles/3600523467... | 2
success.15five.com/hc/en-us/articles/1390263345... | 2
success.15five.com/hc/en-us/articles/1392119953... | 2
success.15five.com/hc/en-us/articles/3177987475... | 2
success.15five.com/hc/en-us/articles/3600026995... | 2
success.15five.com/hc/en-us/articles/3600026996... | 2
www.15five.com/products/perform | 2
www.15five.com/partners/technology-partners/int... | 2
www.15five.com/products/perform/okrs-and-goals | 2
www.15five.com/blog/ai-predictive-analytics-for... | 1
www.15five.com/blog/trustradius-how-using-15fiv... | 1
success.15five.com/hc/en-us/articles/3090774315... | 1
success.15five.com/hc/en-us/articles/3085435206... | 1
success.15five.com/hc/en-us/articles/3028541446... | 1
www.15five.com/resources/on-demand/performance-... | 1
www.15five.com/products/15five-ai | 1
success.15five.com/hc/en-us/articles/3605404832... | 1
www.15five.com/resources/on-demand/the-ai-compa... | 1
success.15five.com/hc/en-us/articles/1581797015... | 1
www.15five.com/resources/research/reviewing-the... | 1
success.15five.com/hc/en-us/articles/360006576692 | 1
www.15five.com/solutions/improve-manager-effect... | 1
success.15five.com/hc/en-us/articles/3600065766... | 1
www.15five.com/blog/empowered-education | 1
www.15five.com/blog/how-to-implement-impactful-... | 1
www.15five.com/blog/workplace-challenges | 1
www.15five.com/blog/top-hr-issues-2021 | 1
www.15five.com/blog/creating-a-pip-performance-... | 1
www.15five.com/blog/career-hub-employee-growth | 1
www.15five.com/blog/best-self-kickoff | 1
www.15five.com/blog/6-steps-to-better-onboardin... | 1
www.15five.com/blog/4-hidden-challenges-that-ho... | 1
www.15five.com/blog/continuous-employee-feedback | 1
success.15five.com/hc/en-us/articles/3600517782... | 1
www.15five.com/blog/the-benefits-of-integrating... | 1
success.15five.com/hc/en-us/articles/1710639436... | 1
success.15five.com/hc/en-us/articles/3600026995... | 1
success.15five.com/hc/en-us/articles/3600206958... | 1
www.15five.com/security | 1
success.15five.com/hc/en-us/articles/3086753652... | 1
success.15five.com/hc/en-us/articles/1181684228... | 1
www.15five.com/hubfs/Content/E-Books/15Five_202... | 1
www.15five.com/solutions/increase-employee-enga... | 1
www.15five.com/blog/how-15five-can-help-improve... | 1
www.15five.com/hubfs/Content/E-Books/15Five_Emp... | 1
www.15five.com/blog/employee-engagement-roi-cal... | 1
www.15five.com/blog/a-case-for-increasing-your-... | 1
www.15five.com/resources/on-demand/role-of-enga... | 1
success.15five.com/hc/en-us/articles/4404620478... | 1
success.15five.com/hc/en-us/articles/3600571794... | 1
success.15five.com/hc/en-us/articles/4404620505... | 1
www.15five.com/blog/ensure-fair-and-consistent-... | 1
success.15five.com/hc/en-us/articles/2386021413... | 1
success.15five.com/hc/en-us/articles/4404623881... | 1
www.15five.com/products/perform/calibrations | 1
www.15five.com/blog/kreg-tool | 1
www.15five.com/blog/state-of-employee-turnover | 1
www.15five.com/blog/what-is-continuous-performa... | 1
www.15five.com/blog/the-impact-of-regrettable-t... | 1
www.15five.com/winter-2026-product-release | 1
success.15five.com/hc/en-us/articles/3600256000... | 1
success.15five.com/hc/en-us/articles/3600026989... | 1
success.15five.com/hc/en-us/articles/3600026821... | 1
www.15five.com/blog/using-15fives-performance-m... | 1
www.15five.com/blog/5-must-have-features-to-loo... | 1
Total 15Five unique pages cited | 71
15Five domain rank | #3

Competitor URL Citations

Lattice | 188 URL citations
Culture Amp | 176 URL citations
Betterworks | 116 URL citations
Leapsome | 90 URL citations
Workleap | 61 URL citations
Quantum Workplace | 52 URL citations
PerformYard | 36 URL citations
Engagedly | 12 URL citations
Reflektive | 1 URL citation

Third-Party Citation Gaps

Non-competitor domains citing other vendors but not 15Five — off-domain authority opportunities.

g2.com | 75 citations · 15Five absent
peoplemanagingpeople.com | 54 citations · 15Five absent
linkedin.com | 53 citations · 15Five absent
reddit.com | 35 citations · 15Five absent
aihr.com | 32 citations · 15Five absent

[Synthesis] 15Five's 71 unique cited pages rank #3 by domain, but the citation mix reveals a content-type problem: the homepage (16 citation instances) and support documentation dominate over buyer-facing capability and comparison pages, signaling that AI systems treat 15Five as a brand reference rather than an authoritative source on specific buyer questions. This pattern is structurally consistent with the sitemap finding: when commercial product pages are excluded from crawler scope, AI systems default to whatever is reachable — the homepage and help center. Fixing the sitemap and comparison page redirects (L1) will shift citation composition toward higher-converting pages before new content is required.

Section 5
Prioritized Action Plan

Three layers of recommendations ranked by commercial impact and implementation speed.

[TL;DR] 132 total gaps: 81 invisibility + 51 positioning. The 138 actions split into 6 L1 technical fixes, 74 L2 optimizations of existing content, and 58 L3 net-new content builds.

138 actions close 132 gaps, but execution sequence matters more than volume: the 6 L1 technical fixes must run first because they determine whether the 74 L2 optimizations and 58 L3 new pages will be indexed and cited by AI systems at all.

Reading the priority numbers: items are ranked 1–138 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 shows #1, #2, then #13) mean the missing ranks belong to a different layer.

Layer 1 Technical Fixes

Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.

Priority | Finding | Impact | Timeline
#1 | No Date Signals on Any Product or Solution Page | Medium | 1-3 days

Issue: All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).

Fix: Add accurate lastmod timestamps to all commercial pages in the sitemap (requires first adding them to the sitemap per finding sitemap_missing_commercial_pages). Ensure sitemap lastmod values reflect actual content modification dates, not bulk publish dates. Consider adding visible 'Last updated: [date]' metadata to product and solution pages. Audit the bulk sitemap refresh — verify that pages with Nov 2025 lastmod were actually updated in November 2025 vs. a CMS auto-update.

#2 | XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent | Medium | 1-3 days

Issue: The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.

Fix: Expand the sitemap to include all commercial pages — product pages, feature subpages, solution pages, pricing, integrations, comparison-redirect pages, and customer stories. Add accurate lastmod timestamps. If HubSpot CMS is in use (suggested by robots.txt Disallow patterns for /_hcms/ paths), verify sitemap page-type inclusion settings in Settings > Website > Pages > Sitemap and ensure all page types are enabled. Submit the updated sitemap to Google Search Console and Bing Webmaster Tools.
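For reference, the expanded sitemap would take the shape of the fragment below. The URLs are drawn from the finding above; the lastmod dates are placeholders to be replaced with real content-modification dates:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Commercial pages currently absent from the sitemap -->
  <url>
    <loc>https://www.15five.com/products/perform</loc>
    <lastmod>2026-02-10</lastmod> <!-- placeholder: use the actual modification date, not a bulk refresh -->
  </url>
  <url>
    <loc>https://www.15five.com/pricing</loc>
    <lastmod>2026-01-28</lastmod> <!-- placeholder -->
  </url>
  <!-- ...plus solution, integration, comparison, and customer-story pages... -->
</urlset>
```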

#13: Case Study Page Returns Minimal Body Content — Verify Gating or CSR (Medium, 1-3 days)

Issue: The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.

Fix: Convert the highest-value case studies from gated PDF format to fully accessible HTML pages with inline outcome metrics, challenge/solution narrative, and specific product features used. Keep the formatted PDF as a downloadable bonus for users who want it. This approach makes the content available to both AI crawlers and human readers without sacrificing lead capture (the form can be offered as an optional 'download full report' CTA within the page). Priority case studies to convert: Kreg Tool, TrustRadius, and any others using the gated format.

#14: Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content (Medium, 1-2 weeks)

Issue: Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.

Fix: Either (a) create dedicated comparison landing pages at the existing URLs with substantive head-to-head content for each competitor, or (b) if comparison pages are not being maintained, implement 301 redirects from these URLs to the blog posts that do contain comparison content (e.g., /15five-vs-lattice → /blog/heres-why-people-choose-15five-over-lattice). Option (a) is strongly preferred: dedicated comparison pages with feature matrices, use-case differentiation, and migration guides are among the highest-ROI content types for AI citation in competitive evaluation queries. At minimum, create comparison pages for the top 3 primary competitors: Lattice, Culture Amp, and Betterworks.
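If option (b) is chosen as an interim step, the redirect is a one-line rule per URL. The nginx syntax below is illustrative only; if the site runs on HubSpot CMS (suggested by the robots.txt Disallow patterns noted in finding #2), the equivalent rule would be configured in HubSpot's URL redirect settings rather than in a server config:

```nginx
# Interim fix: point the stale comparison URL at the blog post that
# actually contains comparison content, instead of the generic brand page.
# Only the Lattice target is confirmed in this audit; equivalent rules for
# Culture Amp and Leapsome require identifying their target posts first.
location = /15five-vs-lattice {
    return 301 /blog/heres-why-people-choose-15five-over-lattice;
}
```

This preserves the search equity those URLs still hold while the dedicated comparison pages in option (a) are built.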

#18: Meta Descriptions and OG Tags: Manual Verification Required (Low, 1-3 days)

Issue: Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.

Fix: Audit meta descriptions and OG tags using Screaming Frog, Ahrefs site audit, or browser view-source. Confirm every commercial page has a unique meta description (140-160 characters) with a specific capability claim. For the /why-15five page (which currently serves as the redirect destination for three competitor comparison URLs), ensure the meta description explicitly positions 15Five against named competitors to preserve some competitive signal.
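Once the audit surfaces gaps, the per-page fix is a small head-section addition. The tags below are a sketch for /why-15five with placeholder copy, not approved messaging; the og:image path is hypothetical:

```html
<!-- Placeholder copy: replace with approved messaging. The meta
     description names competitors to preserve competitive signal for
     the three comparison URLs that redirect here. -->
<meta name="description" content="See how 15Five compares to Lattice, Culture Amp, and Leapsome on performance reviews, engagement surveys, and AI manager coaching.">
<meta property="og:title" content="Why 15Five | 15Five">
<meta property="og:description" content="How 15Five compares to Lattice, Culture Amp, and Leapsome for mid-market performance management.">
<meta property="og:image" content="https://www.15five.com/path-to-og-image.png">
```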

#19: Schema Markup: Manual Verification Required (Low, 1-3 days)

Issue: This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.

Fix: Audit schema implementation using Google's Rich Results Test (https://search.google.com/test/rich-results) or a Screaming Frog structured data crawl. Priority items: (1) blog posts — verify Article/BlogPosting schema with author, datePublished, dateModified; (2) pricing page — verify Offer/PriceSpecification schema; (3) product FAQ sections — add FAQPage schema; (4) comparison pages — add WebPage schema with about properties referencing competitor entities once comparison content is restored.
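For priority item (3), the markup pattern is a JSON-LD block placed in the page head or body. The example below is a minimal sketch with placeholder question-and-answer copy; actual text must mirror what is visibly rendered on the page to satisfy Google's structured data guidelines:

```html
<!-- Minimal FAQPage sketch; all copy is placeholder and must match the
     visible on-page FAQ content. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Does 15Five integrate with Workday?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Placeholder answer: replace with the approved copy that appears on the page."
    }
  }]
}
</script>
```

Validate the result with the Rich Results Test referenced above before rollout.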


Layer 2: Existing Content Optimization

Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.

Create CFO ROI Content Hub to Replace /pricing Fallback for Business-Case and Cost-of-Inaction Queries

Priority 5
Currently: partial. The /pricing page covers platform costs but is missing: (1) cost-of-inaction framing (what does bad PM actually cost in turnover?); (2) ROI metrics from PM investment; (3) competitor pricing comparison transparency; (4) payback period modeling. Lattice wins 15f_135 (CFO, consensus creation, ROI evidence comparison) by having ROI calculator content and outcome statistics on their pricing and customer evidence pages. Queries 15f_009, 15f_039, 15f_134 are routed to /pricing only as a fallback because no dedicated ROI/business-case content exists.

The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it. The /customer-stories/ page has case studies with outcome data but formats them as narrative blog posts rather than extractable ROI metrics — the Pendo (21% turnover reduction) and Auror (94% retention) outcomes are buried in story prose rather than surfaced as structured, AI-extractable claims. The routing of 15f_009 and 15f_039 to /pricing reveals the absence of any dedicated business-case or ROI content on the site — the CFO's question 'how much does poor performance management cost?' has no home anywhere in 15Five's content inventory.

Queries affected: 15f_009, 15f_039, 15f_111, 15f_113, 15f_134, 15f_135

Add AI Coaching Evidence and Methodology to /products/kona for Manager Effectiveness Queries

Priority 8
Currently: covered. The /products/kona page establishes that the product exists but lacks: (1) outcome evidence showing Kona improves manager effectiveness; (2) a methodology explainer describing how AI coaching works; (3) a head-to-head comparison with external coaching programs, training platforms, and generic AI tools; (4) an ROI framework. Culture Amp wins 15f_067; Leapsome wins 15f_072; both competitors present manager development content with methodology and outcome framing that Kona's page currently lacks.

The /products/kona page describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) can't cite this page because it makes no verifiable outcome claims with data. The /products/kona page contains no explanation of HOW the AI coaching works — there is no methodology section covering what data Kona uses, how it generates coaching recommendations, and what differentiates it from generic AI prompting — making it non-citable for 'how do AI coaching tools work?' queries (15f_015, 15f_025). The /products/kona page does not address the 'AI coaching vs. external coaching programs vs. training platforms' comparison framing that appears in 4 queries (15f_015, 15f_025, 15f_067, 15f_138) — buyers evaluating manager development approaches need this comparison to justify AI coaching selection.

Queries affected: 15f_003, 15f_005, 15f_015, 15f_025, 15f_032, 15f_046, 15f_067, 15f_107, 15f_110, 15f_138, 15f_144

Add Migration Evidence and Outcome Proof to /products/perform for Continuous PM Switching Queries

Priority 9
Currently: covered. The /products/perform page covers the product but lacks: (1) migration narrative for buyers switching from annual reviews or incumbent tools; (2) quantified outcome evidence connected to the performance review feature specifically; (3) a continuous vs. annual comparison table that AI systems can extract; (4) requirements-building content for buyers evaluating replacement platforms. Lattice wins 15f_049, 15f_057, 15f_070, 15f_075, 15f_127 by providing 'why teams switch to Lattice' sections and migration guides.

The /products/perform page has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide. The /products/perform page lacks customer outcome evidence tied specifically to the performance review feature — the Auror and Pendo case study data exists on blog posts but is not integrated into the product page narrative, making the page non-citable for 'does continuous PM actually produce better outcomes?' queries. The /products/perform page does not include a structured 'Continuous vs. Annual Reviews: Key Structural Differences' comparison that AI systems can extract for the educational solution-exploration queries (15f_013, 15f_024) where no vendor is recommended but a structural comparison would surface 15Five as the page host.

Queries affected: 15f_004, 15f_013, 15f_016, 15f_024, 15f_030, 15f_031, 15f_040, 15f_042, 15f_049, 15f_057, 15f_103, 15f_105, 15f_124, 15f_127, 15f_128, 15f_137, 15f_141, 15f_150

Add Pay Equity Compliance and Evaluation Criteria to /products/perform/compensation/

Priority 10
Currently: covered. The /products/perform/compensation/ page covers the feature but is missing: (1) pay equity compliance specifics (audit trails, bias detection, EEOC-compatible reporting); (2) structured evaluation criteria for buyers comparing compensation management modules; (3) an explicit data flow narrative from performance ratings to compensation recommendations to manager override documentation. Lattice wins comparison queries on compensation (15f_082, 15f_102) by providing a 'Pay for performance' framework with described data flow.

The /products/perform/compensation/ page does not include pay equity compliance specifics — query 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented. The /products/perform/compensation/ page lacks a buyer evaluation checklist or evaluation criteria framework — queries 15f_038 (requirements building) and 15f_048 (shortlisting) need a page that helps buyers evaluate compensation management tools, not just a feature description. The /products/perform/compensation/ page does not describe the performance-rating-to-compensation data flow — the defining value proposition ('connect pay decisions to performance data without spreadsheets') is stated but not illustrated with a step-by-step process that AI systems can extract as a citable workflow.

Queries affected: 15f_010, 15f_027, 15f_038, 15f_048, 15f_112, 15f_125, 15f_129, 15f_146

Deepen /products/engage for Early-Funnel Action-Planning and Turnover-Prediction Queries

Priority 11
Currently: covered. The /products/engage page covers the product but lacks: (1) a 'Survey → Action → Outcome' workflow that answers 'how do you close the loop on engagement data?'; (2) benchmarking or outcome evidence; (3) pulse-vs-annual comparison framing; (4) buyer-language evaluation criteria for engagement platforms. Culture Amp wins on queries like 15f_044 and 15f_058 by providing this action-planning and analytics framing; PerformYard wins 15f_052 with benchmark comparison content.

The /products/engage page presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page. The /products/engage page lacks an outcome evidence block — it claims engagement improvements but provides no quantified customer results (response rate improvements, action-plan completion rates, turnover reduction data) that AI systems can extract as citable claims. The /products/engage page does not address the pulse-vs-annual survey tradeoff that appears in 3 queries (15f_017, 15f_041, 15f_114) — Culture Amp's comparable page wins these queries by including explicit 'when to use pulse vs. annual' guidance.

Queries affected: 15f_001, 15f_006, 15f_017, 15f_022, 15f_028, 15f_041, 15f_052, 15f_058, 15f_066, 15f_104, 15f_114, 15f_121, 15f_143

Increase Social Proof Density and Retention Mechanism Clarity on /solutions/reduce-regrettable-turnover

Priority 12
Currently: addressed. The solution page addresses regrettable turnover thematically but lacks: (1) a feature-to-outcome mechanism map explaining which 15Five features drive which retention outcomes; (2) social proof density matching Lattice's equivalent page (5+ named customer outcomes with specific percentages); (3) a buyer evaluation framework for assessing retention-focused HR platforms. Lattice wins 15f_050 (shortlisting for reducing regrettable turnover) because its retention solution page includes more customer outcomes and a clearer mechanism narrative.

The /solutions/reduce-regrettable-turnover page makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains WHICH features drive WHICH retention outcomes, not just a claim that 15Five reduces turnover. The /solutions/reduce-regrettable-turnover page has insufficient customer outcome density — Lattice's equivalent page (winner on 15f_050) includes 5+ named company outcomes with specific retention percentages; 15Five's page references Auror and Pendo outcomes but does not present them in a structured, scannable density that AI systems can extract as a recommendation signal. The /solutions/reduce-regrettable-turnover page lacks a buyer evaluation resource — RFP-creation query 15f_139 ('Draft an RFP for a continuous performance management platform') routes to this page but finds no RFP template, evaluation criteria, or downloadable reference content.

Queries affected: 15f_026, 15f_050, 15f_139

Rebuild /integrations from Directory to Integration Evidence Hub with Technical Architecture and Success Stories

Priority 16
Currently: covered. The /integrations page lists supported integrations (Workday, BambooHR, ADP, etc.) but provides no: (1) technical integration architecture documentation (SSO types, SCIM provisioning, API access, webhooks); (2) customer success stories demonstrating integration reliability at scale; (3) comparison data vs. Lattice's integration ecosystem. Lattice wins the integration vendor comparison artifact (15f_140) by providing integration-specific case studies and API documentation. The shortlisting affinity override flags confirm the directory format doesn't match the case_study/landing_page content type required for shortlisting queries.

The /integrations directory at https://www.15five.com/integrations lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented. The /integrations page has no customer integration success stories — shortlisting queries 15f_054 and 15f_055 (both winner=lattice) require evidence that integrations work reliably at scale with named HRIS platforms, not just confirmation that integrations exist. The /integrations page lacks any comparison framing against competitor integration ecosystems — query 15f_140 ('Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp — integration capabilities and data architecture', winner=lattice) loses because Lattice has comparison-ready integration documentation that 15Five's directory cannot provide.

Queries affected: 15f_007, 15f_019, 15f_034, 15f_108, 15f_132, 15f_140

Restructure /blog/check-ins-and-1-on-1s/ for AI Extractability on Recognition and Continuous Feedback Queries

Priority 17
Currently: covered. Both blog posts cover check-in and feedback methodology but lack: (1) outcome evidence connecting recognition/check-ins to retention improvement; (2) structured capability comparison data for buyer evaluation queries; (3) AI-extractable heading hierarchy for specific buyer questions. Workleap wins 15f_076 and 15f_098 with recognition-specific product pages that include feature comparisons and adoption evidence. The feature (recognition_feedback) has no dedicated product landing page — coverage depends entirely on methodology blog posts.

The /blog/check-ins-and-1-on-1s/ page explains how to run check-ins but lacks outcome evidence — query 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide. The /blog/check-ins-and-1-on-1s/ page structure is optimized for human reading, not AI extraction — headings describe rather than answer ('How to run effective check-ins' instead of 'What are the most important capabilities in a continuous feedback tool?'), reducing the probability of passage extraction for requirements-building queries (15f_033). The recognition_feedback feature has no dedicated product landing page — this blog post is the primary coverage for all 6 queries in this cluster, but a blog post format cannot compete with Workleap's dedicated recognition product page that includes feature comparisons, adoption data, and customer outcome statistics.

Queries affected: 15f_014, 15f_033, 15f_068, 15f_123, 15f_133, 15f_149

Layer 3: Narrative Intelligence Opportunities

Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.

NIO #1: People Analytics & HR Intelligence Hub
Gap Type: Structural Gap — 15Five has no substantive people analytics content anywhere on the site — coverage was assessed as thin for all 15 queries in this cluster. The gap spans every buying stage from problem identification through artifact creation, meaning buyers evaluating analytics and flight-risk capabilities never encounter 15Five during the research process.
Critical

People analytics is the decisive capability for the CHRO and CFO during shortlisting — it answers the board question of whether HR investment produces measurable outcomes. Across 15 queries covering problem identification through artifact creation, 15Five returns no usable content on analytics, flight-risk prediction, or workforce intelligence. Competitors including Lattice (winner on 15f_056) and Culture Amp (winner on 15f_079 and 15f_101) are filling this gap. Because the CHRO and CFO are the two highest-influence decision-maker personas in this audit, absence at the analytics layer means 15Five is structurally excluded from the frame competitors are building around data-driven HR. Building a dedicated hub converts existing AMAYA product capability into visible authority across all five personas.

Query Cluster
IDs: 15f_002, 15f_008, 15f_020, 15f_023, 15f_029, 15f_035, 15f_047, 15f_056, 15f_079, 15f_091, 15f_101, 15f_109, 15f_122, 15f_130, 15f_145
“How do you prove to a skeptical CFO that people programs actually reduce turnover and save money?”
“Top people analytics platforms with AI-powered flight risk detection for mid-market companies”
“What workforce data should HR be reporting to the board, and what tools make that easier than building custom reports?”
“How accurate are AI-powered flight risk predictions — do people analytics tools actually predict employee turnover?”
Blueprint
  • On-Domain: Create a dedicated /products/analytics or /products/amaya hub page with specific capabilities described as buyer outcomes: flight-risk prediction signals (which data inputs, what the output looks like), workforce trend dashboards, natural language query interface, and data connectors to HRIS systems
  • On-Domain: Build a 'Board-Ready HR Metrics' content piece that maps 15Five analytics outputs directly to CFO and board concerns — anchor with specific customer outcome data (Auror 94% retention, Pendo 21% turnover reduction) and include benchmark comparisons showing cost-of-turnover calculations
  • On-Domain: Write a 'Build vs. Buy People Analytics' guide targeting 15f_020 and 15f_035 — frames the build-in-Tableau tradeoff honestly and positions 15Five's embedded analytics as the mid-market answer; include implementation timeline and data model comparisons
  • On-Domain: Create a 'Flight Risk Prediction: How It Works' technical explainer for the HR Technology Director — cover data inputs (performance scores, engagement trends, check-in frequency, compensation delta), model methodology, and accuracy benchmarks drawn from 15Five's customer base
  • On-Domain: Develop a 'People Analytics Requirements Checklist' targeting 15f_029 and 15f_035 — a structured evaluation criteria document that maps each requirement to 15Five's capabilities and enables direct comparison against competitors
  • Off-Domain: Publish contributed articles or bylined research on HR technology publications (SHRM, HR Dive, People Matters) covering flight-risk prediction methodology — positions 15Five as an analytics authority and generates third-party citations that AI systems prefer for shortlisting queries
  • Off-Domain: Submit 15Five to G2 and Capterra analytics-specific review categories (People Analytics, Workforce Analytics) to appear in shortlisting recommendation queries where 15Five is currently absent
  • Off-Domain: Pursue co-authored case study placements with Auror or Pendo in HR analytics publications — creates external citation sources that AI systems can reference in comparison and shortlisting queries
Platform Acuity

ChatGPT (high): ChatGPT's vendor recommendation queries (15f_047, 15f_056) returned competitor names exclusively when 15Five had no analytics content to cite. Authoritative first-party capability pages with specific claim-level data (e.g., 'predicts flight risk using 6 behavioral signals') are the content type ChatGPT extracts for recommendation answers.

Perplexity (high): Perplexity returned no 15Five citations on comparison queries 15f_079 and 15f_091 where Culture Amp and Lattice appeared. Perplexity's passage-extraction model rewards self-contained analytical paragraphs with data points — dedicated hub pages with structured headings (How It Works, Data Inputs, Accuracy Benchmarks) are directly suited to Perplexity citation.

NIO #2: Dedicated Competitor Comparison & Competitive Intelligence Pages
Gap Type: Content Type Deficit — 26 comparison and shortlisting queries were routed to L3 because the AFFINITY OVERRIDE rule found that 15Five's existing content uses wrong page types (blog posts, product pages, and integration catalogs) for comparison and shortlisting buying jobs that require dedicated comparison pages and case-study landing pages. An existing L1 finding (comparison_urls_redirect_to_generic_page) confirms three indexed comparison URLs redirect to a generic brand page with zero competitor-specific content.
Critical

Comparison is a high-intent buying job (is_high_intent=true), and 15Five is absent from 26 queries in this stage — the largest single gap cluster in the L3 inventory. These queries include both direct '15Five vs. Competitor' searches and 'Competitor A vs. Competitor B' searches where 15Five should be inserting itself as the superior mid-market alternative. The only competitive differentiation content on the site is a single December 2025 blog post covering Lattice; Culture Amp and Betterworks have no dedicated comparison content. Because 15Five ties Lattice 8-8 head-to-head where both appear (per metrics.competitive.head_to_head), the problem is not performance once present — it is absence from the comparison stage entirely. Fixing this gap converts a strong late-funnel shortlisting position (80.77% visibility) into comparison-stage presence, producing the largest expected lift of any single NIO.

Query Cluster
IDs: 15f_054, 15f_055, 15f_070, 15f_072, 15f_074, 15f_075, 15f_076, 15f_077, 15f_078, 15f_080, 15f_082, 15f_084, 15f_086, 15f_087, 15f_088, 15f_089, 15f_090, 15f_092, 15f_093, 15f_094, 15f_095, 15f_097, 15f_098, 15f_099, 15f_100, 15f_102
“Lattice vs Culture Amp — which platform has stronger ROI evidence for mid-market performance management?”
“Culture Amp vs Leapsome for continuous check-ins and pulse surveys — which drives better manager habits?”
“Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware”
“We're replacing our current review tool — how does Lattice compare to other platforms for making that transition smooth?”
Blueprint
  • On-Domain: Restore dedicated comparison landing pages at the existing URL slugs /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — each page must include a structured feature matrix (performance reviews, check-ins, engagement surveys, analytics, compensation, integrations, manager coaching), a use-case differentiation section by company size and HR maturity level, migration guide, and pricing tier comparison with total cost transparency
  • On-Domain: Create /15five-vs-betterworks comparison page — 15Five loses to Betterworks 2-6 head-to-head, and several comparison queries (15f_084, 15f_095) are actively won by Betterworks; a dedicated page is the highest-impact single competitive recovery action
  • On-Domain: Build a 'Performance Management Software Comparison Hub' at /compare or /performance-management-software/alternatives — a category-level page that positions 15Five as the mid-market standard, links to all individual competitor comparison pages, and serves as the canonical answer for 'alternatives to [competitor]' query patterns
  • On-Domain: Add a structured comparison table to the existing /integrations page for HRIS-specific shortlisting queries (15f_054, 15f_055) — show depth of integration (API, SCIM, bidirectional sync, webhook support) per HRIS platform vs. Lattice's integration model, converting the catalog page into a competitive asset
  • On-Domain: Create dedicated HRIS integration landing pages (e.g., /integrations/workday, /integrations/bamboohr, /integrations/adp) — the current catalog page fails AFFINITY OVERRIDE for shortlisting queries because it is a list, not a landing page with migration evidence, technical specs, and customer case studies
  • Off-Domain: Contribute 'How to Evaluate Performance Management Software' buyer's guide to HR publications (HR Bartender, SHRM blog, HR Brew) — positions 15Five as the authoritative voice in evaluation decisions and generates citation-worthy third-party content that AI systems reference for comparison queries
  • Off-Domain: Ensure 15Five is listed in G2 comparison grids alongside Lattice, Culture Amp, and Betterworks — G2 category grids are cited by both ChatGPT and Perplexity for category comparison queries where no vendor page is dominant
  • Off-Domain: Brief analyst relations contacts at Gartner and Forrester on AMAYA and Kona AI differentiators ahead of next evaluation cycle — analyst citations are high-authority sources that AI systems preferentially cite in comparison queries
Platform Acuity

ChatGPT (high): ChatGPT returned competitor-only responses across all 26 comparison queries. ChatGPT preferentially cites structured comparison pages with explicit claim-level differentiation ('Feature X: 15Five supports Y, Lattice requires Z'). The existing blog post format is treated as opinion content, not factual comparison — dedicated landing pages with feature matrices are required.

Perplexity (high): Perplexity citations in this cluster went exclusively to competitor pages and G2 category pages. Perplexity's citation model rewards self-contained comparison tables and bullet-format differentiators that can be extracted as direct answers — exactly the format that dedicated comparison pages with feature matrices provide.

NIO #3: OKR & Goal Cascading Hub
Gap Type: Structural Gap — 9 queries targeting OKR and goal cascading capabilities were routed to L3 because the content inventory assessed coverage as thin across all OKR-focused queries. The gap spans problem identification through artifact creation, covering the VP of Talent's goal alignment pain point (goal_misalignment) and the CFO's concern about departmental goal adoption cost-effectiveness.
High

Goal misalignment is the pain point associated with OKR, and it represents a commercially actionable problem — organizations where 'nobody below VP level can explain their goals' are actively seeking solutions. 15Five has OKR tracking functionality but zero visibility across 9 OKR-related queries, allowing competitors like Betterworks (native OKR tool) and Leapsome to own category framing. Two shortlisting queries (15f_062, 15f_065) returned no_clear_winner, indicating the OKR market lacks a dominant AI-cited vendor — 15Five has a first-mover content opportunity in a segment where competitors have not yet established citation dominance.

Query Cluster
IDs: 15f_012, 15f_021, 15f_037, 15f_062, 15f_065, 15f_096, 15f_120, 15f_136, 15f_147
“Our company sets quarterly OKRs but nobody below the VP level can explain what their goals are — is there a better way to cascade them?”
“We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?”
“OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments”
“Common failure modes when rolling out OKR software — what makes teams stop using it within six months?”
Blueprint
  • On-Domain: Create /products/perform/goals or /okr-software as a dedicated OKR and goal cascading hub page — must directly address 'why does OKR software fail' (15f_120) and 'how does goal cascading work' (15f_012) with specific answers about adoption mechanics, manager accountability, and integration with performance review workflows
  • On-Domain: Write a 'Why OKR Software Fails (And How to Fix It)' longform guide targeting failure modes (15f_120) and spreadsheet-to-OKR transition pain (15f_037) — honest buyer-level analysis that positions 15Five's integrated approach as the solution rather than adding another standalone tool
  • On-Domain: Build an 'OKR vs. Spreadsheets: Real Tradeoffs' comparison page addressing 15f_037 — include implementation timeline, adoption benchmarks, and customer outcomes showing goal alignment improvement at named mid-market companies
  • On-Domain: Create a 'Making the Case for OKR Software' consensus-creation guide targeting 15f_136 — equips VP of Talent or HR Director to present to skeptical leadership; downloadable, evidence-backed, referencing 15Five outcomes with specific retention and alignment metrics
  • Off-Domain: Publish OKR methodology thought leadership in HR-focused publications positioning 15Five's integrated approach (OKR + continuous feedback + manager coaching) as the mid-market alternative to standalone OKR tools — generates third-party citations for shortlisting queries where no dominant source currently exists
  • Off-Domain: Submit to G2's OKR software category and Capterra's goal tracking software category — 15f_065 and 15f_062 shortlisting queries returned no_clear_winner, suggesting 15Five is not currently present in the category listings that AI systems use as citation sources
Platform Acuity

ChatGPT (medium): OKR shortlisting queries returned no_clear_winner on 15f_065 — ChatGPT lacks a dominant source to cite, which is an opportunity to become the cited authority. ChatGPT needs a clearly structured capabilities page with specific claims about cascading depth, adoption mechanics, and manager accountability features.

Perplexity (high): Perplexity's recency-weighted model rewards recently published, structured content. A new OKR hub page added to the sitemap with accurate lastmod timestamps would immediately compete for OKR-related queries where no dominant source currently exists — Perplexity's recency advantage is greatest in underserved topic areas.

NIO #4: Talent Calibration & High-Potential Identification Hub
Gap Type: Structural Gap — 7 queries targeting talent calibration, 9-box assessment, and high-potential identification were routed to L3 because content inventory assessed talent calibration coverage as thin across the site. The gap spans problem identification through artifact creation, with the VP of Talent as the primary persona and top_talent_flight_risk as the central pain point.
High

Talent calibration is the structural link between performance data and retention decisions — the capability that lets HR leadership identify high performers before they resign. 15Five has 9-box and talent review functionality but is absent from all 7 calibration queries. The validation query 15f_119 ('15Five talent management and performance calibration — how does it compare') returned no_clear_winner — a buyer who has already found 15Five cannot find sufficient calibration evidence to form a view. Lattice wins 15f_085, confirming that calibration is an active competitive battleground. Creating a calibration hub is a direct product-capability-to-content conversion with no product investment required.

Query Cluster
IDs: 15f_011, 15f_018, 15f_036, 15f_085, 15f_119, 15f_131, 15f_148
“How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?”
“How does talent calibration work in practice — is it worth the administrative effort for a 300-person company?”
“15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?”
“Technical requirements for a talent calibration tool — flexible rating scales, bias detection, manager override audit trails, integration with existing review workflows”
Blueprint
  • On-Domain: Create /products/perform/calibration dedicated page covering 15Five's talent calibration and 9-box functionality — address the three buyer questions from this cluster explicitly: How does calibration work in practice (15f_018), what are the technical requirements including rating scale flexibility, bias detection flags, and manager override audit trails (15f_036), and how does it connect to flight-risk identification (15f_011)
  • On-Domain: Write a 'Talent Calibration ROI' content piece targeting 15f_131 consensus-creation query — frame the replacement cost of losing a high-performer (include benchmark turnover cost data) and show how 15Five's calibration tooling reduces flight risk with specific customer outcome evidence
  • On-Domain: Create a '9-Box Assessment Buyer's Guide' for the HR Technology Director persona — structured evaluation criteria that maps each technical requirement (15f_036, 15f_148) directly to 15Five's calibration features, enabling evaluators to score 15Five against Lattice and other platforms on these specific dimensions
  • Off-Domain: Publish contributed content on talent calibration methodology in SHRM or People Management publications — positions 15Five as the practitioner authority on calibration best practices and generates citable third-party content for the validation queries in this cluster
  • Off-Domain: Ensure 15Five appears in G2's talent management and succession planning software categories with calibration as a tagged feature — thin-coverage routing on 15f_085 and 15f_119 suggests 15Five may not be prominently listed in calibration-adjacent category grids that AI systems use for comparison queries
Platform Acuity

ChatGPT (high): 15f_119 ('15Five talent management and performance calibration — how does it compare') returned no_clear_winner — ChatGPT could not find sufficient 15Five-specific calibration content to form a recommendation. A structured product page with specific feature claims (e.g., rating scale types, bias flag categories, audit trail depth) gives ChatGPT the factual content it needs to cite 15Five on direct competitor queries.

Perplexity (medium): Calibration validation and requirements queries both returned no_clear_winner on Perplexity. Perplexity favors pages structured as Q&A passages — a calibration page formatted as 'What is talent calibration? How does it reduce flight risk? What should I look for in a calibration tool?' maps directly to Perplexity's answer-extraction format.

NIO #5: CFO Financial Modeling & Total Cost of Ownership Content
Gap Type: Content Type Deficit — One query (15f_142) was routed to L3 with coverage_status='missing' — no page exists anywhere on 15Five's site that addresses TCO modeling, 3-year cost projections, or financial modeling frameworks for HR software investment. This is the only query in the audit with complete coverage absence, indicating a missing content type for CFO-facing financial decision support.
Medium

The CFO's primary concern is not features but financial justification — total cost over a multi-year investment cycle including licensing, implementation, training, and change management. While only one L3 query routes directly to this NIO, the CFO persona also drives multiple L2 ROI queries (15f_009, 15f_039, 15f_134) that share the same underlying need. A TCO template or financial model published on-domain would serve CFO consensus-creation queries across the full buying cycle and provide a defensible anchor for board-level ROI conversations — an asset type that no competitor currently has, creating first-mover citation authority.

Query Cluster
IDs: 15f_142
“Build a TCO model for implementing performance management software at a 300-person company over 3 years — licensing, implementation, training, and change management”
Blueprint
  • On-Domain: Create an interactive or downloadable '3-Year TCO Calculator for Performance Management Software' at /resources/tco-calculator or embedded within /pricing — cover all cost categories explicitly: per-seat licensing tiers, implementation and onboarding fees, training hours by role, change management investment, and ongoing support costs compared to HR admin time saved
  • On-Domain: Publish a 'Business Case for Performance Management Software' page at /resources/roi that bundles TCO framing with turnover cost benchmarks and 15Five customer outcome data — links the financial model to measurable outcomes (Pendo 21% turnover reduction, Auror 94% retention) to give the CFO a defensible ROI narrative with cited evidence
  • Off-Domain: Pitch the TCO model as a contributed resource to HR Finance or CFO.com publications — CFO-facing HR ROI content earns high-authority third-party citations and drives direct traffic from the exact persona making budget decisions, generating external AI citations for hr_roi_proof queries
  • Off-Domain: Submit to HR analyst blogs (Josh Bersin, RedThread Research) for inclusion in HR technology ROI frameworks — analyst endorsement of a published TCO methodology significantly increases AI citation frequency for consensus-creation and artifact-creation queries targeting the CFO persona
Platform Acuity

ChatGPT (high): 15f_142 returned no_vendor_mentioned — ChatGPT produced a generic TCO model without citing any vendor. A published 15Five TCO template would likely be cited as a primary source for this query type, as ChatGPT tends to attribute vendor-published financial frameworks when artifact-creation queries seek structured models.

Perplexity (medium): Perplexity's recency weighting and source-citation model rewards recently published, structured financial content with clear table formats. A TCO page structured as a table (Cost Category | Year 1 | Year 2 | Year 3 | Notes) is directly extractable as a Perplexity answer for TCO modeling queries.
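The cost categories the blueprint names — per-seat licensing, implementation, training, change management — can be sketched as a minimal 3-year TCO model. All figures below are illustrative placeholders chosen for the 300-person scenario in query 15f_142, not 15Five pricing.

```python
# Minimal 3-year TCO sketch for performance management software.
# All figures are illustrative placeholders, not vendor pricing.

def three_year_tco(seats: int,
                   per_seat_monthly: float,
                   implementation_fee: float,
                   training_cost_per_year: float,
                   change_mgmt_year1: float) -> dict:
    """Return per-year and total cost over a 3-year horizon."""
    years = {}
    for year in (1, 2, 3):
        licensing = seats * per_seat_monthly * 12
        cost = licensing + training_cost_per_year
        if year == 1:  # one-time costs land in year 1
            cost += implementation_fee + change_mgmt_year1
        years[f"year_{year}"] = round(cost, 2)
    years["total"] = round(sum(years.values()), 2)
    return years

# 300 seats at a hypothetical $8/seat/month, with one-time year-1 costs.
model = three_year_tco(seats=300, per_seat_monthly=8.0,
                       implementation_fee=15_000,
                       training_cost_per_year=5_000,
                       change_mgmt_year1=10_000)
```

Publishing the model as a table (Cost Category | Year 1 | Year 2 | Year 3) with the assumptions stated inline is what makes it extractable as a cited answer.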

Unified Priority Ranking

All recommendations across all three layers, ranked by commercial impact × implementation speed.

  • 1

    No Date Signals on Any Product or Solution Page

    All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).

    Technical Fix · Engineering · 17 of 30 pages analyzed have no freshness signal — all product, solution, integration, and pricing pages
  • 2

    XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent

    The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.

    Technical Fix · Engineering · All product, feature, solution, pricing, and integration pages — approximately 15+ high-value commercial URLs absent from sitemap
  • 3

    Dedicated Competitor Comparison & Competitive Intelligence Pages

26 comparison and shortlisting queries were routed to L3 because the AFFINITY OVERRIDE rule found that 15Five's existing content uses the wrong page types (blog posts, product pages, and integration catalogs) for comparison and shortlisting buying jobs that require dedicated comparison pages and case-study landing pages. An existing L1 finding (comparison_urls_redirect_to_generic_page) confirms three indexed comparison URLs redirect to a generic brand page with zero competitor-specific content.

    New Content · Content · 26 queries affecting personas: chro, vp_people_ops, hr_technology_director, cfo, vp_talent
  • 4

    People Analytics & HR Intelligence Hub

    15Five has no substantive people analytics content anywhere on the site — coverage was assessed as thin for all 15 queries in this cluster. The gap spans every buying stage from problem identification through artifact creation, meaning buyers evaluating analytics and flight-risk capabilities never encounter 15Five during the research process.

    New Content · Content · 15 queries affecting personas: chro, hr_technology_director, cfo, vp_people_ops, vp_talent
  • 5

    Create CFO ROI Content Hub to Replace /pricing Fallback for Business-Case and Cost-of-Inaction Queries

    The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it.

    Content Optimization → New Content · Content · 6 queries, personas: cfo, chro, vp_people_ops
  • 6

    OKR & Goal Cascading Hub

    9 queries targeting OKR and goal cascading capabilities were routed to L3 because the content inventory assessed coverage as thin across all OKR-focused queries. The gap spans problem identification through artifact creation, covering the VP of Talent's goal alignment pain point (goal_misalignment) and the CFO's concern about departmental goal adoption cost-effectiveness.

    New Content · Content · 9 queries affecting personas: vp_talent, hr_technology_director, cfo, chro, vp_people_ops
  • 7

    Talent Calibration & High-Potential Identification Hub

    7 queries targeting talent calibration, 9-box assessment, and high-potential identification were routed to L3 because content inventory assessed talent calibration coverage as thin across the site. The gap spans problem identification through artifact creation, with the VP of Talent as the primary persona and top_talent_flight_risk as the central pain point.

    New Content · Content · 7 queries affecting personas: vp_talent, hr_technology_director, vp_people_ops, chro
  • 8

    Add AI Coaching Evidence and Methodology to /products/kona for Manager Effectiveness Queries

    The /products/kona page describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) can't cite this page because it makes no verifiable outcome claims with data.

    Content Optimization · Content · 11 queries, personas: chro, vp_people_ops, vp_talent, hr_technology_director
  • 9

    Add Migration Evidence and Outcome Proof to /products/perform for Continuous PM Switching Queries

    The /products/perform page has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide.

    Content Optimization · Content · 18 queries, personas: chro, vp_people_ops, hr_technology_director, vp_talent
  • 10

    Add Pay Equity Compliance and Evaluation Criteria to /products/perform/compensation/

    The /products/perform/compensation/ page does not include pay equity compliance specifics — query 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented.

    Content Optimization · Content · 8 queries, personas: cfo, chro, vp_people_ops, vp_talent
  • 11

    Deepen /products/engage for Early-Funnel Action-Planning and Turnover-Prediction Queries

    The /products/engage page presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page.

    Content Optimization · Content · 13 queries, personas: chro, vp_people_ops, hr_technology_director, vp_talent
  • 12

    Increase Social Proof Density and Retention Mechanism Clarity on /solutions/reduce-regrettable-turnover

    The /solutions/reduce-regrettable-turnover page makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains WHICH features drive WHICH retention outcomes, not just a claim that 15Five reduces turnover.

    Content Optimization · Content · 3 queries, personas: chro, vp_people_ops, vp_talent
  • 13

    Case Study Page Returns Minimal Body Content — Verify Gating or CSR

    The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.

    Technical Fix · Content · /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 — other case studies available as blog posts appear accessible
  • 14

    Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content

    Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.

    Technical Fix · Content · /15five-vs-lattice, /15five-vs-cultureamp/, /15five-vs-leapsome/ — all redirect to /why-15five with no competitor-specific content
  • 15

    CFO Financial Modeling & Total Cost of Ownership Content

    One query (15f_142) was routed to L3 with coverage_status='missing' — no page exists anywhere on 15Five's site that addresses TCO modeling, 3-year cost projections, or financial modeling frameworks for HR software investment. This is the only query in the audit with complete coverage absence, indicating a missing content type for CFO-facing financial decision support.

    New Content · Content · 1 query affecting persona: cfo
  • 16

    Rebuild /integrations from Directory to Integration Evidence Hub with Technical Architecture and Success Stories

    The /integrations directory at https://www.15five.com/integrations lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented.

    Content Optimization → New Content · Content · 6 queries, personas: hr_technology_director, vp_people_ops, cfo
  • 17

    Restructure /blog/check-ins-and-1-on-1s/ for AI Extractability on Recognition and Continuous Feedback Queries

    The /blog/check-ins-and-1-on-1s/ page explains how to run check-ins but lacks outcome evidence — query 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide.

    Content Optimization · Content · 6 queries, personas: chro, vp_people_ops, hr_technology_director, vp_talent
  • 18

    Meta Descriptions and OG Tags: Manual Verification Required

    Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.

    Technical Fix · Marketing · All 30 pages analyzed — priority: /why-15five, product pages, pricing page
  • 19

    Schema Markup: Manual Verification Required

    This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.

    Technical Fix · Engineering · All 30 pages analyzed — schema markup cannot be assessed via rendered markdown
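Findings #1 and #2 (blog-only sitemap, absent commercial URLs) are mechanically verifiable before and after the fix. A minimal sketch using only Python's standard library and the standard sitemap namespace — the two URLs below are illustrative stand-ins, not a fetch of the live sitemap:

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Illustrative sitemap snippet; a real audit would fetch
# https://www.15five.com/sitemap.xml instead of using this string.
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.15five.com/blog/example-post/</loc>
       <lastmod>2025-11-25</lastmod></url>
  <url><loc>https://www.15five.com/blog/another-post/</loc>
       <lastmod>2025-11-26</lastmod></url>
</urlset>"""

# Path prefixes the report identifies as missing from the sitemap.
COMMERCIAL_PREFIXES = ("/products", "/pricing", "/integrations",
                       "/solutions", "/why-15five")

def audit_sitemap(xml_text: str) -> dict:
    """Count sitemap URLs and how many are commercial pages."""
    root = ET.fromstring(xml_text)
    locs = [u.findtext("sm:loc", namespaces=NS)
            for u in root.findall("sm:url", NS)]
    commercial = [loc for loc in locs
                  if any(p in loc for p in COMMERCIAL_PREFIXES)]
    return {"total_urls": len(locs), "commercial_urls": len(commercial)}
```

After the engineering fix lands, `commercial_urls` should climb from 0 to the roughly 15+ high-value URLs listed in finding #2, and each entry should carry a per-page lastmod rather than a bulk-refresh date.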

Workstream Mapping

All three workstreams can start this week.

Engineering / DevOps

Layer 1 — Technical Fixes
Timeline: Days to 2 weeks
  • XML Sitemap Contains Only 19 Blog URLs — All Commercial…
  • Competitor Comparison URLs Redirect to Generic Brand Page…
  • No Date Signals on Any Product or Solution Page
  • Case Study Page Returns Minimal Body Content — Verify…

Content Team

Layer 2 — Content Optimization
Timeline: 2–6 weeks
  • Deepen /products/engage for Early-Funnel Action-Planning…
  • Add Migration Evidence and Outcome Proof to…
  • Add AI Coaching Evidence and Methodology to /products/kona…
  • Add Pay Equity Compliance and Evaluation Criteria to…

Content Strategy

Layer 3 — NIOs + Off-Domain
Timeline: 1–3 months
  • Create a dedicated /products/analytics or /products/amaya…
  • Restore dedicated comparison landing pages at the existing…
  • Create /products/perform/goals or /okr-software as a…
  • Create /products/perform/calibration dedicated page…
  • Create an interactive or downloadable '3-Year TCO…

[Synthesis] The 138-item plan is sequenced by dependency, not commercial impact alone: L1 technical fixes — particularly the sitemap fix that restores commercial pages to crawler scope and the redirect fix that makes comparison pages functional — must execute first because they determine whether L2 and L3 improvements will be indexed by AI systems at all. L2 optimizations on 74 existing pages then close positioning gaps where 15Five is visible but loses; L3 new content builds the 58 pages 15Five entirely lacks for discovery-stage queries, targeting the early-funnel stages that drive the 69.0% invisibility rate.

Gap coverage note: 129 of 132 gap queries (98%) are assigned to an L2 or L3 action item. 3 gap queries remain unrouted — these may represent edge-case queries that don’t cluster neatly or fall below the LLM’s grouping threshold.

Methodology
Audit Methodology

Query Construction

150 queries constructed from persona × buying job × feature focus × pain point matrix
Every query carries four metadata fields assigned at creation time
High-intent jobs (Shortlisting + Comparison + Validation): 55% of queries (83 of 150)
Note: the 150 queries span the full buying journey.

Personas

Chief People Officer · Decision Maker
VP of People Operations · Evaluator
Director of HR Technology & People Analytics · Evaluator
Chief Financial Officer · Decision Maker
VP of Talent Management · Evaluator

Buying Jobs Framework

8 non-linear buying jobs: Artifact Creation, Comparison, Consensus Creation, Problem Identification, Requirements Building, Shortlisting, Solution Exploration, Validation

Competitive Set

Primary: Lattice, Culture Amp, Betterworks, Leapsome, Workleap
Secondary: Quantum Workplace, Engagedly, PerformYard, Reflektive
Surprise: BambooHR — flagged for review

Platforms & Scoring

Platforms: ChatGPT + Perplexity
Visibility: Binary — does the client appear in the response?
Win rate: Of visible queries, is the client the primary recommendation?

Cross-Platform Counting (Union Method)

When a query is run on multiple platforms, union logic is applied: a query counts as “visible” if the client appears on any platform, not each platform separately.
Winner resolution: When platforms disagree on the winner, majority vote is used. Vendor names are preferred over meta-values (e.g. “no clear winner”). True ties resolve to “no clear winner.”
Share of Voice: Each entity is counted once per query across platforms (union dedup), preventing double-counting when both platforms mention the same company.
This approach ensures headline metrics reflect real buyer-query outcomes rather than inflated per-platform counts.
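The union rule described above amounts to a per-query set union across platforms. A minimal sketch — the query IDs, vendor names, and response sets here are illustrative, not audit data:

```python
from collections import Counter

# Per-platform vendor mentions keyed by query ID (illustrative).
responses = {
    "chatgpt":    {"q1": {"Lattice", "15Five"}, "q2": {"Lattice"}},
    "perplexity": {"q1": {"15Five"},            "q2": {"Culture Amp"}},
}

def union_mentions(responses: dict) -> Counter:
    """One mention per vendor per query, deduped across platforms."""
    per_query: dict[str, set] = {}
    for platform_hits in responses.values():
        for qid, vendors in platform_hits.items():
            per_query.setdefault(qid, set()).update(vendors)
    mentions = Counter()
    for vendors in per_query.values():
        mentions.update(vendors)  # each vendor counted once per query
    return mentions

def is_visible(client: str, responses: dict, qid: str) -> bool:
    """Union rule: visible if the client appears on any platform."""
    return any(client in hits.get(qid, set())
               for hits in responses.values())
```

Note that 15Five appearing on both platforms for q1 still yields a single mention — this is the dedup that keeps Share of Voice from inflating per-platform counts.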

Terminology

Mentions: Query-level visibility count. A company receives one mention per query where it appears in any platform response (union-deduped). This is the numerator for Share of Voice.
Unique Pages Cited: Count of distinct client page URLs cited across all platform responses, after URL normalization (stripping tracking parameters). The footer total in the Citation section uses this measure.
Citation Instances (Top Cited Domains): Raw count of citation occurrences per domain across all responses. A single domain can accumulate multiple citation instances from different queries and platforms. The Top Cited Domains table uses this measure.
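The URL normalization behind Unique Pages Cited can be sketched with the standard library. The exact tracking-parameter list and trailing-slash handling used in the audit are not specified, so both are assumptions here:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumed tracking parameters; the audit's actual strip list
# is not documented, so this set is a guess.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "gclid", "fbclid"}

def normalize_url(url: str) -> str:
    """Strip tracking parameters (and a trailing slash) for dedup."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc,
                       parts.path.rstrip("/"), urlencode(kept), ""))
```

Two citations of the same page that differ only in campaign tags then collapse to one entry in the Unique Pages Cited count.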