AI Visibility Audit

Pursue Networking
Visibility Report

Competitive intelligence for AI-mediated buying decisions. Where Pursue Networking wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.

150 Buyer Queries
5 Personas
8 Buying Jobs
ChatGPT + Perplexity
March 10, 2026

TL;DR

10%
Visibility
15 of 150 queries
6%
Win Rate
9 wins of 150 queries
135
Invisible
queries where Pursue Networking absent
25
Recommendations
targeting 148 gap queries (+ 6 near-rebuild optimizations)
Three things to know
Pursue Networking wins 2 in 3 visible high-intent queries — but appears in only 14.6% of them.
When AI platforms cite Pursue Networking on high-intent queries (Comparison, Shortlisting, Validation), it wins 66.7% of the time (8/12 visible high-intent queries). But it appears in only 14.6% (12/82) of those queries to begin with. The 52-point spread between conditional win rate and visibility is not a product gap: ANDI is competitive when found, and invisible the other 85.4% of the time. The entire audit is a plan to close that visibility gap.
66.7% win rate · 14.6% visibility · high-intent queries
Four commercial pages return 404 errors to every AI crawler — making Pursue Networking's product features uncrawlable by design.
The /features, /pricing, /faq, and /pages/about pages are built as client-side-only Next.js routes and return 404 errors or empty JavaScript shells when fetched server-side by GPTBot, PerplexityBot, and ClaudeBot. The homepage itself delivers only a tagline and navigation links to crawlers. This means the pages most likely to be cited in vendor evaluation queries — product features, pricing structure, FAQ — have zero indexed content in any AI platform's training or retrieval cache. The fix is 1-2 weeks of Next.js SSR/SSG implementation and is the prerequisite for every other recommendation.
Critical · 4 commercial pages invisible · Engineering fix
Outreach automation — Pursue Networking's core use case — has 0% AI visibility across 27 buyer queries.
The LinkedIn Outreach Automation & Sequences feature has 0% visibility (0/27 queries) across the entire audit. The 27 L3 queries span every buying stage — from 'how do startups fix manual prospecting?' to 'build a TCO model for a 10-person SDR team' — and are won by Dripify, HeyReach, and Expandi purely because they publish content about automation. ANDI presumably offers this capability; no content exists to establish it. This is the largest single content void in the audit: 27 queries, 0 citations, and a product that could credibly answer every one of them.
Content void · 27 queries · outreach automation feature
Section 1
A Capable Product That Buyers Can't Find

Pursue Networking's visibility gap is not a product problem — it is a three-layer infrastructure deficit that prevents AI platforms from indexing, retrieving, and citing the brand even when buyers are actively searching for exactly what ANDI offers.

Early Funnel — Where Pursue Networking is invisible
Problem Identification
0%
Solution Exploration
0%
Requirements Building
12.5%
Late Funnel — Where Pursue Networking competes
Comparison
28.1%
Artifact Creation
8.3%
Validation
8.3%
Shortlisting
3.9%
Consensus Creation
0%

[Mechanism] Three compounding gaps create the pattern. First, client-side rendering on the Next.js site means four navigation-critical pages (/features, /pricing, /faq, /pages/about) return 404 errors or empty JavaScript shells to AI crawlers — the commercial pages that would be cited in vendor evaluation queries cannot be indexed at all. Second, 5 of 12 audited features (LinkedIn Outreach Automation & Sequences, Unified Data Layer — LinkedIn, Gmail & HubSpot Integration, GEO Visibility & AI Brand Presence, Multi-Channel Campaign Sequencing, Email Finding & Verification) have zero content coverage on the site, so buyers researching these capabilities encounter only competitors.

Third, 64% of blog posts are 180+ days old with zero content updated in the last 90 days — research shows AI platforms strongly prefer recently-updated content for citation, and Pursue Networking's content freshness is well below competitor baselines. These three gaps compound: fixing only crawlability exposes thin content; publishing new content on the broken architecture leaves it uncrawlable; and content that is both crawlable and deep still loses citations if it looks stale.

Layer 1
Fix the Foundation
6 L1 technical fixes resolve the CSR rendering failure, homepage crawlability deficit, inaccurate sitemap timestamps, schema markup gaps, and missing meta tag coverage — unblocking AI crawler access to all commercial pages before any content work begins.
6 fixes + 1 check · Days to 2 weeks
Layer 2
Restructure What Exists
30 L2 content optimizations deepen and reframe existing pages to be AI-extractable, add benchmark data and structured comparisons to blog posts, and reposition educational content to serve buyers at Shortlisting and Validation stages.
8 recommendations · 2–6 weeks
Layer 3
Build the Missing Content
111 L3 recommendations (organized into 10 NIOs) create the feature pages, comparison hubs, integration guides, ROI frameworks, and topic clusters that Pursue Networking needs to appear in the 141 buyer queries where it is currently invisible or not recommended.
10 recommendations · 1–3 months

[Synthesis] The L1 SSR fix for client-side rendering is a direct prerequisite for L2 and L3 work: publishing new /features/outreach-automation or /compare/ pages on the current Next.js architecture would route them through the same CSR-only rendering that causes /features to 404 today. Fixing the sitemap timestamp issue (L1) then ensures new L2/L3 content is prioritized in AI crawler scheduling rather than treated as equally-fresh with 2025-era posts.
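The rendering failure can be illustrated mechanically. The sketch below is a hypothetical check, not the audit's tooling: the HTML samples and marker strings are invented for illustration, showing what a fetcher that executes no JavaScript does and does not see.

```typescript
// Illustrative check: does the raw HTML a crawler receives contain the
// content markers buyers search for? No JavaScript is executed, which is
// the constraint GPTBot- and PerplexityBot-style fetchers operate under.
function isCrawlable(html: string, markers: string[]): boolean {
  const body = html.toLowerCase();
  return markers.every((marker) => body.includes(marker.toLowerCase()));
}

// A client-side-only Next.js route: framework scripts, no rendered content.
const csrShell =
  '<div id="__next"></div><script src="/_next/static/chunks/main.js"></script>';

// The same route with SSR/SSG: content is present in the initial response.
const ssrPage =
  '<div id="__next"><h1>Outreach Automation</h1><p>Pricing and features rendered as HTML.</p></div>';

console.log(isCrawlable(csrShell, ['outreach automation'])); // false
console.log(isCrawlable(ssrPage, ['outreach automation', 'pricing'])); // true
```

The same check, run against the real routes after the L1 fix, doubles as a regression test for the SSR migration.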

Reference
How to Read This Report

Visibility

Whether Pursue Networking is mentioned at all in an AI response to a buyer query. Being visible does not mean being recommended — it just means Pursue Networking appeared somewhere in the answer.

Win Rate

Of the queries where Pursue Networking is visible, the percentage where it is the primary recommendation — the vendor the AI tells the buyer to evaluate first.

Share of Voice (SOV)

How often a vendor is mentioned by AI across all 150 buyer queries. Measures brand presence in AI-generated answers, not ad spend or traditional media.

Buying Jobs

The 8 non-linear tasks buyers perform during a purchase: Problem Identification, Solution Exploration, Requirements Building, Shortlisting, Comparison, Validation, Consensus Creation, and Artifact Creation.

NIO

Narrative Intelligence Opportunity — a cluster of related buyer queries where Pursue Networking has no content. Each NIO includes a blueprint of on-domain pages and off-domain actions to close the gap.

L1 / L2 / L3

The three execution layers. L1 = technical infrastructure fixes. L2 = optimization of existing pages. L3 = new content creation and off-domain authority building.

Citation

When an AI tool references a specific webpage as its source. AI systems build recommendations from cited pages — if your pages aren't cited, your content didn't influence the answer.

Invisible Query

A buyer query where Pursue Networking does not appear in the AI response at all. Distinct from a positioning gap, where Pursue Networking appears but is not the recommended vendor.
Section 2
Visibility Analysis

Where Pursue Networking appears and where it doesn't — across personas, buying jobs, and platforms.

[TL;DR] Pursue Networking is visible in 10% of buyer queries but wins only 6%.

Pursue Networking is present in only 10% of all buyer queries (15/150) and absent from 95.5% of early-funnel research (42/44 queries) — the stages where shortlists are formed. Fixing this is the entire point of the 148 recommendations.

Platform Visibility

Perplexity +6 percentage points
Founder / CEO / Entrepreneur — widest persona swing
Perplexity +12 percentage points
Requirements Building — widest stage swing
Dimension · Combined · Platform Delta
All Queries: 10% · Even
By Persona
Chief Revenue Officer / Executive Leader: 7.1% · Perplexity +4 percentage points
Founder / CEO / Entrepreneur: 21.2% · Perplexity +6 percentage points
Head of Marketing / Demand Generation: 10.7% · Even
Director of Revenue Operations / Operations Leader: 0% · Even
VP of Sales / Head of Sales Development: 9.1% · Even
By Buying Job
Artifact Creation: 8.3% · Even
Comparison: 28.1% · Even
Consensus Creation: 0% · Even
Problem Identification: 0% · Even
Requirements Building: 12.5% · Perplexity +12 percentage points
Shortlisting: 3.9% · Perplexity +4 percentage points
Solution Exploration: 0% · Even
Validation: 8.3% · Even
Per-platform breakdown (ChatGPT vs Perplexity raw %)
Dimension · ChatGPT · Perplexity
All Queries: 8% · 10%
By Persona
Chief Revenue Officer / Executive Leader: 3.6% · 7.1%
Founder / CEO / Entrepreneur: 15.2% · 21.2%
Head of Marketing / Demand Generation: 10.7% · 10.7%
Director of Revenue Operations / Operations Leader: 0% · 0%
VP of Sales / Head of Sales Development: 9.1% · 9.1%
By Buying Job
Artifact Creation: 8.3% · 8.3%
Comparison: 28.1% · 28.1%
Consensus Creation: 0% · 0%
Problem Identification: 0% · 0%
Requirements Building: 0% · 12.5%
Shortlisting: 0% · 3.9%
Solution Exploration: 0% · 0%
Validation: 8.3% · 8.3%

Visibility by Buying Job

Artifact Creation: 8.3% (1/12)
Comparison: 28.1% (9/32)
Consensus Creation: 0% (0/12)
Problem Identification: 0% (0/13)
Requirements Building: 12.5% (2/16)
Shortlisting: 3.9% (1/26)
Solution Exploration: 0% (0/15)
Validation: 8.3% (2/24)
High-intent visibility (Shortlisting + Comparison + Validation): 14.6% (12/82)
High-intent win rate: 66.7% (8/12)
Appearance → win conversion: 66.7% (8/12)

Visibility & Win Rate by Persona

Chief Revenue Officer / Executive Leader: 7.1% vis · 50% win (1/2)
Founder / CEO / Entrepreneur: 21.2% vis · 57.1% win (4/7)
Head of Marketing / Demand Generation: 10.7% vis · 66.7% win (2/3)
Director of Revenue Operations / Operations Leader: 0% vis · no wins (0 visible)
VP of Sales / Head of Sales Development: 9.1% vis · 66.7% win (2/3)
Decision-maker win rate (Chief Revenue Officer / Executive Leader + Founder / CEO / Entrepreneur + VP of Sales / Head of Sales Development): 58.3% (7/12 visible)
Evaluator win rate (Head of Marketing / Demand Generation + Director of Revenue Operations / Operations Leader): 66.7% (2/3 visible)
Role type gap: 8 percentage points

Visibility by Feature Focus

Account Safety: 7.7% vis (1/13) · 0% win (0/1)
AI Message Writing: 15% vis (3/20) · 66.7% win (2/3)
Analytics Reporting: 13.3% vis (2/15) · 50% win (1/2)
Data Enrichment: 0% vis (0/8) · 0% win (0)
Email Finding: 0% vis (0/5) · 0% win (0)
GEO Visibility: 0% vis (0/6) · 0% win (0)
HubSpot Integration: 0% vis (0/15) · 0% win (0)
Multichannel Sequencing: 0% vis (0/5) · 0% win (0)
Outreach Automation: 0% vis (0/27) · 0% win (0)
Personal Brand Building: 20% vis (2/10) · 50% win (1/2)
Personalization At Scale: 21.4% vis (3/14) · 33.3% win (1/3)
Relationship Memory: 57.1% vis (4/7) · 100% win (4/4)

Visibility by Pain Point

Authenticity At Scale Tradeoff: 23.1% vis (3/13) · 33.3% win (1/3)
CRM LinkedIn Disconnect: 0% vis (0/4) · 0% win (0)
Generic Outreach Ignored: 16.7% vis (1/6) · 0% win (0/1)
LinkedIn Account Risk: 9.1% vis (1/11) · 0% win (0/1)
Manual Prospecting Bottleneck: 0% vis (0/6) · 0% win (0)
No Networking ROI Visibility: 22.2% vis (2/9) · 50% win (1/2)
Relationship Context Loss: 25% vis (1/4) · 100% win (1/1)
Tool Sprawl Integration Pain: 0% vis (0/9) · 0% win (0)

[Data] Overall visibility: 10% (15/150 queries). High-intent buying stage visibility: 14.6% (12/82 queries). Early-funnel invisibility: 95.5% (42/44 queries across Problem Identification, Solution Exploration, Requirements Building).

Best buying stage: Comparison at 28.1% (9/32 queries). Zero visibility in Solution Exploration (0/15) and Problem Identification (0/13).

[Synthesis] The visibility pattern reveals a funnel that only opens at the bottom. Pursue Networking appears primarily in Comparison and Validation queries — the stages where buyers have already formed a shortlist — while being completely absent from the discovery stages where shortlists are built. Buyers who never encounter Pursue Networking during problem identification and solution exploration will not be researching it at Comparison.

The 95.5% early-funnel invisibility rate is the root cause of the low overall visibility rate, not poor performance on individual queries.

Invisibility Gaps — 135 Queries Where Pursue Networking Doesn’t Appear

60 queries won by named competitors · 31 no clear winner · 44 no vendor mentioned

Sorted by competitive damage — competitor-winning queries first.

ID · Query · Persona · Stage · Winner
⚑ Competitor Wins — 60 queries where a named competitor captures the buyer
pur_012 · "What's the actual ROI of LinkedIn networking tools for early-stage B2B companies?" · Chief Revenue Officer / Executive Leader · Problem Identification · LinkedIn Sales Navigator
pur_045 · "Best LinkedIn automation tools for startup sales teams in 2026" · VP of Sales / Head of Sales Development · Shortlisting · Dripify
pur_046 · "Top AI-powered LinkedIn networking platforms for B2B revenue teams" · Chief Revenue Officer / Executive Leader · Shortlisting · LinkedIn Sales Navigator
pur_047 · "LinkedIn prospecting tools with native HubSpot integration — which ones actually sync data properly?" · Director of Revenue Operations / Operations Leader · Shortlisting · LinkedIn Sales Navigator
pur_050 · "LinkedIn outreach tools that won't get my team's accounts restricted — which ones are safest?" · VP of Sales / Head of Sales Development · Shortlisting · Expandi
pur_051 · "Which LinkedIn automation platforms have the best analytics for measuring actual pipeline impact?" · Chief Revenue Officer / Executive Leader · Shortlisting · Closely
pur_052 · "LinkedIn tools with built-in contact data enrichment and email verification for B2B prospecting" · Director of Revenue Operations / Operations Leader · Shortlisting · Apollo.io
pur_054 · "Best platforms for scaling personalized LinkedIn outreach for demand gen teams at startups" · Head of Marketing / Demand Generation · Shortlisting · Expandi
pur_055 · "LinkedIn prospecting tools for SDR teams of 5-10 reps — which platforms handle multi-seat well?" · VP of Sales / Head of Sales Development · Shortlisting · LinkedIn Sales Navigator
pur_056 · "Best alternatives to Sales Navigator for LinkedIn prospecting with AI messaging capabilities" · Chief Revenue Officer / Executive Leader · Shortlisting · Apollo.io

Remaining competitor wins: Dripify ×11, Expandi ×10, HeyReach ×10, Salesflow ×7, CoPilot AI ×4, Apollo.io ×3, LinkedIn Sales Navigator ×3, We-Connect ×1, Closely ×1. 31 queries with no clear winner. 44 queries with no vendor mentioned. Full query-level data available in the analysis export.

Positioning Gaps — 6 Queries Where Pursue Networking Appears But Loses

Queries where Pursue Networking is mentioned but a competitor is positioned more favorably.

ID · Query · Persona · Buying Job · Winner · Pursue Networking Position
pur_036 · "Requirements for LinkedIn AI messaging tools that actually sound like the person sending them, not a template" · Founder / CEO / Entrepreneur · Requirements Building · No Vendor Mentioned · Brief Mention
pur_039 · "Requirements for LinkedIn tools that can prove networking ROI to the board with actual pipeline data" · Chief Revenue Officer / Executive Leader · Requirements Building · LinkedIn Sales Navigator · Brief Mention
pur_053 · "AI LinkedIn messaging tools that sound like you wrote them yourself, not a bot" · Founder / CEO / Entrepreneur · Shortlisting · Expandi · Brief Mention
pur_095 · "ANDI vs CoPilot AI for building personal brands on LinkedIn — which tool is more effective?" · Head of Marketing / Demand Generation · Comparison · CoPilot AI · Strong 2nd
pur_100 · "Switching from Dripify to something with better personalization — CoPilot AI or ANDI?" · VP of Sales / Head of Sales Development · Comparison · CoPilot AI · Mentioned In List
pur_123 · "Is ANDI safe to use with LinkedIn — any account restriction risks?" · Founder / CEO / Entrepreneur · Validation · No Vendor Mentioned · Primary Recommendation
Section 3
Competitive Position

Who’s winning when Pursue Networking isn’t — and who controls the narrative at each buying stage.

[TL;DR] Pursue Networking wins 6% of queries (9/150), ranks #9 in SOV — H2H record: 8W–5L across 8 competitors.

Pursue Networking ranks #9 in Share of Voice (15 mentions, 4.6% share) but wins 7 of the 9 Comparison queries it appears in (77.8%) — a product that wins when found, in a market where it rarely gets found.

Share of Voice

Company · Mentions · Share
Expandi: 58 · 17.7%
Dripify: 52 · 15.8%
HeyReach: 48 · 14.6%
Apollo.io: 39 · 11.9%
LinkedIn Sales Navigator: 35 · 10.7%
Salesflow: 29 · 8.8%
Closely: 23 · 7%
CoPilot AI: 22 · 6.7%
Pursue Networking: 15 · 4.6%
We-Connect: 7 · 2.1%

Head-to-Head Records

When Pursue Networking and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.

Win = Pursue Networking was the primary recommendation (cross-platform majority). Loss = the competitor was. Tie = neither was, or a third party won.

vs. CoPilot AI: 1W – 2L (3 mentioned together)
vs. Dripify: 1W – 0L – 3T (4 mentioned together)
vs. Expandi: 1W – 2L (3 mentioned together)
vs. HeyReach: 2W – 0L – 1T (3 mentioned together)
vs. Salesflow: 1W – 0L (1 mentioned together)
vs. LinkedIn Sales Navigator: 0W – 1L (1 mentioned together)
vs. Closely: 1W – 0L – 1T (2 mentioned together)
vs. Apollo.io: 1W – 0L – 2T (3 mentioned together)

Invisible Query Winners

For the 135 queries where Pursue Networking is completely absent:

Dripify: 13 wins (9.6%)
HeyReach: 12 wins (8.9%)
Expandi: 12 wins (8.9%)
Salesflow: 7 wins (5.2%)
LinkedIn Sales Navigator: 6 wins (4.4%)
Apollo.io: 4 wins (3%)
CoPilot AI: 3 wins (2.2%)
Closely: 2 wins (1.5%)
We-Connect: 1 win (0.7%)
Uncontested (no winner): 75 queries (55.5%)

Surprise Competitors

Vendors appearing in responses not in Pursue Networking’s defined competitive set.

Clay — 9.4% SOV
Waalaxy — 9.2% SOV
Instantly — 6.7% SOV
PhantomBuster — 5.8% SOV
Smartlead — 5.2% SOV
ZoomInfo — 4.9% SOV
Meet Alfred — 4.6% SOV
Cognism — 3.7% SOV
Lemlist — 3.7% SOV
Outreach — 3.4% SOV
Taplio — 3.4% SOV
Lusha — 3% SOV
Hublead — 2.7% SOV
Salesloft — 2.7% SOV
Dux-Soup — 2.7% SOV
Reply.io — 2.7% SOV
Kondo — 2.4% SOV
LaGrowthMachine — 2.4% SOV
Botdog — 2.4% SOV
Supergrow — 2.4% SOV
Wiza — 2.1% SOV
Salesforge — 2.1% SOV
HubSpot — 2.1% SOV
LeadConnect — 2.1% SOV
Apollo — 2.1% SOV
Lavender — 1.8% SOV
Zopto — 1.8% SOV
NeverBounce — 1.8% SOV
Hunter — 1.8% SOV
Phantombuster — 1.8% SOV
Skylead — 1.8% SOV
Valley — 1.5% SOV
LeadCRM — 1.5% SOV
Snov.io — 1.5% SOV
Clearbit — 1.5% SOV
Kaspr — 1.5% SOV
La Growth Machine — 1.5% SOV
Bouncer — 1.5% SOV
TryKondo — 1.5% SOV
Profound — 1.5% SOV
AuthoredUp — 1.5% SOV
Shield Analytics — 1.5% SOV
SalesRobot — 1.5% SOV
LinkedHelper — 1.5% SOV
Salesforce — 1.2% SOV
LeadIQ — 1.2% SOV
Hootsuite — 1.2% SOV
Linked Helper — 1.2% SOV

[Synthesis] These two metrics measure different things and must be read together. The unconditional win rate of 6% (9/150 total queries) reflects overall competitive position — Pursue Networking wins a small fraction of all buyer queries across the category. The H2H records show pairwise matchup outcomes in the narrow slice of queries where both brands appear simultaneously: Pursue Networking wins or ties against Dripify, HeyReach, Salesflow, Closely, and Apollo.io when they co-appear.

H2H parity with these competitors does not contradict the low overall win rate — it reflects that Pursue Networking competes well in the queries it reaches but reaches far fewer queries than its competitors. Expandi's dominance in SOV (17.7% share, 58 mentions) is a content volume advantage, not solely a product advantage.

Section 4
Citation & Content Landscape

What AI reads and trusts in this category.

[TL;DR] Pursue Networking had 8 unique pages cited across buyer queries, ranking #11 among all cited domains. 10 high-authority domains cite competitors but not Pursue Networking.

8 unique pursuenetworking.com pages were cited across 150 queries, with the domain ranked #11 among cited domains. The CSR rendering failure is the primary technical cause; expanding citable page count through L1 fixes and L3 content is the primary remedy.

Top Cited Domains (citation instances)

linkedin.com: 113
HeyReach.io: 60
reddit.com: 37
trykondo.com: 37
Expandi.io: 31
lagrowthmachine.com: 25
salesforge.ai: 24
Salesflow.io: 23
connectsafely.ai: 21
Dripify.com: 21
pursuenetworking.com: 19 (#11)
botdog.co: 18
copilotai.com: 16
salesrobot.co: 14
phantombuster.com: 13
meetalfred.com: 13
closelyhq.com: 13
business.linkedin.com: 12
joinvalley.co: 11
arxiv.org: 10

Pursue Networking URL Citations by Page

pursuenetworking.com: 7
pursuenetworking.com/andi-onboarding-demo-series: 5
pursuenetworking.com/blog/smart-context-capture...: 2
pursuenetworking.com/blog/linkedin-networking-f...: 1
pursuenetworking.com/blog/ai-linkedin-dm-writing: 1
pursuenetworking.com/blog/syncing-linkedin-data...: 1
pursuenetworking.com/resources: 1
pursuenetworking.com/blog/data-behind-relations...: 1
Total Pursue Networking unique pages cited: 8
Pursue Networking domain rank: #11

Competitor URL Citations

Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.

HeyReach: 62 URL citations
Expandi: 31 URL citations
Salesflow: 23 URL citations
Closely: 19 URL citations
Dripify: 18 URL citations
CoPilot AI: 16 URL citations
LinkedIn Sales Navigator: 2 URL citations
We-Connect: 2 URL citations
Apollo.io: 2 URL citations

Third-Party Citation Gaps

Non-competitor domains citing other vendors but not Pursue Networking — off-domain authority opportunities.

These domains cited competitors but did not cite Pursue Networking pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.

linkedin.com: 113 citations · Pursue Networking not cited
reddit.com: 37 citations · Pursue Networking not cited
trykondo.com: 37 citations · Pursue Networking not cited
lagrowthmachine.com: 25 citations · Pursue Networking not cited
salesforge.ai: 24 citations · Pursue Networking not cited

[Synthesis] 8 unique pages cited across 150 queries is a thin citation footprint — it means most of Pursue Networking's content is either not indexed by AI platforms or not deemed authoritative enough to surface in responses. The domain ranking of #11 confirms that pursuenetworking.com generates fewer citations than at least 10 other domains in this query space, including competitor domains and third-party review platforms. The practical implication: expanding the citable surface area (through L3 new content) and fixing the CSR rendering failure (L1) are the two actions most directly correlated with improving citations.

More citable pages carrying stronger freshness signals produce more citation instances.

Section 5
Prioritized Action Plan

Three layers of recommendations ranked by commercial impact and implementation speed.

[TL;DR] 25 priority recommendations (plus 6 near-rebuild optimizations) targeting 148 queries where Pursue Networking is currently invisible or underpositioned. 6 L1 technical fixes + 1 verification check, 8 content optimizations (L2), 10 new content initiatives (L3).

148 recommendations execute in a strict sequence: L1 technical fixes first (prerequisite — new content published on the broken architecture will not be crawled), then L2 page optimizations, then L3 new content in priority order (critical NIOs: outreach automation, HubSpot integration, analytics/ROI).

Reading the priority numbers: Recommendations are ranked 1–25 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 jumps from #4 to #19) mean the intervening priorities belong to a different layer.

Layer 1 Technical Fixes

Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.

Priority · Finding · Impact · Timeline
#1 · Homepage Renders Only Tagline and Navigation to Crawlers · High · 1-3 days

Issue: The homepage (pursuenetworking.com) returns only the ANDI product tagline, navigation links, and footer when fetched server-side. The full product description, feature highlights, social proof, and calls-to-action that would be visible in a browser are rendered entirely by client-side JavaScript and are invisible to AI crawlers.

Fix: Ensure the homepage's Next.js page component uses SSR or SSG to render the full product pitch, feature highlights, and key messaging in the initial HTML response. Test by fetching the page with curl and verifying all product content appears without JavaScript execution.
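A hedged sketch of that verification step: the helper and sample documents below are illustrative assumptions, not fetches from the live site. It approximates how much human-readable text a crawler can extract from raw HTML, which is what distinguishes a tagline-and-nav shell from a fully rendered homepage.

```typescript
// Rough estimate of crawler-visible text in a raw HTML response:
// strip scripts/styles, strip remaining tags, collapse whitespace.
function crawlableTextLength(html: string): number {
  return html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, '')
    .replace(/<[^>]+>/g, ' ')
    .replace(/\s+/g, ' ')
    .trim().length;
}

// Tagline and navigation only, as the audit describes the current homepage:
const thinHomepage =
  '<nav><a href="/features">Features</a></nav><h1>ANDI: B2B networking on LinkedIn</h1>' +
  '<script src="/_next/static/chunks/main.js"></script>';

// A server-rendered homepage carrying the full pitch:
const fullHomepage =
  '<h1>ANDI: B2B networking on LinkedIn</h1><p>Full product description, ' +
  'feature highlights, social proof, and calls-to-action rendered as HTML.</p>';

console.log(crawlableTextLength(thinHomepage) < crawlableTextLength(fullHomepage)); // true
```

Running a check like this against the curl output before and after the SSR change gives a pass/fail signal that does not depend on eyeballing HTML.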

#2 · Majority of Blog Content Exceeds 180-Day Freshness Threshold · High · 2-4 weeks

Issue: Of 22 analyzed blog posts, 14 (64%) were last updated more than 180 days ago, and 2 are over 365 days old. Zero blog posts have been updated within the last 90 days. The most recently published ANDI-focused posts date to July 2025 (8+ months ago). Some posts show an 'Updated: October 19, 2025' date, suggesting a batch update pass, but the majority have not been refreshed.

Fix: Prioritize refreshing the highest-value ANDI product blog posts (CRM building, AI DM writing, prospecting database, workflow design) with updated content, examples, and visible publication/update dates. Establish a quarterly content refresh cadence for commercially important posts. Ensure all blog posts display visible publication and last-updated dates.

#3 · Meta Descriptions and OG Tags Not Assessable — Manual Verification Recommended · Medium · 1-3 days

Issue: Meta descriptions and Open Graph tags cannot be assessed from rendered markdown output. The homepage's meta description was visible in metadata ('ANDI delivers B2B networking tools on LinkedIn...') but individual page meta descriptions and OG tags for blog posts, the pricing page, and the scale page could not be verified.

Fix: Audit meta descriptions and OG tags across all commercially important pages using a crawler like Screaming Frog or a browser extension. Ensure each page has a unique, descriptive meta description under 160 characters. Add OG image, OG title, and OG description tags to all blog posts and product pages.
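One way such an audit pass might be scripted. The `PageMeta` shape, the 160-character threshold, and the sample page are assumptions for illustration, not the audit's actual tooling:

```typescript
// Per-page tag set each commercial page should carry.
type PageMeta = {
  description?: string;
  ogTitle?: string;
  ogDescription?: string;
  ogImage?: string;
};

// Flags the gaps the fix describes: a missing or over-length meta
// description, and any missing core Open Graph tag.
function metaIssues(meta: PageMeta): string[] {
  const issues: string[] = [];
  if (!meta.description) issues.push('missing meta description');
  else if (meta.description.length >= 160) issues.push('meta description over 160 characters');
  if (!meta.ogTitle) issues.push('missing og:title');
  if (!meta.ogDescription) issues.push('missing og:description');
  if (!meta.ogImage) issues.push('missing og:image');
  return issues;
}

// A blog post with a description but no OG tags at all:
console.log(metaIssues({ description: 'How AI DM writing works.' }));
// → ['missing og:title', 'missing og:description', 'missing og:image']
```

Fed with metadata scraped by Screaming Frog or a headless fetch, a helper like this turns the manual audit into a repeatable report.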

#4 · Schema Markup Status Unknown — Manual Verification Recommended · Medium · 1-3 days

Issue: Our analysis method returns rendered page text rather than raw HTML, making it impossible to assess whether JSON-LD schema markup (Organization, Product, Article, FAQ, HowTo) is present on any page. The site's Next.js architecture may include schema in the JavaScript bundle, but this cannot be confirmed from the rendered output.

Fix: Verify schema markup using Google's Rich Results Test or Schema.org Validator for key pages: homepage (Organization + Product), pricing (Product/Offer), blog posts (Article), FAQ page (FAQPage). Add missing schema types as Next.js Head components or via next-seo.
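A minimal sketch of what homepage Organization JSON-LD could look like. The `sameAs` URL is a placeholder, and every field value should be verified against real company data before shipping:

```typescript
// Illustrative Organization schema object; field values are placeholders.
const organizationSchema = {
  '@context': 'https://schema.org',
  '@type': 'Organization',
  name: 'Pursue Networking',
  url: 'https://pursuenetworking.com',
  sameAs: ['https://www.linkedin.com/company/pursue-networking'], // placeholder
};

// In Next.js this string would be emitted in the document head as
// <script type="application/ld+json">…</script> so it survives server
// rendering and reaches crawlers without JavaScript execution.
const jsonLd = JSON.stringify(organizationSchema);
console.log(JSON.parse(jsonLd)['@type']); // Organization
```

Serializing once and round-tripping through `JSON.parse` is a cheap sanity check that the emitted markup is valid JSON before validating it against schema.org tooling.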

#19 · Sitemap Uses Identical Timestamps for All Non-Blog URLs · Medium · 1-3 days

Issue: All 11 non-blog URLs in the sitemap (homepage, resources, resources subcategories, training, signin, dashboard, privacy) share an identical lastmod timestamp of 2025-10-13T23:15:51.234Z. This indicates the timestamps are auto-generated at build/deploy time rather than reflecting actual content modification dates.

Fix: Configure the sitemap generation to use actual content modification dates for each URL. Most Next.js sitemap plugins support reading file modification times or CMS timestamps. For pages with dynamic content, use the most recent content update date.
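A sketch of the generation logic, assuming each entry carries a real modification date from the file system or CMS; the paths and dates below are illustrative:

```typescript
// Emit per-URL <lastmod> values from real modification dates instead of
// a single build-time timestamp shared by every URL.
type SitemapEntry = { loc: string; lastModified: Date };

function sitemapXml(entries: SitemapEntry[]): string {
  const urls = entries
    .map(
      (e) =>
        `  <url><loc>${e.loc}</loc><lastmod>${e.lastModified.toISOString()}</lastmod></url>`
    )
    .join('\n');
  return `<?xml version="1.0" encoding="UTF-8"?>\n<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>`;
}

const xml = sitemapXml([
  { loc: 'https://pursuenetworking.com/', lastModified: new Date('2026-02-01T00:00:00Z') },
  { loc: 'https://pursuenetworking.com/resources', lastModified: new Date('2025-11-20T00:00:00Z') },
]);

// Each URL now carries its own date rather than one deploy timestamp.
console.log(xml.includes('2026-02-01')); // true
```

The same shape plugs into most Next.js sitemap plugins, which accept a per-entry `lastmod` callback instead of defaulting to build time.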

#25 · Critical Pages Invisible to AI Crawlers Due to Client-Side Rendering · Critical · 1-2 weeks

Issue: Four commercially important pages linked from the site's main navigation — /features, /faq, /pricing, and /pages/about — return HTTP 404 errors when fetched server-side. The /pricing page occasionally returns a shell HTML document containing only Next.js framework JavaScript with no rendered content. These pages are built as client-side-only routes in the Next.js application and do not generate server-side HTML.

Fix: Enable Next.js Server-Side Rendering (SSR) or Static Site Generation (SSG) for all commercially important routes: /features, /pricing, /faq, and /pages/about. Use getServerSideProps or getStaticProps to ensure these pages return complete HTML on first request. Verify with curl or a headless fetch that each page returns full content without JavaScript execution.
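A minimal sketch of the change, assuming the Pages Router. The feature list, page title, and file location are illustrative assumptions, not ANDI's actual data; in the real app this loader would live in something like pages/features.tsx, exported alongside the page component:

```typescript
// Data loader for a static-generation route like /features.
type FeaturesProps = { title: string; features: string[] };

// Exported as `getStaticProps`, this runs at build time, so the route is
// pre-rendered to complete HTML and returns 200 with full content to
// GPTBot, PerplexityBot, and ClaudeBot instead of a 404 or empty shell.
async function getStaticProps(): Promise<{ props: FeaturesProps }> {
  const features = [
    'LinkedIn outreach automation', // illustrative entries, not real copy
    'HubSpot integration',
    'AI message writing',
  ];
  return { props: { title: 'ANDI Features', features } };
}

getStaticProps().then(({ props }) => console.log(props.features.length)); // 3
```

SSG is the right default here since these pages change rarely; `getServerSideProps` achieves the same crawler-visible HTML at higher per-request cost.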

Verification Checks

Items requiring manual review before determining if action is needed.

Priority · Finding · Impact · Timeline
#24 · No Explicit AI Crawler Directives in robots.txt · Low · < 1 day

Issue: The robots.txt file uses a single User-Agent: * block that allows all crawlers (with /dashboard/ and /api/ excluded). There are no specific directives for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, or Bytespider. All AI crawlers are implicitly allowed under the wildcard rule.

Fix: Consider adding explicit User-Agent blocks for each AI crawler to document the company's intent. For example, explicitly Allow GPTBot, ClaudeBot, and PerplexityBot while deciding on Google-Extended and Bytespider based on the company's training data preferences. This is a policy decision, not a technical fix.
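If explicit directives are adopted, the file might look like the sketch below; the allow/deny split shown is a placeholder for the policy decision, not a recommendation:

```
# Default policy for all other crawlers (current behavior, kept)
User-agent: *
Disallow: /dashboard/
Disallow: /api/

# Explicitly documented retrieval crawlers (policy: allow)
User-agent: GPTBot
User-agent: PerplexityBot
User-agent: ClaudeBot
Disallow: /dashboard/
Disallow: /api/

# Training-data crawlers: decide and document before enabling either way
# User-agent: Google-Extended
# User-agent: Bytespider
# Disallow: /
```

Consecutive User-agent lines share the rule group that follows them, so each named crawler above inherits the same two Disallow rules.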


Layer 2 Existing Content Optimization

Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.

AI DM Performance Benchmarks & Competitor Review Synthesis — /blog/ai-linkedin-dm-writing

Priority 8
Currently: covered. Page addresses AI DM writing as a skill/technique; missing: performance benchmarks (reply rates, acceptance rates, conversion data), AI voice-matching methodology, and competitive quality comparisons against Salesflow and Dripify AI writing outputs.

The /blog/ai-linkedin-dm-writing page contains no quantitative claims about reply-rate improvements or conversion lift — buyers asking 'does AI-written LinkedIn outreach actually convert better?' (pur_020) cannot find a citable answer there. It also never compares ANDI's AI writing quality against Salesflow, Dripify, or HeyReach, so the 'Salesflow AI messaging quality' (pur_119) and 'Dripify vs HeyReach message quality' (pur_124) queries route here but find no competitor context. Finally, the page describes what AI DM writing is without explaining the technical mechanism by which AI learns a sender's voice (pur_022) — a requirements-level question buyers are actively asking.

Queries affected: pur_009, pur_016, pur_020, pur_022, pur_119, pur_124, pur_133

Competitor Platform Review Synthesis — Near-Rebuild Required on /blog/ai-linkedin-dm-writing

Priority 11
Currently: covered. Matched pages contain ANDI messaging content; missing: any mention of HeyReach or CoPilot AI quality issues, G2 review synthesis, or competitive Validation content that would surface ANDI as an alternative when buyers research competitor weaknesses.

The /blog/ai-linkedin-dm-writing page contains no content about HeyReach personalization — buyers asking specifically about HeyReach message quality (pur_112) find ANDI messaging content with no comparative relevance. The /blog/ai-linkedin-dm-writing page contains no content about CoPilot AI's known limitations for small startup teams (pur_114) — a buyer validating CoPilot AI complaints would find no ANDI alternative positioning.

Queries affected: pur_112, pur_114

Personalization vs. Template Quality Positioning — /blog/linkedin-dm-templates and /blog/linkedin-connection-request-templates

Priority 15
Currently: covered. Pages provide template resources but lack: analysis of why generic outreach fails (the problem-identification layer), explanation of how ANDI personalization works mechanically vs. simple variable substitution, comparison of ANDI's personalization quality against Expandi/HeyReach, and data on what authentic vs. template outreach produces in reply rates.

The /blog/linkedin-dm-templates page provides message templates without explaining why template-based outreach is increasingly ignored — buyers searching 'why are LinkedIn acceptance rates dropping' (pur_001) land on a templates page that doesn't answer this problem-identification question. The same page does not differentiate between true AI personalization and variable-substitution templates — buyers asking 'how do automation platforms handle real personalization vs. just blasting templates?' (pur_017) find templates, not the answer they need. The /blog/linkedin-connection-request-templates page makes no mention of ANDI's approach to personalization or how using ANDI differs from copy-pasting from a template library — making it invisible for platform-evaluation queries (pur_053, pur_054, pur_058).

Queries affected: pur_001, pur_017, pur_036, pur_049, pur_053, pur_054, pur_058, pur_116, pur_135

Pipeline Impact Case Studies — Near-Rebuild Required on Consensus Creation Query

Priority 16
Currently: covered. Matched pages provide messaging tips and templates. Missing: named or anonymized customer case studies with before/after pipeline metrics, startup-specific outcome data (pipeline generated, deal velocity, time saved), and social proof that AI messaging tools produce measurable business results.

The /blog/ai-linkedin-dm-writing page contains no customer outcome data — pur_129 ('Case studies of startups that grew their pipeline using LinkedIn AI messaging tools') requires named examples or anonymized benchmarks, neither of which appear on this page. The matched blog posts make no before/after performance claims — they teach messaging techniques but do not quantify what those techniques achieve in pipeline terms, making them uncitable for Consensus Creation queries.

Queries affected: pur_129

AI Platform Shortlisting — Repositioning /blog/ai-linkedin-dm-writing for Buyer Evaluation Queries

Priority 17
Currently: covered. Pages are written as educational blog posts, not product evaluation resources. Missing: ANDI's positioning relative to other platforms, specific feature claims buyers use during Shortlisting, and a clear product CTA that AI platforms can cite as an evaluation recommendation.

The /blog/ai-linkedin-dm-writing page teaches AI DM writing techniques generically without naming ANDI as the tool implementing them — buyers searching 'Best AI copilot tools for LinkedIn sales prospecting' (pur_064) find tips but no product recommendation. The same page never mentions LinkedIn Sales Navigator or Apollo.io, so it cannot position ANDI as an AI messaging alternative for pur_056 ('Best alternatives to Sales Navigator with AI messaging capabilities'). The /blog/ai-for-linkedin-content page discusses AI for LinkedIn content broadly but does not explain how ANDI specifically learns a user's writing style (pur_067 'AI-powered LinkedIn tools that actually learn your writing style') — a Shortlisting differentiator left unstated.

Queries affected: pur_046, pur_056, pur_064, pur_067

Relationship Memory & Sales Context Tracking — Deepening /blog/build-linkedin-crm-with-andi for Shortlisting and Requirements Queries

Priority 18
Currently: covered. Pages explain how ANDI tracks relationship context using tutorial/how-to framing. Missing: extractable feature specifications (what data ANDI captures, where it's stored, how long it's retained), Comparison against LinkedIn Sales Navigator's relationship tracking, and enterprise sales cycle framing (pur_043 asks specifically about long enterprise sales cycles).

The /blog/build-linkedin-crm-with-andi page uses tutorial narrative ('here's how to build your CRM') rather than structured feature specifications — AI platforms cannot extract 'what data does ANDI capture about each relationship?' as a retrievable answer for pur_011. The same page does not compare ANDI's relationship tracking against LinkedIn Sales Navigator — the winner for pur_060 ('LinkedIn tools with relationship tracking so reps don't forget context between conversations') — leaving buyers without a head-to-head differentiator. The /blog/smart-context-capture-andi-remember-every-conversation page addresses general networking but does not frame relationship memory for the long time horizons of enterprise sales cycles (pur_043 — 'what relationship tracking features should LinkedIn tools have for long enterprise sales cycles?').

Queries affected: pur_011, pur_043, pur_060

Evaluation Templates & Comparison Matrices — Near-Rebuild Required on Artifact Creation Queries

Priority 21
Currently: covered. Matched pages provide educational content and message templates. Missing: vendor evaluation scorecards, Comparison matrices with structured rows/columns for feature assessment, and downloadable decision-support tools that buyers can fill in and present to leadership.

The /blog/ai-linkedin-dm-writing page contains no structured evaluation criteria section — pur_142 ('Write evaluation criteria for LinkedIn AI messaging tools focused on authenticity and personalization quality') lands on a how-to writing guide rather than an evaluation framework. The /blog/linkedin-dm-templates page provides message templates, not vendor evaluation templates — pur_146 ('Create a vendor evaluation template for LinkedIn outreach tools focused on AI message quality, personalization, and reply rates') requires a procurement-style template, not a messaging template. No existing page contains a Comparison matrix across Dripify, Expandi, HeyReach, and Salesflow on personalization and brand building (pur_143) — the matched blog posts have no structured multi-vendor Comparison table.

Queries affected: pur_142, pur_143, pur_146

Startup Founder Lead Generation Without Constant Posting — Near-Rebuild on /blog/future-networking-ai-human-oversight-andi-approach

Priority 23
Currently: addressed. Matched pages discuss ethical AI networking philosophy. Missing: practical founder-to-inbound-lead content covering optimizing LinkedIn profiles for inbound, consistent engagement without daily posting, ANDI's role in nurturing relationships that convert to inbound, and specific founder-led sales outcomes.

The /blog/future-networking-ai-human-oversight-andi-approach page covers ANDI's philosophical approach to AI-assisted networking; pur_004 ('How do startup founders build a LinkedIn presence that generates inbound leads without posting all day?') needs tactical founder use-case content — the pages answer different questions entirely. The /blog/ethical-linkedin-outreach page discusses outreach ethics but does not explain how consistent ANDI-powered relationship nurturing converts to inbound leads for founders — the specific commercial mechanism buyers need to understand.

Queries affected: pur_004

Layer 3 Narrative Intelligence Opportunities

Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.

NIO #1: Outreach Automation Category Leadership — No Content Home for Pursue Networking's Core Use Case
Gap Type: Structural Gap — Outreach automation is Pursue Networking's primary category, yet the LinkedIn Outreach Automation & Sequences feature has 0% visibility (0/27 queries) across the audit. 28 L3 queries spanning Problem Identification through Artifact Creation are unaddressed because the site has no dedicated content treating ANDI as an outreach automation platform — only blog posts on adjacent messaging tactics.
Critical

Buyers at every stage of the automation purchase journey — from 'how do startups fix manual prospecting?' to 'build me a TCO model for a 10-person SDR team' — find zero Pursue Networking content. Competitors like Dripify, HeyReach, and Expandi dominate these queries by default, not by merit. Because LinkedIn Outreach Automation & Sequences is the feature with the largest query footprint (27 queries in the audit) and 0% visibility, fixing this gap has the highest potential query-volume payoff of any NIO. The absence is structural: without a dedicated product hub or feature pages for automation, no amount of blog optimization can close the gap.

Query Cluster
IDs: pur_002, pur_008, pur_013, pur_014, pur_015, pur_021, pur_028, pur_031, pur_045, pur_055, pur_069, pur_071, pur_079, pur_081, pur_087, pur_091, pur_094, pur_097, pur_099, pur_103, pur_104, pur_107, pur_111, pur_118, pur_128, pur_130, pur_136, pur_141
“Our SDRs spend half their day scrolling LinkedIn and typing messages instead of actually selling — how do other startups fix this?”
“Best LinkedIn automation tools for startup sales teams in 2026”
“Build vs buy for LinkedIn outreach automation — when does it make sense to use a vendor tool?”
“How to justify LinkedIn automation tool investment to a CEO who thinks reps should just do it manually”
Blueprint
  • On-Domain: Create /features/outreach-automation product page with SSR-rendered content (depends on L1 fix: csr_rendering_failure) covering: how ANDI automates LinkedIn prospecting, daily action limits and safety rails, multi-step sequence builder, and startup-specific use cases
  • On-Domain: Publish a 'LinkedIn Automation ROI for Startups' pillar post covering build vs. buy decision, manual prospecting time cost, and ANDI productivity benchmarks with real numbers
  • On-Domain: Create a requirements checklist resource: 'What to Demand from a LinkedIn Automation Tool' — answering pur_030, pur_031, pur_034 directly
  • On-Domain: Publish a 'How ANDI Automates LinkedIn Outreach Without Sounding Like a Bot' use-case page targeting Founder / CEO / Entrepreneur and VP of Sales / Head of Sales Development Shortlisting queries
  • Off-Domain: Submit ANDI to G2 LinkedIn Automation category — currently missing from category grids that dominate Comparison queries
  • Off-Domain: Seek guest contributor slots on Sales Hacker, Pavilion, and RevGenius covering 'manual prospecting bottleneck' and 'startup SDR productivity' topics to build third-party citation signals
  • Off-Domain: Encourage satisfied startup customers to leave G2 reviews mentioning automation and time savings — AI platforms cite review aggregators for Validation queries
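The SSR dependency flagged in the first bullet (csr_rendering_failure) can be smoke-tested before any content work begins. Below is a minimal sketch — the heuristic and its threshold are illustrative assumptions, not audit tooling — that classifies a fetched HTML body as either a server-rendered page or the empty client-side shell the audit describes crawlers receiving from /features and /pricing:

```typescript
// Heuristic check: does an HTML response contain real server-rendered
// content, or only the empty shell a client-side-only Next.js route
// returns to crawlers like GPTBot and PerplexityBot?
// The 200-character threshold is an illustrative assumption.
function looksServerRendered(html: string, minTextChars = 200): boolean {
  // Strip scripts, styles, and tags to approximate visible text.
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  return text.length >= minTextChars;
}

// An empty CSR shell: almost no visible text outside the JS bundle.
const shell =
  `<html><body><div id="__next"></div>` +
  `<script src="/_next/static/chunks/main.js"></script></body></html>`;

// A server-rendered page: feature copy is present in the initial HTML.
const rendered =
  `<html><body><main><h1>Outreach Automation</h1>` +
  `<p>${"ANDI automates LinkedIn prospecting with daily action limits. ".repeat(5)}</p>` +
  `</main></body></html>`;

console.log(looksServerRendered(shell));    // false — nothing for a crawler to cite
console.log(looksServerRendered(rendered)); // true — extractable content present
```

Running a check like this against each commercial route with a crawler user-agent would confirm the L1 fix has landed before the automation hub ships.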
Platform Acuity

ChatGPT (high): ChatGPT Comparison and Shortlisting queries for automation tools (pur_045, pur_081, pur_091) show competitors cited by name with feature specifics — a named product hub with extractable feature claims would be directly quotable. Perplexity (high): Pursue Networking's overall visibility on Perplexity runs 2pp ahead of ChatGPT; structured Comparison tables and self-contained how-it-works sections (the format Perplexity extracts for answer boxes) would improve citation rates for automation category queries.

NIO #2: HubSpot & CRM Integration Hub — RevOps Veto-Holder Finds Nothing
Gap Type: Content Type Deficit — HubSpot integration has 0% visibility (0/15 queries) across the audit. The Director of Revenue Operations / Operations Leader persona — which holds veto power over tool selection — has 0% visibility (0/28 queries) across all query types. 15 L3 queries spanning Problem Identification through Artifact Creation are unaddressed because the site has no integration documentation, use-case content, or Comparison material for HubSpot sync.
Critical

The Director of Revenue Operations is the most likely person to block a LinkedIn tool purchase if CRM integration is broken or absent. With 0% visibility across all 28 RevOps queries, Pursue Networking is invisible to this persona at every stage of the buying journey. Competitors like We-Connect (#7 SOV), CoPilot AI, and Dripify win these queries because they publish integration guides and CRM sync documentation that AI platforms can cite. This is a content-type deficit: the site has no integration-class content (setup guides, sync architecture, data field mapping), only general blog posts.

Query Cluster
IDs: pur_003, pur_010, pur_019, pur_029, pur_035, pur_047, pur_065, pur_066, pur_073, pur_083, pur_105, pur_121, pur_132, pur_134, pur_139
“None of our LinkedIn conversations show up in HubSpot — how do teams fix this pipeline visibility gap?”
“LinkedIn prospecting tools with native HubSpot integration — which ones actually sync data properly?”
“How do LinkedIn prospecting tools integrate with HubSpot — native sync vs Zapier workarounds?”
“CoPilot AI CRM integration problems — does it break HubSpot workflows or create duplicate records?”
Blueprint
  • On-Domain: Create /integrations/hubspot dedicated page (SSR-rendered, pending L1 fix) covering: native sync architecture, which LinkedIn data fields map to HubSpot properties, sync frequency, and setup walkthrough — directly answering pur_035, pur_029, pur_047
  • On-Domain: Publish 'ANDI + HubSpot: Eliminating the LinkedIn-CRM Gap' use-case post targeting Problem Identification queries (pur_003, pur_010) with specific pain point framing around duplicate records and missing conversation data
  • On-Domain: Create a 'LinkedIn Tool Stack Consolidation' guide for RevOps leaders addressing pur_010, pur_066, pur_132 — the 'we have five tools that don't talk to each other' problem
  • On-Domain: Build a 'HubSpot Integration RFP Template' resource page targeting pur_139 and pur_035 — downloadable, structured, directly answerable by AI platforms
  • Off-Domain: List ANDI in HubSpot App Marketplace — AI platforms frequently cite App Marketplace as an authoritative source for integration capability queries
  • Off-Domain: Create a HubSpot Community answer for 'LinkedIn automation tools with native HubSpot sync' — Perplexity heavily cites community forums for tool recommendation queries
  • Off-Domain: Pursue a case study with a RevOps leader customer on HubSpot sync reliability — third-party testimonials for integration quality directly address Validation queries (pur_105, pur_121)
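The field-mapping detail called out in the first bullet is exactly the kind of content AI platforms quote verbatim for pur_035-style queries. A sketch of what such a mapping could look like follows; every field and property name here is hypothetical, since the audit does not document ANDI's actual sync schema:

```typescript
// Hypothetical LinkedIn-to-HubSpot property mapping. All field names
// are illustrative placeholders; the real schema would come from
// ANDI's integration documentation, which does not yet exist.
interface LinkedInProfile {
  fullName: string;
  headline: string;
  company: string;
  lastMessageAt: string; // ISO timestamp of the last DM in the thread
}

// Maps each captured LinkedIn field to a HubSpot contact property name.
const FIELD_MAP: Record<keyof LinkedInProfile, string> = {
  fullName: "full_name",
  headline: "jobtitle",
  company: "company",
  lastMessageAt: "last_linkedin_touch",
};

// Converts a LinkedIn profile into a HubSpot-style properties payload.
function toHubSpotProperties(p: LinkedInProfile): Record<string, string> {
  const out: Record<string, string> = {};
  (Object.keys(FIELD_MAP) as (keyof LinkedInProfile)[]).forEach((k) => {
    out[FIELD_MAP[k]] = p[k];
  });
  return out;
}

const props = toHubSpotProperties({
  fullName: "Jane Doe",
  headline: "VP of Sales",
  company: "Acme",
  lastMessageAt: "2026-03-01T12:00:00Z",
});
console.log(props.jobtitle); // "VP of Sales"
```

Publishing a table in this shape (LinkedIn field, HubSpot property, sync frequency) gives both ChatGPT and Perplexity a directly extractable answer to 'which fields actually sync?'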
Platform Acuity

ChatGPT (high): ChatGPT cites integration documentation and App Marketplace listings for CRM sync queries; structured content with specific field-mapping details and setup steps matches ChatGPT's citation pattern for integration queries. Perplexity (high): Perplexity favors Comparison tables and self-contained answer passages; a page structured around 'native vs Zapier' with a clear Comparison table would be directly extractable for Shortlisting queries like pur_047, pur_065.

NIO #3: Analytics & ROI Proof Content — CRO Can't Justify the Investment
Gap Type: Content Type Deficit — Analytics and reporting has 13.3% visibility (2/15 queries) overall, with 14 queries routed to L3 because existing content lacks pipeline attribution data, ROI benchmarks, and measurement frameworks. The CRO persona — who must justify LinkedIn tool spend to boards — is invisible 92.9% of the time (visible in just 2 of 28 queries), winning only 1 of those 2.
Critical

The CRO's primary objection to any LinkedIn tool is the inability to prove ROI to the board. Pursue Networking's existing blog content covers networking tactics but contains no quantified pipeline impact data, time-savings benchmarks, or payback period analysis. Competitors like LinkedIn Sales Navigator and Closely win these queries because they publish attribution methodology and pipeline analytics documentation. This gap is commercially consequential: pur_007 ('I know LinkedIn networking works but I can't prove it to my board') and pur_127 ('ROI of implementing LinkedIn networking automation for a 15-person startup sales team') are exactly the queries CROs ask when building the business case — and Pursue Networking is absent from every response.

Query Cluster
IDs: pur_007, pur_012, pur_033, pur_037, pur_039, pur_051, pur_074, pur_078, pur_084, pur_106, pur_125, pur_127, pur_131, pur_140
“I know LinkedIn networking works for us but I can't prove it to my board — how do other revenue leaders solve this?”
“What's the actual ROI of LinkedIn networking tools for early-stage B2B companies?”
“ROI of implementing LinkedIn networking automation for a 15-person startup sales team — what's the actual business case?”
“Typical payback period for LinkedIn automation platforms — when does a startup's investment break even?”
Blueprint
  • On-Domain: Publish 'LinkedIn Networking ROI Playbook for B2B Startups' — a benchmark-driven guide with time savings data, connection-to-pipeline conversion rates, and payback period models for 10-20 person teams (directly answers pur_007, pur_012, pur_127, pur_131)
  • On-Domain: Create /features/analytics-reporting product page (SSR-rendered) with specific metrics ANDI tracks: connection rates, reply rates, conversation-to-meeting conversion, pipeline attribution — what data the CRO sees in the dashboard
  • On-Domain: Build a downloadable 'LinkedIn Tool ROI Scorecard' resource addressing pur_033, pur_037, pur_039, pur_140 — structured so AI platforms can extract the evaluation criteria
  • On-Domain: Publish 'Expandi vs CoPilot AI vs ANDI: Which Actually Drives More Pipeline?' Comparison post with ROI framing targeting pur_074, pur_084
  • Off-Domain: Commission or co-author a benchmark study on LinkedIn outreach ROI with a B2B research partner — third-party data is the most citable format for ROI queries on ChatGPT and Perplexity
  • Off-Domain: Submit ANDI's pipeline impact data to G2's 'ROI of Software' section — Perplexity cites G2 ROI calculator results for tool evaluation queries
  • Off-Domain: Pursue a CRO-persona case study: 'How [Customer Company] proved LinkedIn pipeline impact to their board using ANDI analytics' — addresses pur_007 and pur_125 directly
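The payback-period model proposed in the first bullet reduces to simple arithmetic, which is worth making explicit because it is the structure a CRO will present to the board. A sketch with invented placeholder inputs — none of these figures come from the audit:

```typescript
// Toy payback-period model for a LinkedIn automation purchase.
// Every input below is a placeholder; published benchmarks would
// replace them in the actual ROI playbook.
function paybackMonths(opts: {
  upfrontCost: number;           // onboarding / implementation
  seats: number;
  pricePerSeatMonthly: number;   // tool cost per rep per month
  hoursSavedPerRepMonthly: number;
  loadedHourlyCost: number;      // fully loaded rep cost per hour
}): number {
  // Net monthly value: time saved (valued at loaded cost) minus tool cost.
  const monthlyNet =
    opts.seats *
    (opts.hoursSavedPerRepMonthly * opts.loadedHourlyCost -
      opts.pricePerSeatMonthly);
  // Months to recover the upfront cost; Infinity if the tool never pays back.
  return monthlyNet > 0 ? opts.upfrontCost / monthlyNet : Infinity;
}

// 15-person team (the pur_127 scenario), placeholder inputs:
const months = paybackMonths({
  upfrontCost: 3000,
  seats: 15,
  pricePerSeatMonthly: 100,
  hoursSavedPerRepMonthly: 10,
  loadedHourlyCost: 50,
});
console.log(months.toFixed(2)); // "0.50" — under these inputs, payback in half a month
```

A published version of this model, with a payback table by team size, is the extractable asset the Platform Acuity note below calls for.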
Platform Acuity

ChatGPT (medium): ChatGPT cites specific benchmark numbers when they appear in structured posts; ROI content with named data points (e.g., '34% reduction in time per qualified conversation') would be extracted for Consensus Creation queries. Perplexity (high): Perplexity consistently cites structured Comparison tables and benchmark data for B2B tool ROI queries; a page with a payback period table organized by team size would be directly extractable.

NIO #4: Account Safety & LinkedIn Compliance — A Risk-Blocking Gap at the Shortlisting Stage
Gap Type: Content Type Deficit — Account safety has 7.7% visibility (1/13 queries) with a 0% win rate on that 1 visible query. 13 L3 queries span Problem Identification through Artifact Creation; the site has no dedicated content addressing LinkedIn TOS compliance, safe automation limits, cloud-based vs. browser-extension safety, or account restriction risk — content types that competitors Expandi and Salesflow publish prominently.
High

LinkedIn account restriction is a deal-blocking concern: when a VP of Sales sees their top SDR's account restricted for a week (pur_006), the next query is 'which platform is safest?' — and Pursue Networking is absent from every safety-related response. Expandi wins pur_050 and pur_072 specifically because it publishes cloud-based safety architecture documentation. This gap is particularly damaging because it appears at the Shortlisting and Requirements Building stages — the exact moment buyers are narrowing their list. A single well-structured 'ANDI Safety Architecture' page would address 7 of the 13 queries in this cluster.

Query Cluster
IDs: pur_006, pur_018, pur_030, pur_034, pur_041, pur_050, pur_072, pur_092, pur_108, pur_113, pur_123, pur_126, pur_144
“My top SDR got their LinkedIn account restricted for a week because of our automation tool — how common is this?”
“LinkedIn outreach tools that won't get my team's accounts restricted — which ones are safest?”
“Cloud-based vs browser extension LinkedIn automation — which approach is safer for your account?”
“What questions should I ask LinkedIn automation vendors about account safety and LinkedIn compliance?”
Blueprint
  • On-Domain: Create 'ANDI Account Safety Architecture' page (SSR-rendered) covering: cloud-based approach vs browser extensions, how ANDI sets daily action limits, LinkedIn TOS compliance approach, and what happens if a limit is reached — directly answering pur_018, pur_050, pur_123
  • On-Domain: Publish 'LinkedIn Account Restriction: What Causes It and How ANDI Prevents It' post targeting Problem Identification queries (pur_006, pur_126) with specific incident data and protective mechanisms
  • On-Domain: Create 'Security Checklist for Evaluating LinkedIn Automation Tools' resource page — a structured downloadable targeting pur_030, pur_034, pur_041, pur_144 that AI platforms can extract as a complete checklist
  • On-Domain: Add a dedicated 'Safety' section to the /features page (pending L1 fix) with cloud-based infrastructure claims, rate limiting methodology, and compliance stance
  • Off-Domain: Publish a Reddit r/sales or r/saleshacker thread or response about cloud-based LinkedIn automation safety — Perplexity heavily cites Reddit for 'has anyone gotten banned using X?' queries (pur_108, pur_113)
  • Off-Domain: Seek a mention in G2 or Capterra's 'account safety' category criteria — review platform structured data is highly citable for safety Comparison queries
  • Off-Domain: Request existing customers to post G2 reviews specifically mentioning 'no account restrictions' or 'safe automation' — user-generated safety testimonials directly address pur_123 and pur_126
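'Daily action limits' in the first bullet is a concrete, explainable mechanism, and explaining it concretely is what makes the safety page citable. A minimal sketch of how such a limiter could work — the cap and behavior are invented for illustration; the audit does not describe ANDI's internals:

```typescript
// Illustrative daily action limiter: caps LinkedIn actions per account
// per calendar day. The default cap of 80 is an invented example,
// not ANDI's actual limit.
class DailyActionLimiter {
  private counts = new Map<string, number>();

  constructor(private readonly dailyLimit = 80) {}

  // Returns true if the action may proceed, false once the cap is reached.
  tryAction(accountId: string, day: string): boolean {
    const key = `${accountId}:${day}`;
    const used = this.counts.get(key) ?? 0;
    if (used >= this.dailyLimit) return false;
    this.counts.set(key, used + 1);
    return true;
  }
}

const limiter = new DailyActionLimiter(2); // tiny cap for demonstration
console.log(limiter.tryAction("rep-1", "2026-03-10")); // true
console.log(limiter.tryAction("rep-1", "2026-03-10")); // true
console.log(limiter.tryAction("rep-1", "2026-03-10")); // false — cap hit
console.log(limiter.tryAction("rep-1", "2026-03-11")); // true — new day resets
```

A safety page that walks through logic like this in plain language ('what happens when a rep hits the limit') answers pur_123 in an extractable form.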
Platform Acuity

ChatGPT (medium): ChatGPT cites named safety architecture descriptions (e.g., 'cloud-based, runs on dedicated IP') for automation safety queries; a page with specific technical safety claims would be extracted for pur_018 and pur_050. Perplexity (high): Perplexity cites community forums and structured resource pages for 'has anyone gotten banned' queries; a structured checklist page would be extractable for pur_030, pur_034, and pur_144.

NIO #5: Data Enrichment & Email Finding — A Feature Cluster Competitors Win by Default
Gap Type: Content Type Deficit — Data enrichment has 0% visibility (0/8 queries) and email finding has 0% visibility (0/5 queries). 13 L3 queries span the full buying funnel; coverage status is 'thin' or 'missing' for all 13 because the site has no content explaining ANDI's data enrichment capabilities, email finder accuracy, or how these features compare to Apollo.io and dedicated enrichment tools.
High

Apollo.io wins the majority of data enrichment Comparison queries because it publishes specific accuracy benchmarks, data source methodology, and contact coverage statistics — content that AI platforms can directly cite when buyers ask 'which LinkedIn tool has the best email finding accuracy?' Pursue Networking's ANDI platform may offer enrichment and email finding capabilities, but zero content exists to establish this. RevOps evaluators (0% visibility across all 28 queries) are the primary buyers for enrichment tools, and they're currently invisible to any ANDI content. Apollo.io's dominance on pur_052, pur_057, pur_068, pur_098 is entirely a content gap, not a product gap.

Query Cluster
IDs: pur_023, pur_025, pur_038, pur_040, pur_052, pur_057, pur_068, pur_088, pur_098, pur_110, pur_115, pur_138, pur_149
“LinkedIn tools with built-in contact data enrichment and email verification for B2B prospecting”
“LinkedIn automation platforms that eliminate the need for separate email finder and enrichment tools”
“LinkedIn email finding tools with highest accuracy rates for B2B startup prospecting”
“Contact data enrichment from LinkedIn profiles — dedicated tools vs built-in features in automation platforms?”
Blueprint
  • On-Domain: Create /features/data-enrichment page (SSR-rendered) covering: what contact data ANDI enriches from LinkedIn profiles, email finding methodology, accuracy benchmarks vs. dedicated tools, and data source transparency
  • On-Domain: Publish 'Apollo.io vs ANDI for LinkedIn Prospecting: When You Need Enrichment Built In' Comparison post targeting pur_098, pur_088, pur_057 — the tool consolidation angle for RevOps buyers
  • On-Domain: Create 'Email Finding Accuracy: What to Expect from LinkedIn Automation Platforms' requirements guide targeting pur_025, pur_038, pur_040 — structured with accuracy benchmarks and verification methodology
  • On-Domain: Build a 'B2B Startup Data Stack Consolidation' guide targeting pur_023, pur_052, pur_138 — framing ANDI as a way to eliminate separate Lusha/ZoomInfo subscriptions
  • Off-Domain: Submit ANDI to G2 'Email Finder' and 'Data Enrichment' categories alongside the LinkedIn Automation listing — dual-category presence addresses Comparison queries that span both feature areas
  • Off-Domain: Seek a product review from a RevOps-focused newsletter (RevOps Squared, Operations Nation) covering ANDI's enrichment accuracy — third-party benchmark citations dominate accuracy Validation queries
  • Off-Domain: Create a LinkedIn article series from the RevOps perspective on 'eliminating LinkedIn tool sprawl' — LinkedIn-published content is occasionally cited by Perplexity for social selling tool queries
Platform Acuity

ChatGPT (medium): ChatGPT cites named accuracy statistics for email finding queries; a page with specific benchmark data (e.g., '87% email deliverability rate') would be directly extractable for pur_025, pur_068. Perplexity (high): Perplexity favors Comparison tables for 'dedicated enrichment tool vs built-in feature' queries; a structured Comparison page with pros/cons and accuracy data would be highly citable for pur_023, pur_052.

NIO #6: Personal Brand Building Hub — An Owned Feature With Zero Coverage
Gap Type: Structural Gap — Personal brand building has 20% visibility (2/10 queries) with a 50% win rate on those 2 visible queries — the best win rate of any feature cluster with significant query volume. Yet 8 queries are in L3 because no topic hub or feature page exists for this capability; coverage status is 'missing' for all 8 queries. The site wins when it appears, but appears only 20% of the time.
High

Personal brand building is the one feature where Pursue Networking already wins when visible — 50% win rate (1/2 visible queries) signals the product resonates when buyers find it. But the 20% visibility rate on Personal Brand Growth & LinkedIn Presence queries means competitors like LinkedIn Sales Navigator and CoPilot AI default-win 8 out of 10 buyer queries in this category, simply because they publish LinkedIn thought leadership and brand building content that ANDI has no equivalent for. The Founder / CEO / Entrepreneur and Head of Marketing / Demand Generation personas both search for personal brand tools; creating a topic hub would serve both simultaneously while leveraging the existing win rate signal.

Query Cluster
IDs: pur_005, pur_027, pur_032, pur_048, pur_062, pur_063, pur_090, pur_095
“How are B2B marketing teams using LinkedIn for demand gen beyond just running ads?”
“How do personal branding tools on LinkedIn differ from standard LinkedIn automation platforms?”
“What to look for in a LinkedIn tool that can build personal brands for multiple team members simultaneously”
“Best LinkedIn tools for startup founders who want to build a personal brand and generate leads”
Blueprint
  • On-Domain: Create /features/personal-brand-building page (SSR-rendered) covering: how ANDI helps founders and marketing leaders build consistent LinkedIn presence, multi-team member brand management, content amplification, and lead generation through thought leadership — directly competing with CoPilot AI's content on this topic
  • On-Domain: Publish 'How B2B Marketing Teams Use LinkedIn for Demand Gen Beyond Ads' guide targeting pur_005, pur_027 — framing ANDI as the tool behind the strategy
  • On-Domain: Create 'ANDI vs CoPilot AI for LinkedIn Personal Branding' Comparison post to directly address pur_095 — a named ANDI query currently won by a competitor
  • On-Domain: Publish 'LinkedIn Personal Branding Checklist for Startup Founders' resource targeting pur_032, pur_048 — structured format citable by both ChatGPT and Perplexity
  • Off-Domain: Pursue co-marketing with LinkedIn thought leaders or creator tools to build Personal Brand Growth & LinkedIn Presence category authority — AI platforms will cite those partnerships as evidence of credibility
  • Off-Domain: Submit ANDI to G2 'LinkedIn Tools for Personal Branding' subcategory if it exists, or request G2 add the feature tag to ANDI's profile
Platform Acuity

ChatGPT (medium): ChatGPT uses named product comparisons for 'ANDI vs CoPilot AI' queries; a dedicated Comparison post would be extractable for pur_095. Perplexity (high): Perplexity cites structured 'how B2B teams use X for Y' content in demand gen queries; the /features/personal-brand-building page with structured use cases would be citable for pur_005, pur_063.

NIO #7: Multichannel Sequencing — A Missing Feature Content Area Competitors Default-Win
Gap Type: Content Type Deficit — Multichannel sequencing has 0% visibility (0/5 queries) and coverage status is 'missing' for all 5 L3 queries. No content exists on the site explaining ANDI's approach to LinkedIn-plus-email sequencing, making it invisible for buyers who require multichannel outreach as a baseline requirement.
Medium

Dripify wins pur_093 ('which handles multi-channel sequences better, LinkedIn plus email?') by default because it publishes multichannel sequence documentation. With only 5 queries in this cluster, the absolute query volume is small — but Requirements Building and Artifact Creation buying jobs in this cluster carry disproportionate commercial weight because they appear late in the evaluation funnel. A buyer writing an RFP for multichannel sequencing who never sees ANDI in their research will not include it on the shortlist.

Query Cluster
IDs: pur_026, pur_042, pur_061, pur_093, pur_150
“Multi-channel outreach sequences vs LinkedIn-only campaigns — which approach works better for B2B startups?”
“Minimum feature requirements for LinkedIn outreach tools that support multi-channel sequences with email”
“Best LinkedIn prospecting platforms with multi-channel sequencing — LinkedIn plus email in one workflow”
“Dripify vs HeyReach — which handles multi-channel sequences better, LinkedIn plus email?”
Blueprint
  • On-Domain: Publish 'LinkedIn-Only vs Multichannel Sequencing: Which Approach Drives Better B2B Results?' guide targeting pur_026, pur_150 — data-backed Comparison with ANDI's positioning on the LinkedIn-first approach
  • On-Domain: Create 'Multichannel Outreach Requirements Checklist' resource targeting pur_042, pur_061 — structured so buyers can evaluate any tool (and ANDI specifically) against their multichannel requirements
  • On-Domain: Add multichannel sequencing to the /features page (pending L1 SSR fix) with honest capability description — what ANDI supports, what it doesn't, and the philosophy behind the approach
  • Off-Domain: Publish a LinkedIn article or contribute to a sales community thread on 'When LinkedIn-only outreach beats multichannel' — creates a citable third-party signal for ANDI's LinkedIn-first positioning
  • Off-Domain: Ensure G2 listing explicitly tags or describes multichannel sequencing support — review platform tags are citable evidence for Shortlisting queries like pur_061
Platform Acuity

ChatGPT (medium): ChatGPT cites named feature comparisons for multichannel queries; a Comparison post with specific capability claims would be extractable. Perplexity (high): Perplexity favors structured how-to and requirements content; a 'multichannel requirements checklist' page would be directly extractable for pur_042 and pur_061.

NIO #8: GEO Visibility Feature Content — A Differentiating Capability With Zero Documentation
Gap Type: Structural Gap — GEO visibility has 0% visibility (0/6 queries), and coverage status is 'missing' for all 6 L3 queries; Pursue Networking's own GEO Visibility feature has no content explaining what it does, how it works, or why B2B buyers should care about AI search presence.
High

Pursue Networking offers GEO visibility as a product feature — a genuine differentiator in the LinkedIn automation market — yet publishes zero content explaining it. Buyers searching for 'tools that help B2B startups show up in AI-generated recommendations' (pur_059) and 'business case for GEO visibility services' (pur_137) receive answers from generic marketing platforms, not ANDI. This is a double irony: the client is invisible in AI search results when buyers search for the exact product that improves AI search results. A single authoritative content hub on this topic would immediately differentiate ANDI from every competitor in the LinkedIn automation space.

Query Cluster
IDs: pur_024, pur_044, pur_059, pur_070, pur_137, pur_148
“How do GEO visibility services work — can they actually make my brand show up in AI search results?”
“Requirements for GEO visibility services — what should a B2B startup expect from an AI brand presence audit?”
“Tools that help B2B startups show up in AI-generated recommendations and search results”
“Business case for GEO visibility services — how does showing up in AI search results drive pipeline for B2B startups?”
Blueprint
  • On-Domain: Create /features/geo-visibility product page (SSR-rendered, high priority pending L1 fix) explaining: what GEO visibility means, how ANDI measures AI search presence, what an audit covers, and how LinkedIn networking correlates with AI citation frequency — the category-defining resource
  • On-Domain: Publish 'What Is GEO Visibility and Why B2B Startups Need to Care' education guide targeting pur_024, pur_059, pur_070 — frame ANDI as the first LinkedIn platform to surface AI search presence data
  • On-Domain: Create 'GEO Visibility Requirements for B2B Buyers: What to Ask Any Vendor' resource targeting pur_044 — structured checklist format, directly extractable by AI platforms
  • On-Domain: Publish 'Business Case for GEO Visibility: How AI Search Is Changing B2B Pipeline' targeting pur_137 — data-backed Consensus Creation content for marketing leaders making the case internally
  • Off-Domain: Publish GEO visibility methodology on a marketing-focused platform (MarketingProfs, Content Marketing Institute) to establish third-party citation signals — AI platforms will cite these for education queries like pur_024
  • Off-Domain: Seek inclusion in 'AI search optimization tools' roundups and articles — a nascent but fast-growing content category where early inclusion creates durable citation advantage
Platform Acuity

ChatGPT (high): ChatGPT answers 'how does GEO visibility work' queries with education content; a well-structured explainer page would be directly citable for pur_024 and pur_059. Perplexity (high): Perplexity would cite a structured GEO visibility checklist or requirements guide for pur_044 and pur_148; the self-contained Q&A format of a requirements page matches Perplexity's citation preference.

NIO #9: Comparison Page Architecture — Blog Posts Cannot Substitute for Dedicated Comparison Content
Gap Type: Structural Gap — 5 L3 queries with buying_job='Comparison' were routed to L3 because the site's existing coverage comes entirely from blog posts, while Comparison queries require dedicated Comparison page types. The affinity override triggered for all 5: existing pages cover the right feature area but wrong content architecture — blogs versus structured head-to-head Comparison pages.
High

Pursue Networking has 28.1% visibility (9/32) on Comparison queries and a 77.8% win rate (7/9 visible) — the strongest performance of any buying stage. But 5 Comparison queries are invisible because they demand structured side-by-side content that blog posts cannot provide: Dripify vs Salesflow pricing comparisons, Expandi vs competitors on personalization, and three-way automation platform head-to-heads. Building a /compare/ directory with individual pairing pages would directly address these 5 queries, and would also compound benefits for the 9 currently-winning Comparison queries by providing a structural home for all Comparison content.

Query Cluster
IDs: pur_075, pur_085, pur_096, pur_100, pur_101
“HeyReach vs Dripify for personalized LinkedIn outreach at scale — which handles personalization better?”
“Salesflow vs Expandi for LinkedIn outreach — which offers better AI personalization in messages?”
“Which LinkedIn automation has the best AI writing — CoPilot AI, Dripify, or HeyReach?”
“Switching from Dripify to something with better personalization — CoPilot AI or ANDI?”
Blueprint
  • On-Domain: Create /compare/ directory with individual head-to-head pages: /compare/andi-vs-dripify, /compare/andi-vs-heyreach, /compare/andi-vs-expandi, /compare/andi-vs-copilot-ai, /compare/andi-vs-salesflow — SSR-rendered, structured with feature tables, pricing comparison, and use-case fit columns
  • On-Domain: Publish 'ANDI vs CoPilot AI: Which Has Better Personalization?' targeting pur_100 specifically — a buyer explicitly searching for ANDI as an alternative is already sold on switching, just needs a Comparison resource
  • On-Domain: Build a 'LinkedIn Automation Tools Comparison Hub' index page listing all head-to-head comparisons and linking to individual pairing pages — creates a crawlable topic hub that AI platforms index as a Comparison resource
  • Off-Domain: Submit ANDI Comparison data to G2 'Compare' feature — G2's structured Comparison pages are among the most-cited sources for Comparison buying_job queries on both ChatGPT and Perplexity
  • Off-Domain: Create a LinkedIn post or Quora answer for 'Switching from Dripify to better personalization tool' — community content is cited by Perplexity for platform switch queries
Platform Acuity

ChatGPT (high): ChatGPT citation rate on Comparison queries is already strong (Pursue Networking wins 7/9 visible Comparison queries); structured /compare/ pages with extractable feature tables would increase surface area for the remaining gaps. Perplexity (high): Perplexity favors structured Comparison tables for head-to-head queries; dedicated /compare/ pages with self-contained Comparison data would be directly extracted for pur_075, pur_085, pur_096.

NIO #10: Competitor Validation & Pricing Intelligence — Absent When Buyers Evaluate Against ANDI
Gap Type: Content Type Deficit — 4 L3 queries target competitor weaknesses, pricing gotchas, and named ANDI comparisons during the Validation buying stage. Coverage status is 'missing' for 3 of 4 queries; Pursue Networking is invisible when buyers are actively comparing it against Expandi, HeyReach, and CoPilot AI during final evaluation.
Medium

Buyers at the Validation stage who ask 'Expandi's biggest weaknesses' (pur_109), 'HeyReach pricing gotchas' (pur_117), and 'Expandi pricing in 2026 — is it worth the cost?' (pur_122) are actively building the case to switch tools. Pursue Networking is absent from all three responses, missing the moment when switching intent is highest. These 4 queries represent a small cluster but disproportionate purchase-intent value. A single 'Why Teams Switch from [Competitor] to ANDI' content series would address all four with minimal production effort.

Query Cluster
IDs: pur_109, pur_117, pur_122, pur_145
“Expandi vs competitors — what are Expandi's biggest weaknesses that users complain about?”
“What are the contract and pricing gotchas with HeyReach that nobody tells you upfront?”
“Expandi pricing changes — is it still worth the cost in 2026 or have they gotten too expensive?”
“Build an executive summary comparing Salesflow, HeyReach, and CoPilot AI for our leadership team's review”
Blueprint
  • On-Domain: Publish 'Why Teams Switch from Expandi to ANDI: Real Customer Reasons' post targeting pur_109 — frame competitor weaknesses as ANDI strengths, with customer quotes if available
  • On-Domain: Create 'HeyReach vs ANDI: Pricing Transparency Compared' post targeting pur_117 — address pricing gotchas explicitly and position ANDI's pricing model as the honest alternative
  • On-Domain: Build 'Expandi Pricing in 2026: Is There a Better Option?' content targeting pur_122 — buyers asking this question are actively considering alternatives
  • On-Domain: Create an 'Executive Summary Template: LinkedIn Automation Tool Evaluation' resource targeting pur_145 — pre-structured document comparing major platforms including ANDI, downloadable format
  • Off-Domain: Monitor and respond to G2 reviews mentioning Expandi/HeyReach/Salesflow weaknesses with ANDI's perspective — Perplexity cites G2 review threads for competitor complaint queries
  • Off-Domain: Create a Reddit r/sales or r/LinkedInTips response thread addressing 'Expandi pricing gotchas' and positioning ANDI — community responses are citable for Validation queries on Perplexity
Platform Acuity

ChatGPT (medium): ChatGPT synthesizes competitor reviews for weakness queries; having named ANDI content that discusses competitor weaknesses in context would enable citation alongside G2 data. Perplexity (high): Perplexity cites community forums and review threads heavily for competitor complaint queries (pur_109, pur_122); community-format content (Reddit, Quora) would be the highest-leverage off-domain play for this cluster.

Unified Priority Ranking

All recommendations across all three layers, ranked by commercial impact × implementation speed.

  • 1

    Homepage Renders Only Tagline and Navigation to Crawlers

    The homepage (pursuenetworking.com) returns only the ANDI product tagline, navigation links, and footer when fetched server-side. The full product description, feature highlights, social proof, and calls-to-action that would be visible in a browser are rendered entirely by client-side JavaScript and are invisible to AI crawlers.

    Technical Fix · Engineering · Homepage (pursuenetworking.com) — the site's highest-traffic and highest-authority page.
  • 2

    Majority of Blog Content Exceeds 180-Day Freshness Threshold

    Of 22 analyzed blog posts, 14 (64%) were last updated more than 180 days ago, and 2 are over 365 days old. Zero blog posts have been updated within the last 90 days. The most recently published ANDI-focused posts date to July 2025 (8+ months ago). Some posts show an 'Updated: October 19, 2025' date, suggesting a batch update pass, but the majority have not been refreshed.

    Technical Fix · Content · 22 blog posts across networking, messaging, ANDI product, and data/systems categories.
  • 3

    Meta Descriptions and OG Tags Not Assessable — Manual Verification Recommended

    Meta descriptions and Open Graph tags cannot be assessed from rendered markdown output. The homepage's meta description was visible in metadata ('ANDI delivers B2B networking tools on LinkedIn...') but individual page meta descriptions and OG tags for blog posts, the pricing page, and the scale page could not be verified.

    Technical Fix · Engineering · All pages — priority on product/commercial pages and high-value blog posts.
  • 4

    Schema Markup Status Unknown — Manual Verification Recommended

    Our analysis method returns rendered page text rather than raw HTML, making it impossible to assess whether JSON-LD schema markup (Organization, Product, Article, FAQ, HowTo) is present on any page. The site's Next.js architecture may include schema in the JavaScript bundle, but this cannot be confirmed from the rendered output.

    Technical Fix · Engineering · All pages — particularly homepage, product pages, blog posts, and FAQ.
  • 5

    Analytics & ROI Proof Content — CRO Can't Justify the Investment

    Analytics and reporting has 13.3% visibility (2/15 queries) overall, with 14 queries routed to L3 because existing content lacks pipeline attribution data, ROI benchmarks, and measurement frameworks. The CRO persona — who must justify LinkedIn tool spend to boards — is invisible 92.9% of the time, appearing on only 2 of 28 queries and winning just 1 of those 2.

    New Content · Content · 14 queries affecting personas: Chief Revenue Officer / Executive Leader, Director of Revenue Operations / Operations Leader, Head of Marketing / Demand Generation
  • 6

    HubSpot & CRM Integration Hub — RevOps Veto-Holder Finds Nothing

    HubSpot integration has 0% visibility (0/15 queries) across the audit. The Director of Revenue Operations / Operations Leader persona — which holds veto power over tool selection — has 0% visibility (0/28 queries) across all query types. 15 L3 queries spanning problem identification through artifact creation are unaddressed because the site has no integration documentation, use-case content, or Comparison material for HubSpot sync.

    New Content · Content · 15 queries affecting personas: Director of Revenue Operations / Operations Leader, Chief Revenue Officer / Executive Leader, VP of Sales / Head of Sales Development
  • 7

    Outreach Automation Category Leadership — No Content Home for Pursue Networking's Core Use Case

    Outreach automation is Pursue Networking's primary category, yet the LinkedIn Outreach Automation & Sequences feature has 0% visibility (0/27 queries) across the audit. 28 L3 queries spanning Problem Identification through Artifact Creation are unaddressed because the site has no dedicated content treating ANDI as an outreach automation platform — only blog posts on adjacent messaging tactics.

    New Content · Content · 28 queries affecting personas: Chief Revenue Officer / Executive Leader, VP of Sales / Head of Sales Development, Founder / CEO / Entrepreneur, Head of Marketing / Demand Generation, Director of Revenue Operations / Operations Leader
  • 8

    AI DM Performance Benchmarks & Competitor Review Synthesis — /blog/ai-linkedin-dm-writing

    The /blog/ai-linkedin-dm-writing page contains no quantitative claims about reply rate improvements or conversion lift — buyers asking 'does AI-written LinkedIn outreach actually convert better?' (pur_020) cannot find a citable answer on this page.

    Content Optimization → New Content · Content · 7 queries, personas: VP of Sales / Head of Sales Development, Head of Marketing / Demand Generation, Founder / CEO / Entrepreneur
  • 9

    Account Safety & LinkedIn Compliance — A Risk-Blocking Gap at the Shortlisting Stage

    Account safety has 7.7% visibility (1/13 queries) with a 0% win rate on that 1 visible query. 13 L3 queries span problem identification through artifact creation; the site has no dedicated content addressing LinkedIn TOS compliance, safe automation limits, cloud-based vs. browser-extension safety, or account restriction risk — content types that competitors Expandi and Salesflow publish prominently.

    New Content · Content · 13 queries affecting personas: VP of Sales / Head of Sales Development, Founder / CEO / Entrepreneur, Director of Revenue Operations / Operations Leader
  • 10

    Comparison Page Architecture — Blog Posts Cannot Substitute for Dedicated Comparison Content

    5 L3 queries with buying_job='Comparison' were routed to L3 because the site's existing coverage comes entirely from blog posts, while Comparison queries require dedicated Comparison page types. The affinity override triggered for all 5: existing pages cover the right feature area but wrong content architecture — blogs versus structured head-to-head Comparison pages.

    New Content · Content · 5 queries affecting personas: Head of Marketing / Demand Generation, VP of Sales / Head of Sales Development, Founder / CEO / Entrepreneur, Chief Revenue Officer / Executive Leader
  • 11

    Competitor Platform Review Synthesis — Near-Rebuild Required on /blog/ai-linkedin-dm-writing

    The /blog/ai-linkedin-dm-writing page contains no content about HeyReach personalization — buyers asking specifically about HeyReach message quality (pur_112) find ANDI messaging content with no comparative relevance.

    Content Optimization → New Content · Content · 2 queries, personas: Head of Marketing / Demand Generation, Founder / CEO / Entrepreneur
  • 12

    Data Enrichment & Email Finding — A Feature Cluster Competitors Win by Default

    Data enrichment has 0% visibility (0/8 queries) and email finding has 0% visibility (0/5 queries). 13 L3 queries span the full buying funnel; coverage status is 'thin' or 'missing' for all 13 because the site has no content explaining ANDI's data enrichment capabilities, email finder accuracy, or how these features compare to Apollo.io and dedicated enrichment tools.

    New Content · Content · 13 queries affecting personas: Director of Revenue Operations / Operations Leader, VP of Sales / Head of Sales Development, Founder / CEO / Entrepreneur
  • 13

    GEO Visibility Feature Content — A Differentiating Capability With Zero Documentation

    GEO visibility has 0% visibility (0/6 queries), and coverage status is 'missing' for all 6 L3 queries; Pursue Networking's own GEO Visibility feature has no content explaining what it does, how it works, or why B2B buyers should care about AI search presence.

    New Content · Content · 6 queries affecting personas: Head of Marketing / Demand Generation, Founder / CEO / Entrepreneur
  • 14

    Personal Brand Building Hub — An Owned Feature With Zero Coverage

    Personal brand building has 20% visibility (2/10 queries) with a 50% win rate on those 2 visible queries — the best win rate of any feature cluster with significant query volume. Yet 8 queries are in L3 because no topic hub or feature page exists for this capability; coverage status is 'missing' for all 8 queries. The site wins when it appears, but appears only 20% of the time.

    New Content · Content · 8 queries affecting personas: Founder / CEO / Entrepreneur, Head of Marketing / Demand Generation
  • 15

    Personalization vs. Template Quality Positioning — /blog/linkedin-dm-templates and /blog/linkedin-connection-request-templates

    The /blog/linkedin-dm-templates page provides message templates without explaining why template-based outreach is increasingly ignored — buyers searching 'why are LinkedIn acceptance rates dropping' (pur_001) land on a templates page that doesn't answer this problem-identification question.

    Content Optimization → New Content · Content · 9 queries, personas: VP of Sales / Head of Sales Development, Head of Marketing / Demand Generation, Founder / CEO / Entrepreneur
  • 16

    Pipeline Impact Case Studies — Near-Rebuild Required on Consensus Creation Query

    The /blog/ai-linkedin-dm-writing page contains no customer outcome data — pur_129 ('Case studies of startups that grew their pipeline using LinkedIn AI messaging tools') requires named examples or anonymized benchmarks, neither of which appear on this page.

    Content Optimization → New Content · Content · 1 query, personas: Founder / CEO / Entrepreneur
  • 17

    AI Platform Shortlisting — Repositioning /blog/ai-linkedin-dm-writing for Buyer Evaluation Queries

    The /blog/ai-linkedin-dm-writing page teaches AI DM writing techniques generically without naming ANDI as the tool implementing them — buyers searching 'Best AI copilot tools for LinkedIn sales prospecting' (pur_064) find tips but no product recommendation.

    Content Optimization · Content · 4 queries, personas: Chief Revenue Officer / Executive Leader, VP of Sales / Head of Sales Development, Founder / CEO / Entrepreneur
  • 18

    Relationship Memory & Sales Context Tracking — Deepening /blog/build-linkedin-crm-with-andi for Shortlisting and Requirements Queries

    The /blog/build-linkedin-crm-with-andi page uses tutorial narrative ('here's how to build your CRM') rather than structured feature specifications — AI platforms cannot extract 'what data does ANDI capture about each relationship?' as a retrievable answer for pur_011.

    Content Optimization · Content · 3 queries, personas: VP of Sales / Head of Sales Development, Chief Revenue Officer / Executive Leader
  • 19

    Sitemap Uses Identical Timestamps for All Non-Blog URLs

    All 11 non-blog URLs in the sitemap (homepage, resources, resources subcategories, training, signin, dashboard, privacy) share an identical lastmod timestamp of 2025-10-13T23:15:51.234Z. This indicates the timestamps are auto-generated at build/deploy time rather than reflecting actual content modification dates.

    Technical Fix · Engineering · 11 non-blog URLs in the sitemap. Blog post timestamps appear accurate and individually dated.
  • 20

    Competitor Validation & Pricing Intelligence — Absent When Buyers Evaluate Against ANDI

    4 L3 queries target competitor weaknesses, pricing gotchas, and named ANDI comparisons during the Validation buying stage. Coverage status is 'missing' for 3 of 4 queries; Pursue Networking is invisible when buyers are actively comparing it against Expandi, HeyReach, and CoPilot AI during final evaluation.

    New Content · Content · 4 queries affecting personas: Founder / CEO / Entrepreneur, Chief Revenue Officer / Executive Leader, VP of Sales / Head of Sales Development
  • 21

    Evaluation Templates & Comparison Matrices — Near-Rebuild Required on Artifact Creation Queries

    The /blog/ai-linkedin-dm-writing page contains no structured evaluation criteria section — pur_142 ('Write evaluation criteria for LinkedIn AI messaging tools focused on authenticity and personalization quality') lands on a how-to writing guide rather than an evaluation framework.

    Content Optimization → New Content · Content · 3 queries, personas: Founder / CEO / Entrepreneur, VP of Sales / Head of Sales Development, Head of Marketing / Demand Generation
  • 22

    Multichannel Sequencing — A Missing Feature Content Area Competitors Default-Win

    Multichannel sequencing has 0% visibility (0/5 queries) and coverage status is 'missing' for all 5 L3 queries. No content exists on the site explaining ANDI's approach to LinkedIn-plus-email sequencing, making it invisible for buyers who require multichannel outreach as a baseline requirement.

    New Content · Content · 5 queries affecting personas: Chief Revenue Officer / Executive Leader, VP of Sales / Head of Sales Development, Founder / CEO / Entrepreneur, Director of Revenue Operations / Operations Leader
  • 23

    Startup Founder Lead Generation Without Constant Posting — Near-Rebuild on /blog/future-networking-ai-human-oversight-andi-approach

    The /blog/future-networking-ai-human-oversight-andi-approach page covers ANDI's philosophical approach to AI-assisted networking; pur_004 ('How do startup founders build a LinkedIn presence that generates inbound leads without posting all day?') needs tactical founder use-case content — the pages answer different questions entirely.

    Content Optimization → New Content · Content · 1 query, personas: Founder / CEO / Entrepreneur
  • 24

    No Explicit AI Crawler Directives in robots.txt

    The robots.txt file uses a single User-Agent: * block that allows all crawlers (with /dashboard/ and /api/ excluded). There are no specific directives for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, or Bytespider. All AI crawlers are implicitly allowed under the wildcard rule.

    Technical Fix · Marketing · Site-wide crawler access policy.
  • 25

    Critical Pages Invisible to AI Crawlers Due to Client-Side Rendering

    Four commercially important pages linked from the site's main navigation — /features, /faq, /pricing, and /pages/about — return HTTP 404 errors when fetched server-side. The /pricing page occasionally returns a shell HTML document containing only Next.js framework JavaScript with no rendered content. These pages are built as client-side-only routes in the Next.js application and do not generate server-side HTML.

    Technical Fix · Engineering · At minimum 4 navigation-linked pages: /features, /faq, /pricing, /pages/about. Potentially other routes not discoverable from sitemap or navigation. The homepage also renders minimal content server-side (only tagline and navigation links).
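
Regressions like items 1 and 25 can be caught in CI with a crawler-eye smoke check. The sketch below (Python) is illustrative only — the HTML shell and the required content markers are hypothetical stand-ins, not Pursue Networking's actual pages. It inspects raw server-rendered HTML exactly as a non-JavaScript crawler such as GPTBot would see it:

```python
from typing import Iterable

def missing_markers(html: str, markers: Iterable[str]) -> list[str]:
    """Return the content markers absent from raw (server-rendered) HTML.

    AI crawlers like GPTBot do not execute JavaScript, so anything not
    present in this string is invisible to them.
    """
    lowered = html.lower()
    return [m for m in markers if m.lower() not in lowered]

# Example: a client-side Next.js shell; only a tagline and nav reach crawlers.
shell_html = """
<html><body>
  <nav><a href="/features">Features</a><a href="/pricing">Pricing</a></nav>
  <h1>ANDI: B2B networking on LinkedIn</h1>
  <script src="/_next/static/chunks/main.js"></script>
</body></html>
"""

# Markers a commercial page should expose server-side (illustrative values).
required = ["pricing", "outreach automation", "multichannel"]
print(missing_markers(shell_html, required))
# -> ['outreach automation', 'multichannel']
```

In a real check, the HTML would be fetched with an AI crawler's User-Agent string; a non-empty result, or an HTTP 404 on a commercial route, would fail the build.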

Workstream Mapping

All three workstreams can start this week.

Engineering / DevOps

Layer 1 — Technical Fixes
Timeline: Days to 2 weeks
  • Critical Pages Invisible to AI Crawlers Due to Client-Side…
  • Homepage Renders Only Tagline and Navigation to Crawlers
  • Majority of Blog Content Exceeds 180-Day Freshness Threshold
  • Sitemap Uses Identical Timestamps for All Non-Blog URLs

Content Team

Layer 2 — Content Optimization
Timeline: 2–6 weeks
  • AI DM Performance Benchmarks & Competitor Review Synthesis…
  • AI Platform Shortlisting — Repositioning…
  • Personalization vs. Template Quality Positioning —…
  • Relationship Memory & Sales Context Tracking — Deepening…

Content Strategy

Layer 3 — NIOs + Off-Domain
Timeline: 1–3 months
  • Create /features/outreach-automation product page with…
  • Create /integrations/hubspot dedicated page (SSR-rendered,…
  • Publish 'LinkedIn Networking ROI Playbook for B2B Startups'…
  • Create 'ANDI Account Safety Architecture' page…
  • Create /features/data-enrichment page (SSR-rendered)…

[Synthesis] Implementation sequence follows a strict dependency: L1 technical fixes execute first because the CSR rendering failure means any new content published on the current Next.js setup will be as invisible to AI crawlers as the existing /features page. Publishing 111 new L3 content pages on an architecture that 404s commercially important routes is wasted effort. Once L1 fixes unblock crawler access, L2 optimizations to existing pages execute next (they improve content that already has some crawl access via the blog).

L3 new content — the 111 NIO recommendations — executes last, in priority_badge order: critical NIOs (nio_001, nio_002, nio_003) first, then high, then medium.

Methodology
Audit Methodology

Query Construction

150 queries constructed from persona × buying job × feature focus × pain point matrix
Every query carries four metadata fields assigned at creation time
High-intent jobs (Shortlisting + Comparison + Validation): 55% of queries (82 of 150)

Personas

VP of Sales / Head of Sales Development · Decision Maker
Chief Revenue Officer / Executive Leader · Decision Maker
Director of Revenue Operations / Operations Leader · Evaluator
Founder / CEO / Entrepreneur · Decision Maker
Head of Marketing / Demand Generation · Evaluator

Buying Jobs Framework

8 non-linear buying jobs: Artifact Creation, Comparison, Consensus Creation, Problem Identification, Requirements Building, Shortlisting, Solution Exploration, Validation

Competitive Set

Primary: CoPilot AI, Dripify, Expandi, HeyReach, Salesflow
Secondary: LinkedIn Sales Navigator, Closely, Apollo.io, We-Connect
Surprise: Clay, Waalaxy, Instantly, PhantomBuster, Smartlead, ZoomInfo, Meet Alfred — flagged for review

Platforms & Scoring

Platforms: ChatGPT + Perplexity
Visibility: Binary — does the client appear in the response?
Win rate: Of visible queries, is the client the primary recommendation?

Cross-Platform Counting (Union Method)

When a query is run on multiple platforms, union logic is applied: the query counts as “visible” if the client appears on any platform, rather than being counted separately per platform.
Winner resolution: When platforms disagree on the winner, majority vote is used. Vendor names are preferred over meta-values (e.g. “no clear winner”). True ties resolve to “no clear winner.”
Share of Voice: Each entity is counted once per query across platforms (union dedup), preventing double-counting when both platforms mention the same company.
This approach ensures headline metrics reflect real buyer-query outcomes rather than inflated per-platform counts.
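
The union and majority-vote rules above can be sketched compactly. This is an illustrative reimplementation (Python), not the audit's actual scoring code; the data shapes and names are assumptions:

```python
from collections import Counter

META_VALUES = {"no clear winner"}  # meta-values lose to vendor names

def union_visible(client: str, responses: dict[str, set[str]]) -> bool:
    """Visible if the client appears in ANY platform's response (union logic)."""
    return any(client in mentioned for mentioned in responses.values())

def resolve_winner(per_platform_winners: list[str]) -> str:
    """Majority vote across platforms.

    Vendor names are preferred over meta-values; true ties resolve to
    'no clear winner'.
    """
    counts = Counter(per_platform_winners)
    top = max(counts.values())
    leaders = [w for w, c in counts.items() if c == top]
    vendors = [w for w in leaders if w not in META_VALUES]
    if len(vendors) == 1:
        return vendors[0]
    if not vendors and len(leaders) == 1:
        return leaders[0]
    return "no clear winner"

responses = {
    "chatgpt": {"ANDI", "Dripify"},
    "perplexity": {"Expandi", "HeyReach"},
}
print(union_visible("ANDI", responses))             # -> True (visible on one platform)
print(resolve_winner(["ANDI", "no clear winner"]))  # -> ANDI (vendor beats meta-value)
print(resolve_winner(["ANDI", "Dripify"]))          # -> no clear winner (true tie)
```

Share of Voice dedup works the same way: each entity is added to a per-query set across platforms before counting, so a company mentioned by both platforms contributes one mention.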

Terminology

Mentions: Query-level visibility count. A company receives one mention per query where it appears in any platform response (union-deduped). This is the numerator for Share of Voice.
Unique Pages Cited: Count of distinct client page URLs cited across all platform responses, after URL normalization (stripping tracking parameters). The footer total in the Citation section uses this measure.
Citation Instances (Top Cited Domains): Raw count of citation occurrences per domain across all responses. A single domain can accumulate multiple citation instances from different queries and platforms. The Top Cited Domains table uses this measure.
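
The URL normalization behind 'Unique Pages Cited' can be illustrated with the standard library. A minimal sketch (Python); the tracking-parameter list here is an assumption, since the audit's exact normalization rules aren't specified:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumed tracking parameters to strip; the audit's real list may differ.
TRACKING_PREFIXES = ("utm_",)
TRACKING_PARAMS = {"gclid", "fbclid", "ref"}

def normalize_url(url: str) -> str:
    """Strip tracking parameters and fragments so citation variants dedupe to one page."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    kept = [
        (k, v) for k, v in parse_qsl(query, keep_blank_values=True)
        if k not in TRACKING_PARAMS and not k.startswith(TRACKING_PREFIXES)
    ]
    # Lowercase the host and drop trailing slashes so path variants collapse.
    return urlunsplit((scheme, netloc.lower(), path.rstrip("/") or "/", urlencode(kept), ""))

cited = [
    "https://pursuenetworking.com/blog/ai-linkedin-dm-writing?utm_source=chatgpt",
    "https://pursuenetworking.com/blog/ai-linkedin-dm-writing/",
]
print(len({normalize_url(u) for u in cited}))  # -> 1 unique page cited
```

Non-tracking parameters (pagination, language) are kept, so genuinely distinct pages still count separately.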