Competitive intelligence for AI-mediated buying decisions. Where Pursue Networking wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.
Pursue Networking's visibility gap is not a product problem — it is a three-layer infrastructure deficit that prevents AI platforms from indexing, retrieving, and citing the brand even when buyers are actively searching for exactly what ANDI offers.
[Mechanism] Three compounding gaps create the pattern. First, client-side rendering on the Next.js site means four navigation-critical pages (/features, /pricing, /faq, /pages/about) return 404 errors or empty JavaScript shells to AI crawlers — the commercial pages that would be cited in vendor evaluation queries cannot be indexed at all. Second, 5 of 12 audited features (LinkedIn Outreach Automation & Sequences, Unified Data Layer — LinkedIn, Gmail & HubSpot Integration, GEO Visibility & AI Brand Presence, Multi-Channel Campaign Sequencing, Email Finding & Verification) have zero content coverage on the site, so buyers researching these capabilities encounter only competitors.
Third, 64% of blog posts are 180+ days old with zero content updated in the last 90 days — research shows AI platforms strongly prefer recently-updated content for citation, and Pursue Networking's content freshness is well below competitor baselines. These three gaps compound: fixing only crawlability reveals thin content; publishing new content on the broken architecture makes it uncrawlable.
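The CSR failure is easy to confirm without crawler logs: fetch a page's server response and check whether any substantive visible text survives once scripts and tags are stripped. A minimal TypeScript sketch of that check (the 200-character threshold and sample HTML are illustrative assumptions, not audit parameters):

```typescript
// Heuristic check: does server-rendered HTML contain enough visible text
// for an AI crawler to index, or is it an empty client-side-rendered shell?
function visibleText(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop JS bundles
    .replace(/<style[\s\S]*?<\/style>/gi, "")   // drop inline CSS
    .replace(/<[^>]+>/g, " ")                   // drop remaining tags
    .replace(/\s+/g, " ")
    .trim();
}

// 200 visible characters is an illustrative threshold, not an audit parameter.
function looksLikeEmptyShell(html: string, minChars = 200): boolean {
  return visibleText(html).length < minChars;
}

// A CSR shell: navigation plus a script tag, no body copy.
const shell = `<html><body><nav><a href="/pricing">Pricing</a></nav>
<div id="__next"></div><script src="/bundle.js">var x=1;</script></body></html>`;

// A server-rendered page with real copy.
const rendered = `<html><body><h1>ANDI Features</h1><p>${
  "LinkedIn outreach automation with AI personalization. ".repeat(10)
}</p></body></html>`;
```

Running this against the four affected routes (/features, /pricing, /faq, /pages/about) would reproduce the audit finding if the server response is indeed a JavaScript shell.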
[Synthesis] The L1 SSR fix for client-side rendering is a direct prerequisite for L2 and L3 work: publishing new /features/outreach-automation or /compare/ pages on the current Next.js architecture would route them through the same CSR-only rendering that causes /features to 404 today. Fixing the sitemap timestamp issue (L1) then ensures new L2/L3 content is prioritized in AI crawler scheduling rather than treated as equally-fresh with 2025-era posts.
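The sitemap half of that fix is mechanical: emit a truthful per-URL lastmod instead of one shared timestamp. A hedged sketch (the URLs and dates are invented for illustration; real values would come from the CMS or git history, and the site's actual sitemap generation may differ):

```typescript
interface SitemapEntry { loc: string; lastmod: string; } // ISO 8601 dates

// Build sitemap XML with a distinct, truthful lastmod per URL.
function buildSitemap(entries: SitemapEntry[]): string {
  const urls = entries
    .map(e => `  <url><loc>${e.loc}</loc><lastmod>${e.lastmod}</lastmod></url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>`;
}

// Hypothetical entries: lastmod should reflect actual edits, not a batch
// timestamp, or crawlers learn to discount the sitemap's freshness signal.
const sitemap = buildSitemap([
  { loc: "https://pursuenetworking.com/features", lastmod: "2026-01-10" },
  { loc: "https://pursuenetworking.com/pricing",  lastmod: "2025-11-02" },
]);
```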
Where Pursue Networking appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] Pursue Networking is visible in 10% of buyer queries but wins only 6%.
Pursue Networking is present in only 10% of all buyer queries (15/150) and absent from 95.5% of early-funnel research (42/44 queries) — the stages where shortlists are formed. Fixing this is the entire point of the recommendations that follow.
| Dimension | Combined | Platform Delta |
|---|---|---|
| All Queries | 10% | Perplexity +2 percentage points |
| By Persona | | |
| Chief Revenue Officer / Executive Leader | 7.1% | Perplexity +4 percentage points |
| Founder / CEO / Entrepreneur | 21.2% | Perplexity +6 percentage points |
| Head of Marketing / Demand Generation | 10.7% | Even |
| Director of Revenue Operations / Operations Leader | 0% | Even |
| VP of Sales / Head of Sales Development | 9.1% | Even |
| By Buying Job | | |
| Artifact Creation | 8.3% | Even |
| Comparison | 28.1% | Even |
| Consensus Creation | 0% | Even |
| Problem Identification | 0% | Even |
| Requirements Building | 12.5% | Perplexity +12 percentage points |
| Shortlisting | 3.9% | Perplexity +4 percentage points |
| Solution Exploration | 0% | Even |
| Validation | 8.3% | Even |

By platform:

| Dimension | ChatGPT | Perplexity |
|---|---|---|
| All Queries | 8% | 10% |
| By Persona | | |
| Chief Revenue Officer / Executive Leader | 3.6% | 7.1% |
| Founder / CEO / Entrepreneur | 15.2% | 21.2% |
| Head of Marketing / Demand Generation | 10.7% | 10.7% |
| Director of Revenue Operations / Operations Leader | 0% | 0% |
| VP of Sales / Head of Sales Development | 9.1% | 9.1% |
| By Buying Job | | |
| Artifact Creation | 8.3% | 8.3% |
| Comparison | 28.1% | 28.1% |
| Consensus Creation | 0% | 0% |
| Problem Identification | 0% | 0% |
| Requirements Building | 0% | 12.5% |
| Shortlisting | 0% | 3.9% |
| Solution Exploration | 0% | 0% |
| Validation | 8.3% | 8.3% |
[Data] Overall visibility: 10% (15/150 queries). High-intent buying stage visibility: 14.6% (12/82 queries). Early-funnel invisibility: 95.5% (42/44 queries across Problem Identification, Solution Exploration, Requirements Building).
Best buying stage: Comparison at 28.1% (9/32 queries). Zero visibility in Solution Exploration (0/15) and Problem Identification (0/13).
[Synthesis] The visibility pattern reveals a funnel that only opens at the bottom. Pursue Networking appears primarily in Comparison and Validation queries — the stages where buyers have already formed a shortlist — while being completely absent from the discovery stages where shortlists are built. Buyers who never encounter Pursue Networking during problem identification and solution exploration will not be researching it at Comparison.
The 95.5% early-funnel invisibility rate is the root cause of the low overall visibility rate, not poor performance on individual queries.
60 queries won by named competitors · 31 no clear winner · 44 no vendor mentioned
Sorted by competitive damage — competitor-winning queries first.
| ID | Query | Persona | Stage | Winner |
|---|---|---|---|---|
| ⚑ Competitor Wins — 60 queries where a named competitor captures the buyer | | | | |
| pur_012 | "What's the actual ROI of LinkedIn networking tools for early-stage B2B companies?" | Chief Revenue Officer / Executive Leader | Problem Identification | LinkedIn Sales Navigator |
| pur_045 | "Best LinkedIn automation tools for startup sales teams in 2026" | VP of Sales / Head of Sales Development | Shortlisting | Dripify |
| pur_046 | "Top AI-powered LinkedIn networking platforms for B2B revenue teams" | Chief Revenue Officer / Executive Leader | Shortlisting | LinkedIn Sales Navigator |
| pur_047 | "LinkedIn prospecting tools with native HubSpot integration — which ones actually sync data properly?" | Director of Revenue Operations / Operations Leader | Shortlisting | LinkedIn Sales Navigator |
| pur_050 | "LinkedIn outreach tools that won't get my team's accounts restricted — which ones are safest?" | VP of Sales / Head of Sales Development | Shortlisting | Expandi |
| pur_051 | "Which LinkedIn automation platforms have the best analytics for measuring actual pipeline impact?" | Chief Revenue Officer / Executive Leader | Shortlisting | Closely |
| pur_052 | "LinkedIn tools with built-in contact data enrichment and email verification for B2B prospecting" | Director of Revenue Operations / Operations Leader | Shortlisting | Apollo.io |
| pur_054 | "Best platforms for scaling personalized LinkedIn outreach for demand gen teams at startups" | Head of Marketing / Demand Generation | Shortlisting | Expandi |
| pur_055 | "LinkedIn prospecting tools for SDR teams of 5-10 reps — which platforms handle multi-seat well?" | VP of Sales / Head of Sales Development | Shortlisting | LinkedIn Sales Navigator |
| pur_056 | "Best alternatives to Sales Navigator for LinkedIn prospecting with AI messaging capabilities" | Chief Revenue Officer / Executive Leader | Shortlisting | Apollo.io |
Remaining competitor wins: Dripify ×11, Expandi ×10, HeyReach ×10, Salesflow ×7, CoPilot AI ×4, Apollo.io ×3, LinkedIn Sales Navigator ×3, We-Connect ×1, Closely ×1. 31 queries with no clear winner. 44 queries with no vendor mentioned. Full query-level data available in the analysis export.
Queries where Pursue Networking is mentioned but a competitor is positioned more favorably.
| ID | Query | Persona | Buying Job | Winner | Pursue Networking Position |
|---|---|---|---|---|---|
| pur_036 | "Requirements for LinkedIn AI messaging tools that actually sound like the person sending them, not a template" | Founder / CEO / Entrepreneur | Requirements Building | No Vendor Mentioned | Brief Mention |
| pur_039 | "Requirements for LinkedIn tools that can prove networking ROI to the board with actual pipeline data" | Chief Revenue Officer / Executive Leader | Requirements Building | LinkedIn Sales Navigator | Brief Mention |
| pur_053 | "AI LinkedIn messaging tools that sound like you wrote them yourself, not a bot" | Founder / CEO / Entrepreneur | Shortlisting | Expandi | Brief Mention |
| pur_095 | "ANDI vs CoPilot AI for building personal brands on LinkedIn — which tool is more effective?" | Head of Marketing / Demand Generation | Comparison | CoPilot AI | Strong 2nd |
| pur_100 | "Switching from Dripify to something with better personalization — CoPilot AI or ANDI?" | VP of Sales / Head of Sales Development | Comparison | CoPilot AI | Mentioned In List |
| pur_123 | "Is ANDI safe to use with LinkedIn — any account restriction risks?" | Founder / CEO / Entrepreneur | Validation | No Vendor Mentioned | Primary Recommendation |
Who’s winning when Pursue Networking isn’t — and who controls the narrative at each buying stage.
[TL;DR] Pursue Networking wins 6% of queries (9/150), ranks #9 in SOV — H2H record: 8W–5L across 8 competitors.
Pursue Networking ranks #9 in Share of Voice (15 mentions, 4.6% share) but wins 77.8% of the Comparison queries it appears in (7/9 visible Comparison queries) — a product that wins when found, in a market where it rarely gets found.
| Company | Mentions | Share |
|---|---|---|
| Expandi | 58 | 17.7% |
| Dripify | 52 | 15.8% |
| HeyReach | 48 | 14.6% |
| Apollo.io | 39 | 11.9% |
| LinkedIn Sales Navigator | 35 | 10.7% |
| Salesflow | 29 | 8.8% |
| Closely | 23 | 7% |
| CoPilot AI | 22 | 6.7% |
| Pursue Networking | 15 | 4.6% |
| We-Connect | 7 | 2.1% |
When Pursue Networking and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.
Win = Pursue Networking was the primary recommendation (cross-platform majority). Loss = the competitor was the primary recommendation. Tie = neither was, or a third party was recommended.
For the 135 queries where Pursue Networking is completely absent, per-query winners are available in the analysis export.
Vendors that appear in responses but are not in Pursue Networking's defined competitive set.
[Synthesis] These two metrics measure different things and must be read together. The unconditional win rate of 5.3% (8/150 total queries) reflects overall competitive position — Pursue Networking wins a small fraction of all buyer queries across the category. The H2H records show pairwise matchup outcomes in the narrow slice of queries where both brands appear simultaneously: Pursue Networking wins or ties against Dripify, HeyReach, Salesflow, Closely, and Apollo.io when they co-appear.
H2H parity with these competitors does not contradict the low overall win rate — it reflects that Pursue Networking competes well in the queries it reaches but reaches far fewer queries than its competitors. Expandi's dominance in SOV (17.7% share, 58 mentions) is a content volume advantage, not solely a product advantage.
What AI reads and trusts in this category.
[TL;DR] Pursue Networking had 8 unique pages cited across buyer queries, ranking #11 among all cited domains. 10 high-authority domains cite competitors but not Pursue Networking.
8 unique pursuenetworking.com pages were cited across 150 queries, with the domain ranked #11 among cited domains. The CSR rendering failure is the primary technical cause; expanding citable page count through L1 fixes and L3 content is the primary remedy.
Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.
Non-competitor domains citing other vendors but not Pursue Networking — off-domain authority opportunities.
These domains cited competitors but did not cite Pursue Networking pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.
[Synthesis] 8 unique pages cited across 150 queries is a thin citation footprint — it means most of Pursue Networking's content is either not indexed by AI platforms or not deemed authoritative enough to surface in responses. The domain ranking of #11 confirms that pursuenetworking.com generates fewer citations than at least 10 other domains in this query space, including competitor domains and third-party review platforms. The practical implication: expanding the citable surface area (through L3 new content) and fixing the CSR rendering failure (L1) are the two actions most directly correlated with improving citations.
More citable pages carrying stronger freshness signals translate into more citation instances.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 25 priority recommendations (plus 6 near-rebuild optimizations) targeting 148 queries where Pursue Networking is currently invisible. 6 L1 technical fixes + 1 verification check, 8 content optimizations (L2), 10 new content initiatives (L3).
The recommendations execute in a strict sequence: L1 technical fixes first (prerequisite — new content published on the broken architecture will not be crawled), then L2 page optimizations, then L3 new content in priority order (critical NIOs: outreach automation, HubSpot integration, analytics/ROI).
Reading the priority numbers: Recommendations are ranked 1–25 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., the L1 table shows 1–4, then 19 and 25) mean higher-priority items belong to a different layer.
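The impact × speed rule can be made concrete as a score. The weights below are invented for illustration, since the report does not publish its scoring model:

```typescript
// Illustrative priority score: higher impact and faster implementation rank first.
// Both weight tables are assumptions, not the report's actual model.
const impactWeight: Record<string, number> = { Critical: 4, High: 3, Medium: 2, Low: 1 };
const speedWeight: Record<string, number> = { days: 3, weeks: 2, months: 1 };

function priorityScore(impact: string, speed: string): number {
  return (impactWeight[impact] ?? 0) * (speedWeight[speed] ?? 0);
}

// Under these weights, a High-impact fix shippable in days outranks
// a Critical one that takes months.
const fast = priorityScore("High", "days");       // 3 * 3 = 9
const slow = priorityScore("Critical", "months"); // 4 * 1 = 4
```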
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #1 | Homepage Renders Only Tagline and Navigation to Crawlers | High | 1-3 days |
| #2 | Majority of Blog Content Exceeds 180-Day Freshness Threshold | High | 2-4 weeks |
| #3 | Meta Descriptions and OG Tags Not Assessable — Manual Verification Recommended | Medium | 1-3 days |
| #4 | Schema Markup Status Unknown — Manual Verification Recommended | Medium | 1-3 days |
| #19 | Sitemap Uses Identical Timestamps for All Non-Blog URLs | Medium | 1-3 days |
| #25 | Critical Pages Invisible to AI Crawlers Due to Client-Side Rendering | Critical | 1-2 weeks |
Items requiring manual review before determining if action is needed.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #24 | No Explicit AI Crawler Directives in robots.txt | Low | < 1 day |
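If the manual review confirms the robots.txt gap, explicit directives are a few lines. A sketch of a permissive policy (the bot user-agent strings below are common AI crawlers as of writing; verify each against the vendor's current crawler documentation before shipping):

```
# robots.txt -- explicitly admit the major AI search crawlers
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://pursuenetworking.com/sitemap.xml
```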
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
Three gaps on the /blog/ai-linkedin-dm-writing page:
- It contains no quantitative claims about reply rate improvements or conversion lift, so buyers asking 'does AI-written LinkedIn outreach actually convert better?' (pur_020) cannot find a citable answer on this page.
- It does not compare ANDI's AI writing quality against Salesflow, Dripify, or HeyReach; the 'Salesflow AI messaging quality' (pur_119) and 'Dripify vs HeyReach message quality' (pur_124) queries route here but find no competitor context.
- It describes what AI DM writing is but does not explain the technical mechanism by which AI learns a sender's voice (pur_022), a requirements-level question buyers are actively asking.
Queries affected: pur_009, pur_016, pur_020, pur_022, pur_119, pur_124, pur_133
- The /blog/ai-linkedin-dm-writing page contains no content about HeyReach personalization, so buyers asking specifically about HeyReach message quality (pur_112) find ANDI messaging content with no comparative relevance.
- The same page contains no content about CoPilot AI's known limitations for small startup teams (pur_114), so a buyer validating CoPilot AI complaints finds no ANDI alternative positioning.
Queries affected: pur_112, pur_114
- The /blog/linkedin-dm-templates page provides message templates without explaining why template-based outreach is increasingly ignored; buyers searching 'why are LinkedIn acceptance rates dropping' (pur_001) land on a templates page that doesn't answer this problem-identification question.
- The same page does not differentiate between true AI personalization and variable-substitution templates; buyers asking 'how do automation platforms handle real personalization vs. just blasting templates?' (pur_017) find templates, not the answer they need.
- The /blog/linkedin-connection-request-templates page never mentions ANDI's approach to personalization or how using ANDI differs from copy-pasting from a template library, making it invisible for platform-evaluation queries (pur_053, pur_054, pur_058).
Queries affected: pur_001, pur_017, pur_036, pur_049, pur_053, pur_054, pur_058, pur_116, pur_135
- The /blog/ai-linkedin-dm-writing page contains no customer outcome data; pur_129 ('Case studies of startups that grew their pipeline using LinkedIn AI messaging tools') requires named examples or anonymized benchmarks, neither of which appear on this page.
- The matched blog posts make no before/after performance claims; they teach messaging techniques but do not quantify what those techniques achieve in pipeline terms, making them uncitable for Consensus Creation queries.
Queries affected: pur_129
- The /blog/ai-linkedin-dm-writing page teaches AI DM writing techniques generically without naming ANDI as the tool implementing them; buyers searching 'Best AI copilot tools for LinkedIn sales prospecting' (pur_064) find tips but no product recommendation.
- The same page never mentions LinkedIn Sales Navigator or Apollo.io, so it cannot position ANDI as an AI messaging alternative for pur_056 ('Best alternatives to Sales Navigator with AI messaging capabilities').
- The /blog/ai-for-linkedin-content page discusses AI for LinkedIn content broadly but does not explain how ANDI specifically learns a user's writing style (pur_067, 'AI-powered LinkedIn tools that actually learn your writing style'), a Shortlisting differentiator left unstated.
Queries affected: pur_046, pur_056, pur_064, pur_067
- The /blog/build-linkedin-crm-with-andi page uses tutorial narrative ('here's how to build your CRM') rather than structured feature specifications, so AI platforms cannot extract 'what data does ANDI capture about each relationship?' as a retrievable answer for pur_011.
- The same page does not compare ANDI's relationship tracking against LinkedIn Sales Navigator, the winner for pur_060 ('LinkedIn tools with relationship tracking so reps don't forget context between conversations'), leaving buyers without a head-to-head differentiator.
- The /blog/smart-context-capture-andi-remember-every-conversation page addresses general networking but does not specifically frame relationship memory for enterprise sales cycles with long time horizons (pur_043, 'what relationship tracking features should LinkedIn tools have for long enterprise sales cycles?').
Queries affected: pur_011, pur_043, pur_060
- The /blog/ai-linkedin-dm-writing page contains no structured evaluation criteria section; pur_142 ('Write evaluation criteria for LinkedIn AI messaging tools focused on authenticity and personalization quality') lands on a how-to writing guide rather than an evaluation framework.
- The /blog/linkedin-dm-templates page provides message templates, not vendor evaluation templates; pur_146 ('Create a vendor evaluation template for LinkedIn outreach tools focused on AI message quality, personalization, and reply rates') requires a procurement-style template, not a messaging template.
- No existing page contains a Comparison matrix across Dripify, Expandi, HeyReach, and Salesflow on personalization and brand building (pur_143); the matched blog posts have no structured multi-vendor Comparison table.
Queries affected: pur_142, pur_146, pur_143
- The /blog/future-networking-ai-human-oversight-andi-approach page covers ANDI's philosophical approach to AI-assisted networking, while pur_004 ('How do startup founders build a LinkedIn presence that generates inbound leads without posting all day?') needs tactical founder use-case content; the pages answer different questions entirely.
- The /blog/ethical-linkedin-outreach page discusses outreach ethics but does not explain how consistent ANDI-powered relationship nurturing converts to inbound leads for founders, the specific commercial mechanism buyers need to understand.
Queries affected: pur_004
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
Buyers at every stage of the automation purchase journey — from 'how do startups fix manual prospecting?' to 'build me a TCO model for a 10-person SDR team' — find zero Pursue Networking content. Competitors like Dripify, HeyReach, and Expandi dominate these queries by default, not by merit. Because LinkedIn Outreach Automation & Sequences is the feature with the largest query footprint (27 queries in the audit) and 0% visibility, fixing this gap has the highest potential query-volume payoff of any NIO. The absence is structural: without a dedicated product hub or feature pages for automation, no amount of blog optimization can close the gap.
- ChatGPT (high): ChatGPT Comparison and Shortlisting queries for automation tools (pur_045, pur_081, pur_091) show competitors cited by name with feature specifics; a named product hub with extractable feature claims would be directly quotable.
- Perplexity (high): Perplexity leads ChatGPT by 2pp overall visibility; structured Comparison tables and self-contained how-it-works sections (the format Perplexity extracts for answer boxes) would improve citation rates for automation category queries.
The Director of Revenue Operations is the most likely person to block a LinkedIn tool purchase if CRM integration is broken or absent. With 0% visibility across all 28 RevOps queries, Pursue Networking is invisible to this persona at every stage of the buying journey. Competitors like We-Connect (7 SOV mentions), CoPilot AI, and Dripify win these queries because they publish integration guides and CRM sync documentation that AI platforms can cite. This is a content-type deficit: the site has no integration-class content (setup guides, sync architecture, data field mapping), only general blog posts.
- ChatGPT (high): ChatGPT cites integration documentation and App Marketplace listings for CRM sync queries; structured content with specific field-mapping details and setup steps matches ChatGPT's citation pattern for integration queries.
- Perplexity (high): Perplexity favors Comparison tables and self-contained answer passages; a page structured around 'native vs Zapier' with a clear Comparison table would be directly extractable for Shortlisting queries like pur_047 and pur_065.
The CRO's primary objection to any LinkedIn tool is the inability to prove ROI to the board. Pursue Networking's existing blog content covers networking tactics but contains no quantified pipeline impact data, time-savings benchmarks, or payback period analysis. Competitors like LinkedIn Sales Navigator and Closely win these queries because they publish attribution methodology and pipeline analytics documentation. This gap is commercially consequential: pur_007 ('I know LinkedIn networking works but I can't prove it to my board') and pur_127 ('ROI of implementing LinkedIn networking automation for a 15-person startup sales team') are exactly the queries CROs ask when building the business case — and Pursue Networking is absent from every response.
- ChatGPT (medium): ChatGPT cites specific benchmark numbers when they appear in structured posts; ROI content with named data points (e.g., '34% reduction in time per qualified conversation') would be extracted for Consensus Creation queries.
- Perplexity (high): Perplexity consistently cites structured Comparison tables and benchmark data for B2B tool ROI queries; a page with a payback period table organized by team size would be directly extractable.
LinkedIn account restriction is a deal-blocking concern: when a VP of Sales sees their top SDR's account restricted for a week (pur_006), the next query is 'which platform is safest?' — and Pursue Networking is absent from every safety-related response. Expandi wins pur_050 and pur_072 specifically because it publishes cloud-based safety architecture documentation. This gap is particularly damaging because it appears at the Shortlisting and Requirements Building stages — the exact moment buyers are narrowing their list. A single well-structured 'ANDI Safety Architecture' page would address 7 of the 13 queries in this cluster.
- ChatGPT (medium): ChatGPT cites named safety architecture descriptions (e.g., 'cloud-based, runs on dedicated IP') for automation safety queries; a page with specific technical safety claims would be extracted for pur_018 and pur_050.
- Perplexity (high): Perplexity cites community forums and structured resource pages for 'has anyone gotten banned' queries; a structured checklist page would be extractable for pur_030, pur_034, and pur_144.
Apollo.io wins the majority of data enrichment Comparison queries because it publishes specific accuracy benchmarks, data source methodology, and contact coverage statistics — content that AI platforms can directly cite when buyers ask 'which LinkedIn tool has the best email finding accuracy?' Pursue Networking's ANDI platform may offer enrichment and email finding capabilities, but zero content exists to establish this. RevOps evaluators (0% visibility across all 28 queries) are the primary buyers for enrichment tools, and no ANDI content currently reaches them. Apollo.io's dominance on pur_052, pur_057, pur_068, pur_098 is entirely a content gap, not a product gap.
- ChatGPT (medium): ChatGPT cites named accuracy statistics for email finding queries; a page with specific benchmark data (e.g., '87% email deliverability rate') would be directly extractable for pur_025 and pur_068.
- Perplexity (high): Perplexity favors Comparison tables for 'dedicated enrichment tool vs built-in feature' queries; a structured Comparison page with pros/cons and accuracy data would be highly citable for pur_023 and pur_052.
Personal brand building is the one feature where Pursue Networking already wins when visible — 50% win rate (1/2 visible queries) signals the product resonates when buyers find it. But the 20% visibility rate on Personal Brand Growth & LinkedIn Presence queries means competitors like LinkedIn Sales Navigator and CoPilot AI default-win 8 out of 10 buyer queries in this category, simply because they publish LinkedIn thought leadership and brand building content that ANDI has no equivalent for. The Founder / CEO / Entrepreneur and Head of Marketing / Demand Generation personas both search for personal brand tools; creating a topic hub would serve both simultaneously while leveraging the existing win rate signal.
- ChatGPT (medium): ChatGPT uses named product comparisons for 'ANDI vs CoPilot AI' queries; a dedicated Comparison post would be extractable for pur_095.
- Perplexity (high): Perplexity cites structured 'how B2B teams use X for Y' content in demand gen queries; the /features/personal-brand-building page with structured use cases would be citable for pur_005 and pur_063.
Dripify wins pur_093 ('which handles multi-channel sequences better, LinkedIn plus email?') by default because it publishes multichannel sequence documentation. With only 5 queries in this cluster, the absolute query volume is small — but Requirements Building and Artifact Creation buying jobs in this cluster carry disproportionate commercial weight because they appear late in the evaluation funnel. A buyer writing an RFP for multichannel sequencing who never sees ANDI in their research will not include it on the shortlist.
- ChatGPT (medium): ChatGPT cites named feature comparisons for multichannel queries; a Comparison post with specific capability claims would be extractable.
- Perplexity (high): Perplexity favors structured how-to and requirements content; a 'multichannel requirements checklist' page would be directly extractable for pur_042 and pur_061.
Pursue Networking offers GEO visibility as a product feature — a genuine differentiator in the LinkedIn automation market — yet publishes zero content explaining it. Buyers searching for 'tools that help B2B startups show up in AI-generated recommendations' (pur_059) and 'business case for GEO visibility services' (pur_137) receive answers from generic marketing platforms, not ANDI. This is a double irony: the client is invisible in AI search results when buyers search for the exact product that improves AI search results. A single authoritative content hub on this topic would immediately differentiate ANDI from every competitor in the LinkedIn automation space.
- ChatGPT (high): ChatGPT answers 'how does GEO visibility work' queries with education content; a well-structured explainer page would be directly citable for pur_024 and pur_059.
- Perplexity (high): Perplexity would cite a structured GEO visibility checklist or requirements guide for pur_044 and pur_148; the self-contained Q&A format of a requirements page matches Perplexity's citation preference.
Pursue Networking has 28.1% visibility (9/32) on Comparison queries and a 77.8% win rate (7/9 visible) — the strongest performance of any buying stage. But 5 Comparison queries are invisible because they demand structured side-by-side content that blog posts cannot provide: Dripify vs Salesflow pricing comparisons, Expandi vs competitors on personalization, and three-way automation platform head-to-heads. Building a /compare/ directory with individual pairing pages would directly address these 5 queries, and would also compound benefits for the 9 currently-winning Comparison queries by providing a structural home for all Comparison content.
- ChatGPT (high): ChatGPT citation rate on Comparison queries is already strong (Pursue Networking wins 7/9 visible Comparison queries); structured /compare/ pages with extractable feature tables would increase surface area for the remaining gaps.
- Perplexity (high): Perplexity favors structured Comparison tables for head-to-head queries; dedicated /compare/ pages with self-contained Comparison data would be directly extracted for pur_075, pur_085, and pur_096.
Buyers at the Validation stage who ask 'Expandi's biggest weaknesses' (pur_109), 'HeyReach pricing gotchas' (pur_117), and 'Expandi pricing in 2026 — is it worth the cost?' (pur_122) are actively building the case to switch tools. Pursue Networking is absent from all three responses, missing the moment when switching intent is highest. These 4 queries represent a small cluster but disproportionate purchase-intent value. A single 'Why Teams Switch from [Competitor] to ANDI' content series would address all four with minimal production effort.
- ChatGPT (medium): ChatGPT synthesizes competitor reviews for weakness queries; named ANDI content that discusses competitor weaknesses in context would enable citation alongside G2 data.
- Perplexity (high): Perplexity cites community forums and review threads heavily for competitor complaint queries (pur_109, pur_122); community-format content (Reddit, Quora) would be the highest-leverage off-domain play for this cluster.
All recommendations across all three layers, ranked by commercial impact × implementation speed.
The homepage (pursuenetworking.com) returns only the ANDI product tagline, navigation links, and footer when fetched server-side. The full product description, feature highlights, social proof, and calls-to-action that would be visible in a browser are rendered entirely by client-side JavaScript and are invisible to AI crawlers.
Of 22 analyzed blog posts, 14 (64%) were last updated more than 180 days ago, and 2 are over 365 days old. Zero blog posts have been updated within the last 90 days. The most recently published ANDI-focused posts date to July 2025 (8+ months ago). Some posts show an 'Updated: October 19, 2025' date, suggesting a batch update pass, but the majority have not been refreshed.
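The freshness finding reduces to a bucket count over last-updated dates. A sketch with invented dates (the real input would be the 22 audited posts' last-modified timestamps):

```typescript
// Share of blog posts whose last update is older than a staleness threshold.
function staleShare(lastUpdated: Date[], now: Date, thresholdDays = 180): number {
  const msPerDay = 86_400_000;
  const stale = lastUpdated.filter(
    d => (now.getTime() - d.getTime()) / msPerDay > thresholdDays
  ).length;
  return stale / lastUpdated.length;
}

const now = new Date("2026-03-01");
// Hypothetical corpus: three stale posts, one fresh one.
const posts = [
  new Date("2025-07-01"), // ~243 days old -> stale
  new Date("2025-06-15"), // stale
  new Date("2024-12-01"), // stale
  new Date("2026-01-20"), // ~40 days old -> fresh
];
```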
Meta descriptions and Open Graph tags cannot be assessed from rendered markdown output. The homepage's meta description was visible in metadata ('ANDI delivers B2B networking tools on LinkedIn...') but individual page meta descriptions and OG tags for blog posts, the pricing page, and the scale page could not be verified.
Our analysis method returns rendered page text rather than raw HTML, making it impossible to assess whether JSON-LD schema markup (Organization, Product, Article, FAQ, HowTo) is present on any page. The site's Next.js architecture may include schema in the JavaScript bundle, but this cannot be confirmed from the rendered output.
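If schema is added during remediation, it should be server-rendered inside a `<script type="application/ld+json">` tag in the initial HTML so it can be verified without executing JavaScript. A hypothetical Organization snippet (values illustrative; the description reuses the meta text quoted above):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Pursue Networking",
  "url": "https://pursuenetworking.com",
  "description": "ANDI delivers B2B networking tools on LinkedIn..."
}
```

Product, Article, and FAQ types would follow the same pattern on their respective page types.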
Analytics and reporting has 13.3% visibility (2/15 queries) overall, with 14 queries routed to L3 because existing content lacks pipeline attribution data, ROI benchmarks, and measurement frameworks. The CRO persona — who must justify LinkedIn tool spend to boards — is invisible 92.9% of the time, appearing in only 2 of 28 queries and winning just 1 of those 2.
HubSpot integration has 0% visibility (0/15 queries) across the audit. The Director of Revenue Operations / Operations Leader persona — which holds veto power over tool selection — has 0% visibility (0/28 queries) across all query types. 15 L3 queries spanning problem identification through artifact creation are unaddressed because the site has no integration documentation, use-case content, or comparison material for HubSpot sync.
Outreach automation is Pursue Networking's primary category, yet the LinkedIn Outreach Automation & Sequences feature has 0% visibility (0/27 queries) across the audit. 28 L3 queries spanning problem identification through artifact creation are unaddressed because the site has no dedicated content treating ANDI as an outreach automation platform — only blog posts on adjacent messaging tactics.
The /blog/ai-linkedin-dm-writing page contains no quantitative claims about reply rate improvements or conversion lift — buyers asking 'does AI-written LinkedIn outreach actually convert better?' (pur_020) cannot find a citable answer on this page.
Account safety has 7.7% visibility (1/13 queries) with a 0% win rate on that 1 visible query. 13 L3 queries span problem identification through artifact creation; the site has no dedicated content addressing LinkedIn TOS compliance, safe automation limits, cloud-based vs. browser-extension safety, or account restriction risk — content types that competitors Expandi and Salesflow publish prominently.
5 L3 queries with buying_job='Comparison' were routed to L3 because the site's existing coverage comes entirely from blog posts, while Comparison queries require dedicated Comparison page types. The affinity override triggered for all 5: existing pages cover the right feature area but wrong content architecture — blogs versus structured head-to-head Comparison pages.
The /blog/ai-linkedin-dm-writing page contains no content about HeyReach personalization — buyers asking specifically about HeyReach message quality (pur_112) find ANDI messaging content with no comparative relevance.
Data enrichment has 0% visibility (0/8 queries) and email finding has 0% visibility (0/5 queries). 13 L3 queries span the full buying funnel; coverage status is 'thin' or 'missing' for all 13 because the site has no content explaining ANDI's data enrichment capabilities, email finder accuracy, or how these features compare to Apollo.io and dedicated enrichment tools.
GEO visibility has 0% visibility (0/6 queries) and coverage status is 'missing' for all 6 L3 queries. Notably, Pursue Networking's own GEO Visibility feature has no content explaining what it does, how it works, or why B2B buyers should care about AI search presence.
Personal brand building has 20% visibility (2/10 queries) with a 50% win rate on those 2 visible queries — the best win rate of any feature cluster with significant query volume. Yet 8 queries are in L3 because no topic hub or feature page exists for this capability; coverage status is 'missing' for all 8 queries. The site wins when it appears, but appears only 20% of the time.
The /blog/linkedin-dm-templates page provides message templates without explaining why template-based outreach is increasingly ignored — buyers searching 'why are LinkedIn acceptance rates dropping' (pur_001) land on a templates page that doesn't answer this problem-identification question.
The /blog/ai-linkedin-dm-writing page contains no customer outcome data — pur_129 ('Case studies of startups that grew their pipeline using LinkedIn AI messaging tools') requires named examples or anonymized benchmarks, neither of which appear on this page.
The /blog/ai-linkedin-dm-writing page teaches AI DM writing techniques generically without naming ANDI as the tool implementing them — buyers searching 'Best AI copilot tools for LinkedIn sales prospecting' (pur_064) find tips but no product recommendation.
The /blog/build-linkedin-crm-with-andi page uses tutorial narrative ('here's how to build your CRM') rather than structured feature specifications — AI platforms cannot extract 'what data does ANDI capture about each relationship?' as a retrievable answer for pur_011.
All 11 non-blog URLs in the sitemap (homepage, resources, resources subcategories, training, signin, dashboard, privacy) share an identical lastmod timestamp of 2025-10-13T23:15:51.234Z. This indicates the timestamps are auto-generated at build/deploy time rather than reflecting actual content modification dates.
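The standard remedy in a Next.js App Router project is to compute `lastModified` per route from the content source's modification time rather than the deploy timestamp. A minimal sketch — the route-to-file mapping shown in the comment is hypothetical, not Pursue Networking's actual file layout:

```typescript
import { statSync } from "node:fs";

// Derive a per-page lastmod from the content file's mtime instead of
// stamping every URL with the build time.
function lastModifiedFor(contentFile: string): Date {
  return statSync(contentFile).mtime;
}

// In a Next.js App Router project this would live in app/sitemap.ts:
//
// export default function sitemap(): MetadataRoute.Sitemap {
//   return [
//     { url: "https://pursuenetworking.com/resources",
//       lastModified: lastModifiedFor("content/resources.mdx") },
//     // ...one entry per route, each with its own real modification date
//   ];
// }
```

With real per-route dates in place, AI crawlers can distinguish freshly updated pages from stale ones when scheduling recrawls.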
4 L3 queries target competitor weaknesses, pricing gotchas, and named ANDI comparisons during the Validation buying stage. Coverage status is 'missing' for 3 of 4 queries; Pursue Networking is invisible when buyers are actively comparing it against Expandi, HeyReach, and CoPilot AI during final evaluation.
The /blog/ai-linkedin-dm-writing page contains no structured evaluation criteria section — pur_142 ('Write evaluation criteria for LinkedIn AI messaging tools focused on authenticity and personalization quality') lands on a how-to writing guide rather than an evaluation framework.
Multichannel sequencing has 0% visibility (0/5 queries) and coverage status is 'missing' for all 5 L3 queries. No content exists on the site explaining ANDI's approach to LinkedIn-plus-email sequencing, making it invisible for buyers who require multichannel outreach as a baseline requirement.
The /blog/future-networking-ai-human-oversight-andi-approach page covers ANDI's philosophical approach to AI-assisted networking; pur_004 ('How do startup founders build a LinkedIn presence that generates inbound leads without posting all day?') needs tactical founder use-case content — the page and the query address entirely different questions.
The robots.txt file uses a single User-Agent: * block that allows all crawlers (with /dashboard/ and /api/ excluded). There are no specific directives for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, or Bytespider. All AI crawlers are implicitly allowed under the wildcard rule.
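The wildcard already admits AI crawlers, so no change is strictly required; explicit per-bot groups simply make the policy auditable and individually controllable (e.g., blocking Bytespider without touching citation crawlers). A hypothetical robots.txt mirroring the current exclusions:

```txt
# Current wildcard behavior, kept for all crawlers
User-agent: *
Disallow: /dashboard/
Disallow: /api/

# Explicit AI-crawler policy (identical rules today, individually adjustable)
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
Disallow: /dashboard/
Disallow: /api/
```

The sitemap location could also be declared here with a `Sitemap:` line once the lastmod issue from L1 is resolved.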
Four commercially important pages linked from the site's main navigation — /features, /faq, /pricing, and /pages/about — return HTTP 404 errors when fetched server-side. The /pricing page occasionally returns a shell HTML document containing only Next.js framework JavaScript with no rendered content. These pages are built as client-side-only routes in the Next.js application and do not generate server-side HTML.
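A small smoke test can confirm which routes remain broken, and later verify the SSR fix. A TypeScript sketch (Node 18+); the 200-character visible-text threshold for detecting empty JS shells is an assumption:

```typescript
type CrawlStatus = "ok" | "not_found" | "empty_shell";

// Classify what an AI crawler receives for a route: a 404, an empty
// JavaScript shell, or a page with real server-rendered text.
function classifyResponse(httpStatus: number, bodyHtml: string): CrawlStatus {
  if (httpStatus === 404) return "not_found";
  const visibleText = bodyHtml
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop JS bundles
    .replace(/<[^>]+>/g, " ")                   // drop remaining tags
    .trim();
  return visibleText.length < 200 ? "empty_shell" : "ok"; // threshold assumed
}

async function smokeTest(base: string, routes: string[]): Promise<void> {
  for (const route of routes) {
    const res = await fetch(base + route);
    console.log(route, "->", classifyResponse(res.status, await res.text()));
  }
}

// The four routes flagged in this audit:
// smokeTest("https://pursuenetworking.com",
//           ["/features", "/pricing", "/faq", "/pages/about"]);
```

All four routes should classify as "ok" once server-side rendering is in place.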
All three workstreams can be scoped and started this week; the sequencing constraint below governs publication order, not planning.
[Synthesis] Implementation sequence follows a strict dependency: L1 technical fixes execute first because the CSR rendering failure means any new content published on the current Next.js setup will be as invisible to AI crawlers as the existing /features page. Publishing 111 new L3 content pages on an architecture that 404s commercially important routes is wasted effort. Once L1 fixes unblock crawler access, L2 optimizations to existing pages execute next (they improve content that already has some crawl access via the blog).
L3 new content — the 111 NIO recommendations — executes last, in priority_badge order: critical NIOs (nio_001, nio_002, nio_003) first, then high, then medium.