Engagement Foundation Review

Pursue Networking
Audit Foundation

Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Pursue Networking's market — your job is to tell us what we got right, what we got wrong, and what we missed.

Prepared March 2026
pursuenetworking.com
AI-Powered LinkedIn Sales Copilot
GEO Readiness

Where You Stand Today

Before we measure citation visibility in the AI-powered LinkedIn sales copilot space, these three signals tell us whether AI crawlers can access and trust your site.

Technical Readiness
Needs Attention
One high-severity finding: two indexed pages (/executive-concierge and /resources/ai-productivity) return 404 errors. The /executive-concierge page is linked from every page's footer navigation, sending negative quality signals on every crawl.
Content Freshness
Needs Attention
Weighted freshness: 0.47. 5 pages updated within 90 days. 8 pages older than 6 months (2 older than 12 months). 4 product pages with no detectable date — verify manually. 5 structural pages also undated.
Crawl Coverage
Good
All major AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bytespider) confirmed allowed. Robots.txt properly excludes only /dashboard/ and /api/ routes. Sitemap accessible with 31+ indexed pages.
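If engineering wants to re-verify these allowances, Python's standard-library robotparser can check them. The robots.txt content below is an illustrative reconstruction of the rules described in this finding, not the live file:

```python
import urllib.robotparser

# Illustrative robots.txt mirroring the finding: only /dashboard/ and
# /api/ are excluded; AI crawlers are otherwise allowed.
ROBOTS_TXT = """\
User-agent: *
Disallow: /dashboard/
Disallow: /api/
"""

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot",
               "Google-Extended", "Bytespider"]

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for bot in AI_CRAWLERS:
    # Public pages should be fetchable; excluded routes should not be.
    assert rp.can_fetch(bot, "/features")
    assert not rp.can_fetch(bot, "/dashboard/settings")

print("all AI crawlers allowed on public routes")
```

Against the live site, point `set_url` at https://pursuenetworking.com/robots.txt and call `read()` instead of parsing an inline string.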
Executive Summary

What You Need to Know

AI search is reshaping how B2B sales teams discover and evaluate AI-powered LinkedIn sales copilots. The knowledge graph maps Pursue Networking's competitive landscape across 5 primary and 4 secondary competitors, with 5 buyer personas anchored by two decision-makers — the VP of Sales and the Founder/CEO. Companies establishing GEO visibility now in this category gain a first-mover advantage that compounds as AI platforms learn to trust cited domains.

Layer 1 reveals a solid crawl foundation — all major AI crawlers are permitted — but one high-severity finding requires immediate attention: "Two Indexed Pages Return 404 Errors." The /executive-concierge page (ANDI Scale's product page) is linked from every page footer and returns a 404, meaning every crawl encounter reinforces a broken-site signal. Two medium-severity structural findings — "Sitemap Timestamps Do Not Reflect Actual Modification Dates" and "Schema Markup Cannot Be Assessed" — further reduce the precision of AI crawler indexing.

Two actions before the validation call: (1) Validate the VP of Sales and Head of Revenue Operations personas — both are inferred from category patterns rather than sourced from deal data, and if they don't match real buyer roles, the query set reshapes substantially toward frontline sales manager evaluation patterns. (2) Engineering should fix the broken /executive-concierge and /resources/ai-productivity pages now — these 404 repairs don't require the validation call and directly improve crawl quality signals.

TL;DR — Action Items
  • 🟡 High: Two Indexed Pages Return 404 Errors — Engineering should restore or redirect /executive-concierge (ANDI Scale product page linked from every footer) and /resources/ai-productivity, then remove broken URLs from sitemap.xml.
  • 🟣 Validate at the Call: Marcus Rivera (VP of Sales) — This persona is inferred, not sourced from deal data. If VPs of Sales don't evaluate LinkedIn copilots at ANDI's deal size, we remove pipeline-visibility queries and reallocate to Sales Manager evaluation criteria.
  • 🟣 Validate at the Call: Closely as a primary competitor — Closely is at medium confidence for tier assignment. If they don't appear in competitive deals, we move them to secondary and shift approximately 6-8 head-to-head queries to other primary competitors.
  • ✅ Start Now: Fix broken navigation pages and update sitemap timestamps — Engineering can resolve these independently; they stop negative crawl signals before we measure baseline visibility.
  • 📋 Validation Call: Confirm inferred persona roles (VP Sales, Head RevOps) — Getting these two roles right determines whether the query set tests VP-level pipeline concerns or concentrates on frontline sales manager evaluation patterns, reshaping roughly 30% of the buyer query architecture.
How to Use This Document

Three Things to Know

What This Is
This document presents what we've learned about the AI-powered LinkedIn sales copilot market from outside-in research. It captures the competitive landscape, buyer personas, feature taxonomy, pain points, and technical site analysis that will drive the audit's query set. Every section feeds directly into how we construct and prioritize buyer queries across AI platforms.

What We Need From You
Look for the purple question boxes throughout the document. Each one identifies a specific input where your answer changes the audit's direction. The most consequential questions are about persona roles and competitor tiers — getting these wrong means testing the wrong queries against the wrong buyers.

Confidence Badges
Every data point carries a confidence badge. High means sourced directly from public data (site content, G2 reviews, category listings). Medium means inferred from category patterns or partial signals. Low means best-guess. Medium and low items are the ones where your input matters most.

Company Profile

Pursue Networking

The client profile anchors every query — category, segment, and name variants determine how AI platforms identify and classify the company.

Client Profile High

Company Name Pursue Networking
Domain pursuenetworking.com
Name Variants Pursue, PursueNetworking, Pursue Networking Inc, ANDI, ANDI AI, Andi, Andi AI, Pursue Networking ANDI
Category AI-powered LinkedIn sales copilot — relationship-based outreach automation, CRM integration, and pipeline generation for B2B sales teams
Segment Startup
Key Products ANDI (AI LinkedIn Copilot), ANDI Scale (Executive Concierge Service)

Validate
Buyers searching for "ANDI" will encounter a different brand universe than "Pursue Networking" — do most deals start with the product name (ANDI) or the company name? And does ANDI Scale target a meaningfully different buyer than the core copilot? If yes, we may need to split the query set into two distinct buying conversations with separate persona clusters.

Buyer Personas

Who's Buying

5 personas: 2 decision-makers, 1 evaluator, 2 influencers. These roles determine the search intent patterns the audit tests.

Critical Review Area
Personas drive the entire query set. Every persona generates a distinct cluster of buyer queries based on their role, seniority, and buying stage. Adding, removing, or reclassifying a persona changes which queries the audit runs. Review each card carefully — especially the medium-confidence personas sourced from inference rather than direct data.

Data Sourcing Note
Name, role, department, seniority, influence level, veto power, and technical level are sourced from the knowledge graph. Buying jobs and query focus areas are synthesized from the persona's role context and the client's category — these represent our best outside-in interpretation and should be validated at the call.

Marcus Rivera
VP of Sales
Decision-maker Medium
Senior sales leader responsible for pipeline targets, team productivity, and sales technology stack decisions. Evaluates tools based on revenue impact and team adoption rates, not technical implementation details.
Veto power: Yes — owns the sales technology budget and can kill or approve tool purchases unilaterally
Technical level: Low — evaluates by outcome metrics (meetings booked, pipeline generated), not feature architecture
Primary buying jobs: Justify ROI of new sales tools to exec team, ensure tool adoption across the sales org, reduce pipeline generation costs per rep
Query focus areas: LinkedIn automation ROI, sales team productivity tools, pipeline generation platforms, outreach tool comparison
Source: LLM inference from category patterns

Marcus is inferred, not sourced — does a VP of Sales actually evaluate LinkedIn copilots at ANDI's typical deal size, or do Sales Managers own the decision? If VPs aren't in the loop, we drop pipeline-visibility queries and shift to frontline evaluation criteria.

Jennifer Park
Sales Manager / Director of Sales Development
Evaluator High
Frontline sales leader managing SDR teams, responsible for outreach volume, reply rates, and meeting conversion. Evaluates tools hands-on — runs trials, compares workflows, and champions tools upward to VP-level decision-makers.
Veto power: No — recommends tools and runs evaluations but needs VP or exec approval for purchase
Technical level: Medium — comfortable configuring sequences and integrations, evaluates UX and workflow fit
Primary buying jobs: Find tools that improve SDR productivity without increasing LinkedIn account risk, run competitive trials, build the business case for VP approval
Query focus areas: Best LinkedIn automation tools for SDR teams, safe LinkedIn outreach software, LinkedIn CRM with HubSpot sync, outreach personalization at scale
Source: Automated scrape — G2 reviewer titles, product page targeting

Does Jennifer run a formal evaluation (demos, trials, scorecards) or is this bottom-up adoption where reps champion the tool? If bottom-up, the query set shifts from comparison queries to adoption and onboarding queries.

David Okonkwo
Head of Revenue Operations
Influencer Medium
Owns the sales technology stack architecture, CRM data integrity, and workflow automation. Evaluates tools through the lens of integration quality, data hygiene, and reporting accuracy — the person who decides if a tool plays nicely with the existing stack.
Veto power: No — advisory role on technical fit, but can block implementations that break data pipelines
Technical level: High — evaluates API documentation, integration depth, data mapping, and reporting capabilities
Primary buying jobs: Ensure new tools integrate cleanly with HubSpot and existing workflows, prevent data silos, maintain pipeline reporting accuracy
Query focus areas: LinkedIn tools with HubSpot integration, CRM sync for LinkedIn outreach, sales tool stack consolidation, revenue operations automation
Source: LLM inference from category patterns

David is inferred — does Revenue Operations actually evaluate LinkedIn sales tools at ANDI's target companies, or does this role only surface post-purchase during CRM integration? If RevOps isn't in the buying loop, we drop integration-evaluation queries and shift to post-sale enablement content.

Sarah Chen
Founder / CEO
Decision-maker High
Startup founder or CEO who needs to build pipeline personally but lacks time for manual LinkedIn networking. Primary buyer for ANDI Scale's executive concierge service — values done-for-you approaches over tool configuration.
Veto power: Yes — ultimate budget authority at startup stage, often the sole decision-maker
Technical level: Low — wants outcomes (meetings, pipeline, brand visibility), not feature configuration
Primary buying jobs: Generate pipeline without spending hours on LinkedIn daily, build executive thought leadership, outsource relationship-building to a trusted system
Query focus areas: LinkedIn ghostwriting services, executive LinkedIn management, AI LinkedIn assistant for founders, done-for-you LinkedIn outreach
Source: Automated scrape — ANDI Scale product page, case studies

Is the Founder/CEO persona driven primarily by ANDI Scale's executive concierge service, or do founders also evaluate the core ANDI copilot? If ANDI Scale is a distinct buying conversation, the query cluster splits into "LinkedIn ghostwriting" (founders) vs. "LinkedIn automation tool" (sales leaders).

Tyler Washington
SDR Team Lead / Senior Account Executive
Influencer High
Senior individual contributor who lives inside LinkedIn daily. The power user whose workflow the tool must fit. Champions tools upward based on personal productivity gains — if Tyler loves it, the team adopts it.
Veto power: No — influences through adoption and advocacy, not budget authority
Technical level: Medium — evaluates by daily workflow fit, message quality, and account safety track record
Primary buying jobs: Find tools that increase personal reply rates and meeting bookings, avoid LinkedIn account restrictions, reduce manual follow-up overhead
Query focus areas: Best LinkedIn outreach tools for SDRs, LinkedIn automation that won't get me banned, AI message writing for LinkedIn, LinkedIn prospecting workflow
Source: Automated scrape — G2 reviewer titles, use case pages

Does Tyler influence the purchase decision upward, or does he receive the tool after someone else buys it? If Tyler is user-only (assigned post-purchase), we drop influence-stage queries and focus on adoption and retention content instead.

Missing Personas?
Who else shows up in your deals? Possible missing roles: Marketing Manager (if LinkedIn content creation drives inbound leads and marketing evaluates brand-safe automation), IT/Security Lead (if enterprise prospects need LinkedIn API compliance review before approving a browser extension), or Sales Enablement Manager (if larger teams have dedicated enablement evaluating rep productivity tools). What's missing?

Competitive Landscape

Who You're Competing Against

5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head queries the audit runs.

Why Tiers Matter
Getting these tiers right determines which queries test direct competitive differentiation vs. broader category awareness. Each primary competitor generates approximately 6-8 head-to-head queries — queries like "ANDI vs CoPilot AI" or "best LinkedIn automation for safe outreach." Closely is listed as primary but at medium confidence — if they rarely appear in actual deals, moving them to secondary would shift approximately 6-8 queries out of the head-to-head set and reallocate them to category-level queries.

Primary Competitors

CoPilot AI

Primary High
copilot.ai
Most direct competitor — AI-powered LinkedIn outbound platform with three coordinated AI agents for strategy, targeting, and reply management; stronger brand recognition and G2 presence but significantly higher price point and steeper learning curve than ANDI.
Source: Category listing

Expandi

Primary High
expandi.io
Cloud-based LinkedIn automation platform popular with agencies and sales teams; strong on safe automation with smart sequences and A/B testing, but lacks the relationship-management CRM and HubSpot sync depth that ANDI offers, and users report LinkedIn account restrictions despite safety claims.
Source: Category listing

Salesflow

Primary High
salesflow.io
LinkedIn automation tool built for sales teams and agencies with high-volume outreach capabilities (400+ connection requests/month); user-friendly and affordable but plagued by LinkedIn safety concerns, bugs, and poor customer support.
Source: Category listing

Closely

Primary Medium
closelyhq.com
AI-driven multichannel outreach platform combining LinkedIn and email automation with data enrichment; strong on AI personalization and unified inbox but newer entrant with less established track record than ANDI's 7+ years of operational experience.
Source: Category listing

Dux-Soup

Primary High
dux-soup.com
Budget-friendly browser-based LinkedIn automation tool popular with individual sales reps and small teams; very affordable entry point but limited to Chrome extension, no native CRM integration, and lacks AI-powered personalization capabilities.
Source: Category listing

Secondary Competitors

Apollo.io

Secondary High
apollo.io
All-in-one sales intelligence platform with 275M+ contact database, email sequences, and dialer; much broader scope than ANDI, but LinkedIn automation requires manual execution, users report data accuracy around 65%, and the credit-based pricing model is confusing.
Source: Category listing

HeyReach

Secondary Medium
heyreach.io
Agency-focused LinkedIn automation platform with multi-account sender rotation and flat pricing tiers; strong for high-volume operations but LinkedIn-only (no email), expensive at scale ($999+ for 50 accounts), and lacks the relationship-first approach ANDI emphasizes.
Source: Category listing

LinkedIn Sales Navigator

Secondary High
linkedin.com/sales
LinkedIn's own premium prospecting tool with advanced search filters, lead recommendations, and InMail credits; native platform advantage but no outreach automation, no AI message generation, and expensive with limited ROI for teams that need active pipeline generation.
Source: Category listing

Amplemarket

Secondary Medium
amplemarket.com
AI sales copilot (Duo) that automates prospect finding and multichannel outreach; enterprise-grade with strong AI capabilities but significantly more expensive and complex than ANDI, targeting larger sales organizations rather than individual reps and small teams.
Source: Category listing

Validate
Three competitors have medium-confidence tier assignments: Closely (primary — do they actually show up in competitive deals, or are they a category neighbor?), HeyReach (secondary — is their agency focus relevant to ANDI's buyer base?), and Amplemarket (secondary — does their enterprise positioning overlap with ANDI's startup segment?). Are there vendors we missed — particularly newer AI-first LinkedIn tools or Chrome extensions your team encounters in deals?

Feature Taxonomy

What Buyers Evaluate

12 buyer-level capabilities mapped. Strength ratings determine which capability queries test competitive differentiation vs. defensive positioning.

AI-Powered Message Personalization Strong High

Generate personalized LinkedIn messages and comments that sound authentic and human, not like a bot wrote them

Built-In LinkedIn CRM & Pipeline Management Strong High

Track and manage all my LinkedIn prospect relationships, deal stages, and follow-ups in one place without leaving LinkedIn

CRM Integration & HubSpot Sync Strong High

Automatically sync LinkedIn activity, contacts, and conversations with HubSpot so my CRM stays up to date without manual data entry

Prospect Activity Monitoring & Engagement Timing Strong High

Know when my prospects are active on LinkedIn and get notified about the best moments to engage with them

Warm Introduction Path Discovery Strong High

Identify mutual connections and warm introduction paths to reach decision-makers instead of cold outreach

Email Enrichment & Contact Discovery Moderate High

Find verified email addresses for my LinkedIn connections so I can reach out through multiple channels

Outreach Sequence Automation Moderate Medium

Automate my LinkedIn connection requests, follow-up messages, and outreach sequences without getting my account flagged

LinkedIn Content Creation & Post Writing Moderate Medium

Create engaging LinkedIn posts and thought leadership content that builds my personal brand and attracts inbound leads

Campaign Analytics & Performance Reporting Moderate Medium

See which outreach campaigns, messages, and sequences are driving replies and meetings so I can optimize my approach

Multichannel Outreach (LinkedIn + Email + Social) Weak High

Run coordinated outreach campaigns across LinkedIn, email, and other channels from one platform

Team Collaboration & Multi-Seat Management Weak Medium

Manage my whole sales team's LinkedIn outreach from one dashboard with shared templates, territories, and reporting

B2B Contact Database & Lead Discovery Absent High

Search and filter a large database of verified B2B contacts to find my ideal prospects without manual research

Validate
Are the five "strong" ratings (AI Personalization, LinkedIn CRM, HubSpot Sync, Activity Monitoring, Warm Intros) accurate relative to CoPilot AI and Expandi specifically? Outreach Sequence Automation is rated "moderate" at medium confidence — is ANDI intentionally deprioritizing automation volume in favor of relationship quality, or is this a gap being actively closed? If intentional, the audit frames it as differentiation; if a gap, the audit tests defensive queries. Any capabilities missing from this list?

Pain Point Taxonomy

What Buyers Suffer

9 pain points: 5 high, 4 medium severity. Buyer language is how queries will be phrased — the audit tests whether AI platforms cite ANDI when buyers describe these problems.

Personalized outreach at scale feels impossible High High

"Every AI outreach tool I've tried makes my messages sound like a robot — prospects can spot the automation instantly and it kills my credibility"
Personas: SDR Team Lead, Sales Manager

LinkedIn activity doesn't flow into CRM High High

"My reps are having great LinkedIn conversations but none of it shows up in HubSpot — I have no visibility into what's actually happening in the pipeline"
Personas: VP of Sales, Sales Manager, Head of Revenue Operations

LinkedIn account safety at risk from automation High High

"I got my LinkedIn account restricted because the last automation tool I used was too aggressive — I can't afford to lose my network over a sales tool"
Personas: SDR Team Lead, Sales Manager, VP of Sales

Follow-up timing and tracking breaks down High High

"I have hundreds of LinkedIn conversations going and I keep dropping the ball on follow-ups — warm leads go cold because I just can't keep track of everyone"
Personas: SDR Team Lead, Sales Manager

Cold outreach generates minimal pipeline High High

"My team sends hundreds of cold connection requests every week and we're barely booking any meetings — it feels like we're just annoying people"
Personas: VP of Sales, SDR Team Lead, Founder / CEO

Too many disconnected sales tools Medium Medium

"My team uses five different tools just to prospect on LinkedIn — one for automation, one for emails, one for enrichment, and it's a mess to manage"
Personas: Head of Revenue Operations, Sales Manager, VP of Sales

Founders can't dedicate time to LinkedIn networking Medium High

"I know I should be active on LinkedIn to build my company's pipeline but I'm running a business — I don't have 2 hours a day to write posts and send messages"
Personas: Founder / CEO

No visibility into rep LinkedIn activity Medium Medium

"I have no idea what my SDRs are actually doing on LinkedIn all day — I can't coach them if I can't see their activity or results"
Personas: VP of Sales, Sales Manager

Scaling authentic relationship-building across a team Medium Medium

"Our top rep books meetings because she builds real relationships, but we can't clone her approach — every tool we've tried just turns our team into spammers"
Personas: VP of Sales, Sales Manager, SDR Team Lead

Validate
Are the five high-severity pain points (spammy outreach, CRM blind spots, account safety, dropped follow-ups, cold outreach ROI) the problems your buyers actually articulate in sales calls? "Tool sprawl" is rated medium but affects 3 personas including RevOps — should it be high severity? Missing pain points to consider: compliance/security concerns (if enterprise buyers worry about data handling with LinkedIn browser extensions), onboarding time (if ramp-up speed is a deal factor vs. simpler tools like Dux-Soup), or LinkedIn algorithm changes (if buyers worry about platform policy shifts breaking their automation). What are we missing?

Layer 1 Site Findings

Technical Baseline

5 findings from the Layer 1 technical analysis. These are items your engineering team can evaluate and act on independently of the audit.

Engineering Action Required
No critical blockers, but one high-severity item needs prompt attention: Two Indexed Pages Return 404 Errors — the /executive-concierge page (ANDI Scale's product page) is linked from every page's footer navigation and returns a 404 on every crawl. Engineering should also verify schema markup presence using Google's Rich Results Test and update sitemap timestamps to reflect actual modification dates. These fixes are independent of the audit and will improve baseline crawl quality before we measure visibility.

🟡 Two Indexed Pages Return 404 Errors

What we found: Two pages linked from the site navigation and/or sitemap return HTTP 404 errors: /executive-concierge (linked from footer navigation under PRODUCT) and /resources/ai-productivity (present in sitemap.xml with priority 0.8). Both are publicly indexed and reachable through standard crawl paths.

Why it matters: Broken pages waste crawl budget and send negative quality signals to both traditional search engines and AI crawlers. When an AI platform encounters 404s in its index, it may reduce trust scores for the entire domain. The /executive-concierge page is particularly impactful because it represents a commercial product page (ANDI Scale) linked from every page's footer navigation.

Business consequence: Queries like "AI LinkedIn concierge service for executives" or "done-for-you LinkedIn outreach" may bypass Pursue Networking entirely when AI platforms encounter a broken product page in their index, giving competitors a default visibility advantage for ANDI Scale-related queries.

Recommended fix: Either restore the /executive-concierge page with current ANDI Scale product content or remove the link from footer navigation. For /resources/ai-productivity, either create the resource category page or remove the URL from sitemap.xml. For any URL that is not restored, implement a 301 redirect to the most relevant existing page.

Impact: High Effort: < 1 day Owner: Engineering Affected: 2 pages — /executive-concierge (linked from every page footer) and /resources/ai-productivity (in sitemap)
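As a sanity check after the fix ships, a short script can diff sitemap URLs against live HTTP statuses. This is a minimal sketch, not part of the audit tooling: the status lookup is injected so the example runs offline, and in production it would wrap an HTTP HEAD request instead.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_paths(sitemap_xml: str) -> list[str]:
    """Extract URL paths from a sitemap.xml document."""
    root = ET.fromstring(sitemap_xml)
    locs = [el.text for el in root.iter(f"{SITEMAP_NS}loc")]
    return [loc.split("pursuenetworking.com", 1)[-1] for loc in locs]

def find_broken(paths, fetch_status):
    """Return paths whose HTTP status is 404.

    fetch_status is injected so this runs offline; in production,
    wrap an HTTP HEAD request via urllib.request instead.
    """
    return [p for p in paths if fetch_status(p) == 404]

# Toy sitemap reflecting the finding: one live page, one 404.
sample = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://pursuenetworking.com/features</loc></url>
  <url><loc>https://pursuenetworking.com/resources/ai-productivity</loc></url>
</urlset>"""

statuses = {"/features": 200, "/resources/ai-productivity": 404}
print(find_broken(sitemap_paths(sample), statuses.get))
# → ['/resources/ai-productivity']; a fixed site prints an empty list.
```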

🔵 Sitemap Timestamps Do Not Reflect Actual Modification Dates

What we found: All 11 non-blog URLs in sitemap.xml share an identical lastmod timestamp of 2025-10-13T23:15:51.234Z, including utility pages (signin, dashboard, privacy) and all resource category pages. This timestamp appears auto-generated rather than reflecting actual content modifications. Blog post timestamps appear accurate based on visible publication dates.

Why it matters: AI crawlers and search engines use sitemap lastmod to prioritize recrawl frequency. When all pages share the same timestamp, crawlers cannot distinguish recently updated pages from stale ones, reducing the efficiency of freshness signals. Googlebot documentation specifically warns that unreliable lastmod dates may cause the crawler to ignore sitemap timestamps entirely for the domain.

Business consequence: When AI platforms cannot distinguish fresh ANDI feature pages from stale content, recently updated pages about LinkedIn CRM or HubSpot sync may be deprioritized in responses to queries like "best LinkedIn automation tool 2026" where recency is a ranking factor.

Recommended fix: Update the sitemap generation logic to reflect actual page modification dates. If using a static site generator or CMS, configure it to track content changes and update lastmod per-page. Remove utility pages (signin, dashboard) from the sitemap entirely — they provide no SEO or AI visibility value.

Impact: Medium Effort: 1-3 days Owner: Engineering Affected: All non-blog pages in sitemap.xml (11 URLs)
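One way to implement the per-page lastmod fix is to derive timestamps from the page sources' modification times at build. A hedged sketch, assuming pages exist as files on disk (a CMS would use its own content-change dates instead); the file names here are placeholders:

```python
import datetime
import pathlib
import tempfile
import xml.etree.ElementTree as ET

def build_sitemap(pages: dict[str, pathlib.Path], base: str) -> str:
    """Emit a sitemap whose <lastmod> reflects each page file's real
    mtime instead of one shared build timestamp."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for path, src in sorted(pages.items()):
        mtime = datetime.datetime.fromtimestamp(
            src.stat().st_mtime, tz=datetime.timezone.utc)
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = base + path
        # W3C datetime format, as the sitemap protocol expects.
        ET.SubElement(url, "lastmod").text = mtime.strftime("%Y-%m-%dT%H:%M:%SZ")
    return ET.tostring(urlset, encoding="unicode")

# Demo with two temp files standing in for page sources.
with tempfile.TemporaryDirectory() as d:
    pages = {}
    for name in ("features", "pricing"):
        src = pathlib.Path(d, f"{name}.html")
        src.write_text("<html></html>")
        pages[f"/{name}"] = src
    out = build_sitemap(pages, "https://pursuenetworking.com")
print(out)
```

Note the page set passed in is where signin and dashboard get dropped, per the recommended fix above.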

🔵 Schema Markup Cannot Be Assessed — Manual Verification Recommended

What we found: Our analysis method returns rendered page content as text, not raw HTML. JSON-LD schema markup is embedded in HTML source and is invisible in the rendered output. We cannot determine whether Product, FAQ, Article, or Organization schema is present on any page.

Why it matters: Structured data (JSON-LD schema) provides explicit entity signals to AI platforms and search engines. FAQ schema on the FAQ page, Product schema on features/pricing pages, and Article schema on blog posts help AI systems accurately classify content type and extract structured answers. Google's AI Overviews and ChatGPT search both leverage schema markup to improve citation quality.

Business consequence: Without verified schema markup, AI platforms may miss structured entity signals when processing queries like "what is the best LinkedIn CRM for sales reps" — reducing ANDI's chances of being cited in direct-answer responses where structured data provides a classification advantage.

Recommended fix: Verify schema markup using Google's Rich Results Test or Schema.org Validator. Ensure: (1) Organization schema on the homepage, (2) Product schema on /features and /pricing, (3) FAQPage schema on /faq, (4) Article schema on all blog posts with datePublished and dateModified, (5) BreadcrumbList schema for navigation hierarchy.

Impact: Medium Effort: 1-3 days Owner: Engineering Affected: All pages site-wide — priority on homepage, features, pricing, FAQ, and blog posts
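If the verification above turns up missing markup, JSON-LD is straightforward to generate. A minimal Organization example, with values taken from the client profile in this document (confirm with marketing before publishing):

```python
import json

# Organization schema for the homepage; values come from the client
# profile in this document and should be verified before publishing.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Pursue Networking",
    "alternateName": ["ANDI", "ANDI AI"],
    "url": "https://pursuenetworking.com",
}

snippet = json.dumps(organization, indent=2)

# Embed in the page <head> as:
#   <script type="application/ld+json"> ... </script>
print(snippet)

# Round-trip check: valid JSON with the expected entity type.
assert json.loads(snippet)["@type"] == "Organization"
```

Product, FAQPage, and Article schema follow the same pattern with their own required properties.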

🔵 Meta Descriptions and Open Graph Tags Cannot Be Assessed

What we found: Meta descriptions, Open Graph tags, and Twitter Card tags are embedded in HTML source and not visible in rendered page output. We cannot verify whether these are present, accurate, or optimized across the site.

Why it matters: Meta descriptions influence how AI platforms summarize pages in search results and citations. OG tags determine how pages appear when shared or referenced in AI-generated responses. Missing or generic meta descriptions mean AI platforms must auto-generate summaries, which may not highlight ANDI's key differentiators.

Business consequence: Missing or generic meta descriptions mean AI platforms auto-generate summaries of ANDI's pages, which may not highlight relationship-based outreach as a differentiator in LinkedIn sales copilot queries — ceding the narrative framing to competitors with optimized descriptions.

Recommended fix: Verify meta descriptions and OG tags using a social preview tool or browser developer tools. Ensure each commercial page has a unique meta description under 160 characters that includes the primary value proposition. Verify og:title, og:description, and og:image are set on all pages.

Impact: Low Effort: 1-3 days Owner: Marketing Affected: All pages — priority on homepage, features, pricing, and high-traffic blog posts
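The length rule above is easy to script against rendered HTML. A small stdlib sketch; the sample description string is illustrative, not the site's actual tag:

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Collect the content of the <meta name="description"> tag."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "description":
            self.description = a.get("content")

def check_description(html: str, limit: int = 160) -> str:
    """Classify a page's meta description as ok, missing, or too long."""
    p = MetaDescriptionParser()
    p.feed(html)
    if p.description is None:
        return "missing"
    if len(p.description) > limit:
        return f"too long ({len(p.description)} chars)"
    return "ok"

sample = ('<head><meta name="description" '
          'content="AI-powered LinkedIn sales copilot for B2B teams."></head>')
print(check_description(sample))           # → ok
print(check_description("<head></head>"))  # → missing
```

The same parser pattern extends to og:title, og:description, and og:image by matching the meta tag's property attribute.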

🔵 Client-Side Rendering Status Unknown — Verify JavaScript Dependency

What we found: All 31 analyzed pages returned substantial text content through our rendering method, suggesting server-side or static rendering is functional. However, we cannot definitively confirm whether any pages rely on client-side JavaScript rendering that might fail for AI crawlers with limited JavaScript execution. The site appears to be built on Next.js based on 404 page metadata.

Why it matters: Next.js supports both server-side rendering (SSR) and client-side rendering (CSR). If any routes use CSR, AI crawlers like GPTBot and ClaudeBot may see empty or incomplete content. Since all pages returned content in our analysis, CSR is unlikely to be a blocking issue, but the site's framework warrants a quick verification.

Business consequence: If any ANDI pages rely on client-side JavaScript rendering, AI crawlers may see empty content for queries about LinkedIn automation features — though initial analysis suggests this is unlikely given all 31 pages returned substantial content.

Recommended fix: Test 3-5 key pages (homepage, features, pricing, one blog post) with JavaScript disabled in the browser to verify content renders without JS. Alternatively, use Google's URL Inspection tool in Search Console to see the rendered HTML that Googlebot processes.

Impact: Low Effort: < 1 day Owner: Engineering Affected: Potential site-wide impact if CSR is used on any routes

Site Analysis Summary

Total Pages Analyzed 31
Commercially Relevant Pages 31
Heading Hierarchy 0.82
Content Depth 0.69
Freshness 0.47 weighted (blog: 0.47, product: unable to assess, structural: unable to assess)
Passage Extractability 0.69
Schema Coverage Unable to assess (31 pages unscored)

Note
9 pages have no freshness score (4 product pages + 5 structural pages with no detectable dates). Schema coverage could not be assessed for any page due to analysis method limitations. These gaps should be verified manually.

What Happens Next

Next Steps

Why Now

• AI search adoption is accelerating — buyer discovery patterns are shifting quarter over quarter
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates
• Competitors who establish GEO visibility first create a structural disadvantage for late movers
• The AI-powered LinkedIn sales copilot category is still in the early innings of GEO optimization — acting now means competing against inaction, not against entrenched strategies

The full audit will measure citation visibility across buyer queries in the AI-powered LinkedIn sales copilot space — queries like "best LinkedIn automation tool that won't get my account flagged," "AI sales copilot with HubSpot integration," and "LinkedIn outreach tool vs. CoPilot AI." You'll see exactly which queries return results that include your competitors but not ANDI — and what it would take to appear in those responses. Fixing the broken navigation pages and sitemap timestamps now improves your crawl baseline before we even measure it.

01

Validation Call

45-60 minutes to walk through this document together. Confirm personas, competitor tiers, feature strengths, and pain point severity. Every correction sharpens the query set.

02

Query Generation & Execution

Validated inputs generate buyer queries tested across selected AI platforms. Each persona × feature × pain point combination produces queries that mirror real buyer search behavior.

03

Full Audit Delivery

Visibility analysis, competitive positioning, and a three-layer action plan: quick wins, strategic content priorities, and long-term authority building — all prioritized by actual citation data.

Start Now — Engineering

These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:

Fix broken navigation pages: Restore /executive-concierge with ANDI Scale content (or redirect + remove from footer), and resolve /resources/ai-productivity (restore or remove from sitemap.xml)
Update sitemap timestamps: Configure sitemap generation to reflect actual page modification dates and remove utility pages (signin, dashboard) from the sitemap
Verify schema markup: Run Google's Rich Results Test on homepage, /features, /pricing, /faq, and 2-3 blog posts to confirm JSON-LD is present and well-formed
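As a local supplement to the Rich Results Test on the pages listed above, a short script can confirm that JSON-LD blocks are present and parse cleanly. A minimal stdlib-only sketch; the example URL in the usage comment is an assumption, and a malformed block deliberately raises so the problem page is obvious.

```python
import json
from html.parser import HTMLParser

class JSONLDCollector(HTMLParser):
    """Grabs the contents of every <script type="application/ld+json"> block."""
    def __init__(self):
        super().__init__()
        self.blocks = []
        self._in_jsonld = False

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks.append(data)

def jsonld_types(html: str) -> list:
    """Parse each JSON-LD block and return the @type values found."""
    collector = JSONLDCollector()
    collector.feed(html)
    types = []
    for raw in collector.blocks:
        data = json.loads(raw)          # raises if malformed — that's the signal
        items = data if isinstance(data, list) else [data]
        types.extend(item.get("@type", "?") for item in items)
    return types

# Illustrative usage (URL is an assumption):
# from urllib.request import urlopen
# html = urlopen("https://pursuenetworking.com/pricing", timeout=10).read().decode("utf-8")
# print(jsonld_types(html))   # an empty list means no JSON-LD on the page
```

An empty list on a commercial page is the finding to log; the Rich Results Test then tells you whether the markup that is present maps to a recognized schema.org type.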

Before the Call

Your Pre-Call Checklist

Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.

Questions for You
Does a VP of Sales (Marcus Rivera) actually evaluate LinkedIn copilots at ANDI's deal size, or do Sales Managers own the decision?
If wrong: we drop pipeline-visibility queries and shift to frontline evaluation criteria, reshaping ~30% of buyer query architecture
Does Revenue Operations (David Okonkwo) evaluate LinkedIn sales tools pre-purchase, or only surface post-purchase during CRM integration?
If wrong: we drop integration-evaluation queries and shift to post-sale enablement content
Do most deals start with the product name (ANDI) or the company name (Pursue Networking), and does ANDI Scale target a different buyer than the core copilot?
If wrong: we may need to split the query set into two distinct buying conversations with separate persona clusters
Is the Founder/CEO persona (Sarah Chen) driven primarily by ANDI Scale, or do founders also evaluate the core ANDI copilot?
If wrong: query cluster splits into "LinkedIn ghostwriting" (founders) vs. "LinkedIn automation tool" (sales leaders)
Does Closely actually appear in competitive deals, or are they a category neighbor that should be moved to secondary?
If wrong: ~6-8 head-to-head queries get reallocated to other primary competitors
Does Jennifer Park run formal evaluations (demos, trials) or is adoption bottom-up where reps champion the tool?
If wrong: query set shifts from comparison queries to adoption and onboarding queries
Does the SDR Team Lead (Tyler Washington) influence the purchase decision upward, or receive the tool post-purchase?
If wrong: we drop influence-stage queries and focus on adoption/retention content
Are the five "strong" feature ratings accurate relative to CoPilot AI and Expandi — and is Outreach Sequence Automation "moderate" intentional or a gap being closed?
If wrong: query strategy changes — lean into strengths for differentiation, play defense on weaknesses
Are HeyReach and Amplemarket relevant secondary competitors, or should either be dropped or promoted?
If wrong: category-level query allocation shifts
Are the five high-severity pain points the problems buyers actually articulate — and should "tool sprawl" be elevated to high severity?
If wrong: buyer language in queries won't match real search behavior, reducing citation relevance
Are we missing personas (Marketing Manager, IT/Security, Sales Enablement) or pain points (compliance, onboarding time, LinkedIn algorithm changes)?
If wrong: entire buyer query clusters are absent from the audit
For Engineering — Start Now
Fix broken /executive-concierge and /resources/ai-productivity pages (restore, redirect, or remove from navigation and sitemap)
Stops negative crawl quality signals on every page visit — linked from every footer
Update sitemap.xml timestamps to reflect actual modification dates and remove utility pages (signin, dashboard)
Restores freshness signaling so AI crawlers prioritize recently updated commercial pages
Verify schema markup (JSON-LD) on homepage, /features, /pricing, /faq, and 2-3 blog posts using Rich Results Test
Confirms whether AI platforms receive structured entity signals from ANDI's key pages
Quick CSR check: test homepage, /features, /pricing with JavaScript disabled to verify content renders without JS
Rules out the edge case where AI crawlers see empty pages — likely fine but worth 10 minutes to confirm
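The sitemap items in the checklist above can be spot-checked the same way: parse sitemap.xml and flag entries whose lastmod values all share one date (a common sign of build-time timestamps rather than real edit dates) or that point at utility pages. Stdlib-only sketch; the sitemap URL in the usage comment and the utility-path patterns are assumptions based on the findings in this document.

```python
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
UTILITY_PATTERNS = ("/signin", "/dashboard")  # assumed paths that should not be listed

def audit_sitemap(xml_text: str):
    """Return (Counter of lastmod dates, list of utility URLs found in the sitemap)."""
    root = ET.fromstring(xml_text)
    dates, utility = Counter(), []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", default="", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", default="(missing)", namespaces=NS)
        dates[lastmod[:10]] += 1       # bucket by date, ignoring the time component
        if any(p in loc for p in UTILITY_PATTERNS):
            utility.append(loc)
    return dates, utility

# Illustrative usage (URL is an assumption):
# from urllib.request import urlopen
# xml_text = urlopen("https://pursuenetworking.com/sitemap.xml", timeout=10).read().decode("utf-8")
# dates, utility = audit_sitemap(xml_text)
# if len(dates) == 1:
#     print("All lastmod values identical — likely build timestamps, not edit dates")
# print("Utility pages to remove:", utility)
```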
Alignment

We're Aligned On

This isn't a contract — it's a shared understanding. The audit runs against what's below. If something changes between now and the call, we adjust. The goal is to make sure we're asking the right questions for the right buyers against the right competitors.
Already Confirmed
Competitive set — 5 primary + 4 secondary competitors (CoPilot AI, Expandi, Salesflow, Closely, Dux-Soup as primary; Apollo.io, HeyReach, LinkedIn Sales Navigator, Amplemarket as secondary)
Persona set — 5 personas: 2 decision-makers (VP Sales, Founder/CEO), 1 evaluator (Sales Manager), 2 influencers (Head RevOps, SDR Team Lead)
Feature taxonomy — 12 capabilities with outside-in strength ratings (5 strong, 4 moderate, 2 weak, 1 absent)
Pain point set — 9 buyer frustrations with severity ratings (5 high, 4 medium)
Layer 1 technical audit — 5 findings logged (1 high, 2 medium, 2 low), engineering notified
Decided at the Call
Inferred persona validation — confirm whether VP of Sales (Marcus Rivera) and Head of Revenue Operations (David Okonkwo) actually appear in ANDI's deal cycles, or if the query set should concentrate on Sales Manager and SDR roles
ANDI vs. ANDI Scale buyer split — determine if the executive concierge service targets a distinct buyer conversation requiring separate query clusters
Feature overweighting — top 3 features to emphasize in capability queries (proposed: AI Message Personalization, Built-In LinkedIn CRM, Warm Introduction Path Discovery based on strength × high-severity pain point linkage)
Pain point prioritization — top 3 buyer problems to test first (proposed: CRM/LinkedIn data gap, LinkedIn account safety, cold outreach low conversion based on severity × persona breadth)
Competitor tier adjustments — validate Closely as primary; confirm or drop HeyReach and Amplemarket as secondary