Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Pursue Networking's market — your job is to tell us what we got right, what we got wrong, and what we missed.
Before we measure citation visibility in the AI-powered LinkedIn sales copilot space, the signals in the Layer 1 technical analysis tell us whether AI crawlers can access and trust your site.
AI search is reshaping how B2B sales teams discover and evaluate AI-powered LinkedIn sales copilots. The knowledge graph maps Pursue Networking's competitive landscape across 5 primary and 4 secondary competitors, with 5 buyer personas anchored by two decision-makers — the VP of Sales and the Founder/CEO. Companies establishing GEO visibility now in this category gain a first-mover advantage that compounds as AI platforms learn to trust cited domains.
Layer 1 reveals a solid crawl foundation — all major AI crawlers are permitted — but one high-severity finding requires immediate attention: "Two Indexed Pages Return 404 Errors." The /executive-concierge page (ANDI Scale's product page) is linked from every page footer and returns a 404, meaning every crawl encounter reinforces a broken-site signal. Two medium-severity structural findings — "Sitemap Timestamps Do Not Reflect Actual Modification Dates" and "Schema Markup Cannot Be Assessed" — further reduce the precision of AI crawler indexing.
Two actions before the validation call: (1) Validate the VP of Sales and Head of Revenue Operations personas — both are inferred from category patterns rather than sourced from deal data, and if they don't match real buyer roles, the query set reshapes substantially toward frontline sales manager evaluation patterns. (2) Engineering should fix the broken /executive-concierge and /resources/ai-productivity pages now — these 404 repairs don't require the validation call and directly improve crawl quality signals.
What This Is
This document presents what we've learned about the AI-powered LinkedIn sales copilot market from outside-in research. It captures the competitive landscape, buyer personas, feature taxonomy, pain points, and technical site analysis that will drive the audit's query set. Every section feeds directly into how we construct and prioritize buyer queries across AI platforms.
What We Need From You
Look for the purple question boxes throughout the document. Each one identifies a specific input where your answer changes the audit's direction. The most consequential questions are about persona roles and competitor tiers — getting these wrong means testing the wrong queries against the wrong buyers.
Confidence Badges
Every data point carries a confidence badge. High means sourced directly from public data (site content, G2 reviews, category listings). Medium means inferred from category patterns or partial signals. Low means best-guess. Medium and low items are the ones where your input matters most.
The client profile anchors every query — category, segment, and name variants determine how AI platforms identify and classify the company.
Validate
Buyers searching for "ANDI" will encounter a different brand universe than "Pursue Networking" — do most deals start with the product name (ANDI) or the company name? And does ANDI Scale target a meaningfully different buyer than the core copilot? If yes, we may need to split the query set into two distinct buying conversations with separate persona clusters.
5 personas: 2 decision-makers, 1 evaluator, 2 influencers. These roles determine the search intent patterns the audit tests.
Critical Review Area
Personas drive the entire query set. Every persona generates a distinct cluster of buyer queries based on their role, seniority, and buying stage. Adding, removing, or reclassifying a persona changes which queries the audit runs. Review each card carefully — especially the medium-confidence personas sourced from inference rather than direct data.
Data Sourcing Note
Name, role, department, seniority, influence level, veto power, and technical level are sourced from the knowledge graph. Buying jobs and query focus areas are synthesized from the persona's role context and the client's category — these represent our best outside-in interpretation and should be validated at the call.
→ Marcus is inferred, not sourced — does a VP of Sales actually evaluate LinkedIn copilots at ANDI's typical deal size, or do Sales Managers own the decision? If VPs aren't in the loop, we drop pipeline-visibility queries and shift to frontline evaluation criteria.
→ Does Jennifer run a formal evaluation (demos, trials, scorecards) or is this bottom-up adoption where reps champion the tool? If bottom-up, the query set shifts from comparison queries to adoption and onboarding queries.
→ David is inferred — does Revenue Operations actually evaluate LinkedIn sales tools at ANDI's target companies, or does this role only surface post-purchase during CRM integration? If RevOps isn't in the buying loop, we drop integration-evaluation queries and shift to post-sale enablement content.
→ Is the Founder/CEO persona driven primarily by ANDI Scale's executive concierge service, or do founders also evaluate the core ANDI copilot? If ANDI Scale is a distinct buying conversation, the query cluster splits into "LinkedIn ghostwriting" (founders) vs. "LinkedIn automation tool" (sales leaders).
→ Does Tyler influence the purchase decision upward, or does he receive the tool after someone else buys it? If Tyler is user-only (assigned post-purchase), we drop influence-stage queries and focus on adoption and retention content instead.
Missing Personas? Who else shows up in your deals? Possible missing roles: Marketing Manager (if LinkedIn content creation drives inbound leads and marketing evaluates brand-safe automation), IT/Security Lead (if enterprise prospects need LinkedIn API compliance review before approving a browser extension), or Sales Enablement Manager (if larger teams have dedicated enablement evaluating rep productivity tools). What's missing?
5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head queries the audit runs.
Why Tiers Matter
Getting these tiers right determines which queries test direct competitive differentiation vs. broader category awareness. Each primary competitor generates approximately 6-8 head-to-head queries — queries like "ANDI vs CoPilot AI" or "best LinkedIn automation for safe outreach." Closely is listed as primary but at medium confidence — if they rarely appear in actual deals, moving them to secondary would shift approximately 6-8 queries out of the head-to-head set and reallocate them to category-level queries.
Validate
Three competitors have medium-confidence tier assignments: Closely (primary — do they actually show up in competitive deals, or are they a category neighbor?), HeyReach (secondary — is their agency focus relevant to ANDI's buyer base?), and Amplemarket (secondary — does their enterprise positioning overlap with ANDI's startup segment?). Are there vendors we missed — particularly newer AI-first LinkedIn tools or Chrome extensions your team encounters in deals?
12 buyer-level capabilities mapped. Strength ratings determine which capability queries test competitive differentiation vs. defensive positioning.
Generate personalized LinkedIn messages and comments that sound authentic and human, not like a bot wrote them
Track and manage all my LinkedIn prospect relationships, deal stages, and follow-ups in one place without leaving LinkedIn
Automatically sync LinkedIn activity, contacts, and conversations with HubSpot so my CRM stays up to date without manual data entry
Know when my prospects are active on LinkedIn and get notified about the best moments to engage with them
Identify mutual connections and warm introduction paths to reach decision-makers instead of cold outreach
Find verified email addresses for my LinkedIn connections so I can reach out through multiple channels
Automate my LinkedIn connection requests, follow-up messages, and outreach sequences without getting my account flagged
Create engaging LinkedIn posts and thought leadership content that builds my personal brand and attracts inbound leads
See which outreach campaigns, messages, and sequences are driving replies and meetings so I can optimize my approach
Run coordinated outreach campaigns across LinkedIn, email, and other channels from one platform
Manage my whole sales team's LinkedIn outreach from one dashboard with shared templates, territories, and reporting
Search and filter a large database of verified B2B contacts to find my ideal prospects without manual research
Validate
Are the five "strong" ratings (AI Personalization, LinkedIn CRM, HubSpot Sync, Activity Monitoring, Warm Intros) accurate relative to CoPilot AI and Expandi specifically? Outreach Sequence Automation is rated "moderate" at medium confidence — is ANDI intentionally deprioritizing automation volume in favor of relationship quality, or is this a gap being actively closed? If intentional, the audit frames it as differentiation; if a gap, the audit tests defensive queries. Any capabilities missing from this list?
9 pain points: 5 high, 4 medium severity. Buyer language is how queries will be phrased — the audit tests whether AI platforms cite ANDI when buyers describe these problems.
Validate
Are the five high-severity pain points (spammy outreach, CRM blind spots, account safety, dropped follow-ups, cold outreach ROI) the problems your buyers actually articulate in sales calls? "Tool sprawl" is rated medium but affects 3 personas including RevOps — should it be high severity? Missing pain points to consider: compliance/security concerns (if enterprise buyers worry about data handling with LinkedIn browser extensions), onboarding time (if ramp-up speed is a deal factor vs. simpler tools like Dux-Soup), or LinkedIn algorithm changes (if buyers worry about platform policy shifts breaking their automation). What are we missing?
5 findings from the Layer 1 technical analysis. These are items your engineering team can evaluate and act on independently of the audit.
Engineering Action Required
No critical blockers, but one high-severity item needs prompt attention: Two Indexed Pages Return 404 Errors — the /executive-concierge page (ANDI Scale's product page) is linked from every page's footer navigation and returns a 404 on every crawl. Engineering should also verify schema markup presence using Google's Rich Results Test and update sitemap timestamps to reflect actual modification dates. These fixes are independent of the audit and will improve baseline crawl quality before we measure visibility.
What we found: Two pages linked from the site navigation and/or sitemap return HTTP 404 errors: /executive-concierge (linked from footer navigation under PRODUCT) and /resources/ai-productivity (present in sitemap.xml with priority 0.8). Both are publicly indexed and reachable through standard crawl paths.
Why it matters: Broken pages waste crawl budget and send negative quality signals to both traditional search engines and AI crawlers. When an AI platform encounters 404s in its index, it may reduce trust scores for the entire domain. The /executive-concierge page is particularly impactful because it represents a commercial product page (ANDI Scale) linked from every page's footer navigation.
Recommended fix: Either restore the /executive-concierge page with current ANDI Scale product content or remove the link from footer navigation. For /resources/ai-productivity, either create the resource category page or remove the URL from sitemap.xml. Implement 301 redirects from both URLs to the most relevant existing pages.
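A minimal sketch of the redirect mapping, assuming both URLs are permanently redirected rather than restored. The target paths here are placeholders, not recommendations — the team should pick the most relevant live pages, and the actual implementation would live in the site's framework configuration (e.g. a Next.js `redirects()` entry), not a standalone script:

```python
# Hypothetical 301 redirect map. Target paths ("/" and "/resources")
# are placeholders -- substitute the most relevant existing pages.
REDIRECTS = {
    "/executive-concierge": "/",
    "/resources/ai-productivity": "/resources",
}

def resolve(path: str) -> tuple[int, str]:
    """Return (HTTP status, location) for a requested path."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 200, path

print(resolve("/executive-concierge"))  # → (301, '/')
print(resolve("/pricing"))              # → (200, '/pricing')
```

The key property to preserve, whatever the implementation: both URLs must answer with a 301 (not a soft 404 or a 200 on a "not found" page), so crawlers consolidate signals onto the redirect target.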
What we found: All 11 non-blog URLs in sitemap.xml share an identical lastmod timestamp of 2025-10-13T23:15:51.234Z, including utility pages (signin, dashboard, privacy) and all resource category pages. This timestamp appears auto-generated rather than reflecting actual content modifications. Blog post timestamps appear accurate based on visible publication dates.
Why it matters: AI crawlers and search engines use sitemap lastmod to prioritize recrawl frequency. When all pages share the same timestamp, crawlers cannot distinguish recently updated pages from stale ones, reducing the efficiency of freshness signals. Googlebot documentation specifically warns that unreliable lastmod dates may cause the crawler to ignore sitemap timestamps entirely for the domain.
Recommended fix: Update the sitemap generation logic to reflect actual page modification dates. If using a static site generator or CMS, configure it to track content changes and update lastmod per-page. Remove utility pages (signin, dashboard) from the sitemap entirely — they provide no SEO or AI visibility value.
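The duplicate-timestamp problem is easy to detect automatically. A small sketch, using a stand-in sitemap (the URLs and dates below are illustrative, not ANDI's actual sitemap): any lastmod value shared by multiple URLs is a strong hint it was auto-generated at build time rather than tracked per page.

```python
import xml.etree.ElementTree as ET
from collections import Counter

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def lastmod_counts(sitemap_xml: str) -> Counter:
    """Count how many <url> entries share each lastmod value."""
    root = ET.fromstring(sitemap_xml)
    return Counter(el.text for el in root.iter(SITEMAP_NS + "lastmod"))

# Illustrative sitemap fragment -- not the client's real sitemap.
SAMPLE = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2025-10-13T23:15:51.234Z</lastmod></url>
  <url><loc>https://example.com/pricing</loc><lastmod>2025-10-13T23:15:51.234Z</lastmod></url>
  <url><loc>https://example.com/blog/post</loc><lastmod>2025-09-02</lastmod></url>
</urlset>"""

counts = lastmod_counts(SAMPLE)
# Timestamps shared by more than one URL are likely build-time artifacts.
suspicious = {ts: n for ts, n in counts.items() if n > 1}
print(suspicious)  # → {'2025-10-13T23:15:51.234Z': 2}
```

Running a check like this against the live sitemap.xml after the fix confirms the generator is now emitting per-page dates.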
What we found: Our analysis method returns rendered page content as text, not raw HTML. JSON-LD schema markup is embedded in HTML source and is invisible in the rendered output. We cannot determine whether Product, FAQ, Article, or Organization schema is present on any page.
Why it matters: Structured data (JSON-LD schema) provides explicit entity signals to AI platforms and search engines. FAQ schema on the FAQ page, Product schema on features/pricing pages, and Article schema on blog posts help AI systems accurately classify content type and extract structured answers. Google's AI Overviews and ChatGPT search both leverage schema markup to improve citation quality.
Recommended fix: Verify schema markup using Google's Rich Results Test or Schema.org Validator. Ensure: (1) Organization schema on the homepage, (2) Product schema on /features and /pricing, (3) FAQPage schema on /faq, (4) Article schema on all blog posts with datePublished and dateModified, (5) BreadcrumbList schema for navigation hierarchy.
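Because JSON-LD lives in the raw HTML source (exactly what the rendered-text method misses), a quick check against raw HTML — fetched with curl or similar — can confirm presence before running the full Rich Results Test. A rough regex-based sketch, not a full HTML parser; the sample markup below is illustrative:

```python
import json
import re

JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def schema_types(raw_html: str) -> list[str]:
    """Extract @type values from JSON-LD blocks in raw HTML source."""
    types = []
    for block in JSONLD_RE.findall(raw_html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is itself a finding
        items = data if isinstance(data, list) else [data]
        for item in items:
            if isinstance(item, dict) and item.get("@type"):
                types.append(item["@type"])
    return types

# Illustrative page source with one Organization block.
SAMPLE = ('<html><head><script type="application/ld+json">'
          '{"@context":"https://schema.org","@type":"Organization","name":"Example"}'
          '</script></head></html>')
print(schema_types(SAMPLE))  # → ['Organization']
```

An empty result on the homepage or /features would confirm the schema gap rather than just the assessment gap.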
What we found: Meta descriptions, Open Graph tags, and Twitter Card tags are embedded in HTML source and not visible in rendered page output. We cannot verify whether these are present, accurate, or optimized across the site.
Why it matters: Meta descriptions influence how AI platforms summarize pages in search results and citations. OG tags determine how pages appear when shared or referenced in AI-generated responses. Missing or generic meta descriptions mean AI platforms must auto-generate summaries, which may not highlight ANDI's key differentiators.
Recommended fix: Verify meta descriptions and OG tags using a social preview tool or browser developer tools. Ensure each commercial page has a unique meta description under 160 characters that includes the primary value proposition. Verify og:title, og:description, and og:image are set on all pages.
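The same raw-HTML approach covers meta and OG tags. A minimal audit sketch using the standard-library HTML parser — the tag set and 160-character threshold mirror the recommendation above, and the sample markup is illustrative:

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collect meta description and Open Graph tags from raw HTML."""
    WANTED = ("description", "og:title", "og:description", "og:image")

    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")
        if key in self.WANTED:
            self.tags[key] = a.get("content", "")

def audit_meta(raw_html: str) -> dict:
    """Flag missing or over-length meta/OG tags on one page."""
    parser = MetaCollector()
    parser.feed(raw_html)
    issues = {}
    desc = parser.tags.get("description")
    if desc is None:
        issues["description"] = "missing"
    elif len(desc) > 160:
        issues["description"] = f"too long ({len(desc)} chars)"
    for og in ("og:title", "og:description", "og:image"):
        if og not in parser.tags:
            issues[og] = "missing"
    return issues

# Illustrative page head -- description and og:title present, rest missing.
SAMPLE = ('<head><meta name="description" content="AI LinkedIn copilot.">'
          '<meta property="og:title" content="ANDI"></head>')
print(audit_meta(SAMPLE))  # → {'og:description': 'missing', 'og:image': 'missing'}
```

Looping this over the commercial pages produces a per-page gap list engineering can work through directly.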
What we found: All 31 analyzed pages returned substantial text content through our rendering method, suggesting server-side or static rendering is functional. However, we cannot definitively confirm whether any pages rely on client-side JavaScript rendering that might fail for AI crawlers with limited JavaScript execution. The site appears to be built on Next.js based on 404 page metadata.
Why it matters: Next.js supports both server-side rendering (SSR) and client-side rendering (CSR). If any routes use CSR, AI crawlers like GPTBot and ClaudeBot may see empty or incomplete content. Since all pages returned content in our analysis, CSR is unlikely to be a blocking issue, but the site's framework warrants a quick verification.
Recommended fix: Test 3-5 key pages (homepage, features, pricing, one blog post) with JavaScript disabled in the browser to verify content renders without JS. Alternatively, use Google's URL Inspection tool in Search Console to see the rendered HTML that Googlebot processes.
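The JS-disabled browser test can be approximated in script form: fetch the raw HTML without executing JavaScript and check whether key visible phrases survive. A sketch under that assumption — the marker phrases and sample pages are illustrative, and this heuristic complements rather than replaces the Search Console check:

```python
import re

def visible_without_js(raw_html: str, markers: list[str]) -> dict[str, bool]:
    """Check whether key phrases appear in raw (un-executed) HTML.

    raw_html should be fetched without JavaScript execution (e.g. via
    curl), mimicking what a limited-JS AI crawler sees.
    """
    # Drop script/style bodies so only user-visible text is matched.
    text = re.sub(r"<(script|style)\b.*?</\1>", " ", raw_html,
                  flags=re.DOTALL | re.IGNORECASE)
    text = re.sub(r"<[^>]+>", " ", text).lower()
    return {m: m.lower() in text for m in markers}

# Illustrative pages: one server-rendered, one client-side shell.
SSR_PAGE = "<html><body><h1>AI sales copilot</h1><p>Pricing from $49</p></body></html>"
CSR_SHELL = '<html><body><div id="root"></div><script>/* app bundle */</script></body></html>'

print(visible_without_js(SSR_PAGE, ["sales copilot", "Pricing"]))  # → all True
print(visible_without_js(CSR_SHELL, ["sales copilot"]))            # → False
```

A page where the markers come back False is one whose content likely depends on client-side rendering and deserves the deeper verification described above.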
Note
9 pages have no freshness score (4 product pages + 5 structural pages with no detectable dates). Schema coverage could not be assessed for any page due to analysis method limitations. These gaps should be verified manually.
Why Now
• AI search adoption is accelerating — buyer discovery patterns are shifting quarter over quarter
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates
• Competitors who establish GEO visibility first create a structural disadvantage for late movers
• AI-powered LinkedIn sales copilot is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies
The full audit will measure citation visibility across buyer queries in the AI-powered LinkedIn sales copilot space — queries like "best LinkedIn automation tool that won't get my account flagged," "AI sales copilot with HubSpot integration," and "LinkedIn outreach tool vs. CoPilot AI." You'll see exactly which queries return results that include your competitors but not ANDI — and what it would take to appear in those responses. Fixing the broken navigation pages and sitemap timestamps now improves your crawl baseline before we even measure it.
45-60 minutes to walk through this document together. Confirm personas, competitor tiers, feature strengths, and pain point severity. Every correction sharpens the query set.
Validated inputs generate buyer queries tested across selected AI platforms. Each persona × feature × pain point combination produces queries that mirror real buyer search behavior.
Visibility analysis, competitive positioning, and a three-layer action plan: quick wins, strategic content priorities, and long-term authority building — all prioritized by actual citation data.
Start Now — Engineering
These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:
• Fix broken navigation pages: Restore /executive-concierge with ANDI Scale content (or redirect + remove from footer), and resolve /resources/ai-productivity (restore or remove from sitemap.xml)
• Update sitemap timestamps: Configure sitemap generation to reflect actual page modification dates and remove utility pages (signin, dashboard) from the sitemap
• Verify schema markup: Run Google's Rich Results Test on homepage, /features, /pricing, /faq, and 2-3 blog posts to confirm JSON-LD is present and well-formed
Two jobs before we meet. The persona and competitor questions require your judgment — no one knows your business better than you. The engineering tasks don't require the call at all.