Guidance for Performing Analysis with Petavue in Claude

1. Overview


These guidelines help you run clear, step-by-step analyses in Claude with Petavue. By following them, you will:

  • Generate insights you can trust and share confidently with stakeholders.
  • Avoid common AI issues like skipped steps, hallucinated metrics, or lost context.
  • Save time by using a repeatable process for analysis and verification.

2. Prompting Best Practices


2.1 Be Specific and Detailed


Broad prompts like “Analyze activity patterns for won deals” often generate vague, generic outputs. To get sharper insights, include timeframe, activity type, filters, and KPIs.


Better examples:


  • “From deals won in Q2, analyze calls, Slack messages, and demos that happened between first meeting and contract signed. Show average activities per deal and the correlation to win rates.”


  • “For all enterprise deals over $50K ARR, analyze stage-by-stage conversion in the last six months. Include time-in-stage and lost reasons.”


  • “Look at pipeline created in July. Break down by source (inbound vs outbound) and show how each channel contributed to SQLs, demos, and closed-won.”


Rule of thumb: If you’d expect an analyst to ask “which timeframe?” or “which accounts?”—include that detail in your prompt.


2.2 Ask for Suggestions When Unsure

If you’re not certain what to measure, let Petavue propose metrics or KPIs. This is especially useful in new workflows or when exploring a dataset for the first time.


Examples:


  • “I want to understand how active opportunities move from demo to close. What KPIs should I track?”


  • “We’re reviewing campaign ROI. Suggest the top five metrics to evaluate marketing spend effectiveness.”


  • “For customer onboarding health, what KPIs should I measure to spot early churn risk?”


👉 This approach saves time when you don’t have a predefined framework and keeps you aligned with industry-standard metrics.


2.3 Break Down Complex Analyses


Multi-layered prompts (10+ metrics, multiple breakdowns, multiple time periods) can overwhelm the system. Instead, stage your requests step by step.


Step 1 — Pull core dataset

“Show all open deals created in the last 90 days with demo completed.”

Step 2 — Add one breakdown

“Now split by deal owner and calculate conversion rate from demo → proposal.”

Step 3 — Add time comparison

“Compare these results week over week for the last 12 weeks.”

Step 4 — Add a filter

“Filter only deals over $25K ARR.”

👉 By layering requests, you get cleaner results and can adjust direction at each step.


Use case example (marketing ROI):


  • Step 1: “List all campaigns run in Q2 with spend, leads generated, and pipeline created.”


  • Step 2: “Break down by channel (events, paid search, email).”


  • Step 3: “Add conversion to closed-won for each channel.”


  • Step 4: “Compare against Q1 to highlight improvement or decline.”

Example: Bad vs. Good Prompt

Bad: “Show me pipeline insights.”


→ Likely returns a generic report with assumed metrics and columns.


Good: “For pipeline created in the last 60 days, show:


  • Total pipeline value by segment (SMB, Mid-Market, Enterprise).


  • Average deal size.


  • Conversion rate from SQL → Demo → Closed-Won.


  • Time-in-stage averages.”


→ Returns a structured, immediately usable report.
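
The conversion rates requested in the good prompt are simple ratios of stage counts, which makes them easy to spot-check by hand. A minimal sketch, using invented counts (the stage names and numbers here are illustrative, not from your data):

```python
# Hypothetical funnel counts for spot-checking the conversion rates a
# report claims. Replace with the counts from your own raw table.
stage_counts = {"SQL": 200, "Demo": 120, "Closed-Won": 30}

def conversion(counts, frm, to):
    """Conversion rate between two funnel stages."""
    return counts[to] / counts[frm]

sql_to_demo = conversion(stage_counts, "SQL", "Demo")
demo_to_won = conversion(stage_counts, "Demo", "Closed-Won")
```

If the report's stated SQL → Demo rate disagrees with this back-of-the-envelope ratio, dig into the raw rows before sharing the numbers.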


3. Plan Review and Iteration

3.1 Prevent Oversummarization

Claude sometimes compresses plans. Add this instruction to Claude in your Petavue workspace:

“When a plan is presented from Petavue, do not summarize. Always show all steps and requirements.”


3.2 Work in Small Chunks

Running 10+ steps at once risks skipped steps. Stick to three or four steps at a time, then approve before moving on.

Prompt pattern: “Execute the next three steps and pause for approval. Show plan vs. actual for each.”


3.3 Iterate and Refine

After each chunk, adjust filters, segments, or time windows. Iteration keeps results aligned with your business needs.


3.4 Keep Plans Manageable

Limit plans to five or six steps. Push additional scope into follow-ups for easier verification.


3.5 Avoid Overloaded Threads

Long threads consume context and create risk of hallucination. Group five or six metrics per thread, and start a new one for more.


4. Verification and Trust


Even when Petavue orchestrates analysis through Claude, you are still the analyst in charge. Trust is built by verifying along the way.


4.1 Verify Step by Step


You can always ask Petavue to show the plan vs. actual execution.

Prompt pattern: “For step {n}, show plan vs. actual, inputs, and resulting table.”


4.2 Cross-Check Dashboards


Claude is good at creating polished dashboards, but sometimes misstates figures when rendering visualizations. Always compare dashboard visuals against Petavue’s raw output tables.


How to do it:


  1. Ask Petavue for the raw table.

    Prompt: “Show me the raw output used to build this dashboard, including deal IDs, values, and owners.”


  2. Match a few rows manually.

    Prompt: “Give me five random deal IDs with values so I can confirm against the CRM.”


  3. Optionally, re-run totals.

    Prompt: “Sum ARR and pipeline value directly from the raw table, then compare against the dashboard total.”
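
Step 3 can also be done locally if you export the raw table as CSV. A minimal sketch, assuming hypothetical column names (`deal_id`, `arr`, `pipeline_value`; adapt to your export):

```python
import csv
import io

# Hypothetical raw export from Petavue; column names and rows are illustrative.
raw_table = """deal_id,owner,arr,pipeline_value
D-101,Priya,48000,96000
D-102,Marcus,52000,104000
D-103,Elena,25000,50000
"""

def recompute_totals(csv_text):
    """Sum ARR and pipeline value directly from the raw rows."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return {
        "deal_count": len(rows),
        "total_arr": sum(float(r["arr"]) for r in rows),
        "total_pipeline": sum(float(r["pipeline_value"]) for r in rows),
    }

totals = recompute_totals(raw_table)

# Compare against the figure shown on the dashboard.
dashboard_pipeline = 250000  # value read off the chart
assert abs(totals["total_pipeline"] - dashboard_pipeline) < 1, (
    f"Dashboard ({dashboard_pipeline}) disagrees with raw "
    f"table ({totals['total_pipeline']})"
)
```

A mismatch here usually means the dashboard was rendered from a different filter or a stale snapshot, which is exactly the failure mode this check is designed to catch.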


4.3 Use Verification Support


  • Manual Verification (current): Ask the Petavue team to double-check that each step ran as intended.

    Prompt: “Flag this analysis for manual verification before I share with leadership. Confirm each step matches the plan.”


  • Verification Tool (coming soon): Petavue is developing tooling that will automatically confirm execution fidelity, highlight skipped steps, and validate that dashboards are consistent with raw data.

5. Known Issues in Claude and Workarounds


Even with Petavue orchestrating analysis in Claude, Claude has quirks you need to anticipate. Here’s how to spot them and the workarounds to stay on track.


5.1 Skipped Steps During Execution


Claude sometimes skips steps when:


  • A plan is too long (8–10+ steps).


  • An earlier step errors out.


  • A request overlaps with a previous step.

Workarounds:


  • Run in chunks of three steps.


  • If a step is skipped, re-run explicitly.


5.2 Context Length Limitations

Claude has a finite context window. If a thread runs too long, older steps drop out, causing errors or hallucinations.


Workarounds:


  • Start a new thread for fresh analyses.


  • Reuse stored definitions instead of re-describing.


  • For quick changes, edit the original message instead of replying.

5.3 Large Analyses Become Hard to Verify

Plans longer than 10 steps generate lots of tables, charts, and summaries, making it hard to track what’s accurate.


Workarounds:


  • Break into smaller, verifiable chunks.


  • Validate each chunk before layering breakdowns.


  • Combine insights later into a single presentation/dashboard.

Example:


  • First analysis: MQL → SQL by channel.


  • Second analysis: SQL → Opportunity by geo.


  • Third: Opportunity → Closed-Won by owner.


  • Then stitch into a single QBR-ready funnel chart.

5.4 Reusing Definitions

Claude may not remember precise definitions across steps or threads unless you store them.


Workarounds:


  • Save as a reusable definition.


  • Reference by name instead of rewriting each time.

Example:


  • Stored once: “High-velocity deals = Closed-Won in <30 days with ARR <25K.”


  • Reused: “Run win-rate analysis for High-Velocity Deals vs Enterprise Segment.”
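
One way to think about a stored definition is as a named filter that is written once and referenced everywhere. A sketch of the idea (the thresholds mirror the example above; the "Enterprise Segment" cutoff is an assumption):

```python
# Stored definitions: the name is the single source of truth.
# "High-Velocity Deals" mirrors the definition in the text above;
# the Enterprise Segment threshold is an illustrative assumption.
DEFINITIONS = {
    "High-Velocity Deals": lambda d: (
        d["stage"] == "Closed-Won"
        and d["days_to_close"] < 30
        and d["arr"] < 25_000
    ),
    "Enterprise Segment": lambda d: d["arr"] >= 50_000,
}

def select(deals, definition_name):
    """Reuse a stored definition by name instead of restating its criteria."""
    return [d for d in deals if DEFINITIONS[definition_name](d)]

deals = [
    {"deal_id": "D-1", "stage": "Closed-Won", "days_to_close": 21, "arr": 18_000},
    {"deal_id": "D-2", "stage": "Closed-Won", "days_to_close": 45, "arr": 18_000},
    {"deal_id": "D-3", "stage": "Open", "days_to_close": 10, "arr": 90_000},
]

high_velocity = select(deals, "High-Velocity Deals")
```

The benefit is the same as in Petavue: if the threshold ever changes, you update it in one place and every analysis that references the name stays consistent.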

5.5 Data Gaps Create Surprising Outputs

Claude may produce outputs that look correct but are based on incomplete data (missing fields, blank owners, inconsistent stages).


Workarounds:


  • Always run a data assessment first.


  • Flag hygiene gaps before trusting analysis.


  • If gaps are large, adjust expectations or clean data before rerunning.
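
A data assessment can be as simple as counting the gaps named above before any analysis runs. A minimal sketch over an exported deal list (field names and the set of valid stages are assumptions; adapt them to your CRM schema):

```python
# Stages considered valid; anything else is flagged as a hygiene gap.
# This list is an illustrative assumption, not your real pipeline config.
VALID_STAGES = {"SQL", "Demo", "Proposal", "Closed-Won", "Closed-Lost"}

def assess(deals):
    """Count hygiene gaps before trusting any downstream analysis."""
    gaps = {"blank_owner": 0, "missing_arr": 0, "unknown_stage": 0}
    for d in deals:
        if not d.get("owner"):
            gaps["blank_owner"] += 1
        if d.get("arr") is None:
            gaps["missing_arr"] += 1
        if d.get("stage") not in VALID_STAGES:
            gaps["unknown_stage"] += 1
    return gaps

deals = [
    {"deal_id": "D-1", "owner": "Priya", "arr": 48_000, "stage": "Demo"},
    {"deal_id": "D-2", "owner": "", "arr": None, "stage": "Demo"},
    {"deal_id": "D-3", "owner": "Elena", "arr": 25_000, "stage": "Stage 7"},
]

gaps = assess(deals)
```

If more than a small fraction of rows land in any gap bucket, fix the data (or narrow the analysis to clean rows) before rerunning.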

TL;DR

These guidelines help RevOps, CX, and GTM teams run reliable, step-by-step analyses in Claude with Petavue. By following them, you can generate insights you trust, avoid common AI issues, and save time with a repeatable workflow.


Prompting Best Practices: Be specific with timeframe, entities, activity types, and KPIs—vague prompts lead to vague outputs. If you’re unsure what to measure, ask Petavue for KPI suggestions. Break complex analyses into stages: start with the core dataset, then layer breakdowns (e.g., by owner, channel, or segment), time comparisons, or filters.


Plan Review & Iteration: Prevent oversummarization by telling Claude not to compress plans. Execute in small chunks (three to four steps) and refine as you go. Keep plans to five or six steps and start new threads to avoid overloaded context.


Verification & Trust: Always check plan vs. actual execution, cross-check dashboards with raw tables, and request verification support for high-stakes work.


Known Issues & Workarounds: If steps are skipped, re-run explicitly. Start new threads to manage context limits. Break large analyses into smaller parts for easier review. Save reusable definitions (e.g., “Enterprise Segment”) and run data quality checks to spot gaps before trusting results.


By applying these best practices, RevOps, CX, and GTM teams can run analysis in Claude with confidence. The key is to write clear prompts, keep plans manageable, verify outputs often, and store reusable definitions. Done consistently, this approach reduces errors, builds trust in the numbers, and turns Petavue into a decision-support system your business can rely on week after week.


FAQs


  1. Can I use Petavue MCP only with a paid Claude subscription?

Yes. Adding custom connectors requires a paid Claude subscription. This is an Anthropic limitation.


  2. I am on my company’s Claude account. Why do I not see the option to add custom connectors?

Only workspace owners can add custom connectors. Go to Settings → Members to see the owner and ask them to add the connector.


  3. How can I trust the results from Claude and Petavue?

Claude can make mistakes. Always verify:

  • The input Claude sent to Petavue matches your intent.

  • Claude’s answers to Petavue clarification questions are accurate.

  • The steps Petavue executed make sense.

  • The Claude summary aligns with the underlying Petavue report.


  4. What happens if my data changes after I run an analysis?

Petavue analyses are based on the data available at the time of execution. If your CRM or other source updates, you may need to re‑run the analysis to see the new values.


  5. Can I share Petavue reports outside of Claude?

Yes. You can export or download reports from Petavue and share them with your team. You can also upload artifacts into Petavue and distribute them there.


  6. Does Petavue support live dashboards?

Currently, dashboards in Claude are static snapshots. Petavue is working on live refresh and drill‑down dashboards in its own UI.
