~10 min with AI, ~45 min without. Standard medical writing review required.
Workflow: Full paper → AI structured summary → Data verification → Final reviewed summary

Best for

  • Rapidly digesting source literature at the start of a new project
  • Preparing evidence summaries or briefing documents for account teams or clients
  • Building a reference library across a therapeutic area
  • Creating a foundation summary to feed into content outlines or messaging work
  • Summarising unfamiliar papers before a project kick-off

Inputs

  • Full text of the published paper (PDF or pasted text; do not rely on AI training data)
  • Any specific focus areas (e.g., “primary endpoint only”, “focus on safety results”)
  • Target summary length and format (structured abstract, narrative summary, or bullet-point overview)

Steps

1. Read the paper yourself

At minimum, read the abstract, results, and conclusions. You need to understand the paper before you can evaluate an AI-generated summary of it.

2. Provide the full text to the AI

Paste or upload the complete paper. Do not rely on the AI's training data: it may have an incomplete or outdated version of the publication.

3. Run the prompt pattern

Use the prompt below, specifying your required output format and any focus areas. Adjust the section structure if the paper type demands it (e.g., a case series vs. an RCT).

4. Verify every data point

Open the source paper's results tables and check every numerical value (p-values, CIs, HRs, sample sizes) side by side against the summary. Check that each result is attributed to the correct study arm, population, and analysis type.

5. Adjust emphasis and fill gaps

Fix inaccuracies, add missing safety data, adjust the balance between efficacy and safety, and refine language for your specific audience and purpose.

6. Final review

Confirm the summary is accurate, complete for its intended use, appropriately sourced, and contains no unsourced AI-generated context.
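The side-by-side number check in step 4 can be partially automated. The sketch below (an illustrative aid, not part of any tool named in this guide; the regex and example strings are assumptions) extracts every numeric token from the summary and flags any that never appear in the source text. It only catches invented or transposed numbers; a number can match the source and still be attached to the wrong arm, population, or endpoint, so it does not replace reading the tables.

```python
import re

# Numeric tokens as they typically appear in results text:
# integers, decimals, and percentages (e.g. 14.7, 0.68, 42%).
NUM = re.compile(r"\d+(?:\.\d+)?%?")

def numbers_in(text: str) -> list[str]:
    """Return every numeric token found in the text."""
    return NUM.findall(text)

def unverified_numbers(summary: str, source: str) -> list[str]:
    """Numbers in the summary that never appear in the source.

    Each flagged number must be checked by hand against the paper's
    results tables. An empty list does NOT mean the summary is
    correct: a matching number may still be misattributed.
    """
    source_nums = set(numbers_in(source))
    return [n for n in numbers_in(summary) if n not in source_nums]

# Hypothetical example: the summary transposes HR 0.68 into 0.76.
source = "median OS: 14.7 months vs 10.3 months; HR 0.68, 95% CI: 0.55-0.84; p=0.0003"
summary = "median OS of 14.7 months vs 10.3 months (HR 0.76, 95% CI: 0.55-0.84)"
print(unverified_numbers(summary, source))  # ['0.76']
```

The transposed hazard ratio is exactly the class of error flagged under Common mistakes below, and it is invisible to a casual read because both values look plausible.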

Output

A 400–800 word structured summary that follows the specified section format, reproduces numerical data exactly as stated in the source, and clearly distinguishes between primary, secondary, and exploratory endpoints. Any areas of uncertainty are flagged with [VERIFY] markers. The summary uses professional medical writing language without promotional framing.
Source paper excerpt (results section):
In the ITT population (n=450), patients receiving Drug X demonstrated a statistically significant improvement in overall survival compared to placebo (median OS: 14.7 months vs 10.3 months; HR 0.68, 95% CI: 0.55–0.84; p=0.0003). The ORR was 42% in the Drug X arm vs 18% in the placebo arm. Grade ≥3 treatment-emergent adverse events occurred in 67% of patients receiving Drug X vs 48% receiving placebo. The most common grade ≥3 AEs were neutropenia (23%), fatigue (12%), and diarrhoea (9%).
AI-generated summary (before review):
Primary endpoint: Drug X significantly improved overall survival in the ITT population, with a median OS of 14.7 months vs 10.3 months for placebo (HR 0.68, 95% CI: 0.55–0.84; p=0.0003). Response: The objective response rate was 42% vs 18%. Safety: Treatment-related adverse events were manageable, with neutropenia being the most common.
Issues caught in review:
  • ❌ “Treatment-related adverse events were manageable”: AI editorialised. The source says “treatment-emergent” not “treatment-related”, and does not characterise them as “manageable”
  • ❌ Safety data is incomplete — grade ≥3 AE rate (67% vs 48%), fatigue (12%), and diarrhoea (9%) are omitted
  • ❌ Missing the qualifier that these results are from the ITT population
Reviewed summary (final):
Primary endpoint: In the ITT population (n=450), Drug X demonstrated a statistically significant improvement in overall survival compared to placebo (median OS: 14.7 months vs 10.3 months; HR 0.68, 95% CI: 0.55–0.84; p=0.0003). Response: The ORR was 42% in the Drug X arm vs 18% in the placebo arm. Safety: Grade ≥3 treatment-emergent AEs occurred in 67% of patients receiving Drug X vs 48% receiving placebo. The most common grade ≥3 AEs were neutropenia (23%), fatigue (12%), and diarrhoea (9%).

Prompt pattern

You are a medical writing assistant. Your task is to summarise the following published paper into a structured summary.

Structure your summary with these sections:
- Citation (authors, journal, year)
- Study design and objective
- Population (key inclusion criteria, sample size)
- Primary endpoint and results
- Key secondary endpoints and results
- Safety findings
- Authors' conclusions
- Limitations noted by the authors

Rules:
- Base your summary only on the content of the provided paper. Do not add information from other sources.
- Reproduce data points (p-values, confidence intervals, hazard ratios, percentages) exactly as stated in the paper.
- If a finding is from a subgroup or post-hoc analysis, state this explicitly.
- Do not interpret the results beyond what the authors state.
- If you are uncertain about any data point, flag it with [VERIFY].

Paper text:
[INSERT FULL TEXT]
Customisation: Adjust the section headings for non-RCT study types (e.g., replace “Primary endpoint” with “Main findings” for observational studies). Add a “Focus on:” instruction at the top of the prompt if you only need specific sections summarised.

Why this works

AI compresses a 12-page paper into a structured 500-word draft in minutes, consistently extracting standard elements (design, population, endpoints, results, conclusions) even across papers with different reporting structures. The human writer then focuses on the high-value work — verification, emphasis, and contextualisation — rather than blank-page drafting.

Common mistakes

AI can swap hazard ratios (0.67 vs. 0.76), misattribute p-values, or confuse confidence intervals between study arms. Always verify every numerical value against the source paper’s results tables before using the summary downstream.
AI sometimes combines drug arm and comparator results, or merges ITT and per-protocol populations into a single statement. Check that each result is attributed to the correct arm, population, and analysis type.
Summaries that foreground efficacy and omit adverse event data create an unbalanced picture that carries through to every deliverable built from the summary. Cross-check that safety findings are present and proportionate.
AI may state the drug “significantly improved outcomes” when the paper reports a trend or a secondary endpoint result. Compare every conclusion statement against the authors’ own Discussion and Conclusions sections.
AI sometimes adds background information (disease prevalence, standard of care) from training data rather than from the paper. Read the summary with the paper’s Introduction open and flag any context not sourced from the provided text.
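A crude guard against the editorialising mistake above is to scan the summary for characterising language absent from the source. The denylist here is an illustrative assumption, not an exhaustive standard; extend it for your therapeutic area, and treat every flag as a prompt for human judgement rather than a verdict.

```python
# Characterising terms AI summaries often introduce. Illustrative
# starting list only -- extend it for your own review workflow.
EDITORIALISING = [
    "manageable", "well-tolerated", "well tolerated", "favourable",
    "favorable", "impressive", "robust", "promising", "acceptable",
]

def flag_editorialising(summary: str, source: str) -> list[str]:
    """Characterising terms present in the summary but absent from
    the source. Absence of flags proves nothing; presence of a flag
    means a human must decide whether the framing is supported."""
    s, src = summary.lower(), source.lower()
    return [t for t in EDITORIALISING if t in s and t not in src]

# Hypothetical example, echoing the worked review above.
source = "Grade >=3 treatment-emergent adverse events occurred in 67% of patients"
summary = "Treatment-related adverse events were manageable"
print(flag_editorialising(summary, source))  # ['manageable']
```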

Tool stack

  • PubCrawl: Find and retrieve the source paper if starting from an indication or research question rather than a specific reference
  • PosterLens: Extract structured content from scientific posters before summarising
Alternatives: NotebookLM for source-grounded summarisation and Q&A across uploaded papers. Claude or ChatGPT for general-purpose summarisation. Elicit for finding and comparing related papers. Perplexity for quick context checks with cited sources.

Frequently asked questions

Can I put a published paper into a general-purpose AI tool?

Yes, for working drafts — provided you are using an account and model that does not train on your inputs, and that sharing the content is permitted under your organisation’s policy. For published papers, copyright still applies. For unpublished CSRs or embargoed content, use an enterprise tool with appropriate data handling.

How do I stop the AI adding background that is not in the paper?

Prompt the model to use only the text you provide and to mark any gap as “not reported.” Ask for verbatim quotes for key numerical findings. On review, check that nothing in the summary describes disease background, mechanism, or context that did not appear in the source itself.

How long should the summary be?

Match the deliverable. A one-page summary for a content outline needs 300–500 words. A briefing for a senior reviewer may need 1,000+ words with study design, results tables, and limitations. Ask for structured sections (Design, Population, Results, Safety, Limitations) so length follows content, not filler.

Can I summarise from the abstract alone?

It can work, but the summary will inherit the abstract’s blind spots — no Methods detail, no Limitations, no subgroup results. Treat abstract-only summaries as screening tools, not working summaries. If the paper is going to inform a deliverable, summarise from the full text.

What must I check during review?

Every numerical value (sample size, HR, CI, p-value), the analysis population for each result, endpoint definitions, and any stated limitations. Also check that safety is represented proportionately to the source, not minimised in favour of efficacy.

Review checklist

  • All data points (p-values, CIs, HRs, ORs, percentages, sample sizes) match the source paper
  • Study design is correctly described
  • Population and key inclusion/exclusion criteria are accurate
  • Primary endpoint result is correct and attributed to the right analysis (ITT, mITT, PP)
  • Secondary endpoints are accurately summarised
  • Safety data is present and not minimised
  • Conclusions match the authors’ stated conclusions
  • No unsourced claims or AI-generated background information
  • Subgroup and post-hoc results are clearly labelled as such
  • Summary length and format meet the project requirements

Next steps: Use your summary to Extract Study Data or Extract Key Messages, then Build a Content Outline for your deliverable.
Last reviewed: 15 April 2026