Welcome to DreamsPlus

Story Estimation
with the Fibonacci Series

A complete Planning Poker estimation guide — 10 real stories from an AI-enabled flight booking application, estimated with full vote breakdowns, strategy rationale, and SM decision notes.

Fibonacci Scale · Planning Poker · 5 User Stories · 5 Enabler Stories · Scrum Team 1
Estimation Summary

Total Stories: 10
User Stories: 5 (US-001–005)
Enabler Stories: 5 (EN-001–005)
Total Story Points: 76 SP
Highest Estimate: 13 SP — US-004
Most Common Card: 8 SP (6 of 10)
Technique: Planning Poker
Reference Anchor: US-LOGIN = 3 SP
Section 1

The Fibonacci Estimation Scale

The full scale is set out below. The growing gaps between numbers force the team to acknowledge that high-complexity stories carry proportionally more uncertainty — and that’s intentional.

0 · No Effort · < 1h
1 · Trivial Fix · ~2h
2 · Simple Task · ~4h
3 · Small Story · ~6h
5 · Medium Story · 1–2 days
8 · Large Story · 3–4 days
13 · Very Large — Split? · 5–7 days
21 · Epic — MUST Split · 8+ days
? · Unknown — Spike it · TBD
Cannot Estimate · Re-scope

Complexity

How technically difficult is the solution? Does it touch multiple systems, require new architecture, or involve a new integration pattern?

  • Low — Standard pattern, component reuse possible
  • Medium — New integration, moderate branching logic
  • High — New architecture, multi-system, AI model integration

Effort

How much total work is involved? How many disciplines (FE, BE, AI/ML, DevOps) need to contribute? How many screens, APIs, or DB changes?

  • Low — 1 discipline, 1–2 days of work
  • Medium — 2 disciplines, 3–4 days of work
  • High — 3+ disciplines, full sprint involvement

Uncertainty

How well does the team understand what needs to be built? Is there a clear technical approach? Have we done similar work before?

  • Low — Done before, clear approach agreed
  • Medium — New pattern, approach agreed in grooming
  • High — Spike needed before estimating

Reference Anchor Story

US-LOGIN — User Authentication / Login Flow = 3 Story Points

Standard login form with email/password, JWT session token, remember-me checkbox, forgot password link. One FE component, one BE endpoint, one DB lookup. No AI involvement. Every story is estimated relative to this 3-point baseline — if a story is roughly twice as complex, it gets 5 SP. Three times as complex = 8 SP. This calibration anchor keeps estimation consistent sprint over sprint.
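The anchor arithmetic above can be sketched in a few lines of Python. This is an illustration only, not team tooling; the helper name and scale list are assumptions:

```python
# Hypothetical helper: snap "N times the US-LOGIN anchor" onto the Fibonacci scale.
FIB_SCALE = [0, 1, 2, 3, 5, 8, 13, 21]
ANCHOR_SP = 3  # US-LOGIN reference story

def suggest_card(relative_complexity: float) -> int:
    """Return the Fibonacci card closest to anchor * relative complexity."""
    raw = ANCHOR_SP * relative_complexity
    return min(FIB_SCALE, key=lambda card: abs(card - raw))

print(suggest_card(2.0))  # 5  (roughly twice the login story)
print(suggest_card(2.7))  # 8  (US-001's "~2.7x the login story")
print(suggest_card(3.0))  # 8  (three times as complex)
```

The card is always a suggestion to debate, never a substitute for the team's vote.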
Rules

Planning Poker Rules — Scrum Team 1

These 8 rules govern every estimation session for Scrum Team 1. They eliminate anchoring, create honest divergence, and ensure story points reflect true team understanding.

Simultaneous reveal

All developers vote at the same moment. SM says "Reveal" — everyone turns their card. Early reveals anchor others and corrupt the estimate.

Fast consensus rule

If all votes are within one Fibonacci step (e.g., all 5 or mix of 3/5), take the median — no discussion needed.

Divergence discussion

If votes diverge by more than one step (e.g., 3 and 8), the highest and lowest voter explain their reasoning before a re-vote.

SM does not vote

The SM does not vote on technical stories, because SM votes can anchor the team. The SM facilitates only — developers estimate.

13 SP = split discussion

Any story estimated at 13 must be discussed for splitting. Any 21 SP story MUST be split before entering the sprint.

? = Spike first

If the team votes ? for a story, a Spike story (max 3 SP) is created to investigate. The original story is re-estimated after the Spike.

Enabler story rule

For Enabler stories, the primary implementors' votes carry more weight. FE votes on a DevOps story are informational, not binding.

Done means Done

Velocity is calculated from accepted story points at Sprint Review. A story that is 90% done counts as 0 SP for velocity.
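Rules 2 and 3 are mechanical enough to sketch in code. The following Python is a hypothetical illustration of one reveal, where "within one Fibonacci step" is measured by position on the scale; the function and output format are assumptions, not the team's tooling:

```python
import statistics

FIB_STEPS = [0, 1, 2, 3, 5, 8, 13, 21]

def tally(votes: dict) -> str:
    """Apply the fast-consensus and divergence rules to one simultaneous reveal."""
    steps = sorted(FIB_STEPS.index(v) for v in votes.values())
    if steps[-1] - steps[0] <= 1:
        # All votes within one Fibonacci step: take the median, no discussion.
        return f"consensus: {int(statistics.median(votes.values()))} SP"
    # More than one step apart: highest and lowest voter explain, then re-vote.
    lo = min(votes, key=votes.get)
    hi = max(votes, key=votes.get)
    return f"diverged: {lo} ({votes[lo]}) and {hi} ({votes[hi]}) explain, then re-vote"

# US-001's reveal: four 8s and one 5 sit within one step, so no re-vote was needed.
print(tally({"Arjun": 8, "Kavya": 8, "Rohit": 8, "Divya": 8, "Suresh": 5}))
# consensus: 8 SP
```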

Section 2 · All 10 Stories

10 Estimated Stories — Full Rationale

Each story shows the Planning Poker votes from all 5 team members, the final Fibonacci estimate, and a detailed point-by-point strategy explaining why the team chose that value.

User Stories (US-001 to US-005)

Deliver direct, user-visible value · Validated at Sprint Review by Product Owner: Priya Nair

Sprint 1 · E1 – AI Search & Discovery

Natural Language Flight Search

“As a traveller, I can type a conversational query such as ‘Cheapest flight from Chennai to Dubai next Friday’ into the search bar and receive AI-ranked flight results within 2 seconds.”

Acceptance Criteria

  • Given I am on the homepage (logged in or guest)
  • When I type a natural-language search query
  • Then results appear within 2 seconds, ranked by AI best-value score
  • And an NLP chip displays the system’s parsed interpretation of my query
  • And if no direct flights exist, the system shows AI-suggested alternatives

Planning Poker Votes

Arjun (TL): 8 · Kavya (FE): 8 · Rohit (FE): 8 · Divya (AI): 8 · Suresh (BE): 5

Consensus: 8 SP

Estimation Strategy & Rationale

  1. Complexity: High. NLP layer requires integration with an LLM/intent-recognition API.
    New component — no reuse possible. Complex data flow: query → NLP parse → GDS API call → AI ranker → UI render.
    Three system boundaries crossed.
  2. Effort: High. Frontend (results UI with AI score display), Backend (NLP service wrapper, GDS integration),
    AI/ML (ranking algorithm hookup). Three disciplines involved in a single sprint.
  3. Divergence — Suresh voted 5: Suresh scoped only his BE work (NLP wrapper endpoint).
    After TL clarified the full-stack scope (FE results page + AI ranking integration + GDS call),
    Suresh agreed 8 was correct. No re-vote needed.
  4. Reference check: US-LOGIN = 3 SP. NLP Search involves 3 disciplines, 2 new API integrations,
    and a novel UI component. ~2.7× the login story = 8 SP confirmed.
  5. Why not 13? AI model v0.1 is available from the AI/ML team. GDS sandbox is confirmed.
    This is integration work, not model creation. 13 SP would imply novel AI research — not the case here.

SM Risk Note: AI ranking logic may need tuning in staging. P95 latency target <2s is a hard constraint — load test before the Sprint Review demo.
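That P95 gate can be checked mechanically in a load test. A minimal sketch, assuming the nearest-rank percentile method and synthetic sample values (neither is specified by the guide):

```python
import math

def p95(latencies_ms: list) -> float:
    """Nearest-rank P95: the smallest value with at least 95% of samples at or below it."""
    ordered = sorted(latencies_ms)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

# Synthetic load-test run: 95 fast responses, 5 slow outliers.
samples = [800] * 95 + [2500] * 5
assert p95(samples) < 2000, "P95 latency breaches the 2s hard constraint"
print(p95(samples))  # 800
```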

Sprint 2 · E3 – Booking & Checkout Flow

Passenger Details Entry with Auto-fill

“As a logged-in user, I can enter details for up to 9 passengers with auto-fill from my saved profile, so that I can complete the booking form quickly without re-entering known information.”

Planning Poker Votes

Arjun (TL): 5 · Kavya (FE): 5 · Rohit (FE): 3 · Divya (AI): 3 · Suresh (BE): 5

Consensus: 5 SP

Estimation Strategy

  1. Divergence — Rohit & Divya voted 3: They were thinking of a basic form.
    When Kavya explained the dynamic repeat rows (up to 9 passengers), passenger-type branching
    (Adult/Child/Infant), and auto-fill edge cases (multiple saved profiles), both revised their
    view and consensus landed at 5.
  2. Complexity: Medium. Standard form component with dynamic repeat rows.
    Auto-fill from existing User Profile API — no new AI model needed. Passenger type logic
    adds branching but is well-understood.
  3. Low Uncertainty: Auto-fill pattern exists in another part of the app
    (billing form). Team has done similar work — low uncertainty is the key reason this isn’t 8 SP.

SM Risk Note: Validation rules for Infant passengers (lap vs seat) may add scope. Confirm with PO before sprint starts.

Sprint 3 · E4 – Fare Intelligence

30-Day Fare Trend Graph

“As a traveller, I can see a 30-day fare trend graph on the flight search results page, so that I can decide whether to book now or wait for a lower price.”

Planning Poker Votes

Arjun (TL): 5 · Kavya (FE): 8 · Rohit (FE): 5 · Divya (AI): 8 · Suresh (BE): 5

Consensus: 8 SP

Estimation Strategy

  1. Three integrated components: (1) historical fare data retrieval from GDS (Suresh), (2) fare prediction line from AI model (Divya), (3) interactive chart UI with hover states and responsive layout (Kavya). Three disciplines, two new API integrations.
  2. Divergence — TL, Rohit, Suresh voted 5: They underweighted the interactive charting complexity. After Kavya demonstrated the D3.js/Chart.js implementation scope and Divya confirmed AI model output formatting requirements, all agreed 8 SP was correct.
  3. Why not 13? The fare prediction model already exists (Divya’s Sprint 2 work). This story is integration and display — not model creation. 13 would signal novel AI research, which is not the case.

SM Risk Note: If fare prediction model is not ready from Sprint 2, display historical data only and add prediction line as a follow-up story. Agree this split rule with PO before Sprint 3 planning.

Sprint 4 · E5 – AI Chatbot

Multi-Turn Conversational Flight Booking via Chatbot

“As a traveller, I can complete a full flight booking through the AI chatbot in 8 or fewer conversational turns, so that I can book without navigating the standard UI.”

Planning Poker Votes

Arjun (TL): 13 · Kavya (FE): 13 · Rohit (FE): 8 · Divya (AI): 13 · Suresh (BE): 13

Consensus: 13 SP

Estimation Strategy

  1. Very High Complexity: Full multi-turn LLM dialogue engine + booking orchestration service + conversation state machine + payment integration within chat + human handoff capability. Five distinct engineering sub-systems in one story.
  2. Rohit voted 8: He was thinking of the chat UI component only. When the backend dialogue state machine and AI confidence-scoring integration were explained, Rohit revised to 13. This is the most important divergence discussion of these 10 stories.
  3. Split discussion triggered (Rule 5): TL proposed splitting into Story A (chatbot search & selection, 5 SP) and Story B (chatbot booking & payment, 8 SP). PO decided to keep as one story for demo coherence in Sprint 4. Pre-agreed split plan documented as fallback.
  4. High uncertainty: Multi-turn dialogue state management is a new pattern for this team. LLM response reliability at each turn is uncertain. Session timeout and recovery paths are complex edge cases the team has never built before.

SM Risk Note (HIGHEST RISK STORY): Monitor from Day 4 of Sprint 4. If behind, trigger the pre-agreed split immediately; do not wait until Day 8. Split stories ready: US-004a (5 SP) + US-004b (8 SP).
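The conversation state machine named in the rationale can be sketched minimally with the 8-turn acceptance budget. The state names, class shape, and handoff behaviour below are illustrative assumptions, not the team's design:

```python
# Hypothetical states for the chatbot booking flow.
BOOKING_FLOW = ["search", "select_flight", "passenger_details",
                "payment", "confirmation"]

class ChatBooking:
    MAX_TURNS = 8  # acceptance criterion: complete the booking in <= 8 turns

    def __init__(self):
        self.state_index = 0  # start in "search"
        self.turns = 0

    def advance(self) -> str:
        """One successful conversational turn moves the dialogue forward."""
        self.turns += 1
        if self.turns > self.MAX_TURNS:
            return "handoff_to_human"  # turn budget exhausted
        if self.state_index < len(BOOKING_FLOW) - 1:
            self.state_index += 1
        return BOOKING_FLOW[self.state_index]

bot = ChatBooking()
print([bot.advance() for _ in range(4)])
# ['select_flight', 'passenger_details', 'payment', 'confirmation']
```

Real multi-turn dialogue adds clarification loops, timeouts, and recovery paths at every state, which is exactly why the story carries High uncertainty.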

Sprint 2 · E6 – Notifications

Price Drop Alert — Setup & Email Notification

“As a traveller, I can set a price alert for a specific route and receive an email notification when the fare drops below my target price, so that I can book at the best time.”

Planning Poker Votes

Arjun (TL): 5 · Kavya (FE): 5 · Rohit (FE): 5 · Divya (AI): 3 · Suresh (BE): 5

Consensus: 5 SP

Estimation Strategy

  1. Divya voted 3 — correctly: AI/ML has minimal involvement here. No ML model needed — just a rule-based threshold comparison. Divya’s vote correctly signals zero ML scope. Rule 7 applies only to Enabler stories, so for this user story the full-team estimate stands — consensus at 5.
  2. BE-heavy pattern: Background scheduler + GDS price polling (every 15 min) + alert matching logic + SendGrid email integration. Clean, well-understood pattern but multi-component BE work justifies 5 over 3.
  3. Dependency flagged: Requires SendGrid API key from DevOps (Kiran). Must be confirmed available before Sprint 2 Day 1. SM logged this as a pre-sprint dependency check item.

SM Risk Note: Confirm GDS price polling frequency — every 15 min is acceptable per GDS SLA. If polling is too frequent, API costs may spike. Agree cost threshold with PO.
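The rule-based threshold comparison from point 1 is simple enough to sketch directly. Record shapes, names, and values here are illustrative, not the real alert service:

```python
from dataclasses import dataclass

@dataclass
class PriceAlert:
    route: str          # e.g. "MAA-DXB" (hypothetical route code)
    target_price: float
    email: str

def matching_alerts(alerts, polled_fares):
    """Return (alert, fare) pairs where the polled fare dropped below target."""
    hits = []
    for alert in alerts:
        fare = polled_fares.get(alert.route)
        if fare is not None and fare < alert.target_price:
            hits.append((alert, fare))  # these go to the email sender
    return hits

alerts = [PriceAlert("MAA-DXB", 15000.0, "a@example.com"),
          PriceAlert("MAA-SIN", 12000.0, "b@example.com")]
fares = {"MAA-DXB": 14200.0, "MAA-SIN": 12500.0}  # one polling cycle's result
print([a.route for a, _ in matching_alerts(alerts, fares)])  # ['MAA-DXB']
```

The scheduler, GDS polling, and SendGrid sending around this core are what push the story from 3 to 5 SP.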

Enabler Stories (EN-001 to EN-005)

Build the technical foundation — infrastructure, compliance, AI pipelines · Validated by Tech Lead: Arjun Mehta
Sprint 1 · E8 – Platform & Infrastructure

AI Data Pipeline — GDS Flight Data Ingestion

“As the AI/ML Engineer (system capability), I need an automated pipeline that ingests raw flight data from GDS/Amadeus every 6 hours, applies data quality validation, and writes clean records to the ML feature store, so that AI models have fresh, reliable training data.”

Planning Poker Votes

Arjun (TL): 8 · Kavya (FE): 3 · Rohit (FE): 3 · Divya (AI): 8 · Suresh (BE): 8

Consensus: 8 SP

Estimation Strategy & Rationale

  1. Enabler Rule (Rule 7) applied: Kavya and Rohit voted 3 — they have zero involvement in this infrastructure story. Their votes are informational. The primary implementors are Divya (AI/ML feature store schema) and Suresh (BE pipeline code). TL, Divya, Suresh consensus at 8 is the binding estimate.
  2. New infrastructure pattern: Kafka/batch ingestion, Snowflake schema design, GDS API rate-limit handling, dead-letter queue, PagerDuty integration — multiple new infrastructure components being set up for the first time.
  3. Strategic importance: This story unblocks all AI model training from Sprint 2 onwards. If this slips, the entire AI feature roadmap slips. SM flagged as CRITICAL Sprint 1 dependency.

SM Risk Note: Snowflake instance provisioning is a DevOps dependency — must be in place by Sprint 1, Day 3. SM to confirm with Kiran at Sprint Planning kickoff.
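The validate-and-route step from point 2 might look like the following sketch. The quality rules and record shape are assumptions, not the real GDS schema, and a production dead-letter queue would be Kafka-backed rather than a list:

```python
def run_ingestion_batch(raw_records):
    """Split a batch into clean feature-store rows and dead-letter records."""
    clean, dead_letter = [], []
    for rec in raw_records:
        # Minimal quality gates: required fields present, fare is positive.
        if rec.get("route") and rec.get("fare", 0) > 0 and rec.get("departure"):
            clean.append(rec)
        else:
            dead_letter.append(rec)  # alert (e.g. via PagerDuty) if this grows
    return clean, dead_letter

batch = [{"route": "MAA-DXB", "fare": 14200, "departure": "2025-06-06T09:30"},
         {"route": "MAA-SIN", "fare": -1, "departure": "2025-06-07T11:00"}]
clean, dlq = run_ingestion_batch(batch)
print(len(clean), len(dlq))  # 1 1
```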

Sprint 1 · E8 – Platform & Infrastructure

CI/CD Pipeline Baseline — Auto-Deploy to Staging

“As the development team, we need a CI/CD pipeline that automatically builds, tests, and deploys all code changes to staging within 15 minutes of a merge to develop, so that teams can integrate and test continuously.”

Planning Poker Votes

Arjun (TL): 8 · Kavya (FE): 5 · Rohit (FE): 5 · Divya (AI): 5 · Suresh (BE): 5

Consensus: 5 SP

Estimation Strategy

  1. TL voted 8 — scope clarification resolved it: TL was thinking of the full pipeline including production deployment capability. Once the team clarified the story is STAGING only (not production), TL agreed to 5 SP. This is a classic scope-assumption divergence.
  2. Low uncertainty: The team has used GitHub Actions and Kubernetes before. This is a well-established DevOps pattern — not novel. Kiran (DevOps) has set up identical pipelines on previous projects.
  3. BLOCKER RISK flagged by SM: This story must complete by Sprint 1, Day 6 at latest. If CI/CD is not complete, all subsequent story deployments from Sprint 2 onwards are manual — multiplying risk across the project.

SM BLOCKER RISK: If EN-002 is not complete by Day 6 of Sprint 1, all Sprint 2 deployments are manual. SM to track daily and escalate to TL immediately if any blocker emerges.

Sprint 2 · E8 – Compliance

GDPR Data Rights Engine — Export & Deletion Compliance

“As a compliance system capability, the platform must allow registered users to request full data export or complete account deletion within 72 hours to comply with GDPR Article 17 (Right to Erasure).”

Planning Poker Votes

Arjun (TL): 8 · Kavya (FE): 5 · Rohit (FE): 5 · Divya (AI): 3 · Suresh (BE): 8

Consensus: 8 SP

Estimation Strategy

  1. BE complexity drives the 8: Deletion must cascade across multiple tables (profile, booking history, preferences, alerts, sessions) without corrupting relational integrity. Booking records must be anonymised (not deleted) for financial compliance — this is legally nuanced work.
  2. Divergence pattern: Kavya/Rohit (5) were estimating only the settings UI. Divya (3) has zero AI/ML scope. TL and Suresh (8) correctly weighted the cascade deletion complexity and audit log design. Overall estimate follows primary implementors.
  3. Legal dependency: Legal team must confirm which data can be deleted vs. anonymised before implementation begins. SM flagged this as a pre-sprint gate — do not start coding without legal sign-off.

SM Risk Note: Requires legal sign-off on data retention policy before Sprint 2 coding begins. SM to confirm legal approval status at Sprint 1 Review. Story held in backlog until confirmed.

Sprint 3 · E7 – Admin & Analytics

AI Model Performance Dashboard

“As the business analytics team, we need a real-time dashboard displaying AI model performance KPIs — NLP accuracy, fare prediction error rate, recommendation CTR, chatbot CSAT — so the product team can monitor AI quality and trigger retraining when thresholds are breached.”

Planning Poker Votes

Arjun (TL): 8 · Kavya (FE): 8 · Rohit (FE): 8 · Divya (AI): 8 · Suresh (BE): 5

Consensus: 8 SP

Estimation Strategy

  1. Strong consensus at 8: Four of five voters landed immediately on 8 — the team clearly shared the same mental model of this story. This is a sign of a well-groomed story with good acceptance criteria.
  2. Suresh voted 5 — correctly scoped his part: Suresh’s scope is a data aggregation REST API wrapping existing model outputs — light work. But the total story effort (React dashboard + AI KPI endpoints + 90-day retention + CSV export) is 8 SP. Suresh agreed post-discussion.
  3. Dependency on EN-001 and AI models: Dashboard requires AI model output APIs to be stable. EN-001 (data pipeline) must be complete. Divya must define KPI schemas before Kavya builds the display widgets.

SM Risk Note: Dashboard requires agreement on KPI thresholds (what is Green/Amber/Red per metric). PO and TL must define these before FE build starts on Sprint 3, Day 1.

Sprint 3 · E8 – Security & Compliance

Payment Gateway — PCI-DSS Compliant Checkout

“As a platform security capability, the payment checkout must use Stripe/Razorpay hosted fields (PCI-DSS Level 1 compliant) where no card data touches application servers, with 3DS authentication and an event-bus completion signal.”

Planning Poker Votes

Arjun (TL): 8 · Kavya (FE): 8 · Rohit (FE): 8 · Divya (AI): 3 · Suresh (BE): 8

Consensus: 8 SP

Estimation Strategy

  1. Divya voted 3 — zero AI/ML scope: Payment processing has no machine learning component. Divya’s 3 correctly signals zero ML involvement. Per Rule 7, overall estimate is driven by the primary implementors (Rohit + Suresh) who both independently voted 8.
  2. Low uncertainty — high complexity: Stripe/Razorpay documentation is comprehensive. Both Rohit and Suresh have integrated payment gateways before. But PCI-DSS hosted fields + 3DS async flow + event-bus integration is genuinely complex work = 8 SP is correct.
  3. Why not 13? With hosted fields, the hardest compliance work is offloaded to the gateway provider. If we were building our own card vaulting, this would be 13. With Stripe/Razorpay hosted fields, it is 8.
  4. Special code review flagged: TL will conduct a dedicated security review for this story before merge — an additional DoD gate beyond the standard 2-reviewer rule.

SM Risk Note: Stripe/Razorpay sandbox credentials must be available Sprint 3, Day 1. Webhook endpoint URL must be provisioned on staging by DevOps (Kiran) before coding begins.

Section 3

Estimation Summary & Insights

All 10 stories at a glance — story points, complexity, effort, uncertainty — plus sprint capacity analysis and SM insights drawn from the estimation data.
#   ID      Sprint  Story Title                       Type     SP  Complexity  Effort     Uncertainty
1   US-001  S1      Natural Language Flight Search    USER      8  High        High       Medium
2   US-002  S2      Passenger Entry with Auto-fill    USER      5  Medium      Medium     Low
3   US-003  S3      30-Day Fare Trend Graph           USER      8  High        High       Medium
4   US-004  S4      Multi-Turn Chatbot Booking        USER     13  Very High   Very High  High
5   US-005  S2      Price Drop Alert & Email          USER      5  Medium      Medium     Low
6   EN-001  S1      AI Data Pipeline / GDS Ingestion  ENABLER   8  High        High       Medium
7   EN-002  S1      CI/CD Pipeline Baseline           ENABLER   5  Medium      Medium     Low
8   EN-003  S2      GDPR Data Rights Engine           ENABLER   8  High        High       Medium
9   EN-004  S3      AI Model KPI Dashboard            ENABLER   8  High        High       Medium
10  EN-005  S3      Payment Gateway PCI-DSS           ENABLER   8  High        High       Low

TOTAL STORY POINTS → 76 SP
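The table's figures can be tallied mechanically. This quick Python check recomputes the total and the most common card from the SP column:

```python
from collections import Counter

# SP column from the summary table above.
story_points = {"US-001": 8, "US-002": 5, "US-003": 8, "US-004": 13, "US-005": 5,
                "EN-001": 8, "EN-002": 5, "EN-003": 8, "EN-004": 8, "EN-005": 8}

total = sum(story_points.values())
most_common_card, count = Counter(story_points.values()).most_common(1)[0]
print(total)                    # 76
print(most_common_card, count)  # 8 6  (8 SP is the most common card, 6 of 10)
```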

Sprint Capacity Breakdown

SPRINT 1 (CAPACITY: 35 SP)
US-001 8 SP
EN-001 8 SP
EN-002 5 SP

These 3 stories: 21 SP
+14 SP buffer for other backlog items
WITHIN CAPACITY
SPRINT 2 (CAPACITY: 35 SP)
US-002 5 SP
US-005 5 SP
EN-003 8 SP

These 3 stories: 18 SP
+17 SP buffer for other backlog items
WITHIN CAPACITY
SPRINT 3–4 (CAPACITY: 35 SP PER TEAM)
US-003 8 SP
EN-004 8 SP
EN-005 8 SP
US-004 (S4) 13 SP

These 4 stories: 37 SP (split across 2 teams)
Monitor US-004
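The buffer arithmetic in the breakdown above reduces to a one-function capacity check; the report string format is illustrative:

```python
CAPACITY = 35  # SP per sprint per team

def capacity_report(sprint, committed):
    """Return the buffer left (or the overrun) for one sprint's commitments."""
    total = sum(committed.values())
    buffer = CAPACITY - total
    status = "WITHIN CAPACITY" if buffer >= 0 else "OVER CAPACITY"
    return f"{sprint}: {total} SP committed, {buffer:+d} SP buffer -> {status}"

print(capacity_report("Sprint 1", {"US-001": 8, "EN-001": 8, "EN-002": 5}))
# Sprint 1: 21 SP committed, +14 SP buffer -> WITHIN CAPACITY
```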

SM Insights from Estimation Data

8 SP dominates the backlog

6 of 10 stories are 8 SP. This tells us the backlog is mature and stories are well-decomposed, but consistently complex. SM should track whether 8-point stories complete within sprint boundaries or spill over in Sprint 2–3.

US-004 (Chatbot, 13 SP) is the only near-epic story

SM has a pre-agreed split plan ready: Story A (chatbot search & selection, 5 SP) + Story B (chatbot booking & payment, 8 SP). Monitor from Day 4 of Sprint 4 — if behind, trigger the split immediately.

EN-002 (CI/CD, 5 SP) is a hard dependency

for ALL subsequent stories. Any slip in EN-002 multiplies sprint risk across Sprint 2 and beyond. SM must track this as a Sprint 1 critical path item with daily check-ins.

EN-003 (GDPR, 8 SP) has a legal approval dependency

SM must confirm legal sign-off exists before Sprint 2 planning begins. If not approved, the story is held in the backlog — do not attempt to code before legal confirmation is in writing.

Divergence patterns reveal knowledge gaps

Stories where FE voted very differently from AI/ML (EN-001, EN-003) signal cross-discipline estimation education is needed. SM to schedule a 30-minute estimation calibration session every 3 sprints to keep the team anchored.

Sprints 3–4 carry 37 SP

across these 4 stories alone — exceeding a single team's velocity. US-003/EN-004/EN-005 (Sprint 3) and US-004 (Sprint 4) should be distributed thoughtfully. SM to confirm the split with PO at Sprint 2 Review.

Download the Complete Estimation Guide

Get the full professionally formatted Word document — all 10 story cards, Planning Poker vote tables, strategy rationale, sprint capacity analysis, and SM insights — ready to share with your team.
  • 10 fully estimated stories with vote breakdowns
  • Fibonacci scale reference and anchor story
  • 8 Planning Poker rules for your team
  • Sprint capacity tally and SM insights
  • Editable .docx — add your project name
