TripRelay · Internal Document
Live Collaborative Agile Playbook
Not corporate-agile fluff. A fast-moving solo founder workflow for an AI travel product with heavy UI, data, and itinerary logic.
01
Run 4 Parallel Lanes
Your project is too broad for one generic backlog. Split work into these four tracks.
Lane 1 · Product / UX
What the user sees and feels
Homepage, globe flow, planner UX
Trip teaser and share pages
Onboarding, branding, mobile polish
Lane 2 · Trip Engine / Data
What makes trips actually good
POI quality, must-see ranking
Scheduling logic, day-trip expansion
City sufficiency detection, scoring
Lane 3 · Platform / Infra
What keeps the lights on
Firebase, Vercel, auth, logging
Error tracking, caching, rate limits
Build stability, env vars
Lane 4 · Growth / Monetization
What drives people to come back
SEO pages, affiliate hooks
Trip trailers, social loops
Analytics, leaderboard ideas
Why this matters
This one change makes your backlog dramatically cleaner. Never mix infrastructure work with UI polish or trip quality work in the same sprint.
02
Work Hierarchy
Never keep a flat task list. Strategy at the top, execution at the bottom.
Vision
Build the best AI travel planner for itinerary creation and shareable trip discovery.
Quarterly
Themes that anchor each quarter's work
Planning experience people trust
Better schedules through POI quality
Shareable trips and growth loops
Stable production deployment
Epics
Large bodies of work with clear outcomes
NYC itinerary quality overhaul
Planner calendar redesign
Trip teaser share page
Day-trip expansion engine
Stories
User-facing goals that deliver specific value
Must-see attractions prioritized over filler
Warning when city can't fill 5-day trip
Share trip on clean public page
Tasks
Concrete build actions that complete a story
Add POI iconic score
Update scheduler weighting
Create empty-state UI
Write regression tests
03
1-Week Sprint Rhythm
One-week sprints beat two-week ones: you're iterating fast and discovering problems constantly.
MON
Focus
Pick sprint goal
Lock sprint backlog
TUE
Build
Build and test
No feature hopping
WED
Build
Build and test
Midweek check
THU
Build
Build and test
Regression checks
FRI
Ship
Ship it
Review + retro
Sunday night or Monday morning
20–30 minutes of backlog grooming. Review what rolled over, reprioritize based on what you learned last week.
04
One Sprint Goal
Not five. Not three. One. This is the hardest discipline and the most important.
The trap
Without a single goal you'll get trapped doing favicon tweaks, UI polish, data cleanup, and ranking logic all at once, and ship none of them well.
Good examples
"Make 3–5 day itineraries feel trustworthy in top cities."
"Stabilize trip generation so empty schedules stop happening."
"Polish first-time-user flow from homepage to finished trip."
05
3 Backlogs, Not 1
Protect yourself from chasing every good idea immediately.
A. Strategic
Big bets and product direction. Not being built this week.
Trip teaser viral loop
City expansion system
Affiliate booking integration
B. Sprint-Ready
Only tickets clear enough to build this week.
Has acceptance criteria
Scope is locked
Edge cases documented
C. Parking Lot
Cool ideas, "maybe later," brand experiments.
Interesting but not urgent
Needs more research
Revisit next quarter
06
Scoring System
Score 1–5 on each dimension. Use the result to prioritize ruthlessly.
Priority = User impact + Growth impact + Risk reduction + Confidence − Effort
Initiative · Score
Itinerary quality fixes · 17
Production reliability · 15
Shareability / growth loops · 13
Visual polish · 8
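The formula above can be sanity-checked mechanically. A minimal sketch, assuming the function name and the sample dimension values below are illustrative, not the actual numbers behind the scores listed:

```python
def priority(user: int, growth: int, risk: int, confidence: int, effort: int) -> int:
    """Priority = User impact + Growth impact + Risk reduction + Confidence - Effort.
    Each dimension is scored 1-5."""
    for score in (user, growth, risk, confidence, effort):
        assert 1 <= score <= 5, "each dimension is scored 1-5"
    return user + growth + risk + confidence - effort

# A high-impact, low-effort initiative outranks a polish task:
print(priority(user=5, growth=4, risk=4, confidence=5, effort=1))  # 17
print(priority(user=2, growth=2, risk=1, confidence=5, effort=2))  # 8
```

Note the single minus sign: effort is the only dimension that lowers priority, which is what keeps cheap, high-leverage work at the top of the stack.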
07
Definition of Ready
A task is ready to build only when all of these are true. Check them off as you go.
Story
TR-042
Improve NYC trip quality
Ready checklist
Must-see POIs listed
Filler categories downgraded
Scheduler rules documented
Test cities and trip lengths chosen
Expected output behavior described
Required for all tickets
User problem · Success criteria · Affected files/systems · Edge cases · Visual reference if UI · Sample input/output if backend
This prevents bad Codex prompts
Vague tickets create vague implementations. Never hand AI a task you couldn't hand a contractor.
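The required-fields rule can even be enforced before any prompt goes out. A sketch under assumed field names (`user_problem`, `kind`, etc. are hypothetical; adapt them to your ticket format):

```python
# Fields every ticket needs, per the Definition of Ready above.
REQUIRED = ["user_problem", "success_criteria", "affected_files", "edge_cases"]

def missing_fields(ticket: dict) -> list[str]:
    """Return the required fields that are missing or empty (empty list = ready)."""
    missing = [f for f in REQUIRED if not ticket.get(f)]
    # Conditional requirements: visuals for UI work, sample I/O for backend work.
    if ticket.get("kind") == "ui" and not ticket.get("visual_reference"):
        missing.append("visual_reference")
    if ticket.get("kind") == "backend" and not ticket.get("sample_io"):
        missing.append("sample_io")
    return missing

ticket = {"user_problem": "Filler POIs dominate NYC days", "kind": "backend"}
print(missing_fields(ticket))
# ['success_criteria', 'affected_files', 'edge_cases', 'sample_io']
```

If the list is non-empty, the ticket goes back to grooming, not to Codex.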
08
Definition of Done
A task is done only when all of these pass.
Standard Done
Applies to every task, every sprint.
Feature works in local test
Core edge cases checked
Debug visibility added
No obvious regression
Build passes
Preview deploy checked
Itinerary Done
Mandatory for anything touching trip logic.
Happy-path test city
Weak-data city test
Long-trip case (7+ days)
Short-trip case (1–2 days)
09
Discovery vs Build
Separate research from implementation. Never let discovery quietly become 8 hours of coding.
Discovery Ticket
Used when you need to figure something out first.
Why are schedules empty?
Analyze bad POI data
Review production errors
Output: findings + recommendation + next build ticket
Build Ticket
Used only when the solution is already known.
Implement the chosen fix
Change weights / logic
Rebuild UI component
Requires: discovery output first
10
Lightweight Ceremonies
Total weekly overhead: ~80 minutes. Keep ceremonies tiny.
Weekly Planning
30 min · Monday
What is the sprint goal?
What 3–5 stories matter most?
What must ship by Friday?
What is explicitly not in scope?
Midweek Checkpoint
10 min · Wednesday
What is blocked?
Did a discovery change priorities?
Is the sprint goal still realistic?
Friday Review
20 min · Friday
What shipped?
What broke?
What surprised us?
What should be productized next?
Retro
10 min · Friday
Keep doing
Stop doing
Start doing
11
Kanban Board
Cards move left to right through six columns: Backlog · Ready · In Progress · Review · Test · Shipped.
Board rules
Never more than 2 items in progress at once. Bugs jump the queue only for P0 blockers. Discovery tickets must have written findings before build tickets spawn.
12
Ticket Template
Use this exact structure for every sprint-ready ticket.
Story
Template
One-sentence user-facing goal
Why it matters
How does this help trust, growth, retention, or revenue? If you can't answer this, the ticket isn't ready.
Scope
Exactly what is included. Be specific.
Not in scope
What is intentionally excluded. Writing this forces clarity.
Acceptance criteria
Concrete pass/fail behaviors. If you can't test it, it's not a criterion.
Technical notes
Relevant files, APIs, tables, models, env vars, logging.
Test cases
Happy path · Edge case · Failure case. All three required.
Deployment notes
Anything risky in rollout. DB migrations, feature flags, rate limit changes.
13
AI in the Workflow
Make AI part of the process, not random helper energy. Each tool has a specific role.
ChatGPT β Strategy
Product breakdown
Sprint planning
Ticket writing
Architecture decisions
Prompt generation for Codex
Retro summaries
Codex β Implementation
Scoped implementation
Refactors
UI cleanup
Test writing
Focused bug hunts
Claude β Polish
Final design polish
Component cleanup
Readability improvements
Better layout judgment
The Rule: Never say "fix the planner." Always give: problem · scope · files · expected behavior · constraints · acceptance tests.
14
3 Quality Gates
Every deploy must pass all three. In order. No exceptions.
1
Product Sanity
"Does the output actually make sense to a real traveler?"
2
Technical Sanity
"Build passes, logs look normal, key flows function."
3
Business Sanity
"Does this improve trust, conversion, or growth, or at least not hurt them?"
Technically correct ≠ done
A feature that is technically correct but creates weird schedules still fails Gate 1.
15
Bug Triage Policy
Every bug gets a priority tag. The tag determines how fast it interrupts your sprint.
P0
Production Broken
Trip generation fails, auth broken, blank planner, data corruption.
→ Interrupts sprint immediately. Everything stops.
P1
Core Experience Degraded
Schedules weird, major UI overlap, must-see attractions missing, slow responses.
→ Can replace lower-priority sprint work.
P2
Annoying But Usable
Bad spacing, minor ranking weirdness, icon mismatch.
→ Backlog unless tied to current sprint goal.
P3
Cosmetic
Tiny visual issues, copy cleanup, non-blocking polish.
→ Parking lot. Batch with next polish sprint.
16
Weekly Metrics
Check off what you've tracked each Friday. Trip quality + success rate matter most right now.
Product
Trip generation success rate
Empty schedule rate
% trips with X+ quality POIs
Manual quality score (5 samples)
UX
Search → trip completion rate
Drop-off point in flow
Planner load time
Mobile usability notes
Reliability
Build success
Error count by route
API failures
Firebase / Vercel issues
Growth
Shared trips created
Trailer views
Return visitors
Signup conversion
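The two metrics flagged as most important right now can be computed from a week's trip records. An illustrative sketch, assuming a hypothetical record shape (dicts with a `status` field and a `days` list of POI lists):

```python
def weekly_metrics(trips: list[dict]) -> dict:
    """Compute generation success rate and empty-schedule rate for a batch of trips."""
    generated = [t for t in trips if t.get("status") == "generated"]
    success_rate = len(generated) / len(trips) if trips else 0.0
    # A trip counts as "empty schedule" if any day has zero POIs scheduled.
    empty = [t for t in generated if any(len(day) == 0 for day in t["days"])]
    empty_rate = len(empty) / len(generated) if generated else 0.0
    return {
        "generation_success_rate": round(success_rate, 2),
        "empty_schedule_rate": round(empty_rate, 2),
    }

trips = [
    {"status": "generated", "days": [["MoMA"], ["Central Park"]]},
    {"status": "generated", "days": [[], ["Brooklyn Bridge"]]},  # one empty day
    {"status": "failed", "days": []},
]
print(weekly_metrics(trips))
# {'generation_success_rate': 0.67, 'empty_schedule_rate': 0.5}
```

Logging this every Friday turns the checklist above into a trend line instead of a gut feeling.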
17
Roadmap
Four phases in sequence. Do not skip ahead. Each unlocks the next.
Phase 1 · Now
Trust the Itinerary
Focus here first
Must-see attraction logic
POI quality filtering
Duration realism
Empty schedule fixes
City sufficiency detection
Phase 2 · Next
Make Planning Smooth
After phase 1
Planner layout
Drag/drop editability
Better day grouping
Travel time logic
Mobile polish
Phase 3 · Growth
Make it Shareable
Needs phases 1+2
Public trip page
Trip teaser / trailer
Clean link previews
Copy / edit someone's trip
Phase 4 · Scale
Growth + Revenue
Needs phase 3
Affiliate booking surfaces
SEO landing pages
City pages
Leaderboard / community
18
Example Sprint
A real sprint in the right format. Use this as the template for your Monday planning.
Sprint · Credible Itineraries
Week of March 10
Sprint goal: "Make generated itineraries feel credible in top cities."
Done by Friday
A user can generate a 4-day NYC trip and get a believable itinerary without empty days or obvious filler dominance.
19
Tool Setup
Keep it simple. The best setup is the one you'll actually use.
Roadmap + Epics + Sprint Board
Notion or Linear
Build Tickets
GitHub Issues
PR Review + Previews
GitHub PRs / Vercel
Backlog Shaping + Planning
ChatGPT
Quick Findings
Slack / Notes Doc
Epic Specs
One Notion page per epic
20
Traps to Avoid
The specific ways this project gets messy. Know them before they happen.
Mixing strategy, debugging, and coding in one task. These require different mental modes. Split them.
Starting too many things at once. Two in progress, maximum. Always.
Polishing visuals before itinerary quality is trustworthy. Beautiful UX on bad trips won't retain anyone.
Not defining acceptance criteria before using Codex. Vague prompts create vague code.
Treating AI output as done before product-sanity checking it. Always run Gate 1 manually.
Shipping without regression checks on core cities. NYC, Paris, Tokyo minimum after any trip engine change.
The exact operating model
Cadence
1-week sprints
WIP Limit
2 items max
Sprint Size
3 stories + tasks
Top KPI
Trip quality + success rate
Current Theme
Trustworthy itinerary generation
Decision Rule
No growth features until trips feel good in top cities
Daily routine
Monday: One sprint goal, 3 stories.
Daily: Sprint board only. Discoveries → separate ticket.
Before Codex: Write acceptance criteria first.
Before deploy: All 3 quality gates.
Friday: Ship, review, retro.