PR #13 was 26 commits, 23,000 words of PR description, and touched nearly every system in the app. It was the kind of PR that makes code reviewers weep and project managers nervous. It was also completely necessary, because sometimes technical debt compounds to the point where incremental fixes create more problems than they solve.
Here's what we shipped, and why it all had to land together.
The Onboarding Problem
Our onboarding was two steps: a cramped screen that asked for everything at once, and a "you're done" screen. Users were confused about what they were signing up for, overwhelmed by the number of inputs, and — most critically — never granting notification permissions because we asked too early in the flow.
The redesign splits onboarding into four focused steps:
- Goal — What are you using Lumo for? (Tracking, budgeting, both)
- Currency — What currency do you earn in?
- Income & Budget — One income stream, one monthly budget. Dead simple.
- Notifications — System permission request with clear explanation of what we'll send
Each step is a dedicated component with its own validation and back navigation. The progress bar fills smoothly (and we fixed a bug where it stayed blue at 100% instead of turning green — the kind of thing that's invisible until someone points it out, and then you can't unsee it).
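That 100%-color fix is small enough to sketch as a pure function (the function and color names here are illustrative, not from the PR):

```typescript
// Hypothetical sketch of the progress-bar fix: treat 100% as a distinct
// "complete" state instead of reusing the in-progress color.
type ProgressColor = "blue" | "green";

function progressColor(completedSteps: number, totalSteps: number): ProgressColor {
  // The bug: the bar stayed blue even when completedSteps === totalSteps.
  // The fix: switch to green once every step is complete.
  return completedSteps >= totalSteps ? "green" : "blue";
}
```

Pulling the decision into one place also means the "complete" threshold can't drift between the bar and the step indicator.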
The Post-Onboarding Flash
After implementing the new flow, we hit a nasty UX bug: fresh users completing onboarding would see a brief flash of "Loading dashboard..." before the empty dashboard rendered. The sequence was:
- User completes onboarding → `router.replace("/(home)/dashboard")`
- Dashboard mounts → Convex queries fire → queries return empty data
- Brief moment where `isLoading: true` shows a spinner
- Data resolves as empty → empty state renders
That flash — maybe 200ms — felt like the app was broken. Users who just finished a carefully designed onboarding flow were greeted by a loading spinner that screamed "I don't know who you are."
The fix was a module-level flag (onboarding-completion-state.ts) that the dashboard checks on mount:
```ts
// Set before navigation in onboarding
setJustCompletedOnboarding(true);

// Dashboard reads it synchronously on mount
const justFinished = useRef(getJustCompletedOnboarding());

// If just finished, skip straight to empty state — no loading spinner
const contentState = justFinished.current ? "empty" : isLoading ? "loading" : "data";
```

We also replaced six interacting boolean state variables (`shouldShowFullLoader`, `shouldShowEmptyState`, `isLoadingWithNoData`, `showDelayedLoader`) with a single `contentState` enum. Six booleans → one enum. The component went from "impossible to reason about" to "obviously correct."
Rate Limiting: Two Layers of Defense
Before this PR, our public endpoints had zero rate limiting. Contact form? Submit as fast as you can click. Feature requests? Fill our database to your heart's content. Blog comments? Bot paradise.
We integrated @convex-dev/rate-limiter with 24 rate limit definitions across two layers:
Layer 1: Global (DoS Protection)
```ts
"global:api": { kind: "token bucket", rate: 500, period: 60_000 },
"global:ai": { kind: "token bucket", rate: 15, period: 60_000 },
"global:public": { kind: "token bucket", rate: 100, period: 60_000 },
```

These use sharded counters for high throughput. If the entire app is getting hammered, these fire first.
Layer 2: Per-User (Abuse Prevention)
Every write mutation got its own limit: receipt scanning (3/min), transaction creation (30/min), blog comments (5/min), contact form (3/hour). The limits are generous enough that real users will never hit them, but tight enough that a script can't drain our compute budget.
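For intuition, here is a minimal in-memory token bucket with the same refill-and-consume semantics as the limits above (the real limiter persists its state transactionally in Convex; this sketch only demonstrates the math):

```typescript
// Illustrative token bucket: capacity `rate` tokens, refilled continuously
// at `rate` tokens per `periodMs`. Time is passed in explicitly so the
// behavior is deterministic and testable.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private rate: number,     // tokens added per period; also the capacity
    private periodMs: number, // refill period in milliseconds
    now: number = Date.now(),
  ) {
    this.tokens = rate; // start full
    this.lastRefill = now;
  }

  // Try to consume one token at time `now`; returns whether the call is allowed.
  tryConsume(now: number = Date.now()): boolean {
    const elapsed = now - this.lastRefill;
    const refill = (elapsed / this.periodMs) * this.rate;
    this.tokens = Math.min(this.rate, this.tokens + refill);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With `rate: 3, periodMs: 60_000` (the receipt-scanning limit), a burst of three scans succeeds, the fourth is rejected, and capacity trickles back over the next minute rather than resetting all at once.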
The Receipt Scanning Problem
Receipt scanning is expensive — it calls an AI model to extract transaction data from an image. We needed dual-layer rate checking: is the global AI budget exhausted AND has this user scanned too many receipts?
Both checks need to happen atomically. If we check global limits in one transaction and per-user limits in another, a race condition could allow both to pass even when one should fail.
We built a checkDualRateLimit internal mutation that checks both in a single Convex transaction:
```ts
export const checkDualRateLimit = internalMutation({
  handler: async (ctx, { globalKey, userKey, userId }) => {
    const globalCheck = await rateLimiter.check(ctx, globalKey);
    const userCheck = await rateLimiter.check(ctx, userKey, { key: userId });

    if (!globalCheck.ok) throw new Error("Service temporarily busy");
    if (!userCheck.ok) throw new Error("Too many scans. Try again later.");

    // Both passed — consume tokens
    await rateLimiter.consume(ctx, globalKey);
    await rateLimiter.consume(ctx, userKey, { key: userId });
  },
});
```

Coarsened Error Messages
Public endpoints (contact form, waitlist, feature requests) return generic error messages:
```ts
// Never: "Rate limit exceeded. Try again in 23 seconds."
// Always: "Too many requests. Please try again later."
```

The specific "try again in N seconds" message leaks timing information that attackers can use to calibrate their rate. Coarsened messages are less helpful to users but significantly less helpful to bots.
10 Notification Bugs
While we were in the notification system adding the permission step to onboarding, we audited the entire notification pipeline and found 10 bugs:
Backend (4 fixes):
- Pagination was filtering after `.paginate()`, meaning pages had inconsistent item counts
- Unread count was including archived notifications
- New notifications defaulted to `archived: undefined` instead of `archived: false`
- "Mark all read" was marking archived notifications as read
Mobile (6 fixes):
- Missing `cycle_reset` notification type (no icon, no label, no action button)
- Loading state showed "No notifications" instead of skeletons
- Filter switching showed stale data because the dedup ref wasn't resetting
- Three notification types were missing action buttons
Each fix was small. Together, they transformed the notification center from "technically works but feels broken" to "actually reliable."
Push Notification Architecture
We migrated from manual Expo Push API calls to @convex-dev/expo-push-notifications, which handles token management, delivery retries, and receipt processing automatically.
The most important fix was separating permission-requesting from token registration:
```ts
// On onboarding: explicitly ask for permission
async function registerPushToken() {
  const { status } = await Notifications.requestPermissionsAsync();
  if (status === "granted") await savePushToken();
}

// On app launch: silently re-register if already permitted
async function refreshPushTokenIfPermitted() {
  const { status } = await Notifications.getPermissionsAsync();
  if (status === "granted") await savePushToken();
}
```

Before this fix, the app asked for notification permissions on every launch. Users who denied the first time were getting asked again on every app open. That's the kind of UX that gets you one-star reviews.
Why One PR?
I know what you're thinking. "This should have been 8 separate PRs." And you're right — in an ideal world. But the onboarding redesign required the notification permission step, which required the push notification migration, which surfaced the notification bugs, which required the rate limiting (because we were touching every public endpoint anyway), which required the backend query optimizations (because the new admin rate limit testing page was hitting the slow queries).
Everything was connected. Shipping them separately would have created a series of half-working intermediate states.
Sometimes the responsible thing is one big PR with a thorough test plan rather than eight small PRs with cascading merge conflicts.
Abdul Rafay
Founder, Syntax Lab Technology

26 commits, 0 regrets (okay, maybe one — that PR description took longer to write than some of the code)