Performance: Another 10x
We shipped a major list performance improvement a few months ago. At the time, we said lists were fast enough for the large majority of use cases. We weren't quite right. Or rather, we underestimated how quickly some workspaces would grow.
Teams with large operational datasets started running into walls at 50,000 records. This round of optimization addresses that.
The Numbers
Before this update, a list with 50,000 records and multiple active filters could take 3-4 seconds to load. After: under 300 milliseconds.
At 100,000 records: previously untested territory with unpredictable behavior. After: under 500 milliseconds with consistent performance.
The ceiling has moved up dramatically, and performance across the range is more predictable.
What Changed
Query planning. The database query planner wasn't getting the information it needed to make optimal decisions. Explicit index hints and query restructuring moved several queries from sequential scans to index seeks.
Filter pushdown. Complex filters that previously ran as application-layer post-processing now run as database predicates. The database returns fewer rows and does less work; the application processes a smaller result set.
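The before/after shape of filter pushdown looks roughly like this. A sketch only, again using SQLite with a made-up `records` table and `priority` filter rather than the product's real schema:

```python
import sqlite3

# Hypothetical data: 10,000 records with priorities 0-9.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, priority INTEGER)")
conn.executemany("INSERT INTO records (priority) VALUES (?)",
                 [(p,) for p in range(10)] * 1000)

# Before: fetch every row, then filter in the application layer.
all_rows = conn.execute("SELECT id, priority FROM records").fetchall()
app_filtered = [r for r in all_rows if r[1] >= 8]

# After: push the predicate into the query; the database does the
# filtering and returns only the matching rows.
db_filtered = conn.execute(
    "SELECT id, priority FROM records WHERE priority >= 8"
).fetchall()

assert app_filtered == db_filtered  # same result, far fewer rows transferred
```

Both paths produce identical results; the difference is that the pushed-down version transfers and post-processes 2,000 rows instead of 10,000.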
Count optimization. The total record count shown in list headers was triggering a full COUNT(*) on every view. This has been replaced with an estimated count that's accurate enough for display purposes while being dramatically cheaper to compute.
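One common way to get an estimated count is to read the planner's own statistics instead of scanning the table. This sketch assumes a PostgreSQL-style backend, where `pg_class.reltuples` holds a per-table row estimate maintained by ANALYZE/autovacuum; the function name and table name are illustrative, not our actual code:

```python
# Exact count: a full scan, increasingly expensive as the table grows.
EXACT_COUNT_SQL = "SELECT COUNT(*) FROM {table}"

# Estimated count: a catalog lookup against the planner's statistics,
# essentially free regardless of table size (PostgreSQL-specific).
ESTIMATED_COUNT_SQL = (
    "SELECT reltuples::bigint AS estimate "
    "FROM pg_class WHERE relname = '{table}'"
)

def count_query(table: str, *, exact: bool = False) -> str:
    """Return SQL for an exact or estimated row count of `table`."""
    template = EXACT_COUNT_SQL if exact else ESTIMATED_COUNT_SQL
    return template.format(table=table)
```

The estimate can lag behind recent writes, which is why this trade is only acceptable for display purposes like a list header, not for logic that needs an exact number.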
Cache strategy. Frequently-run list queries with stable results now use short-duration caching. The first user to run a query incurs the full cost; subsequent users within the cache window get the result instantly.
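The first-user-pays, later-users-hit pattern can be sketched as a TTL cache keyed by the query. This assumes a single-process in-memory cache for illustration; a real deployment would presumably use a shared cache, and the names (`CACHE_TTL`, `cached_query`) are made up here:

```python
import time

CACHE_TTL = 30.0  # seconds; "short-duration" in the sense of the post
_cache: dict[str, tuple[float, object]] = {}

def cached_query(key: str, run_query):
    """Return the result for `key`, recomputing only after the TTL expires."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < CACHE_TTL:
        return hit[1]           # callers inside the window: instant
    result = run_query()        # first caller pays the full query cost
    _cache[key] = (now, result)
    return result

calls = 0
def expensive_list_query():
    global calls
    calls += 1
    return [1, 2, 3]

first = cached_query("list:filter=open", expensive_list_query)
second = cached_query("list:filter=open", expensive_list_query)
assert first == second
assert calls == 1  # the second call was served from cache
```

The "stable results" qualifier in the text matters: a short TTL bounds how stale a cached list can be, which is what makes this safe for frequently-run queries.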
Why This Matters Beyond Power Users
Performance at scale matters for everyone, not just workspaces with 100,000 records. The improvements that make large lists fast also make small lists faster: optimized queries at every scale, and better cache behavior across all workspaces.
Performance work compounds. We'll keep doing it.