Background CSV Export
Data exports have a tension: the larger the dataset, the longer the export takes, and the more likely a browser timeout will kill the process before it completes.
A 500-record export completes in a second. A 50,000-record export takes time — enough time for a browser to time out, a connection to drop, or a user to close the tab because they assumed something went wrong.
Background exports solve this cleanly.
Trigger and Continue Working
When you initiate an export, the platform starts processing it in the background. The export dialog closes, you get a notification that the export is running, and you continue working without waiting.
There's no loading spinner staring at you. No risk of accidentally closing the tab. The export runs completely independently of your browser session.
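The trigger-and-continue pattern can be sketched as a job queue: the request hands work to a background worker and returns a job ID at once, rather than holding a connection open while the file is built. This is a minimal in-process sketch; a real platform would use a persistent job queue and object storage, and all names here are illustrative.

```python
import csv
import io
import queue
import threading
import uuid

jobs: dict = {}          # job_id -> {"status": ..., "file": ...}
work = queue.Queue()     # pending export jobs

def request_export(rows, fields):
    """Enqueue an export and return a job ID immediately -- no waiting."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "running", "file": None}
    work.put((job_id, rows, fields))
    return job_id

def worker():
    """Background worker: builds the CSV while the caller keeps working."""
    while True:
        job_id, rows, fields = work.get()
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
        jobs[job_id] = {"status": "done", "file": buf.getvalue()}
        work.task_done()

threading.Thread(target=worker, daemon=True).start()
```

Because `request_export` returns before any CSV is written, nothing ties the export's lifetime to the caller's session.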
Download When Ready
When the export completes — seconds or minutes later, depending on volume — you receive a notification. The download link appears in the notification and in the exports panel. Click to download.
The file is available for a defined period (typically 24 hours) so you can download it later if you're not ready immediately. Multiple exports can be queued and downloaded in sequence.
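The retention rule amounts to a simple time-window check against the job's completion timestamp. A sketch, assuming a 24-hour window (the actual period is configurable per the text above):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=24)  # assumed retention window

def is_downloadable(completed_at, now=None):
    """A finished export stays downloadable until the retention window
    after its completion time has elapsed."""
    now = now or datetime.now(timezone.utc)
    return now - completed_at <= RETENTION
```

Exports past the window can be re-run; the queued-job model means re-running is just another background job.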
Handles Large Volumes
Background exports are designed for volume. A 10,000-record export, a 500,000-record export, an export with related data joined from multiple object types — the infrastructure handles them all, and the complexity never surfaces in the user experience.
Filters applied in the list view carry over to the export. The exported file contains exactly what you were looking at — same filter criteria, same column set (or the full field set, configurable in the export options), same sort order.
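One way to guarantee that parity is to run the same filter/column/sort logic for the on-screen list and the export job. A sketch of that idea; the function and parameter names are illustrative, not the platform's API:

```python
from operator import itemgetter

def apply_view(rows, filters=None, columns=None, sort_field=None):
    """Apply a list view's filter criteria, column set, and sort order.

    Feeding the export through this same function ensures the CSV
    contains exactly what the user was looking at.
    """
    # Keep only rows matching every filter predicate.
    out = [r for r in rows if all(f(r) for f in (filters or []))]
    # Preserve the view's sort order.
    if sort_field:
        out.sort(key=itemgetter(sort_field))
    # Restrict to the view's columns (or keep the full field set).
    if columns:
        out = [{c: r[c] for c in columns} for r in out]
    return out
```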
Formats
Exports produce standard CSV files, importable into spreadsheet tools, analytics platforms, and other systems. Column headers match field labels. Date formats are consistent and configurable. Numeric fields use the locale-appropriate decimal separator.
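Those three formatting rules — labels as headers, a configurable date format, a locale-appropriate decimal separator — can be sketched with the standard `csv` module. The function and its parameters are illustrative assumptions, not the platform's export options:

```python
import csv
import io
from datetime import date

def to_csv(rows, fields, labels, date_fmt="%Y-%m-%d", decimal_sep="."):
    """Write rows as CSV: field labels as headers, dates in a
    configurable format, floats using the given decimal separator."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    # Column headers match field labels, falling back to the field name.
    writer.writerow([labels.get(f, f) for f in fields])
    for row in rows:
        cells = []
        for f in fields:
            v = row[f]
            if isinstance(v, date):
                v = v.strftime(date_fmt)
            elif isinstance(v, float):
                v = str(v).replace(".", decimal_sep)
            cells.append(v)
        # csv.writer quotes any cell containing the delimiter, so a
        # comma decimal separator is safe inside a comma-delimited file.
        writer.writerow(cells)
    return buf.getvalue()
```

Note that when the decimal separator is a comma, the writer quotes the numeric cells automatically, keeping the file parseable by spreadsheet tools.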
Background CSV export means data is always accessible, regardless of volume. Large datasets are not a problem — they're just a longer background job.