Cross-Source Data Joins
Data that needs to be understood together often lives in separate systems. Customer records in your application, subscription data in a billing service, usage metrics in an analytics platform, support tickets in a help desk tool. Separately, each source is useful. Combined on a single row, they tell a richer story.
Cross-source data joins connect records from multiple sources using a shared key field, producing a unified view of data that doesn't live in one place.
The Join Concept
A join combines data from two sources where records share a matching value in a key field. Your customer record has an email address. The billing service's subscription data also has an email address. Join them on email, and each customer row in your list gains subscription plan, billing status, and renewal date columns from the billing service.
The join is computed at query time. You're not copying data between systems — you're combining them for display. Changes in either source are reflected the next time the view loads.
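The mechanics can be sketched in a few lines. This is an illustrative Python sketch of a key-field join, not Swifty's actual implementation; the record shapes, field names, and the `join_on_key` helper are all assumptions for the example.

```python
def join_on_key(primary, secondary, key, fields):
    """Merge selected secondary fields onto each primary record by key."""
    # Index the secondary source by the key field for O(1) lookups.
    index = {rec[key]: rec for rec in secondary}
    joined = []
    for rec in primary:
        match = index.get(rec[key], {})
        merged = dict(rec)
        # Copy only the requested secondary fields onto the row.
        for f in fields:
            merged[f] = match.get(f)
        joined.append(merged)
    return joined

customers = [{"email": "ada@example.com", "name": "Ada"}]
subscriptions = [{"email": "ada@example.com", "plan": "pro", "status": "active"}]

rows = join_on_key(customers, subscriptions, "email", ["plan", "status"])
# Each customer row now carries plan and status from the billing source.
```

Because the merge happens when the query runs, re-running it picks up whatever either source currently holds, which is the query-time behavior described above.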
Configure in the Builder
Join configuration lives in the data source settings for a list or table component. Specify the primary source (your records), the secondary source (the external service), and the key fields on each side that should match. Choose which fields from the secondary source to include in the result.
The combined data source appears to the table component as a single flat data set. Column configuration selects from fields in both sources. Sorting and filtering work across all fields.
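The settings above can be pictured as a single configuration object. The keys and source names here are hypothetical, chosen to mirror the builder fields described in this section rather than Swifty's actual schema.

```python
# Hypothetical join configuration mirroring the builder settings:
# a primary source, a secondary source, the key field on each side,
# and the secondary fields to include in the combined result.
join_config = {
    "primary_source": "app.customers",
    "secondary_source": "billing.subscriptions",
    "primary_key": "email",
    "secondary_key": "email",
    "include_fields": ["plan", "billing_status", "renewal_date"],
}
```

The table component never sees this structure directly; it just receives the flat, combined data set that the configuration produces.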
1:1 and 1:Many Joins
Swifty supports 1:1 joins (one matching record per primary record) and 1:many joins (multiple matching records per primary record).
For 1:1 joins, the secondary source's fields appear as direct columns on the row. For 1:many joins, the secondary data appears as a nested sub-table or aggregated metrics. A customer's support tickets (1:many) might appear as a ticket count and a link to open the full ticket list.
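The 1:many case can be sketched by grouping the secondary records per primary row and exposing both the nested list and an aggregate. As before, the helper name, field names, and the `ticket_count` aggregate are illustrative assumptions, not Swifty's API.

```python
from collections import defaultdict

def join_one_to_many(primary, secondary, key):
    """Group secondary records per primary record and expose an aggregate."""
    groups = defaultdict(list)
    for rec in secondary:
        groups[rec[key]].append(rec)
    rows = []
    for rec in primary:
        matches = groups.get(rec[key], [])
        merged = dict(rec)
        merged["ticket_count"] = len(matches)  # aggregated metric
        merged["tickets"] = matches            # nested sub-table data
        rows.append(merged)
    return rows

customers = [{"email": "ada@example.com", "name": "Ada"}]
tickets = [
    {"email": "ada@example.com", "subject": "Login issue"},
    {"email": "ada@example.com", "subject": "Billing question"},
]
rows = join_one_to_many(customers, tickets, "email")
```

A table would typically render `ticket_count` as a column and use `tickets` to back the sub-table or the linked full list.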
Performance Considerations
Cross-source joins execute two queries, one against each source, and merge the results. For large datasets, the secondary query runs in batches rather than as a single request, and the component shows a loading indicator while the merge completes.
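Batched fetching can be sketched as follows. The `fetch_in_batches` helper, the batch size, and the callback signature are assumptions for illustration; they stand in for however the secondary integration accepts lists of key values.

```python
def fetch_in_batches(keys, key_field, fetch_batch, batch_size=100):
    """Fetch secondary records for many key values in fixed-size batches."""
    results = {}
    for i in range(0, len(keys), batch_size):
        batch = keys[i:i + batch_size]
        # One request per batch keeps individual payloads bounded
        # even when the primary source has many rows.
        for rec in fetch_batch(batch):
            results[rec[key_field]] = rec
    return results
```

The merge step then looks up each primary row's key in `results`, exactly as in a small in-memory join.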
For frequently accessed joins between large datasets, caching the secondary source data with a configurable TTL reduces latency. The cache is per-workspace and per-integration, with manual invalidation available when immediate freshness is required.
Security and Access
Join queries only fetch data that the workspace's credentials authorize. The secondary source query uses the workspace's configured integration credentials. A join can only include data that those credentials grant access to — no privilege escalation through join configuration.