Sinks receive data from the router and write it to storage. Each sink runs independently — if one is slow or down, the others continue unaffected.

Available sinks

| Sink | Type | Use case |
| --- | --- | --- |
| ClickHouse | `clickhouse` | Production analytics (recommended) |
| Disk | `disk_binary`, `disk_plaintext` | Binary or plaintext file storage |
| Parquet | `parquet` | Columnar archival with compression |
| Arrow IPC | `arrow_ipc` | Fast local storage for Polars/DuckDB |
| Forwarder | `forwarder` | Edge-to-cloud Tell-to-Tell relay |
| Stdout | `stdout` | Debug output (development only) |
| Null | `null` | Benchmarking (discards all data) |

Choosing a sink

ClickHouse is the recommended production sink. It handles event-type routing, per-table batching, concurrent flushes, and retry logic. The query engine connects directly to ClickHouse for analytics.

Disk sinks write local files. Use `disk_binary` for high-throughput archival with optional LZ4 compression, or `disk_plaintext` for human-readable logs you can grep and tail.

Parquet is for data warehousing. Files are readable by Spark, DuckDB, Pandas, and Polars, and the columnar format compresses well, making it a good fit for cold storage.

Arrow IPC is for hot data that needs frequent access. It offers roughly 10x faster reads than Parquet, with zero-copy memory mapping, making it ideal for real-time dashboards backed by Polars or DuckDB.

Forwarder sends data to another Tell instance over TCP. Use it for edge-to-cloud deployments where edge nodes collect data and relay it to a central server.
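To show how the Parquet and Arrow IPC outputs plug into those tools, here is a minimal read sketch using Polars. The base path, workspace ID, and file names are hypothetical; only the directory layout (described under File rotation below) comes from this page.

```python
# Minimal sketch: reading one hour of sink output with Polars.
# "/var/lib/tell", "ws_main", and the file names are hypothetical;
# the {path}/{workspace_id}/{date}/{hour}/ layout is from "File rotation".
import polars as pl

hour_dir = "/var/lib/tell/ws_main/2024-01-15/14"

# Arrow IPC: memory-mapped, zero-copy read -- suited to hot dashboards.
hot = pl.read_ipc(f"{hour_dir}/events.arrow", memory_map=True)

# Parquet: the same data from the cold-storage sink, decompressed on read.
cold = pl.read_parquet(f"{hour_dir}/events.parquet")

print(hot.head(), cold.height)
```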

Backpressure

If a sink can’t keep up, Tell drops batches for that sink rather than blocking the pipeline. Other sinks continue receiving data normally. Dropped batches are tracked in pipeline metrics — check `tell status --metrics` to spot sinks falling behind.
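As a conceptual illustration of that policy (a sketch, not Tell's implementation; every name here is hypothetical), each sink can be modeled as a bounded queue that drops on overflow instead of blocking the router:

```python
# Conceptual model of per-sink backpressure: a full queue drops the
# batch rather than stalling the router, so one slow sink cannot hold
# up the others. Hypothetical sketch, not Tell's implementation.
import queue

class SinkQueue:
    def __init__(self, capacity: int = 1024):
        self.q = queue.Queue(maxsize=capacity)
        self.dropped = 0  # the kind of counter a metrics endpoint would expose

    def offer(self, batch) -> bool:
        """Enqueue without blocking; drop the batch if the sink is behind."""
        try:
            self.q.put_nowait(batch)
            return True
        except queue.Full:
            self.dropped += 1
            return False

def route(batch, sinks: list[SinkQueue]) -> None:
    # Fan out to every sink independently; a drop affects only that sink.
    for sink in sinks:
        sink.offer(batch)
```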

File rotation

Disk-based sinks (disk, Parquet, Arrow IPC) organize files by workspace and time:
```
{path}/{workspace_id}/{date}/{hour}/
```
Rotation can be hourly or daily. Disk sinks use atomic rotation to guarantee zero data loss during file switches.
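A standard way to make rotation atomic on POSIX filesystems is to write the new file under a temporary name and rename it into place once it is fully flushed; `rename()` is atomic within a filesystem, so readers never see a half-written file. The sketch below illustrates that pattern under those assumptions; it is not Tell's source, and the paths and helper names are hypothetical.

```python
# Write-temp-then-rename rotation sketch (hypothetical names/paths).
import os
from datetime import datetime, timezone

def rotated_dir(base: str, workspace_id: str, now: datetime) -> str:
    """Build the {path}/{workspace_id}/{date}/{hour}/ directory for a batch."""
    d = os.path.join(base, workspace_id, now.strftime("%Y-%m-%d"), now.strftime("%H"))
    os.makedirs(d, exist_ok=True)
    return d

def write_atomically(directory: str, name: str, payload: bytes) -> None:
    tmp = os.path.join(directory, name + ".tmp")
    with open(tmp, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # ensure bytes are on disk before the switch
    os.rename(tmp, os.path.join(directory, name))  # atomic within one filesystem

now = datetime.now(timezone.utc)
write_atomically(rotated_dir("/var/lib/tell", "ws_main", now), "events.bin", b"...")
```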