Sinks receive data from the router and write it to storage. Each sink runs independently — if one is slow or down, the others continue unaffected.

Available sinks

| Sink | Type | Use case |
|---|---|---|
| ClickHouse | clickhouse | Production analytics (recommended) |
| Disk | disk_binary, disk_plaintext | Binary or plaintext file storage |
| Parquet | parquet | Columnar archival with compression |
| Arrow IPC | arrow_ipc | Fast local storage for Polars/DuckDB |
| Vortex | vortex | Fast columnar reads with cascading compression |
| Forwarder | forwarder | Edge-to-cloud Tell-to-Tell relay |
| Stdout | stdout | Debug output (development only) |
| Null | null | Benchmarking (discards all data) |

Choosing a sink

ClickHouse is the recommended production sink. It handles event-type routing, per-table batching, concurrent flushes, and retry logic. The query engine connects directly to ClickHouse for analytics.

Disk sinks are for local file storage. Use disk_binary for high-throughput archival with optional LZ4 compression, or disk_plaintext for human-readable logs you can grep and tail.

Parquet is for data warehousing. Files are readable by Spark, DuckDB, Pandas, and Polars, making it a good fit for cold storage with excellent compression ratios.

Arrow IPC is for hot data that needs frequent access. It offers ~10x faster reads than Parquet, with zero-copy memory mapping, and is ideal for real-time dashboards backed by Polars or DuckDB.

Vortex is for fast analytical reads without a database. It delivers ~100x faster random access than Parquet via cascading columnar encodings. Use it when you need ClickHouse-class query speed on local files.

Forwarder sends data to another Tell instance over TCP. Use this for edge-to-cloud deployments where edge nodes collect data and relay it to a central server.

Performance

Measured with tell-bench sink — 10M events, realistic cardinality (1000 devices, 25 event types, unique payloads), Apple M4 Pro:
| Sink | Events/s | Written | Ratio | Best for |
|---|---|---|---|---|
| disk_binary | 33.0M | 639 MB | 0.36x | Maximum write speed |
| disk_binary_lz4 | 21.6M | 145 MB | 0.08x | Speed + compression |
| arrow_ipc | 2.9M | 1.7 GB | 0.98x | Fast reads (Polars/DuckDB) |
| parquet_lz4 | 2.6M | 251 MB | 0.14x | Speed/compression balance |
| parquet_zstd | 2.3M | 171 MB | 0.10x | Cold archival (best ratio) |
| parquet_uncompressed | 2.6M | 769 MB | 0.43x | Columnar without codec overhead |
| vortex | 1.7M | 1.8 GB | 0.99x | Fast random access + scans |
| disk_plaintext | 1.4M | 2.7 GB | 1.53x | Human-readable grep/tail |
Ratio = bytes on disk / input bytes; lower means better compression. Parquet's columnar layout eliminates per-event FlatBuffer overhead, so even "uncompressed" Parquet (0.43x) is smaller than raw binary (0.36x at full throughput).
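As a sanity check, the per-sink ratios are mutually consistent: dividing each sink's on-disk size by its ratio recovers roughly the same ~1.7–1.8 GB of input for every row. A quick stdlib-Python check, with numbers copied from the table above:

```python
# ratio = bytes on disk / input bytes, so input = written / ratio.
results = {
    "disk_binary":     (639,  0.36),
    "disk_binary_lz4": (145,  0.08),
    "parquet_zstd":    (171,  0.10),
    "disk_plaintext":  (2700, 1.53),
}
for sink, (written_mb, ratio) in results.items():
    print(f"{sink}: input ~= {written_mb / ratio:.0f} MB")
```

Each row recovers an input size in the 1.7–1.8 GB range, which is what you would expect from a single 10M-event input stream.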

Backpressure

If a sink can’t keep up, Tell drops batches for that sink rather than blocking the pipeline. Other sinks continue receiving data normally. Dropped batches are tracked in pipeline metrics — check tell status --metrics to spot sinks falling behind.
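The drop-instead-of-block behavior can be sketched with a bounded queue per sink: when a sink's queue is full, the batch is counted as dropped and the router moves on. This is an illustrative model, not Tell's actual internals; the class and method names are made up.

```python
import queue

class SinkQueue:
    """Illustrative per-sink buffer: never blocks the router."""

    def __init__(self, capacity: int):
        self.q = queue.Queue(maxsize=capacity)
        self.dropped = 0  # would be surfaced via pipeline metrics

    def offer(self, batch) -> bool:
        try:
            self.q.put_nowait(batch)  # non-blocking enqueue
            return True
        except queue.Full:
            self.dropped += 1         # drop the batch, keep the pipeline moving
            return False

slow_sink = SinkQueue(capacity=2)
for batch in range(5):
    slow_sink.offer(batch)
print(slow_sink.dropped)  # 3
```

The key property is that `offer` always returns immediately, so one slow sink can only lose its own batches, never stall the others.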

File rotation

Disk-based sinks (disk, Parquet, Arrow IPC, Vortex) organize files by workspace and time:
{path}/{workspace_id}/{date}/{hour}/
Rotation can be hourly or daily. Disk sinks use atomic rotation to guarantee zero data loss during file switches.
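The layout above can be sketched as a small path helper. The helper name and the exact date/hour formats are assumptions for illustration, not Tell's actual code:

```python
from datetime import datetime, timezone
from pathlib import Path

def sink_dir(base: str, workspace_id: str, ts: datetime) -> Path:
    """Build {path}/{workspace_id}/{date}/{hour}/ for a batch timestamp."""
    return Path(base) / workspace_id / ts.strftime("%Y-%m-%d") / ts.strftime("%H")

ts = datetime(2024, 5, 1, 13, 7, tzinfo=timezone.utc)
print(sink_dir("/var/lib/tell", "ws_42", ts).as_posix())
# /var/lib/tell/ws_42/2024-05-01/13
```

With hourly rotation, every batch written during 13:00–13:59 UTC lands in the same hour directory; daily rotation would simply stop at the date component.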