## Available sinks
| Sink | Type | Use case |
|---|---|---|
| ClickHouse | clickhouse | Production analytics (recommended) |
| Disk | disk_binary, disk_plaintext | Binary or plaintext file storage |
| Parquet | parquet | Columnar archival with compression |
| Arrow IPC | arrow_ipc | Fast local storage for Polars/DuckDB |
| Vortex | vortex | Fast columnar reads with cascading compression |
| Forwarder | forwarder | Edge-to-cloud Tell-to-Tell relay |
| Stdout | stdout | Debug output (development only) |
| Null | null | Benchmarking (discards all data) |
## Choosing a sink
ClickHouse is the recommended production sink. It handles event-type routing, per-table batching, concurrent flushes, and retry logic. The query engine connects directly to ClickHouse for analytics.

Disk sinks are for local file storage. Use `disk_binary` for high-throughput archival with optional LZ4 compression, or `disk_plaintext` for human-readable logs you can grep and tail.
Parquet is for data warehousing. Files are readable by Spark, DuckDB, Pandas, and Polars. Good for cold storage with excellent compression ratios.
Arrow IPC is for hot data that needs frequent access. ~10x faster reads than Parquet, with zero-copy memory mapping. Ideal for real-time dashboards backed by Polars or DuckDB.
Vortex is for fast analytical reads without a database. ~100x faster random access than Parquet via cascading columnar encodings. Use it when you need ClickHouse-class query speed on local files.
Forwarder sends data to another Tell instance over TCP. Use this for edge-to-cloud deployments where edge nodes collect data and relay it to a central server.
## Performance
Measured with `tell-bench sink` — 10M events, realistic cardinality (1000 devices, 25 event types, unique payloads), Apple M4 Pro:
| Sink | Events/s | Written | Ratio | Best for |
|---|---|---|---|---|
| disk_binary | 33.0M | 639 MB | 0.36x | Maximum write speed |
| disk_binary_lz4 | 21.6M | 145 MB | 0.08x | Speed + compression |
| arrow_ipc | 2.9M | 1.7 GB | 0.98x | Fast reads (Polars/DuckDB) |
| parquet_lz4 | 2.6M | 251 MB | 0.14x | Speed/compression balance |
| parquet_zstd | 2.3M | 171 MB | 0.10x | Cold archival (best ratio) |
| parquet_uncompressed | 2.6M | 769 MB | 0.43x | Columnar without codec overhead |
| vortex | 1.7M | 1.8 GB | 0.99x | Fast random access + scans |
| disk_plaintext | 1.4M | 2.7 GB | 1.53x | Human-readable grep/tail |
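Ratio is bytes written divided by raw event bytes, so the raw dataset size can be recovered from any row. A quick check that the table is internally consistent:

```python
# Ratio = written bytes / raw bytes, so raw size = written / ratio.
written_mb = {"disk_binary": 639, "disk_binary_lz4": 145, "parquet_zstd": 171}
ratio = {"disk_binary": 0.36, "disk_binary_lz4": 0.08, "parquet_zstd": 0.10}

for sink in written_mb:
    raw = written_mb[sink] / ratio[sink]
    print(f"{sink}: ~{raw:.0f} MB raw")
```

Every row works back to roughly the same ~1.7–1.8 GB raw dataset, as expected for one benchmark run.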
## Backpressure
If a sink can’t keep up, Tell drops batches for that sink rather than blocking the pipeline. Other sinks continue receiving data normally. Dropped batches are tracked in pipeline metrics — check `tell status --metrics` to spot sinks falling behind.
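The drop-instead-of-block behavior amounts to a bounded queue per sink where a full queue rejects the batch and bumps a counter. A minimal sketch of the idea (not Tell's actual implementation):

```python
import queue

class DroppingSink:
    """Bounded per-sink buffer: a full queue drops the batch, never blocks."""

    def __init__(self, capacity: int):
        self.batches = queue.Queue(maxsize=capacity)
        self.dropped = 0  # Tell surfaces this via pipeline metrics

    def offer(self, batch) -> bool:
        try:
            self.batches.put_nowait(batch)  # non-blocking enqueue
            return True
        except queue.Full:
            self.dropped += 1  # slow sink loses the batch; pipeline keeps moving
            return False

sink = DroppingSink(capacity=2)
results = [sink.offer(b) for b in ("b1", "b2", "b3")]
print(results, sink.dropped)  # [True, True, False] 1
```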
## File rotation
Disk-based sinks (disk, Parquet, Arrow IPC, Vortex) organize files by workspace and time, either hourly or daily. Disk sinks use atomic rotation to guarantee zero data loss during file switches.
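The standard way to make a file switch atomic is to write the new file under a temporary name in the same directory and rename it into place, so readers only ever see the old file or the complete new one. The sketch below assumes a hypothetical `<root>/<workspace>/<date>[/<hour>]/events.bin` layout; Tell's exact paths may differ.

```python
import os
import tempfile
from datetime import datetime, timezone

def partition_path(root: str, workspace: str, hourly: bool = True) -> str:
    """Hypothetical layout: <root>/<workspace>/<YYYY-MM-DD>[/<HH>]/events.bin"""
    now = datetime.now(timezone.utc)
    parts = [root, workspace, now.strftime("%Y-%m-%d")]
    if hourly:
        parts.append(now.strftime("%H"))
    return os.path.join(*parts, "events.bin")

def atomic_write(path: str, data: bytes) -> None:
    # Write to a temp file in the target directory, then rename over the
    # destination: os.replace is atomic on POSIX filesystems.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.replace(tmp, path)
```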