## Available sinks
| Sink | Type | Use case |
|---|---|---|
| ClickHouse | clickhouse | Production analytics (recommended) |
| Disk | disk_binary, disk_plaintext | Binary or plaintext file storage |
| Parquet | parquet | Columnar archival with compression |
| Arrow IPC | arrow_ipc | Fast local storage for Polars/DuckDB |
| Forwarder | forwarder | Edge-to-cloud Tell-to-Tell relay |
| Stdout | stdout | Debug output (development only) |
| Null | null | Benchmarking (discards all data) |
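As a sketch of how a pipeline might combine sinks, the hypothetical configuration below routes events to ClickHouse for analytics and to plaintext files for local inspection. The `[[sinks]]`, `type`, `url`, and `path` keys are assumptions for illustration, not Tell's documented schema.

```toml
# Hypothetical Tell configuration sketch -- key names are
# assumptions, not Tell's documented schema.

# Primary production sink
[[sinks]]
type = "clickhouse"
url = "http://clickhouse:8123"

# Human-readable local logs alongside it
[[sinks]]
type = "disk_plaintext"
path = "/var/log/tell"
```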
## Choosing a sink
ClickHouse is the recommended production sink. It handles event-type routing, per-table batching, concurrent flushes, and retry logic, and the query engine connects directly to ClickHouse for analytics.

Disk sinks are for local file storage. Use `disk_binary` for high-throughput archival with optional LZ4 compression, or `disk_plaintext` for human-readable logs you can grep and tail.
Parquet is for data warehousing. Files are readable by Spark, DuckDB, Pandas, and Polars. Good for cold storage with excellent compression ratios.
Arrow IPC is for hot data that needs frequent access. ~10x faster reads than Parquet, with zero-copy memory mapping. Ideal for real-time dashboards backed by Polars or DuckDB.
Forwarder sends data to another Tell instance over TCP. Use this for edge-to-cloud deployments where edge nodes collect data and relay it to a central server.
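An edge node's forwarder setup might look like the hypothetical fragment below; the key names and address are assumptions for illustration, not Tell's documented schema.

```toml
# Hypothetical edge-node configuration -- relays collected data
# to a central Tell instance over TCP. Key names are assumptions.
[[sinks]]
type = "forwarder"
address = "central.example.com:9000"
```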
## Backpressure
If a sink can’t keep up, Tell drops batches for that sink rather than blocking the pipeline. Other sinks continue receiving data normally. Dropped batches are tracked in pipeline metrics — check `tell status --metrics` to spot sinks falling behind.
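The drop-instead-of-block behavior can be sketched as a bounded per-sink queue; this is an illustrative model, not Tell's implementation, and the class and method names are hypothetical.

```python
import queue

class SinkQueue:
    """Illustrative per-sink queue that drops on overflow
    instead of blocking the producing pipeline."""

    def __init__(self, capacity: int):
        self.q = queue.Queue(maxsize=capacity)
        self.dropped = 0  # would surface in pipeline metrics

    def offer(self, batch) -> bool:
        """Enqueue without blocking; drop and count when full."""
        try:
            self.q.put_nowait(batch)
            return True
        except queue.Full:
            self.dropped += 1
            return False

# A slow sink with room for two batches: later batches are dropped,
# but the producer never blocks.
slow = SinkQueue(capacity=2)
results = [slow.offer(b) for b in range(4)]
print(results, slow.dropped)  # [True, True, False, False] 2
```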
## File rotation
Disk-based sinks (disk, Parquet, Arrow IPC) organize files by workspace and time: hourly or daily. Disk sinks use atomic rotation to guarantee zero data loss during file switches.
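For hourly rotation, the resulting on-disk layout might look like the sketch below; the base directory and file naming are assumptions for illustration, not Tell's documented scheme.

```
/var/lib/tell/prod/2024-01-15/14/events.bin
/var/lib/tell/prod/2024-01-15/15/events.bin
```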