Tell’s pipeline moves data from your apps to storage in real time. SDKs and services send events, logs, and sessions to sources. The router matches data to one or more sinks based on rules you define, optionally applying transforms along the way.
Sources          Router              Sinks
───────          ──────              ─────
TCP    ───┐                     ┌──→ ClickHouse
HTTP   ───┼──→ routing rules ───┼──→ Disk
Syslog ───┘    + transforms     └──→ Parquet

How data flows

  1. Sources accept incoming data — TCP for SDKs, HTTP for webhooks, syslog for infrastructure logs
  2. Routing rules match each batch by source name or type and determine which sinks receive it
  3. Transforms (optional) modify data in-flight — redact PII, extract log patterns, filter events
  4. Sinks write data to storage — ClickHouse for analytics, disk for archival, or forward to another Tell instance

A batch can go to multiple sinks simultaneously (fan-out). Tell delivers to each sink without copying the data, so fan-out to 5 sinks costs roughly the same as writing to 1.
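
The fan-out described above can be sketched in config. Only the TCP source, ClickHouse sink, and `[routing]` shapes appear elsewhere in this guide; the `disk` sink's `path` key is an illustrative assumption:

```toml
# One TCP source fanning out to two sinks.
[[sources.tcp]]
port = 50000

[sinks.clickhouse]
host = "localhost:8123"
database = "tell"

# Hypothetical disk sink config; the `path` key is assumed.
[sinks.disk]
path = "/var/lib/tell/archive"

[routing]
# Every batch is delivered to both sinks without copying the data.
default = ["clickhouse", "disk"]
```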

Minimal configuration

A working pipeline needs one source, one sink, and a routing rule:
[[sources.tcp]]
port = 50000

[sinks.clickhouse]
host = "localhost:8123"
database = "tell"

[routing]
default = ["clickhouse"]
See Pipeline Configuration for the full config reference.

Sources

Sources listen for incoming data. You can run multiple sources at the same time.
| Source | Use case |
| --- | --- |
| TCP | SDK ingestion (primary, highest throughput) |
| HTTP | REST API, webhooks, browser clients |
| Syslog | Infrastructure logs (RFC 3164/5424) |
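
All three sources can run side by side. A sketch, reusing the `[[sources.tcp]]` shape from the minimal example (the `port` keys for the HTTP and syslog sources are assumptions, not documented schema):

```toml
[[sources.tcp]]
port = 50000   # SDK ingestion

[[sources.http]]
port = 8080    # webhooks, browser clients (port key assumed)

[[sources.syslog]]
port = 514     # RFC 3164/5424 infrastructure logs (port key assumed)
```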

Routing

Routing rules decide which sinks receive which data. Rules match by source name or source type. The first matching rule wins. Unmatched data goes to the default sinks.
[routing]
default = ["clickhouse"]

[[routing.rules]]
match = { source_type = "syslog" }
sinks = ["clickhouse", "logs"]
See Routing for matching logic and examples.
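
Because the first matching rule wins, rule order matters. A hedged sketch of ordered rules (the `source_name` match key is an assumption mirroring the `source_type` key shown above):

```toml
[routing]
default = ["clickhouse"]

# Checked first: batches from the source named "billing" go to two sinks.
[[routing.rules]]
match = { source_name = "billing" }   # source_name key assumed
sinks = ["clickhouse", "parquet"]

# Checked second: any remaining syslog batches.
[[routing.rules]]
match = { source_type = "syslog" }
sinks = ["clickhouse", "logs"]
```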

Transforms

Transforms process data between routing and sinks. They’re configured per routing rule and applied in order.
| Transform | What it does |
| --- | --- |
| Pattern extraction | Cluster logs into patterns using the Drain algorithm |
| Redact | Scrub PII with 11 built-in patterns — email, phone, IP, and more |
| Filter | Drop or keep events by condition |
| Reduce | Consolidate similar events with count metadata |
See Transforms for configuration details.
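
Since transforms are configured per routing rule and applied in order, a rule with transforms might look like the sketch below. The `transforms` array and its keys are illustrative assumptions, not the documented schema:

```toml
[[routing.rules]]
match = { source_type = "syslog" }
sinks = ["clickhouse"]
# Applied in order: redact PII first, then drop debug-level events.
# Both transform tables are hypothetical sketches of per-rule config.
transforms = [
  { type = "redact" },
  { type = "filter", drop = "level == 'debug'" },
]
```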

Sinks

Sinks write data to storage or forward it elsewhere.
| Sink | Use case |
| --- | --- |
| ClickHouse | Production analytics (recommended) |
| Disk | Binary or plaintext file storage |
| Parquet | Columnar archival with compression |
| Arrow IPC | Fast local storage for Polars/DuckDB |
| Forwarder | Edge-to-cloud Tell-to-Tell relay |
| Stdout | Debugging |
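
Each sink follows the same `[sinks.<name>]` shape as the ClickHouse example. A sketch of an edge instance relaying to a central Tell via the forwarder sink (the `address` key is an assumption):

```toml
# Edge instance: forward everything to a central Tell instance.
[sinks.forwarder]
address = "tell.central.example.com:50000"   # address key assumed

[routing]
default = ["forwarder"]
```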

Backpressure

If a sink can’t keep up, Tell drops batches for that sink rather than blocking the entire pipeline. Other sinks continue receiving data normally. Backpressure events are tracked in pipeline metrics and logged — check tell status --metrics to spot sinks that are falling behind.

Graceful shutdown

When Tell shuts down, it finishes processing batches already in the pipeline before stopping. Sources stop accepting new connections, in-flight batches drain to sinks, and final metrics are logged.