Tell’s pipeline — sources, routing, transforms, and sinks — is configured in a single TOML file. The defaults work for most setups, so you only need to add what you want to change.

Minimal config

A working pipeline with one source and one sink:
[[sources.tcp]]
port = 50000

[sinks.clickhouse]
host = "localhost:8123"
database = "tell"

[routing]
default = ["clickhouse"]
This accepts SDK data on TCP port 50000 and writes it to ClickHouse. Everything else uses defaults.
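
A natural extension is fanning data out to more than one sink. A sketch combining the disk sink described later on this page, assuming default accepts multiple sink names just as rule-level sinks lists do:
[[sources.tcp]]
port = 50000

[sinks.clickhouse]
host = "localhost:8123"
database = "tell"

# Plaintext copy on local disk (see the Disk sink section below)
[sinks.logs]
type = "disk_plaintext"
path = "/var/log/tell"

[routing]
default = ["clickhouse", "logs"]   # every event goes to both sinks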

Sources

Sources define where data comes in. You can run multiple sources at the same time.
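
Since [[sources.tcp]] is a TOML array of tables, repeating the block declares another listener. A sketch running two TCP listeners next to an HTTP source, assuming each entry binds its own port:
[[sources.tcp]]
port = 50000

[[sources.tcp]]
port = 50001   # second listener, e.g. for a separate network segment

[sources.http]
port = 8080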

TCP

The primary source for SDK data. SDKs send FlatBuffer batches over TCP.
[[sources.tcp]]
port = 50000
Common options:
Field              Default    Notes
port               50000      Listen port (required)
address            "::"       Bind address (IPv4+IPv6)
flush_interval     "100ms"    Batch flush interval
max_connections    10000      Max concurrent connections
forwarding_mode    false      Trust upstream Tell instances
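
Putting several of these options together, a TCP source tuned for a busy host might look like this sketch (the values are illustrative, not recommendations):
[[sources.tcp]]
port = 50000
address = "0.0.0.0"        # IPv4 only; the default "::" binds both stacks
flush_interval = "50ms"    # flush batches more often for lower latency
max_connections = 20000    # allow more concurrent SDK connections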

HTTP

REST API source for webhook integrations and browser clients.
[sources.http]
port = 8080
Field               Default    Notes
port                8080       Listen port
max_payload_size    10MB       Max request body
cors_enabled        false      Enable for browser clients
trust_proxy         false      Trust X-Forwarded-For
tls_cert_path       (none)     TLS certificate path
tls_key_path        (none)     TLS private key path
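
For browser clients, you would typically enable CORS and serve TLS directly. A sketch using the fields above; the certificate paths are placeholders:
[sources.http]
port = 8443
cors_enabled = true
tls_cert_path = "/etc/tell/tls/cert.pem"   # placeholder path
tls_key_path = "/etc/tell/tls/key.pem"     # placeholder path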

Syslog

Collect logs from syslog-compatible systems (RFC 3164/5424).
[sources.syslog_tcp]
port = 514
workspace_id = "1"

[sources.syslog_udp]
port = 514
workspace_id = "1"
Syslog sources require a workspace_id since syslog clients don’t authenticate with API keys.
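
Binding port 514 requires elevated privileges on most systems, so a common pattern is to listen on an unprivileged port and point clients (or a local relay) there. A sketch:
[sources.syslog_udp]
port = 5514          # unprivileged alternative to 514
workspace_id = "1"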

Sinks

Sinks define where data goes. Each sink has a name and a type.
[sinks.clickhouse]
host = "localhost:8123"
database = "tell"
username = "default"
password = ""
Field             Default      Notes
host              (none)       ClickHouse HTTP address (required)
database          "default"    Database name
batch_size        50000        Rows per insert
flush_interval    "5s"         Flush interval
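
For higher-volume deployments you might trade latency for throughput with the batching fields above. A sketch with illustrative values:
[sinks.clickhouse]
host = "clickhouse.internal:8123"   # placeholder address
database = "tell"
batch_size = 100000                 # larger inserts, fewer round trips
flush_interval = "10s"              # flush less often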

Disk

Binary and plaintext file sinks for local storage.
[sinks.logs]
type = "disk_plaintext"
path = "/var/log/tell"
rotation = "daily"
compression = "lz4"
Field          Default    Notes
path           (none)     Output directory (required)
rotation       "daily"    "hourly" or "daily"
compression    "none"     "none" or "lz4"

Parquet

Columnar storage for data warehousing.
[sinks.archive]
type = "parquet"
path = "/data/parquet"
compression = "snappy"
Compression options: snappy, zstd, lz4, uncompressed.

Arrow IPC

Fast columnar storage for hot data — readable with DuckDB, PyArrow, or Polars.
[sinks.hot]
type = "arrow_ipc"
path = "/data/arrow"
rotation = "hourly"

Forwarder

Send data to another Tell instance for edge-to-cloud deployments.
[sinks.upstream]
type = "forwarder"
target = "central-tell.example.com:50000"
api_key = "abcdef0123456789abcdef0123456789"
The api_key must be exactly 32 hex characters.
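
On the receiving end, the TCP source's forwarding_mode option (see the Sources table above) tells the central instance to trust upstream Tell instances. Assuming that is the matching half of this setup, the central config might look like:
# Config on central-tell.example.com
[[sources.tcp]]
port = 50000
forwarding_mode = true   # accept data forwarded by edge instances

[sinks.clickhouse]
host = "localhost:8123"
database = "tell"

[routing]
default = ["clickhouse"]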

Routing

Routing connects sources to sinks. Data from a source goes through matching rules and is delivered to the configured sinks.
[routing]
default = ["clickhouse"]

[[routing.rules]]
match = { source_type = "syslog" }
sinks = ["clickhouse", "logs"]

[[routing.rules.transformers]]
type = "pattern_matcher"
  • default — sinks for traffic that doesn’t match any rule
  • match — filter by source (exact name) or source_type ("tcp", "syslog")
  • sinks — where to send matched data (must exist in [sinks])
  • transformers — transforms to apply in order before writing
See Routing for match logic and Transforms for available transformers.
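
To route by exact source name rather than type, a sketch assuming the HTTP source defined under [sources.http] is addressable by the name "http":
[[routing.rules]]
match = { source = "http" }   # assumed name; exact naming is covered in Routing
sinks = ["logs"]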

Transforms

Transforms modify data in routing rules before it reaches sinks.
[[routing.rules.transformers]]
type = "redact"
strategy = "hash"
hash_key = "your-secret"
patterns = ["email", "ipv4"]
Available types: pattern_matcher, redact, filter, reduce. See Transforms for configuration details.
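
Since transformers run in the order they are declared, they can be chained within one rule. A sketch combining the two configurations shown on this page:
[[routing.rules]]
match = { source_type = "syslog" }
sinks = ["clickhouse"]

[[routing.rules.transformers]]
type = "pattern_matcher"

# Runs after pattern_matcher, in declaration order
[[routing.rules.transformers]]
type = "redact"
strategy = "hash"
hash_key = "your-secret"
patterns = ["email", "ipv4"]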

Global defaults

Tune pipeline-wide defaults in [global]:
[global]
batch_size = 500
queue_size = 1000
shutdown_timeout_secs = 2
Most users don’t need to change these.