Tell is a single Rust binary. How you deploy it depends on your scale, your environment, and where your data needs to go. Here are the four topologies.

Single node

Everything on one machine — ingestion, pipeline, storage, and query.
SDKs / Sources ──→ [ Tell ] ──→ ClickHouse / Disk
This is the default. Install Tell, start it, point your SDKs at it. One binary handles TCP and HTTP ingestion, runs the pipeline, writes to ClickHouse or local disk, and serves the API and dashboards. A single node handles up to 64M events/sec on batched TCP ingestion. Storage write throughput depends on your sink — ClickHouse, Parquet, Arrow IPC, and raw disk all have different profiles. Good for: indie projects, startups, small teams, development, and honestly most production workloads under moderate volume.
curl -sSfL https://tell.rs | bash
tell run -c config.toml
Read more: Self-hosting | Configuration
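A minimal single-node setup might look like the sketch below. The key names here are illustrative assumptions, not Tell's documented schema — check the Configuration reference for the real options.

```toml
# Hypothetical single-node config.toml — section and key names are assumptions.

[ingest.tcp]
bind = "0.0.0.0:4317"        # batched TCP ingestion from SDKs

[ingest.http]
bind = "0.0.0.0:8080"        # HTTP ingestion, API, and dashboards

[sink.clickhouse]
url = "tcp://localhost:9000" # or swap in a disk sink for local-only storage
database = "tell"
```

Start the binary with `tell run -c config.toml` and point your SDKs at the ingestion ports.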

Distributed

Edge nodes pre-process data and forward to a central collector. Each edge node runs Tell with transforms — filtering, redacting PII, extracting log patterns — then forwards pre-formatted batches to an upstream Tell instance via the forwarder sink.
            ┌─ [ Tell edge ] ──transforms──┐
Sources ────┤                              ├──→ [ Tell collector ] ──→ ClickHouse
            └─ [ Tell edge ] ──transforms──┘
The forwarding path uses FlatBuffers with batched zero-copy ingestion — the performance-optimized path. Edge nodes handle the heavy per-event work (parsing, redaction, pattern extraction) so the central collector just routes and writes. Each edge node handles 100K–1M+ events/sec depending on transforms configured. The central collector handles raw forwarded batches at full pipeline speed. Good for: high-volume production, multi-region deployments, enterprises, anywhere you need to pre-process close to the source. Read more: Forwarder sink | Transforms | Routing
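An edge node in this topology runs transforms locally and forwards the result upstream. A sketch of what that could look like — transform types, key names, and the endpoint are assumptions for illustration, not Tell's actual schema:

```toml
# Hypothetical edge-node config.toml — transform and key names are assumptions.

[transforms.redact_pii]
type = "redact"              # strip PII before data leaves this node
fields = ["email", "client_ip"]

[transforms.patterns]
type = "log_patterns"        # extract log patterns at the edge

[sink.forwarder]
endpoint = "collector.internal:4317"  # hypothetical upstream Tell collector
```

The collector side needs no transforms: it receives pre-formatted FlatBuffers batches and only routes and writes.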

Collect and forward

Collect data locally — write to disk — then forward to an upstream system. The upstream can be another Tell instance, a SOC, an MSSP, or any system that accepts the forwarded format.
Sources ──→ [ Tell ] ──→ Disk (Parquet / Arrow / binary)
                    └──→ Forwarder ──→ SOC / MSSP / Tell Cloud
This topology is designed for environments where data needs to be captured reliably at the edge, even when the upstream connection is intermittent or unavailable. Local disk storage acts as a buffer and an audit trail. Good for: OT and industrial environments, air-gapped networks, compliance-heavy deployments, managed security providers. Read more: Disk sink | Parquet sink | Forwarder sink
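A collect-and-forward node pairs a local disk sink with a forwarder. The sketch below shows the shape of such a config under assumed key names — the actual options live in the Disk sink and Forwarder sink references:

```toml
# Hypothetical collect-and-forward config.toml — key names are assumptions.

[sink.disk]
format = "parquet"           # local buffer and audit trail
path = "/var/lib/tell/data"

[sink.forwarder]
endpoint = "soc.example.com:4317"  # hypothetical upstream SOC / MSSP / Tell
retry = true                       # tolerate an intermittent uplink
```

Because writes land on disk first, the node keeps capturing data even when the upstream link is down, and the forwarder catches up once connectivity returns.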

Tell Cloud

Point your SDKs directly at Tell Cloud. No infrastructure to manage.
SDKs ──→ tell.rs ──→ managed storage + dashboards
Coming soon. Same SDKs, same data model, same query API — hosted by Tell.

Mixing topologies

These aren’t mutually exclusive. You can run a single node for your web app’s product analytics, a distributed setup for high-volume backend logs, and a collect-and-forward node in an air-gapped factory — all feeding into the same Tell Cloud or self-hosted collector.

What’s next

  • Quickstart — get a single node running in under 5 minutes
  • Self-hosting — production deployment guide
  • Scaling — growing beyond a single node