Client Configuration

This page explains how to configure your Apache Kafka client with WarpStream.


Don't forget to review our documentation for tuning your Kafka client for maximum performance with WarpStream once you're done. A few small changes in client configuration can result in 10-20x higher throughput when using WarpStream.
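
For a rough sense of what that guide covers, the changes are ordinary librdkafka-style producer settings such as batching and compression. The values below are illustrative placeholders only, not WarpStream's recommendations; consult the tuning documentation for the actual values, and merge overrides like these into the producer configuration shown in the example below.

// Illustrative producer overrides only -- see the tuning documentation for
// WarpStream's actual recommended keys and values.
tunedOverrides := map[string]kafka.ConfigValue{
	// Let the producer accumulate larger batches before sending a request.
	"linger.ms":  "100",
	"batch.size": "1000000",
	// Compress batches to reduce network and storage bytes.
	"compression.type": "lz4",
}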

WarpStream provides API compatibility with Apache Kafka, so you can connect your clients to the WarpStream Agents by setting the WarpStream Application Bootstrap URL (obtained from the WarpStream console) as the value of your Kafka client's bootstrap servers setting. For example, using the librdkafka-based confluent-kafka-go client in Go:

// Assumes the confluent-kafka-go "kafka" package (which wraps librdkafka)
// and the github.com/google/uuid package are imported.
var (
    // Not explicitly required, but will eliminate inter-zone networking
    // in multi-zone deployments. lookupAZ is a placeholder for however your
    // environment exposes its availability zone (see the Client IDs section
    // below).
    availabilityZone = lookupAZ()
    // Not explicitly required, but may help with debugging to isolate
    // individual clients in logs and telemetry.
    sessionID = uuid.New().String()
    // The string you would normally use as your client ID with regular
    // Apache Kafka.
    applicationID = "application-foo"
    clientID = fmt.Sprintf("%s,ws_si=%s,ws_az=%s", applicationID, sessionID, availabilityZone)
    // The hosted bootstrap URL from the "Connect" tab of the virtual cluster
    // view in the WarpStream console; use it instead of localhost:9092 when
    // you're not developing locally.
    bootstrapServer = "api-80ba097c-d4ef-4e0b-8e86-d05b80fee6ed.kafka.discoveryv2.prod-z.us-east-1.warpstream.com:9092"
)

producerConfig := map[string]kafka.ConfigValue{
	// Not developing locally? In K8s you should use the Agent
	// service from the Agent chart. If you're not using K8s, then
	// you can use the convenience hosted bootstrap URL (bootstrapServer
	// above).
	"bootstrap.servers":     "localhost:9092",
	"broker.address.family": "v4",
	"log.connection.close":  "false",
	"client.id":             clientID,
}

config := kafka.ConfigMap(producerConfig)
producer, err := kafka.NewProducer(&config)
if err != nil {
	return fmt.Errorf("error initializing kafka producer: %w", err)
}
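
Once the producer is constructed, producing records works exactly as it would against Apache Kafka. Below is a minimal sketch; the topic name "example-topic" is just a placeholder and is assumed to already exist:

topic := "example-topic" // placeholder topic name
err = producer.Produce(&kafka.Message{
	TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
	Value:          []byte("hello warpstream"),
}, nil)
if err != nil {
	return fmt.Errorf("error producing record: %w", err)
}
// Block for up to 15 seconds while outstanding deliveries complete.
producer.Flush(15 * 1000)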

Client IDs

If you just want to get up and running quickly, you can use a regular client ID and skip encoding your application's availability zone in the client ID. However, beware that you may incur higher costs due to inter-zone networking.

You can read more about the WarpStream service discovery system in our Service Discovery reference documentation, but if you want to take advantage of WarpStream's zone-aware service discovery system and achieve good load balancing, you must encode your application's availability zone in your Kafka client's client ID using the format in the code sample above.

To learn more about the WarpStream-specific client ID features (like ws_si and ws_az), check out our documentation on configuring Kafka Client ID features.

Follow this documentation to learn how to determine your application's availability zone in all major cloud environments and properly template your client ID.
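
As a concrete sketch of how that templating might look in the same Go client, assuming your deployment tooling injects the zone into the process environment (the NODE_AZ variable name and the warpstreamClientID helper below are illustrative assumptions, not part of WarpStream's API):

// Minimal sketch: build a WarpStream-aware client ID from an environment
// variable. NODE_AZ is an assumed variable name; see the documentation above
// for how to discover the zone in each cloud environment.
// Assumes "fmt", "os", and github.com/google/uuid are imported.
func warpstreamClientID(applicationID string) string {
	az := os.Getenv("NODE_AZ")
	if az == "" {
		// Fall back to a plain client ID; everything still works, but you
		// may incur inter-zone networking costs.
		return applicationID
	}
	sessionID := uuid.New().String()
	return fmt.Sprintf("%s,ws_si=%s,ws_az=%s", applicationID, sessionID, az)
}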
