Kafka is great. It allows for the creation of real-time, high-throughput, low-latency data streams that are easily scalable. But it is also complex. Optimizing your Apache Kafka deployment can be a challenge because the distributed architecture has many layers, and many parameters can be tweaked within those layers.

For example: Normally, a high-throughput publish-subscribe (pub/sub) pattern with automated data redundancy is a good thing. But when your consumers struggle to keep up with your data stream, or when they miss messages because retention limits delete those messages before the consumers can read them, then work needs to be done to support the performance needs of the consuming applications.

Best Practices for Kafka Optimization

Kafka optimization is a broad topic that can be very deep and granular, but here are some key best practices to get you started.

1. Upgrade to the latest version of Kafka.

This might sound blindingly obvious, but you’d be surprised how many people run older versions of Kafka. A really simple Kafka optimization move is to upgrade to the latest version of the platform. Determine whether your consumers are running older versions of Kafka (0.10 or older); if they are, they should be upgraded immediately.

Older versions of Kafka (0.8.x and earlier) rely on Apache ZooKeeper to coordinate consumer groups; in newer versions, group coordination is handled by the brokers themselves. Staying on the outdated, ZooKeeper-based coordination can lead to long-running rebalances as well as rebalance algorithm failures.

2. Understand data throughput rates. 

Optimizing your Apache Kafka deployment is an exercise in optimizing the layers of the platform stack. Partitions are the storage layer upon which throughput performance is based. The data rate per partition is the average message size multiplied by the number of messages per second; put simply, it is the rate at which data travels through the partition. Desired throughput rates dictate the target architecture of the partitions.
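To make the arithmetic concrete, here is a small back-of-the-envelope sketch. The message size, per-partition rate, and throughput target are hypothetical numbers chosen only for illustration:

```java
// Hypothetical sizing sketch: estimate partition count from a target throughput.
public class PartitionSizing {
    public static void main(String[] args) {
        double avgMessageBytes = 1_024;     // assumed average message size (1 KB)
        double messagesPerSecond = 10_000;  // assumed per-partition ingest rate

        // Data rate per partition = average message size x messages per second
        double bytesPerSecPerPartition = avgMessageBytes * messagesPerSecond; // ~10 MB/s

        double targetBytesPerSec = 100 * 1_024 * 1_024; // desired topic throughput: 100 MB/s
        long partitions = (long) Math.ceil(targetBytesPerSec / bytesPerSecPerPartition);

        System.out.printf("Per-partition rate: %.1f MB/s, partitions needed: %d%n",
                bytesPerSecPerPartition / (1_024 * 1_024), partitions); // ~9.8 MB/s -> 11 partitions
    }
}
```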

3. Stick to random partitioning when writing to topics, unless architectural demands call for otherwise.

Solutions architects would prefer each partition to support similar amounts of data and similar throughput rates. In reality, data rates vary over time, as do the raw numbers of producers and consumers.

The performance challenge presented by this variability is the potential for consumer lag, that is, consumer read rates falling behind producer write rates. As Kafka environments scale, random partitioning is an effective way to ensure you don’t introduce artificial bottlenecks by unnecessarily attempting to apply static definitions to a moving performance target.

Partition leadership is usually the product of simple elections via metadata maintained in ZooKeeper. Leadership election does not, however, take into account the performance of the individual partitions. There are proprietary balancers that can be leveraged depending on your Kafka distribution, but short of such tooling, random partitioning provides the most hands-off path to balanced performance.

The takeaway? Stick to random partitioning when writing to topics, unless architectural requirements demand otherwise.
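In the Java producer, one low-effort way to get this behavior is simply to send records without a key: the default partitioner spreads keyless records across partitions (round-robin in older clients, sticky per-batch since Kafka 2.4) instead of hashing a key to a fixed partition. A minimal sketch, with the broker address and topic name as placeholders:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class RandomPartitioningProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A null key lets the default partitioner choose the partition,
            // spreading load instead of pinning records to one partition by key hash.
            producer.send(new ProducerRecord<>("events", null, "payload")); // placeholder topic
        }
    }
}
```

Keyed records, by contrast, always hash to the same partition, which is exactly what creates the hot partitions this tip is trying to avoid.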

4. Adjust consumer socket buffers to achieve high-speed ingest.

In newer Kafka versions (0.10.x and later), the relevant consumer parameter is receive.buffer.bytes, which defaults to 64 kB. In older versions (0.8.x), the parameter is socket.receive.buffer.bytes, which defaults to 100 kB.

What does this mean for Kafka optimization? For high-throughput environments, these default values are too small. This is especially true when the network’s bandwidth-delay product between the broker and the consumer is larger than that of a local area network (LAN).

If your network runs at 10 Gbps or higher with latencies of 1 millisecond or more, consider tuning your socket buffers to 8 or 16 MB. If memory is scarce, consider 1 MB.
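On the consumer side this is a one-line configuration change. A minimal sketch for a modern (0.10.x+) Java consumer follows; the broker address, group id, and topic name are placeholders. Alternatively, setting receive.buffer.bytes to -1 lets the operating system pick the buffer size.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.List;
import java.util.Properties;

public class TunedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ingest-group");            // placeholder group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // receive.buffer.bytes: raise the TCP receive buffer from the 64 kB default
        // to 8 MB for a high-bandwidth, higher-latency link. Use -1 for the OS default.
        props.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, 8 * 1024 * 1024);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // placeholder topic
            // poll loop omitted; records now arrive over a larger TCP receive window
        }
    }
}
```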

Explore More Ways to Optimize Kafka

Optimizing your Apache Kafka deployment is an ongoing job, but these four best practices should be a solid start. Check out our other resources, which discuss at length the best practices of Kafka when applied to specific areas of application development and data management.

The tips mentioned above are just the tip of the proverbial iceberg. Kafka is continuously becoming more popular with application developers, IT professionals, and data managers. And for good reason. Explore Kafka further, and learn how it can take your organization to the next level.

Already using Kafka? Monitor and improve its performance with Pepperdata Streaming Spotlight.