
Won't that be a problem for high-traffic topics? Kafka latency is usually in the single-digit milliseconds, and for a high-throughput topic a typical Java client instance can send thousands of messages per second. If the acknowledgement latency rises to 1000ms, the producer client would need many threads to handle the blocking calls. Either the producer has to scale out to multiple instances, or it risks crashing with out-of-memory errors as unacknowledged messages pile up.
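A quick back-of-the-envelope (Little's law: in-flight work = throughput × latency) illustrates the concern; the rates and batch size below are assumed for illustration, not taken from the thread:

```java
// Back-of-the-envelope: how much work must be outstanding to sustain a given
// send rate when each produce request takes ~1 s to be acknowledged?
// Little's law: in-flight = throughput x latency. All numbers are assumed.
public class InFlightEstimate {
    public static void main(String[] args) {
        double msgsPerSec = 5000.0;   // assumed producer send rate
        double ackLatencySec = 1.0;   // ~1000 ms acknowledgement latency
        int msgsPerBatch = 500;       // assumed records per produce batch

        double inFlightMsgs = msgsPerSec * ackLatencySec;
        double inFlightBatches = inFlightMsgs / msgsPerBatch;

        // Thousands of messages (here, 10 batches) must be outstanding at any
        // moment -- feasible with async sends, untenable with one blocking
        // call at a time.
        System.out.println((long) inFlightMsgs + " msgs, "
                + (long) inFlightBatches + " batches in flight");
    }
}
```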


(WarpStream cofounder)

Yeah, you have to produce in parallel and use batching, but it works well in practice. We've tested it up to 1 GiB/s of throughput without issue.
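For reference, the "parallel plus batching" approach maps onto standard Kafka producer configs; a minimal sketch, with values that are assumptions tuned for a high-latency backend rather than anything recommended in the thread (the Java client's `send()` is already asynchronous and returns a `Future`, so no per-message blocking thread is required):

```java
import java.util.Properties;

public class ProducerBatchingSketch {
    public static void main(String[] args) {
        // Hypothetical settings for a backend with ~1 s acknowledgement
        // latency; exact values depend on the workload.
        Properties props = new Properties();
        props.put("batch.size", "1048576");      // up to 1 MiB per batch
        props.put("linger.ms", "100");           // wait up to 100 ms to fill a batch
        props.put("buffer.memory", "134217728"); // 128 MiB of buffered records
        props.put("max.in.flight.requests.per.connection", "5"); // parallel batches
        props.put("acks", "all");

        // A new KafkaProducer<>(props) would pick these up; larger batches and
        // more in-flight requests amortize the per-request latency.
        System.out.println(props.getProperty("linger.ms"));
    }
}
```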



