
Replying here instead of below because we hit the depth limit. WarpStream definitely isn't magical; it makes a very real trade-off around latency.

On the read side, the architecture is such that you pay for one GET request per 4 MiB of data produced, for each availability zone you run in. If you do the math on this, it is much cheaper than manually replicating data across zones and paying for interzone networking.
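A back-of-envelope sketch of that math (this is my illustration, not WarpStream's billing model; the AWS list prices below are assumptions and may be out of date):

```python
# Compare S3 GET fan-out cost vs. cross-zone replication for the same data.
# Prices are illustrative AWS list prices (assumptions, check current pricing).
S3_GET_PER_1000 = 0.0004   # USD per 1,000 GET requests
INTERZONE_PER_GB = 0.02    # USD per GB transferred ($0.01 each direction)

def s3_read_cost(produced_gib: float, zones: int = 3, chunk_mib: float = 4.0) -> float:
    """One GET per `chunk_mib` of produced data, in each availability zone."""
    gets = (produced_gib * 1024 / chunk_mib) * zones
    return gets / 1000 * S3_GET_PER_1000

def replication_cost(produced_gib: float, zones: int = 3) -> float:
    """Copying produced data to the other zones over interzone links."""
    return produced_gib * (zones - 1) * INTERZONE_PER_GB

print(round(s3_read_cost(1024), 2))      # ~0.31 USD for 1 TiB across 3 zones
print(round(replication_cost(1024), 2))  # ~40.96 USD for the same 1 TiB
```

For 1 TiB produced across three zones, the GET bill is cents while the interzone bill is tens of dollars, which is the gap the comment is pointing at.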

RE: deletes. Deleting files in S3 is free; it can just be a bit annoying to do, but the WarpStream agents manage that automatically. It's creating files that is expensive, but the WarpStream storage engine is designed to minimize this.
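A minimal illustration of the general technique (this is not WarpStream's actual storage engine, just a sketch of why batching keeps object-creation costs low): buffer many small records and flush them as one object, so a single PUT is amortized over thousands of writes.

```python
# Sketch: batching small records into large objects to minimize S3 PUTs.
class BatchingWriter:
    def __init__(self, flush_bytes: int = 4 * 1024 * 1024):
        self.flush_bytes = flush_bytes
        self.buffer = bytearray()
        self.put_count = 0  # each flush would be one S3 PUT

    def write(self, record: bytes) -> None:
        self.buffer.extend(record)
        if len(self.buffer) >= self.flush_bytes:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.put_count += 1  # stand-in for s3.put_object(...)
            self.buffer = bytearray()

w = BatchingWriter()
for _ in range(10_000):   # 10,000 x 1 KiB records
    w.write(b"x" * 1024)
w.flush()
print(w.put_count)        # 3 PUTs instead of 10,000
```

With 4 MiB flush thresholds, ten thousand 1 KiB writes collapse into three object creations, so the expensive operation scales with data volume rather than write count.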

I will do a future blog post on how we keep S3 GET costs minimal; it's difficult to explain in an HN comment on mobile. Feel free to shoot us an email at founders@warpstreamlabs.com or join our Slack if you'd like a more in-depth explanation later!



Very interesting trade-off! I was curious what you and Ryan were cooking post-DDOG. "Cost-effective serverless Kafka" is a very interesting play. And congrats on the public announcement for shipping Husky, finally. --Marc



