That's awesome to hear there's ongoing investment in the Firebase/GCP functions space! I'm in touch with many game developers who use or have considered Firebase Functions/GCP Functions for their backends/player data web services. Figured I'd pass along the primary gotchas/complaints I hear, since it sounds like many of them are being actively addressed in the work you mentioned:
- Time from deployment to availability feels very long compared to alternatives. A Firebase Functions deploy (even for 1-3 functions) can take 1-3 minutes, and I've heard it's much longer for more functions (and maybe region-dependent?)
- Cold starts are a major pain to deal with. Workarounds like minInstances are expensive, subvert the scale-to-$0 value proposition, don't solve latency in the scale-up case, and are charged per function. Some devs refactor their backend into a single function endpoint to minimize that cost, which seems to contradict the small-functions development style demonstrated in the docs.
- It'd be nice to have more serverless-friendly datastore primitives within the GCP ecosystem that (1) scale to a $0/mo base price, (2) handle high write throughput (including per entry), and (3) support serverless connections well. RTDB, Firestore, Datastore, Memorystore, Spanner, AlloyDB, etc. don't quite nail all three. Something based on Spanner, or an elastic sort of Memorystore that really scales down to $0, could be amazing.
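To make the minInstances trade-off above concrete, here's a rough sketch of the two patterns, assuming the gen-1 firebase-functions Node SDK (function and route names are illustrative, not from the original post):

```typescript
import * as functions from "firebase-functions";

// Pattern 1: one warm instance *per function*. With ten functions like
// this, you pay for ten idle instances just to avoid cold starts.
export const getProfile = functions
  .runWith({ minInstances: 1 })
  .https.onRequest((req, res) => {
    res.json({ ok: true });
  });

// Pattern 2: the workaround some devs use - collapse the backend into a
// single endpoint and route internally, so only one instance stays warm.
export const api = functions
  .runWith({ minInstances: 1 })
  .https.onRequest((req, res) => {
    switch (req.path) {
      case "/profile":
        res.json({ ok: true });
        break;
      default:
        res.status(404).send("not found");
    }
  });
```

The second pattern minimizes the per-function minInstances bill, but at the cost of the small, independently deployed functions the docs encourage.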
Some are migrating to Cloud Run to have concurrency per function, though it sounds like Cloud Functions v2 gets very close to that use case once it's available across all regions.
I love Firebase Functions since it really nails the use case of: (1) start from $0 and prototype your application quickly, (2) scale up to withstand practically infinite traffic at reasonable cost without changing any code, (3) seamlessly graduate to using more of a high-quality cloud platform, again without changing any code. That combination is rare, and other efforts haven't matched the overall dev UX. Outside of cold start spikes the platform is very stable and hands-off, and it has incredible logging/metrics/alerting available from the GCP side.
Separately, I worry the GCP ecosystem is missing a story around cheap/fast edge functions and integrations with next-gen frontend tooling, which often relies on many quick API calls. (It would be interesting to see something like a Firebase acquisition & integration targeting that world of tooling.)
If there were a scale-to-$0 edge datastore like PlanetScale/Upstash plus a scale-to-$0 edge functions offering, with the simplicity and GCP integration of Firebase, it would be awesome.
Thank you for the feedback. I can't comment on future roadmaps, but we expect v2 to dramatically reduce cold start problems with concurrency support.

WRT deploy times, I'm not a fan either, but this is the cost of standardizing on Docker, and I honestly don't see that decision being reversed soon. That's why we've instead decided to invest in an emulation suite. How has that worked out for you?

And I'm curious why the Realtime Database and Firestore don't meet your needs. The Realtime Database requires manual sharding, but tools for that have dramatically improved (e.g. in v2 functions, a single function can listen to all databases in a region). Firestore is built on Spanner; it prohibits queries that fall apart at scale, but it's a planet-scale database that you'll never have to shard.
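For readers following along, the v2 concurrency support mentioned here looks roughly like this (a sketch assuming the firebase-functions v2 HTTPS API; the option values are illustrative):

```typescript
import { onRequest } from "firebase-functions/v2/https";

// In gen 2, a single instance can serve many requests at once.
// A burst of 80 simultaneous requests can be absorbed by one warm
// instance instead of triggering ~80 cold starts, without needing
// minInstances to keep capacity pinned.
export const api = onRequest(
  { concurrency: 80, minInstances: 0 },
  (req, res) => {
    res.json({ ok: true });
  }
);
```

This is the same per-instance concurrency model Cloud Run uses, which is why v2 (built on Cloud Run) closes much of the gap for the migration case described above.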