
> ...as well as increasing the Node.js heap size to 32Gb.

> ...also saw that the process’s heap size stayed fairly constant at around 1.2 Gb.

This is because 1.2 GB is the maximum allowed heap size in V8. Increasing the setting beyond this value has no effect.

> ...It’s unclear why Express.js chose not to use a constant time data structure like a map to store its handlers.

It is non-trivial (not possible?) to do this in O(1) for routes that use pattern matching, wildcards, etc. This optimization would only be possible for simple, static routes.
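For purely static routes, constant-time dispatch is straightforward; a minimal sketch (route names hypothetical), with dynamic routes left to a fallback scan:

```javascript
// Hypothetical sketch: O(1) dispatch for static routes only.
// Wildcard/parameterized routes would still need a pattern-matching fallback.
const staticRoutes = new Map([
  ['GET /users', () => 'list users'],
  ['GET /health', () => 'ok'],
]);

function dispatch(method, path) {
  const handler = staticRoutes.get(`${method} ${path}`);
  return handler ? handler() : null; // null -> fall through to pattern matching
}

console.log(dispatch('GET', '/health')); // 'ok'
```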




That seems like a pretty low limit to me... how are people getting around this when they need to handle >1.2 GB of data on Node?


Native code modules, I assume.


> It is non-trivial (not possible?) to do this in O(1) for routes that use matching / wildcards

I'd be impressed if they did it consistently in O(1) for static routes. I think they were looking for O(log(number of different routes)) instead of O(n).


If the paths have some kind of non-crazy regex leading up to what gets munched for path variables (e.g. /path1 versus /path2), then you could at least build a tree of maps, one per path segment, which would be roughly constant time for the most common cases.
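A toy sketch of that idea, assuming one Map per path-segment level with a separate slot for `:param` segments (all names hypothetical); lookup cost scales with the number of segments, not the number of registered routes:

```javascript
// "Tree of maps" router sketch: each node maps a literal segment to a child,
// with a single extra branch for parameterized (:name) segments.
function makeNode() {
  return { children: new Map(), param: null, handler: null };
}

function addRoute(root, path, handler) {
  let node = root;
  for (const seg of path.split('/').filter(Boolean)) {
    if (seg.startsWith(':')) {
      node.param = node.param || makeNode();
      node = node.param;
    } else {
      if (!node.children.has(seg)) node.children.set(seg, makeNode());
      node = node.children.get(seg);
    }
  }
  node.handler = handler;
}

function lookup(root, path) {
  let node = root;
  for (const seg of path.split('/').filter(Boolean)) {
    node = node.children.get(seg) || node.param; // literal match wins
    if (!node) return null;
  }
  return node.handler;
}

const root = makeNode();
addRoute(root, '/path1/:id', () => 'one');
addRoute(root, '/path2', () => 'two');
console.log(lookup(root, '/path1/42')()); // 'one'
```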


> roughly constant time for the most common cases.

For small values of n ;)

A tree of maps would be consistently O(log n), like any tree-based map. Even a hashtable eventually hashes to buckets, and bucket lookups are O(log n).


Not if you're using a cuckoo hashmap.

Honestly, I have no idea why people aren't taught cuckoo maps by default. They're simple, have true O(1) worst-case lookup, and support easy deletion. About the only hard part is proving the insertion-time bound.
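For the curious, the core of cuckoo hashing is small; a toy sketch (fixed capacity, string keys, no rehash-on-cycle, not production code). Each key can only ever live in one of two slots, which is where the worst-case O(1) lookup comes from:

```javascript
// Toy cuckoo hash table: two arrays, two hash functions.
class CuckooMap {
  constructor(capacity = 16) {
    this.cap = capacity;
    this.t1 = new Array(capacity).fill(null);
    this.t2 = new Array(capacity).fill(null);
  }
  // Two independent hash functions over string keys.
  h1(key) {
    let h = 0;
    for (const c of key) h = (h * 31 + c.charCodeAt(0)) >>> 0;
    return h % this.cap;
  }
  h2(key) {
    let h = 7;
    for (const c of key) h = (h * 131 + c.charCodeAt(0)) >>> 0;
    return h % this.cap;
  }
  // Worst-case O(1): at most two probes.
  get(key) {
    const a = this.t1[this.h1(key)];
    if (a && a.key === key) return a.val;
    const b = this.t2[this.h2(key)];
    if (b && b.key === key) return b.val;
    return undefined;
  }
  set(key, val) {
    // Update in place if the key is already present.
    const i = this.h1(key);
    if (this.t1[i] && this.t1[i].key === key) { this.t1[i].val = val; return; }
    const j = this.h2(key);
    if (this.t2[j] && this.t2[j].key === key) { this.t2[j].val = val; return; }
    // Otherwise insert, "kicking" displaced entries between the two tables.
    let entry = { key, val };
    let useFirst = true;
    for (let kicks = 0; kicks < 32; kicks++) {
      const table = useFirst ? this.t1 : this.t2;
      const idx = useFirst ? this.h1(entry.key) : this.h2(entry.key);
      const displaced = table[idx];
      table[idx] = entry;
      if (!displaced) return;
      entry = displaced;
      useFirst = !useFirst;
    }
    // A real implementation would rehash with new hash functions or grow here.
    throw new Error('insertion cycle detected');
  }
}

const m = new CuckooMap();
m.set('GET /users', 1);
m.set('GET /health', 2);
console.log(m.get('GET /health')); // 2
```

The hard-to-analyze part the parent mentions is exactly the kick loop: lookups are trivially bounded, but bounding the expected number of evictions (and the rehash probability) is where the analysis lives.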



