Sometimes, perhaps, but simple malloc/free are much faster and more predictable than what you see in GC'd languages, often mitigating the problem. On top of that, many critical paths can avoid even those by confining their memory allocation to the stack, something that effectively ceases to exist in GC'd languages.
> but simple malloc/free are much faster and more predictable than what you see in GC'd languages
More predictable, very likely. Much faster, this is simply not true. E.g., quotes from an article by Brian Goetz[1]:
> The common code path for new Object() in HotSpot 1.4.2 and later is approximately 10 machine instructions, whereas the best performing malloc implementations in C require on average between 60 and 100 instructions per call. ... "Garbage collection will never be as efficient as direct memory management." And, in a way, those statements are right -- dynamic memory management is not as fast -- it's often considerably faster.
> allocation to the stack, something that effectively ceases to exist in GC'd languages.
Not true either. With escape analysis, the HotSpot JVM can do stack allocation. The escape-analysis flag (-XX:+DoEscapeAnalysis) is actually on by default now.
That writeup is certainly interesting, mostly with regard to escape analysis, but it's entirely untrustworthy on the malloc comparison. First, it appears to base its assumptions about malloc implementations on a paper dating to 1993 (WTF?!); second, it treats "instructions" as a meaningful metric for judging performance.
malloc is not "much faster" than GC; it's often a performance bottleneck. That's why any C/C++ project worried about performance writes custom slab allocators, which a GC doesn't prevent you from doing. In fact, the GC frees you from thinking about memory management in most cases so you can focus your effort on optimizing the pieces that count. GC doesn't prevent stack allocation either; C# has explicit stack allocation and Java does it implicitly with escape analysis. The only advantage of malloc is its predictability. Hopefully Go will eventually get a pauseless collector to mitigate that concern too.