It's hard to define the semantics. Here, you may have copied fewer than dst->len bytes into dst->data. Is the remaining data (up to dst->len) still valid? Is it garbage? Should you have adjusted dst->len?
When does it even make sense to partially mix byte-level representations of some data?
The buffer abstraction is necessary, but far from sufficient to build security-critical code. To do so, you also need to code in semantics somewhere, and buffers know zilch about the meaning of the data.
Check again. Hint: "abort()" is not "return" -- that code doesn't relieve you of having to know how large the buffer is, it just shoots your program in the head if you don't. Which is basically the same semantics Java et al. have when an out-of-bounds exception goes uncaught.
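To make the point concrete, here is a minimal sketch of the kind of checked copy being discussed. The struct and function names (`buffer`, `buffer_write`) are illustrative, not a real API; the point is that the bounds check aborts rather than returning an error:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct buffer {
    uint8_t *data;
    size_t   len;   /* size of the region data points at */
};

/* Checked copy: refuses to overrun, but "refuses" by killing the
 * process -- the caller still has to know the buffer is big enough. */
void buffer_write(struct buffer *dst, const uint8_t *src, size_t n)
{
    if (n > dst->len)
        abort();    /* Java-style: fail hard rather than corrupt memory */
    memcpy(dst->data, src, n);
}
```

So the overrun is prevented, but the "buffer too small" condition is not handled -- exactly the uncaught-exception semantics described above.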
Incidentally, this is another reason that the language can't save you from yourself. Program termination is sometimes totally unreasonable, e.g. in real-time settings. And it's unreasonable regardless of the language. When an airplane falls out of the sky, you don't get to declare victory just because the reason it did was an uncaught out of bounds exception rather than a buffer overrun. In that context the programmer must explicitly handle the condition that the buffer is too small, regardless of the language, because "abnormal program termination" is not an acceptable outcome.
Your code checks whether dst->len < n, but not whether dst->len > n; i.e., after copying 5 bytes to dst, there may be another 10 bytes of garbage in dst. That 'garbage' might as well be a password from a string that was just freed. A fix would be to set dst->len to n after this operation.
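The fix suggested above might look like this (names are illustrative; this assumes the single-length buffer struct from the earlier discussion):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct buffer {
    uint8_t *data;
    size_t   len;   /* currently doubles as capacity and data length */
};

/* Copy n bytes in and shrink len to n, so the stale tail -- which may
 * hold previously freed secrets -- is no longer part of the buffer's
 * "valid" region. */
void buffer_copy_truncate(struct buffer *dst, const uint8_t *src, size_t n)
{
    if (n > dst->len)
        abort();
    memcpy(dst->data, src, n);
    dst->len = n;   /* only the first n bytes are meaningful now */
}
```

Note the cost: after this call, len no longer tells you how much storage was allocated, which is the tension the following replies pick up on.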
OK, I see what you're getting at. There is a difference between the end of the buffer and the end of the data. But copying some bytes into a buffer doesn't imply that you intend to discard the rest of the buffer. It's quite common to want to replace a header or some other subsection of a buffer and have the rest of the buffer remain unmodified.
Moreover, the purpose of the above is to achieve the level of bounds checking that you get with the likes of Java. If you want to go beyond that then you need something more complicated -- maybe add separate variables to the buffer struct for buffer_len and data_len and then have different memcpy functions based on whether you want existing data in the buffer to be truncated. But there comes a point at which further complexity produces more confusion than safety.
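A sketch of the two-length variant mentioned above, with separate capacity and data-length fields and distinct operations for "replace the contents" versus "patch a subsection in place" (all names assumed for illustration):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct buffer {
    uint8_t *data;
    size_t   cap;   /* allocated size of data */
    size_t   len;   /* bytes of valid data, len <= cap */
};

/* Replace the buffer's contents entirely, truncating what was there. */
int buffer_set(struct buffer *b, const uint8_t *src, size_t n)
{
    if (n > b->cap)
        return -1;
    memcpy(b->data, src, n);
    b->len = n;
    return 0;
}

/* Patch bytes in place (e.g. rewrite a header), leaving the rest of
 * the existing data untouched. */
int buffer_patch(struct buffer *b, size_t off, const uint8_t *src, size_t n)
{
    if (off > b->len || n > b->len - off)
        return -1;
    memcpy(b->data + off, src, n);
    return 0;
}
```

Two fields and two copy functions already, to cover just these two intents -- which is the complexity-versus-safety trade-off the comment is pointing at.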
Ideally, you would have a parsing and validation routine which fills in a struct. Afterwards, you manipulate only the struct instead of raw byte representation.
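For instance, the parse-then-manipulate pattern might look like this for a hypothetical wire format (a 2-byte big-endian type followed by a 2-byte big-endian payload length -- the format and names are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical wire header: 2-byte BE type, 2-byte BE payload length. */
struct header {
    uint16_t type;
    uint16_t payload_len;
};

/* Parse and validate in one place; on success, callers work only with
 * the struct and never touch the raw bytes again. */
int parse_header(const uint8_t *buf, size_t len, struct header *out)
{
    if (len < 4)
        return -1;                  /* too short: reject before reading */
    out->type        = (uint16_t)((buf[0] << 8) | buf[1]);
    out->payload_len = (uint16_t)((buf[2] << 8) | buf[3]);
    if (out->payload_len > len - 4)
        return -1;                  /* claimed length exceeds the data */
    return 0;
}
```

All the byte-level bounds reasoning is concentrated in this one routine, which is exactly where the buffer abstraction earns its keep, as the next reply notes.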
> But there comes a point at which further complexity produces more confusion than safety.
A buffer is simply not the right abstraction for complex protocols, no matter how "complex" the operations you define on it are.
> Ideally, you would have a parsing and validation routine which fills in a struct.
The parsing and validation routine is where you need the buffer abstraction. If you make a mistake there then a copy function that won't let you read or write past the end of the buffer will mitigate the damage.