Generally, things that write tempfiles to /tmp while processing data and manage not to log anything useful, because they wedge the system before the code realises anything has gone wrong enough to log.
Yes, absolutely, "bad software, no cookie," but the usual culprit is some sort of vendor binary where the poor sod running the system has no control over that.
BSD systems generally clean out an on-disk /tmp during the normal boot process, yes. There are ways around this, but when I've been responsible for babysitting craptastic vendorware it's always been on Linux or Solaris.
Personally I've (after quite some grumbling about it) accepted /tmp being on tmpfs and just live with it. My current source of crankiness is people who don't configure their systems to write to syslog: if the box gets wedged by an I/O storm, systemd will shoot systemd-journald in the head, and then journald sometimes deletes all of your previous logs as it starts up.
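For anyone who wants that belt-and-braces setup, forwarding to a traditional syslog daemon is a small change in journald.conf; a minimal sketch, assuming rsyslog (or similar) is installed and reading from the socket journald forwards to:

```ini
# /etc/systemd/journald.conf
[Journal]
# Keep the journal on disk rather than only in /run (tmpfs)
Storage=persistent
# Also hand every message to the local syslog daemon,
# so logs survive even if the journal files get trashed
ForwardToSyslog=yes
```

Then `systemctl restart systemd-journald` (or reboot) to apply it; the plain-text files rsyslog writes under /var/log are readable even when journald refuses to open its own journal.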
One example that springs to mind was a vendor antivirus system that unpacked email attachments into /tmp - generally when it died on its arse the only way to figure out why was to dig into /tmp and look at what it had left, then try and infer backwards to the culprit email from there.
Yes, the problem isn't disk usage; the problem is that if journald's writes get too slow, a watchdog timeout causes systemd to assume it's crashed and shoot it in the head. That leaves the journal part-written, which means on restart the new journald process throws away the old journal as corrupt.
(this may have been fixed in the last couple of years, but it leaves me somewhat untrusting of it in terms of actually being able to read my logs)
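If you'd rather journald rode out an I/O storm than get killed mid-write, the watchdog interval can be raised (or disabled) with a drop-in; a sketch, assuming a systemd recent enough that the shipped unit sets WatchdogSec:

```ini
# /etc/systemd/system/systemd-journald.service.d/watchdog.conf
# (create via "systemctl edit systemd-journald")
[Service]
# Upstream default is around 3min; give slow storage more headroom.
# WatchdogSec=0 disables the watchdog entirely.
WatchdogSec=10min
```

The trade-off is that a genuinely hung journald now takes longer to be noticed and restarted, so this is a judgment call rather than a free win.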
For what it's worth, OpenBSD, which could be considered conservative, says this about /tmp [1]:
> Temporary files that are not preserved between system reboots. Periodically cleaned by daily(8).
So no one should expect those files to be stored permanently.
[1]: http://man.openbsd.org/hier