It might make the exploits mildly more difficult in some cases, but adding a synthetic delay (whether randomized or artificially inflated to meet a certain high water mark) isn't likely to help.

https://security.stackexchange.com/a/96493/271113
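A toy simulation makes the objection concrete (hypothetical numbers: a 1-unit secret-dependent timing gap buried under up to 100 units of uniform random delay). Averaging enough samples still recovers the gap, because the delay only shifts and widens the distribution; it doesn't hide the difference in means:

```javascript
// Two secret-dependent code paths differ by 1 time unit; a uniform random
// delay of up to 100 units is layered on top as the "mitigation".
function observe(secretBit, nSamples) {
  let total = 0;
  for (let i = 0; i < nSamples; i++) {
    const baseTime = secretBit ? 101 : 100; // secret-dependent 1-unit gap
    const randomDelay = Math.random() * 100; // synthetic jitter
    total += baseTime + randomDelay;
  }
  return total / nSamples; // sample mean
}

// With a million samples each, the means separate cleanly even though the
// jitter is 100x larger than the signal:
const gap = observe(1, 1000000) - observe(0, 1000000);
console.log(gap); // close to 1
```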

For example, consider the case of a cache-timing leak (rather than the classical "string compare" one). You'd have to have a lot of control over the behavior of your operating environment to guarantee it doesn't leak bits of your secret from cache misses.

When you add a delay to your processing, unless the same tasks are being executed, I would expect power-analysis leakage and random jitter from the kernel's scheduler to reveal that fact. The analysis might be more costly, but the signal is still there.

Generally, I think this is a doomed line of reasoning.



Actually, the link you provide seems to support the parent comment's suggestion, rather than detract from it.

The previous comment was suggesting making sure that every code path takes the same amount of time, not adding a random delay (which doesn't work).

And while I agree that power-analysis attacks etc. are still going to apply, the overarching context here is just timing analysis.


The link I provided is about random delays being inferior to setting a high water mark, yes.

I'm not just echoing the argument made by the link, though. I'm adding to it.

I don't think the "cap runtime to a minimum value" strategy will actually help, due to how much jitter your cap measurements will experience from the normal operation of the machine.

If you filter the jitter out when measuring, you'll end up capping too low, so some values will exceed the cap. For a visualization, let's pretend that you capped the runtime at 80% of what it actually takes in the real world:

  // Simulate runtimes floored at 80% of the true maximum: anything faster
  // gets padded up to 0.8, but the tail above the cap still shows through.
  function biased() {
    return Math.max(0.8, Math.random());
  }
  let samples = [];
  for (let i = 0; i < 1000; i++) {
    samples.push(biased());
  }
  // Now plot `samples`
Alternatively, let's say you cap it sufficiently high that there's always some slack time at the end.
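For concreteness, that strategy might look something like this hypothetical sketch (CAP_MS is an assumed high-water mark taken from profiling):

```javascript
// Do the secret-dependent work, then burn the remaining slack so every
// call appears to take CAP_MS from the outside. The question is whether
// that idle tail is itself observable.
const CAP_MS = 50; // assumed high-water mark

function withCappedRuntime(work) {
  const deadline = Date.now() + CAP_MS;
  const result = work(); // secret-dependent duration
  while (Date.now() < deadline) {
    // busy-wait through the slack time
  }
  return result;
}
```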

Will the kernel switch away to another process on the same machine?

If so, will the time between "the kernel has switched to another process since we're really idling" to "the kernel has swapped back to our process" be measurable?

It's better to make sure your actual algorithm is actually constant-time, even if that means fighting with compilers and hardware vendors' decisions.
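In JavaScript terms, a hand-rolled sketch of that (illustrative only: in Node.js you'd reach for crypto.timingSafeEqual on Buffers, and even code like this can be undone by JIT and hardware behavior):

```javascript
// Compare two strings in time that depends only on their length, not on
// the position of the first mismatch: always walk the full string and
// accumulate differences with bitwise OR instead of returning early.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false; // length is assumed public
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}
```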


Cache and power need to be exploited locally, generally. Introducing random delays to raise the noise floor would work for network services, I believe.
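A rough back-of-envelope shows what "raise the noise floor" buys (assuming roughly Gaussian noise; z is an assumed detection threshold in standard deviations). To resolve a timing difference diff under noise of standard deviation sigma, an attacker averaging samples needs on the order of (z * sigma / diff)^2 of them, so injected jitter raises the cost quadratically rather than eliminating the signal:

```javascript
// Samples needed to resolve a timing difference `diffUs` microseconds wide
// under noise with standard deviation `sigmaUs` microseconds, at z standard
// deviations of confidence.
function samplesNeeded(diffUs, sigmaUs, z = 4) {
  return Math.ceil((z * sigmaUs / diffUs) ** 2);
}

// A 1 microsecond leak under 100 microseconds of injected jitter:
console.log(samplesNeeded(1, 100)); // 160000 -- tedious, not prohibitive
```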


> power need[s] to be exploited locally

Not in the presence of DVFS, it turns out: https://www.hertzbleed.com/hertzbleed.pdf


Cache and power are shared resources, not just timing observations. High-assurance security always advised physical separation as much as possible to avoid timing channels. So, you'd run them on different boards, flush the caches, or make power invisible to untrusted applications. They also used to modify the granularity of visible timing or use logical time to prevent the measurements from happening.

Recently, people have come up with partitioned caches to deal with this. I don't know if they exist in production. A simple strategy is turning off shared caches while running processes of different security levels on their own cores. Another is investing in multi-core and many-core architectures for this purpose.

Finally, many of us pushed for randomized execution or scheduling to throw off the timing of specific things. Combined with fine-grained processes (eg separation kernels), that should reduce what they can do a lot.


It depends.

AES cache-timing was broken over a network (but required, like, 2^28 samples).
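A rough sanity check on why that many samples works: averaging n observations shrinks the noise on the estimated mean by a factor of sqrt(n):

```javascript
// With 2^28 samples, jitter on the mean is attenuated by sqrt(2^28) = 2^14,
// which is how a tiny cache-timing signal can survive network noise.
const attenuation = Math.sqrt(2 ** 28);
console.log(attenuation); // 16384
```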

I wouldn't bet the farm on this line of thinking providing resilience. It might be just annoying enough for an attacker to not really bother. (Or maybe only if the target is, like, a software cryptocurrency wallet with enough value to loot if they're successful.)



