Hacker Times | teh's comments

Another useful "Emergency exit" is CTRL+Z which stops the process and cannot be intercepted.

It's often faster than hitting CTRL+C and waiting for process cleanup, especially when the process holds many resources. Then you can do e.g. `kill -9 $(jobs -p)` to kill the stopped jobs.


All of the keyboard-driven terminal signals can be intercepted; catching INT (^C) for cleanup is just more common than the others. Only KILL and STOP cannot be caught.

^Z sends TSTP (not STOP, though they have the same default behavior) to suspend; some programs catch this to do terminal state cleanup before re-raising it to accept the suspension. Catching it to do full backout doesn't make as much sense because the program anticipates being resumed.

^\ sends QUIT, which normally causes a core dump and is rarely caught. If you have core dumps disabled (via ulimit -c 0 or other system configuration) then you can often use it as a harder version of ^C; this is how I would tend to get out of ‘sl’ in places where I found it unwantedly installed.
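The catch/no-catch distinction above can be demonstrated directly. A minimal Python sketch (my own illustration, assuming a Unix system): handlers for INT, TSTP and QUIT install fine, while the kernel refuses handlers for KILL and STOP.

```python
import signal

def handler(signum, frame):
    # Called if one of the catchable signals is delivered.
    print(f"caught {signal.Signals(signum).name}")

# INT (^C), TSTP (^Z) and QUIT (^\) can all be intercepted.
for sig in (signal.SIGINT, signal.SIGTSTP, signal.SIGQUIT):
    signal.signal(sig, handler)

# KILL and STOP cannot: installing a handler fails at the kernel level.
for sig in (signal.SIGKILL, signal.SIGSTOP):
    try:
        signal.signal(sig, handler)
    except OSError:
        print(f"{signal.Signals(sig).name} cannot be caught")
```

A program catching TSTP for terminal cleanup would, in its handler, restore the terminal state and then re-raise TSTP with the default disposition to actually suspend.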


Ctrl-Z pauses the process; it doesn't terminate it. I think of the z as in zombie: you can then run fg to bring it back from the paused state or, as you suggested, kill it for good.

For the most simple case of a single job, I use the job number (`[1]` in the example) with %-notation for the background jobs in kill (which is typically a shell builtin):

    $ cat
    ^Z[1] + Stopped                    cat
    $ kill %1

Super useful tool, but be aware that this reads potentially untrusted input (e.g. HTTP request logs) and is written in C++, so it's a possible attack vector. I use lnav where I trust the logs, but I do wish a safe implementation existed.

Memory safety doesn't mean it's safe. And C++ doesn't mean it's unsafe.

Browsers are in C++, do you not use them? Curl is in C, do you not use it? Kernel is C...


"Memory safe" means that there are no memory safety issues. One of the most critical areas targeted by exploits is just gone. And this in turn leads -- according to the numbers published by Google -- to a severe reduction of exploitable issues.

C++ means you cannot know whether code is safe or not. That does not mean it is unsafe, but assuming it is unsafe is the only sane way to handle this. Incidentally, this is exactly what browsers do: they typically require two out of these three to be true for any new piece of code: "written in a memory-safe language", "sandboxed" and "no untrusted inputs". This blocks C++ from some areas in a browser completely.


Chrome uses sandboxing and a lot of safe tooling (Wuffs, Rust) for untrusted data.

curl is heavily fuzzed and you still mostly control what you are downloading unless the target is compromised.

With logs the attacker controls what goes into your logs.

And you don't really need to look very hard; there are a fair number of very recent stack and heap overflows: https://github.com/tstack/lnav/issues?q=is%3Aissue%20heap%20...


There is precedent: with type checkers like pyright you can opt into specific checks, or pick a basic, standard, or strict setting, each expanding the set of checks done.
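For concreteness, those graded settings correspond to pyright's `typeCheckingMode` option. A minimal `pyrightconfig.json` sketch (individual diagnostics can still be toggled on top of the chosen mode):

```json
{
  "typeCheckingMode": "strict",
  "reportMissingImports": true
}
```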

How would dependencies work in this schism? E.g. if serde starts using named impls, do all dependencies have to use named impls?


I can't fully answer your question but I did once spend about a week porting plain internal configuration to cue, jsonnet, dhall and a few related tools (a few thousand lines).

I was initially most excited by cue but the novelty friction turned out to be too high. After presenting the various approaches the team agreed as well.

In the end we used jsonnet, which turned out to be a safe choice. It hasn't been a source of bugs, and no one has complained so far that its behaviour is difficult to understand.


I feel you (like many people) got burned by the steep learning curve. Empirically, some pretty high-powered companies use Nix successfully. It's of course always difficult to know the counterfactual (would they have been fine with Ubuntu?), but the power to get SBOMs, patch a dependency deep in the dependency stack, roll back entire server installs, etc. really helps these people scale.

nixpkgs is also the largest and most up-to-date package set (followed by Arch's), so there's clearly something in the technology that allows a loosely organised group of people to scale to that level.


NixOS has very limited usage, with few companies adopting it for critical or commercial tasks. It is more common in experimental niches.

One of the main issues with nixpkgs is that users have to rely on overlays to modify a package. This can lead to obscure errors, because if something fails in the original package or a Nix module, it's hard to pinpoint the problem. Additionally, the heavy use of symlinks in the directory hierarchy further complicates things, giving the impression that NixOS is a patched-together and poorly designed structure.

As someone who has tried Nix, uses NixOS, and created my own modular configuration, I made optimizations and wrote some modules to scratch my own itch. I realized I was wasting time trying to make one tool configure other tools. That’s essentially what NixOS does through Nix. Why complicate a Linux system when I can just write bash scripts and automate my tasks without hassle? Sure, they might say it’s reproducible, but it really isn’t. Several packages in NixOS can fail because a developer redefined a variable; this then affects another part of the module and misconfigures the upstream package. So, you end up struggling with something that should be simple and straightforward to diagnose.


I know it's not a proper measurement, but I can't remember the last time I missed something in the AUR; in my short time on NixOS, though, I missed two apps, plus one app that disappeared in a NixOS channel upgrade.


I feel the same way. Excited to see another attempt. But it's a C++ engine, so not really something I would want to expose to the internet.


I've looked into this but saw hugely variable throughput, sometimes as little as 20 MB/second. Even at full throughput, I think S3 single-key performance maxes out at ~130 MB/second. How did you get these huge S3 blobs into Lambda in a reasonable amount of time?


* With larger Lambdas you get more predictable performance; 2 GB RAM Lambdas should get you ~90 MB/s [0].

* Assuming you can parse faster than you read from S3 (true for most workloads?), that read throughput is your bottleneck.

* Set a target query time, e.g. 1 s. For queries to finish in 1 s at that rate, each record on S3 then has to be 90 MB or smaller.

* Partition your data in such a way that each record on S3 is smaller than 90 MB.

* Forgot to mention: you can also do parallel reads from S3; depending on your data format / parsing speed, that might be something to look into as well.

This is a somewhat simplified guide (e.g. for some workloads merging data takes time, and we're not including that here) but should be good enough to start with.
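The arithmetic behind the sizing steps above fits in a few lines. A sketch (the 90 MB/s figure is the assumption from [0], and `max_object_size_mb` is an illustrative helper, not any AWS API):

```python
def max_object_size_mb(read_mb_per_s: float, target_seconds: float,
                       parallel_reads: int = 1) -> float:
    """Largest S3 object (in MB) scannable within the time budget."""
    return read_mb_per_s * target_seconds * parallel_reads

# 2 GB Lambda at ~90 MB/s with a 1 s budget: objects must stay under ~90 MB.
print(max_object_size_mb(90, 1.0))      # 90.0
# Four parallel range reads relax the cap to ~360 MB, parsing speed permitting.
print(max_object_size_mb(90, 1.0, 4))   # 360.0
```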

[0] - https://bryson3gps.wordpress.com/2021/04/01/a-quick-look-at-...


This is a great book!

After reading it I found it much harder to enjoy movies showing bad security though (such as heists, anything nuclear, ...).

E.g. from the book I learned about the IAEA recommendations for safekeeping nuclear material [1], and it's pretty clear that smart people spent some time thinking about the various threats.

Anyway, rambling. It's a great and very entertaining book, go read it!

[1] https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1481_web.p...


I just spent some time implementing a lazy VM. Note also the push/enter vs. eval/apply implementation change in GHC described in [1].

[1] https://www.microsoft.com/en-us/research/publication/make-fa...


I think this is a common misunderstanding. The images are in the public domain. Nothing stops Getty (or you, or anyone) from selling them, even though you can just use them for free.

The value-add service that Getty offers is legal indemnification, i.e. they cover the legal costs if the image turns out to be copyrighted after all. To offer this service they spend some time and money upfront to research images' copyright status.

Whether you think that's good value for money is up to you.


> they spend some time and money upfront to research images' copyright status.

From the discussion a few days ago, that doesn't seem to be the case. It seems more like they just gamble on not getting caught most of the time. https://hackertimes.com/item?id=22340547


The other issue brought up recently is when they try to enforce their licensing of public-domain images, which is a lot more shady. Selling you a licence, sure, why not. But complaining that you're using a public-domain image without Getty's licence? Threatening legal action over the same?

There may be a lot of value to a lot of their portfolio. But there are some warty rough edges too.


I don't misunderstand it at all. I am aware it's legal. I just think that Getty should be completely transparent about the copyright status, instead of granting a restricted licence to use something they don't own the rights to grant in the first place.


If they really do indemnify you, it's actually a pretty huge benefit. It's pretty easy to use content that is 'royalty free' but then get sued later on when you find out it actually wasn't.

