Hacker News | new | past | comments | ask | show | jobs | submit | anthk's comments

Plan 9 has no terminals. Well, it actually has one in 9front, vt(1), but as a tool for talking to Unix systems (e.g. over ssh), not for interacting with 9front software. vt(1) in 9front is about as 'native' as HyperTerminal in Windows.

We have had namespaces from day one. Proper namespaces.

The Unix way died with Plan 9/9front, and there are no teletypes, period. Just windows with shells running inside them like any other program. You can run a browser under window(1) instead of rc(1), which is the shell.

Hey 'software engineer', how much of an LLM's output is actually reproducible, compared with that of a calculator or any programming language given the same input in different sessions?

A lot, compared with a human? I bet the LLM given the same prompt will write the same code as before more often than I would (given I don’t remember what I wrote in the past).

Why are you so concerned about the LLM producing the exact same code across different sessions? Seems like a really weird thing to focus on. Why aren't you focused on things like security, maintainability, UI/UX, performance?

Agreed. It's not like humans can produce the same output given the same input for anything more than trivial inputs.

I'd argue that it's actually a benefit; I like that I can do several generations and compare them and pick the best result. HP, for example, used to do this with software teams, and that's how we got Rocky Mountain BASIC (AIUI the competing team was East Coast BASIC).


Comp-sci people like repeatability when they want it and true randomness when that's what's called for. Things in between are rarely desired.

In computing, things are much more useful when they behave in predictable ways. Even AI, many (most?) would argue.
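The repeatability-vs-randomness distinction is easy to show with a seeded PRNG. A minimal Python sketch (standard library only; my own illustration, not from the thread):

```python
import os
import random

# Repeatable: same seed yields the same sequence in any session
random.seed(42)
first = [random.randint(0, 9) for _ in range(5)]
random.seed(42)
second = [random.randint(0, 9) for _ in range(5)]
assert first == second  # fully deterministic

# True randomness: OS entropy, different on every run by design
print(os.urandom(4).hex())
```

LLM sampling at nonzero temperature sits in the awkward middle: driven by a PRNG, yet rarely seeded in a way the user can replay.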


Computer science and software development are hardly related

Not really related to this 'discussion' but this is an interesting problem in the AI space. It's essentially a well understood problem in unreliable distributed systems - if you have a series of steps that might not respond with the same answer every time (because one might fail usually) then how do you get to a useful and reliable outcome? I've been experimenting with running a prompt multiple times and having an agent diff the output to find parts that some runs missed, or having it vote on which run resulted in the best response, with a modicum of success. If you're concerned about having another layer of AI in there then getting the agents to return some structured output that you can just run through a deterministic function is an alternative.
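The "vote on which run is best" step doesn't need another layer of AI if each run emits canonical structured output. A sketch of a plain majority vote in Python (the `best_of_n` helper and the JSON-ish strings are my own hypothetical names, not from the comment):

```python
from collections import Counter

def best_of_n(outputs: list[str]) -> str:
    """Majority vote over N runs of the same prompt.
    Assumes each run was normalized to a canonical string
    (e.g. JSON with sorted keys) so equal answers compare equal."""
    winner, _count = Counter(outputs).most_common(1)[0]
    return winner

runs = ['{"answer": 42}', '{"answer": 42}', '{"answer": 41}']
print(best_of_n(runs))  # → {"answer": 42}
```

This is the deterministic-reduction variant; the diff-and-merge variant trades that simplicity for recovering details that only some runs produced.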

Non-determinism is a problem that you can mitigate to some extent with a bit of effort, and is important if your AI is running without a human-in-the-loop step. If you're there prompting it though then it doesn't actually matter. If you don't get a good result just try again.


Don’t know if this is an annoying response… but how about just going through the code and checking and grading the quality yourself?

I could, but the end goal is to scale this to 100x what I can do myself, and there isn't time to review all those changes. By attempting to solve the problem while it's tiny and I can still keep it in my head, I'll end up building something that works at scale.

Maybe. The point is that this is all new, and looking forward I think it's worth figuring this stuff out early.


It is under 9front. There are no terminals; you run windows with shells in them.

DOSBox-X might fake it well enough.

I encountered this and hadn’t realized that DOSBox’s main goal was to play games, and that lots of “non-game things” didn’t work. DOSBox-X covered some of them; I ended up running DOS 5 in VMware.

Metal Slug and Garou looked fine-ish with 25% scanlines on LCD screens and 50% on PC CRTs.

My first exposure to Metal Slug was actually in regular emulators, and I never used the scanline filters, so now when I use the scanline filters in Metal Slug they feel..."wrong". In my mind, Metal Slug is supposed to have really sharp, chunky pixels.

Not my case; I'm old enough to have played it in the mid-to-late '90s in both bars and arcade rooms.

And that's the problem with current pixel-art artists: they have no idea what actual pixel art looked like. Hint: look at Garou with at least scanlines (or maybe a bilinear filter) enabled. That's close to how Garou was meant to look on a CRT, far closer than raw pixels.


I need to play Garou: Mark of the Wolves again, haven't touched that one in years. I believe that's the one that has a character named "Butt".

I've played a lot of Neo Geo games, and I even owned a full MVS machine for a while with its own CRT (mostly playing KOF 99), but I guess the scanlines never did much for me. I grew up playing the SNES and PlayStation and N64, but I almost equally grew up with emulators, so I guess I'm just used to the raw digital signal being displayed.


TBH pixel art on CRTs looked distinct, a bit smoother than on LCDs. With just slight scanlines you could play the games well enough.

I mainly post on Usenet and IRC, and download PD movies (seriously) and books. I don't pirate any more, because even pirated current media is somehow free advertising for these people.

Between Gutenberg, PD comics from the golden era (and pulp sci-fi), noir movies, old weird science/fantasy series in B/W and whatnot, I'm pretty much covered. Ironically, most current sci-fi media can be traced back to... Bradbury novels, PKD's paranoia and some Weird Science comics.

Once 1984 gets into PD, that's it. It is in Canada, but you can read it online as long as you don't download or share it:

https://gutenbergcanada.ca/ebooks/ebooks/orwellg-nineteeneig...


Salvage old free-as-in-freedom distros. Learn about i2pd and tunneling Usenet/IRC and email over it (even cool online Nethack/Wesnoth/FreeCiv gameplay over it; any turn-based libre game will work).

There are some Usenet servers (text content only, no binaries; all the illegal crap is cut out by design) reachable over I2P. By design, enforcing any cross-pond law on them is impossible.

Learn about NNCP in order to tunnel messages over it, really useful for asynchronous connections such as Email and Usenet: https://nncpgo.org

Also, learn to connect to a pubnix and to use Usenet/IRC/Email/Mastodon services (tut is a TUI Mastodon client) from remote servers. Make their own law obsolete across the world. Learn Mutt and GPG too; it's about 20 minutes of your life, and for basic email a simple text editor like Nano, Mg or Mcedit would suffice to compose a message.
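To make the Mutt part concrete, a minimal ~/.muttrc sketch; the addresses and paths are placeholders of my own, not from the comment:

```
# who you are
set realname = "Your Name"
set from = "you@example.invalid"
# hand outgoing mail to the local MTA
set sendmail = "/usr/sbin/sendmail -oem -oi"
# local mail folders
set folder = ~/Mail
set spoolfile = +inbox
set record = +sent
# sign outgoing mail with GPG via GPGME
set crypt_use_gpgme = yes
set crypt_autosign = yes
# compose in your editor of choice (Nano, Mg, Mcedit...)
set editor = "nano"
```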

Try free Bitlbee servers over IRC too; these can be reached even from DOS IRC clients and bridge to modern services such as Jabber, Steam chat and even Discord (join the &bitlbee channel once you've connected to a public Bitlbee server, there are several, and type 'plugins' to list the chat systems available on that service). Thus any age-verification bullshit is by design unenforceable against FreeDOS without breaking network drivers and TCP/IP stacks running as TSRs and whatnot. Ditto for old Amiga, RISC OS and similar old releases which are unsupported. And banning retro computing would make the several civil rights unions sue the state (and the judges) like crazy for huge amounts of money. Maybe even Meta too, as the main lobby instigator.

Claim your freedoms back.


With the dawn of this bill I am finally building out my airgapped network.

I’ll be passing messages to and from the former internet using NNCP bundles. I’m planning to work on some interesting solutions for async communications over Nostr, with some alternate paths through radio for emergencies. Finally looking into steganography as well.

Hope to see you all there.


Or Algol 68, which is making a comeback.

Or even ESPOL and its evolution, NEWP, which never went away; it's only available to Unisys customers who treat security as the top deployment priority.

I wish more people knew about the Burroughs Large Systems[0] machines. I haven't written any code for them, but I got turned on to them by a financial customer who ran a ClearPath Series A MCP system (and later one of the NT-based ClearPath machines with the SCAMP processor on a card) back in the late 90s, and later by a fellow contractor who did ALGOL programming for Unisys in the mid-70s and early 80s. It seems like an architecture with an uncompromising attitude toward security, and an utterly parallel universe to what the rest of the industry does (except for, perhaps, the IBM AS/400, at least in the sense of being uncompromising on design ideals).

[0] https://en.wikipedia.org/wiki/Burroughs_Large_Systems


Yes, IBM i and z/OS are the other survivors.
