Hacker News | cjfd's comments

When I was a teenager, I read a book about assembly language for the commodore and implemented the game of life in a really simple way. I just used the text screen. To switch on a cell, I would put an asterisk ('*') in it. Then I could run my machine code program and it would evolve according to the rules of the game of life.
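The scheme translates almost directly into a few lines of modern code. Here is a minimal Python sketch (not the original 6502 assembly), treating the text screen as the only data store, with '*' marking a live cell:

```python
# Conway's Game of Life on a character "screen": the screen itself is the
# data store, with '*' for a live cell and ' ' for a dead one.

def step(screen):
    """Compute one generation; screen is a list of equal-length strings."""
    rows, cols = len(screen), len(screen[0])

    def live(r, c):
        # Cells off the edge of the screen count as dead.
        return 1 if 0 <= r < rows and 0 <= c < cols and screen[r][c] == '*' else 0

    out = []
    for r in range(rows):
        line = []
        for c in range(cols):
            # Count the eight neighbours.
            n = sum(live(r + dr, c + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            alive = screen[r][c] == '*'
            # Birth on 3 neighbours, survival on 2 or 3.
            line.append('*' if (n == 3 or (alive and n == 2)) else ' ')
        out.append(''.join(line))
    return out

# A blinker oscillates between a horizontal and a vertical bar:
blinker = ["     ",
           "     ",
           " *** ",
           "     ",
           "     "]
```

Repeatedly printing `step(...)` of the screen gives exactly the effect described: the asterisks evolve in place according to the rules.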

And who didn't do that! :)

You could also 4x the resolution by using half- and quarter-block characters from the top half of the character set (it'd be the PETSCII one in the C64 case).
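The 2x2 packing is easy to sketch. As an illustration in Python, here the Unicode quadrant characters stand in for the PETSCII quarter blocks (the actual PETSCII codes differ, so treat the character table as a stand-in):

```python
# Pack each 2x2 block of Life cells into a single character cell,
# quadrupling the effective resolution of a text screen.
# Key order: (upper-left, upper-right, lower-left, lower-right).
QUAD = {
    (0, 0, 0, 0): ' ', (1, 0, 0, 0): '▘', (0, 1, 0, 0): '▝',
    (0, 0, 1, 0): '▖', (0, 0, 0, 1): '▗', (1, 1, 0, 0): '▀',
    (0, 0, 1, 1): '▄', (1, 0, 1, 0): '▌', (0, 1, 0, 1): '▐',
    (1, 0, 0, 1): '▚', (0, 1, 1, 0): '▞', (1, 1, 1, 0): '▛',
    (1, 1, 0, 1): '▜', (1, 0, 1, 1): '▙', (0, 1, 1, 1): '▟',
    (1, 1, 1, 1): '█',
}

def render(cells):
    """cells: 2D list of 0/1 with even dimensions -> list of screen lines."""
    lines = []
    for r in range(0, len(cells), 2):
        line = []
        for c in range(0, len(cells[r]), 2):
            key = (cells[r][c], cells[r][c + 1],
                   cells[r + 1][c], cells[r + 1][c + 1])
            line.append(QUAD[key])
        lines.append(''.join(line))
    return lines
```

Each character cell now carries four Life cells, which is exactly why this was such a good fit for RAM-starved 8-bit machines.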


> And who didn't do that! :)

Exactly. It's even how I taught myself extremely basic Pascal -- getting my BASIC Life program running in Pascal. With asterisks.

I taught a friend at uni, who was a much better programmer than me, how the algorithm worked. He did a pixel-by-pixel version in machine code, but it was a bit slow on a ZX Spectrum.

So he did exactly the quarter-character-cell version you describe. I wrote the editor in BASIC, and he wrote a machine-code routine that kicked in when told and ran the generations. For extra fun he emitted some of the intermediate state to the border, so the border flashed stripes of colour as it calculated, so you could see it "thinking". Handy for static patterns -- you could see it hadn't crashed.

I've been considering doing a quarter-cell Mandelbrot for about 30 years now. Still haven't got round to it.


The answer to a lot of "wow, how did the 8-bit machine pull that off? It seems like that would eat a lot of RAM" is that the framebuffer is the data storage. You were literally looking at the primary data store itself: when a full-resolution framebuffer was a quarter of your addressable RAM (and an even larger fraction of your usable RAM, since you could never quite use all 64KB no matter how you mapped it), you needed to get as much bang for the buck out of that RAM as you could.

Ha, I remember doing this with my Apple //. I forget what I was doing, but realized if I could set a pixel and later get what color was drawn at that location I could use it as a big array. Didn't know about peek/poke yet. One of those core "computers are magic" memories.

When I got into retrocomputing a few years ago, I also did this. Works great with TRS-80 semigraphic characters. First, I wrote it in C with a Z80 C compiler. Then I wrote it again in assembly and it was much faster! Amazing!

Well, if you do not need to care about performance, everything can be extremely simple indeed. Let me show you a data structure in Coq/Rocq while switching off notations and displaying the low-level content.

Require Import String.

Definition hello: string := "Hello world!".

Print hello.

hello = String (Ascii.Ascii false false false true false false true false) (String (Ascii.Ascii true false true false false true true false) (String (Ascii.Ascii false false true true false true true false) (String (Ascii.Ascii false false true true false true true false) (String (Ascii.Ascii true true true true false true true false) (String (Ascii.Ascii false false false false false true false false) (String (Ascii.Ascii true true true false true true true false) (String (Ascii.Ascii true true true true false true true false) (String (Ascii.Ascii false true false false true true true false) (String (Ascii.Ascii false false true true false true true false) (String (Ascii.Ascii false false true false false true true false) (String (Ascii.Ascii true false false false false true false false) EmptyString))))))))))) : string
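The output is more readable once you know that the `Ascii.Ascii` constructor takes eight booleans, least-significant bit first. A small Python sketch to decode it (the bit lists below are copied from the Print output above):

```python
# Decode Coq's Ascii.Ascii representation: eight booleans, LSB first.

def decode(bits):
    """bits: list of 8 booleans, LSB first -> the character they encode."""
    return chr(sum(b << i for i, b in enumerate(bits)))

# First two characters from the Print output above:
# "false false false true false false true false" and
# "true false true false false true true false"
first = decode([False, False, False, True, False, False, True, False])
second = decode([True, False, True, False, False, True, True, False])
```

Run through the whole chain of `String` constructors this way and the printed term spells out "Hello world!" one byte at a time.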


In Lean, strings are packed arrays of bytes, encoded as UTF-8. Lean is very careful about performance; after all, a self-hosted system that can't generate fast code would not scale.

You know, you could just define the verified specs in Lean and, if performance is a problem, use the Lean spec to extract an interface and tests for a more performant language like Rust. You could, at least in theory, use Lean as an orchestrator of verified interfaces.

There are also some funny humorous pieces on this site.

The short summary of it being that these people are beyond terrible at giving names to things.

Programmers and engineers should never be allowed to name things.

I say that as a programmer and engineer.


"We suck at naming things" -- Bjarne Stroustrup, in a talk about SFINAE

On one side I agree. On the other side, I look at how marketing people name things and I think we're still better off.

Imagine if the next edition of GCC, released in 2026, was named GCC 2027. Then it was GCC One. Then GCC 720. Then GCC XE. Then just plain GCC. Then GCC Teams.


And then finally…GNU 720 AssistantDriver.

(Tip of the hat to Microsoft’s marketing teams.)


The Python community has the habit of giving short names to things.

Well, I think there is something to it. Computers were at some point newly invented so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real life mines at some point they get depleted and then the research becomes much less interesting unless you happen to be interested in niche topics. But, of course, the paper mill needs to keep running and so does the production of PhDs.


There are some services where it makes sense. E.g., submitting taxes to the government or logging into the banking website. Apart from that kind of service, no, I don't think I would want my identity or age verified on more or less any website.


The catch is that in both cases the same backend provider is most likely used -- Persona, for example -- and you have no choice about who will ID your face.


I mean, if you live in a country where the state will delegate ID verification to a creepy company instead of having that as an in-house capability, you have more pressing structural issues to deal with.


OK, let's do a poll. I'd like to see who uses what. Remember, it's not only countries; it's also private businesses like banks or lawyers.

And remember, it's like a ratchet: 99% of services might use in-house face ID, but it's enough for only one to leak your data.


Ha! You are concerned about the privacy aspects of IDs but you want me to list what authentication services I use for you? That's too funny to help out with :p


I meant to list the ID services that are used by your services, not the services themselves.

My data point is persona.


The article talks about 'software development will be democratized', but the current LLM hype is quite the opposite. The LLMs are owned by large companies and are quite impossible for any individual to train, if only because of energy costs. The situation where I am typing my code on my Linux machine is much more democratic.


Right, people misuse this term "democratized" all the time. Because it sounds nice. But it's incorrect.

Democracy is about governance, not access.

A "democratized" LLM would be one in which its users collectively made decisions about how it was managed. Or if the companies that owned LLMs were run democratically.


>Democracy is about governance, not access.

It can carry both meanings. The additional meanings of "democratize" to describe "more accessible" are documented in the Oxford and Merriam-Webster dictionaries:

https://www.encyclopedia.com/humanities/dictionaries-thesaur...

https://www.merriam-webster.com/dictionary/democratic#:~:tex...


With the consequence that disambiguation may be needed.


I've been wondering recently if there's some practical path forward for some sort of co-op based LLM training. Something which puts the power in the hands of the users somehow.


You may check Prime Intellect's prime-diloco and Nous Research's DisTrO


The claim isn't that the LLMs are democratized. The claim is that LLMs are causing software development to be democratized. As in, people who want software are more able to make it themselves rather than having to go ask the elites for some. As in, the elites in IT now have less power to govern what software other people can have.

(Or alternatively, it's getting harder to stamp out "shadow IT" and all the risks and headaches it causes.)


But the LLMs are quite the opposite: People should not bother with developing software, but ask the big LLM providers to do it for them instead.

In all aspects of the term, software is getting less democratized. But that is in line with a decades-long trend, where computers used to ship with BASIC installed and now you need a specialized IDE tool with a learning curve.

It used to be that you could dabble with HTML, but now you need to learn a few JavaScript frameworks just to modify existing code. You used to start a piece of software by running it; modern server software is a fragile jigsaw that is delivered to production in the cloud. The list goes on. The future we are being promised is that you ask your paid-for development agent to make the necessary changes you require and deliver them to production in the cloud.

Which is fine, in a way, but it shifts power to the professionals. Just as Google, Apple or Microsoft owns your identity and your data, and you pay to use it, they can also decide to deny access for any reason. They are private companies, after all, and it is their data.


If software development were democratized, then decisions that software developers make would be made democratically. On or off the job. On the job, the workplace would be run democratically, instead of as it is now, dictatorially. Or off the job, groups of engineers would be coming together to create governance and make collective decisions about the software they use, like the Debian project or the recent Nix governance. Neither is the case.

Building yourself a table using some new carbon fiber hammer isn't democracy. That's just consumerism.


Hard to claim that LLMs "democratize" software development when LLM companies can ban you from software development for any reason or no reason at all, and without recourse of any kind. The HN frontpage currently showcases an Antigravity ban that applied across Gemini, and there are few companies that provide affordable LLM services.

The actual elites greatly extended their control over software development, that's the opposite of democracy


This only remains true so long as open weight models lack significant utility.

Access to compilers was almost as controlled as access to LLMs prior to the GNU toolchain and Linux putting a C compiler and a unix (ish) machine in the hands of anyone who cared for one.


The problem is compute and memory. I think OpenAI bought RAM supply mainly to choke the ability of consumer hardware to run open weight models (that hit the memory bottleneck before other constraints). Now there's a shortage in other components as well. I don't see how local AI can compete in usefulness.


It is democratising from the perspective of non-programmers: they can now make their own tools.

What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to do open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.


Programming is probably the most democratized profession ever.

The problem was never access barriers, but the fact that people are too lazy to study even 200-300 pages on something as simple as Ruby on Rails.


I think there’s an actual barrier. I’ve seen it, especially since the (until recently) brisk market for programmers was sucking people out of traditional engineering.

It’s puzzling because programming seems so easy and fun. And even before LLMs, we had StackOverflow after all.

But for some reason a lot of people just hit a wall when they try to learn programming, and we don’t know why. The “CS 101” course at colleges has extremely high attrition.

A minor secondary effect may have been that if you were not a software developer, your boss didn’t want to see you programming.


This is literally the same for all professions; only in CS/SE is it, for some unknown fucking reason, considered “a problem”. Why isn’t there a “replace extremely expensive doctors/lawyers with AI” movement?


Because programmers made the LLMs, and they first applied it to the problems they know, so the examples of "replacing a programmer" are abundant. Then the hype train rolled in and now it's suddenly going to replace everything, just that software engineering is the low-hanging fruit since they already have "proof" that it works in that domain.

Hint: it actually doesn't work at real depth, and why not is fairly well explained in TFA: the hype always overestimates the depth of the field. So these advances do help to make easy things easy (in the case of LLMs because they have been trained on a billion examples of the easy stuff), but don't really end up helping with the hard things (because they only produce new things that weren't encompassed in their training by getting lucky, and because tedious things are different from hard things).


There will be, code was just a natural first start because it’s just text.


Overly optimistic people are already talking about using LLM-based AI as a way to provide healthcare access in underserved (i.e. rural) areas. There are already lots of studies going on for things like using AI to identify tumors and cancers in MRIs and other images.

There's national headlines every few months for lawyers getting in trouble for submitting LLM hallucinated citations in court, so lawyers are starting to do it to themselves as well.

It's early days yet, because unlike most CRUD apps, the consequences of hallucinations and outright bad calls in medicine and law are life-ending. Unless the bubble pops soon, it's coming though.


CS programs have high attrition rates because programming or "coding" has been touted as easy money for a couple of decades now. When people find out it's not so easy, they bail. Holding a few layers of abstraction in your head is not something that everyone does easily.

Just as keeping most of the structure of a 4-novel-long story in your head is not something everyone can do, hence why being a successful author is not something that everyone can do. Start telling everyone that being a novelist is easy money, though, and you'll see Comp 101 courses filling up and the attrition rate correspondingly go through the roof.


Yeah. There's also a barrier for professional surfing, soccer, cinema acting, submarine soldiering, cooking.

Lots of people have bought thousands of dollars' worth of cookbooks and still make food their dogs turn their noses up at in disgust.

Maybe there's some fucking talent requirement to do that stuff, even if just a little bit, to the despair of all Project/Product Manager types that secretly hate and despise software engineers.


Anyone could make their own tools before this as well. Just needed to learn something first.

Real democratization of programming is free access to compilers, SDKs, etc. AI coding does nothing to help that. In fact, it hurts it, because those non-programmers only get access to the AI tools on the terms of the AI companies. Sure, they could train their own models, but then we're back to having to learn things.


They can rent their own tools, more like.


No, they can make their own tools. They rent someone else's tools in the process of making their own tools.


Not entirely true. For instance, if I use LLMs to build an iOS app, I still need to pay Apple $100 to use my own app for an undetermined amount of time.

If I build a web app, I still need to pay for a domain, a server, egress.

We are just renting. Wouldn’t be surprised if in the future this gets even more depressing


They can continue renting to maintain the tools they make.


They _have to_ continue renting, because they didn't learn anything while "making" those tools.


One day people will not even be able to own computers anymore. They will be owned, controlled and rented out by corporate elites for limited purposes only. The personal computer will probably either cease to exist due to economic factors or be made illegal for citizens to own freely. We'll probably need licenses to operate one.

The mere concept of people "making their own tools" is just comical in this bleak timeline.


Terrible argument. They always could learn and DIY.


You have to have a knack for it, most people are not programmer types


I don't think it's about being a "type" so much as choosing what to specialize in.

I could learn plumbing skills and do the plumbing around my house. I've chosen not to.


There’s definitely a type. My wife is much smarter and harder working than me, near perfect SAT score, made it through an engineering degree at a much better school than I went to. Then did med school, residency, and fellowship.

She’s insanely quick. I once told her about one-way hashing and, before I was even halfway through the explanation -- before I had said a thing about what they were used for -- she stops me and says “oh, so that’s why websites can’t just send you your password when you forget it”.

At her job she has to call time of death for kids, tell people their kid has cancer, deal with people who literally want her dead, work shifts where she is the one ultimately responsible for the life and death of every patient that walks in the door, and work 7a-4p one day then 10p-7a the next.

She can do all that but she says that she hated her Matlab class in college more than anything else and she could absolutely never do my job because she doesn’t have it in her to bang her head against a wall chasing down a bug for an hour that turns out to be a typo.


Sounds like you are in a wonderful relationship, I’m glad!


... if they are privileged enough to be able to take time away from family and jobs.

The current crop of LLMs is subsidised enough to make this learning less expensive for those with little of both time and money. That's what's meant by democratised.


The people taking the lead in most of AI in America are bootlickers of fascism. So not much difference from China on a long enough timeline.


The US losing the plot doesn’t change the fact that the tech is fundamentally democratizing on a personal level.

If all the frontier models disappear into autocratic dark holes then yeah, we have a problem, but the fundamental freedom gain -- “individuals can make tools without knowing coding” -- isn’t going anywhere.


You'd bet that if LLMs were truly democratizing, these companies would be staying hundreds of feet away from them.

That they're charging in suggests it can be just as feudal as every other technology. It has no moral value. It's a tool; a butcher can swing an axe in the kitchen as much as on the battlefield.


It's "democratizing" in the same way Uber "democratized" taxis...


Taxis became more accessible and reliable, didn't they?


For a second, while the hook was sinking in the fish's lip. Now Uber is being "democratized" to billions in profits.


have you priced an Uber lately?


That's a great point, but you didn't make your Linux machine yourself. A large tech corp made it, and each of its parts. Some of us could probably make our own computers, but I don't think I'd be able to make one smaller than the house I live in. There's something to be said about large-scale automation, and it's not that it "democratizes" anything. Like you say: quite the opposite.


You are assuming democracy wasn't designed to crush the individual and reduce autonomy at all cost. How cute.


Sure, we can run the math on heat dissipation. The law of Stefan-Boltzmann is free and open source and its application is high-school-level physics. You talk about 50 MW. You are going to need a lot of surface area to radiate that away at anywhere close to reasonable temperatures.
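A quick back-of-envelope version of that calculation, assuming an ideal black body (emissivity 1) radiating into empty space with no incoming flux, and an assumed "reasonable" surface temperature of 350 K (about 77 °C) -- both simplifying assumptions, not figures from the thread:

```python
# Radiator sizing via the Stefan-Boltzmann law: P = sigma * A * T^4,
# so A = P / (sigma * T^4) for an ideal (emissivity 1) black body.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area(power_w, temp_k):
    """Surface area (m^2) needed to radiate power_w at temperature temp_k."""
    return power_w / (SIGMA * temp_k ** 4)

# 50 MW at an assumed 350 K surface: roughly 6e4 m^2, i.e. on the
# order of ten football fields of radiator surface.
area = radiator_area(50e6, 350.0)
```

Because of the T^4 dependence, letting the surface run hotter shrinks the area dramatically, which is exactly why "reasonable temperatures" is doing so much work in the sentence above.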


> The law of Stefan-Boltzmann is free and open source...

What do you mean by "open source"? Can we contribute changes to it?


E.W. Dijkstra: "Measuring programming progress by lines of code is like measuring aircraft building progress by weight".

Personally, I think much of the art of programming is to do as much as possible with as few lines of code as possible.


That’s Bill Gates’, not Dijkstra’s.


measuring aphorism worth by attribution is like architecture about drowning.


To be fair there was no value judgement there, just a correction in attribution.


There’s more to it than that, though. The solution using the fewest possible lines is often inscrutable and brittle. The art is in finding the right level of abstraction, one which can deliver the performance required while being sufficiently legible. Depending on the specific problem you have to weigh your solution accordingly; if performance is critical you must often forfeit legibility. The art is in recognising and dealing with trade-offs.


When building an airplane, one of the goals is to figure out what you can remove without affecting the stability of the plane. Performance (use of fuel) matters here too. So it's kind of the same thing?


Agree, and an even better solution sometimes: no code at all.


They say that prediction is difficult, especially when it is about the future. Unwise economic policies may be punished quickly or slowly, or might be revoked before being punished severely. The question is how much risk one is willing to take. Another matter is that of morality. Being invested in something means supporting its practices and being partly responsible for them.

