AI art enjoyers and missing the point of art: name a better duo.

No one has ever claimed AI cannot imitate a Monet, but however good the imitation, it still isn't art, any more than a Xerox of a painting is art. This is exactly why most people feel cheated after discovering that what they thought was a work of human ingenuity is just a fake, a simulacrum of one. The creation of art, arguably the most human of instincts, cannot be separated from the emotions and effort that went into it.

All this proves is that most people cannot tell if that picture is a Monet or not.


> All this proves is that most people cannot tell if that picture is a Monet or not.

It proves that people don't actually know what they like about "art" or even why they think some art is good, and some is bad.

These people criticized and trashed a widely regarded, famous painting because they were told that it was a cheap imitation.

If the AI generated a convincing imitation and the Met hung it on their walls, I guarantee these same people would celebrate it just the same, because they'd been told it was real.


> It proves that people don't actually know what they like about "art" or even why they think some art is good, and some is bad.

That's because those are famously difficult questions to answer.


> All this proves is that most people cannot tell if that picture is a Monet or not.

It goes beyond that. It proves that many people have an inherent bias against AI itself that's unrelated to whatever it generates. "This was made by AI, therefore it's bad in every way".


That's because, when dealing with art, the "why" something was made can be as important as the "what".

... I mean, yes? People object to AI art (and generative AI in general) on ethical grounds, not just aesthetic ones. This is something anti-AI people are quite explicit about.

Exactly. I'd rather appreciate a bad piece of art made in earnest by a person.

That's precisely the difference between art and a commodity.


That's fair, but a lot of the attacks on AI art specifically point to perceived technical and compositional flaws. Heck, you still see people making "mangled fingers" jokes, and that hasn't been a thing in frontier models for a couple years now. Plus, a lot of the stylistic and "lacks creativity" critiques come from people churning out images with basic prompts on default settings; a modicum of effort makes it much more difficult to distinguish.

Good points, but consider what this post does prove: people’s arguments against AI art are shallow; they often attack the artifacts themselves instead of making your deeper argument.

I remember an old episode of Doctor Who where the Doctor scoffs at a postcard with the Mona Lisa on it and derides soulless "art made by computers."

As a digital artist, of course I rolled my eyes at the time, but these days I just keep thinking about that storyline more and more.

We've basically transitioned to a world where digital art is almost the default, but I think the world is going to value physical art much more highly in the coming years.


This is as good a place as any to ask, since last time I didn't get an answer: has there ever been a serious Linux exploit from manipulating/predicting a bad PRNG? Apart from the Debian SSH key generation fiasco from years ago, of course.

Having a good entropy source makes mathematical sense, and you want something a bit more "random" than a dice roll, but I wonder at which point it becomes security theatre.

Of all the possible avenues for exploiting a modern OS, I figure kernel PRNG prediction to be very, very far down the list of things to try.


It’s both hard to attack and a heavily audited system with a lot of attention paid to it.

That being said, see [1] from 2012. The challenge with security is that structural weaknesses can take a long time to be discovered, but once they are, it’s catastrophic. Modern Linux finally switched to a CSPRNG with a proper construction and relies less on the numerology of entropy estimation it had been using (i.e. real security instead of theater). RDRAND has also been there for a long time on the x86 side, which is useful because even if it’s insecure, it gets mixed with other entropy sources, like instruction execution time and scheduling jitter, to protect standalone servers and IoT devices.

Of course, you hit the nail on the head in terms of the challenge of distinguishing security theater, because you won’t know if the hardening is useful until there’s a problem. But there are enough knowledgeable people on it that it’s less security theater than it might seem if you know what’s going on.

[1] https://www.usenix.org/system/files/conference/usenixsecurit...


30 years ago BSDs already had a non-blocking /dev/random (there was no difference from /dev/urandom). OpenBSD especially wouldn’t have shipped something known to be insecure. Blocking random probably caused more issues (DoS, random hangs, etc.) than a non-blocking CSPRNG would have.

Linux did /dev/random first, so naturally it had the oldest design for a few years, without the security expert scrutiny and experience, which the other OSes had for their implementations.

OpenBSD didn't exist yet when /dev/random and /dev/urandom were created for Linux.


It's easy to dunk on Linux, but that would be misremembering history.

OpenBSD was using an insecure PRNG before 2014 when they upgraded to ChaCha20 to replace RC4. Yarrow came about in 1999 with Fortuna following in 2003 as a more secure version. Apple switched to Fortuna in 2020 while Linux switched to ChaCha20 in 2016.

Windows is possibly in the strongest position, having used SHA-1 prior to Vista, though the specific details are largely unknown. There were known weaknesses in Windows 2000, which used RC4 (the same thing OpenBSD was using until 2014). They migrated to AES-CTR-DRBG by 2019 at the latest.

So 30 years ago, that's 1996, and everyone was running some form of insecure PRNG. 20 years ago, in the early 2000s, FreeBSD was probably the first to start trying to secure theirs, but the implementation was quite weak (Yarrow was a good step, but it was weak). It took until about the mid-2010s for everyone to migrate, and Linux was neither the first nor the last.

The strongest thing I'll say is that Theodore Ts'o's objections to Yarrow have not stood the test of time (Yarrow itself was weak, but the argument against a CSPRNG in favor of entropy numerology was incorrect). Linux has similarly made some troubling statements when it comes to security/cryptography. I think those cultural things are what give a certain impression, even though technically Linux does seem to have ultimately navigated to a decent position overall.


/dev/[u]random is actually a CSPRNG. It uses a cryptographic hash function to mix in every drop of randomness accessible to the kernel. Predicting its output without compromising the kernel entails predicting all the randomness that went into it; past a certain point you are better off brute-forcing the internal state directly, and that's intractable.

The greatest danger is right after boot, when it's possible the kernel hasn't had enough randomness to mix in yet. That's not as much of an issue on modern systems.
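To make the mixing idea concrete, here's a toy sketch of my own (not the kernel's actual construction, which as I understand it uses BLAKE2s for input mixing and ChaCha20 for output; the one-way property is the same):

    import hashlib

    class ToyPool:
        """Toy entropy pool: absorb events by hashing, emit via a one-way step."""

        def __init__(self):
            self.state = b"\x00" * 32

        def mix(self, event: bytes) -> None:
            # Fold new randomness into the state; recovering the previous
            # state from the new one means inverting SHA-256.
            self.state = hashlib.sha256(self.state + event).digest()

        def read(self, n: int) -> bytes:
            out = b""
            counter = 0
            while len(out) < n:
                out += hashlib.sha256(self.state + counter.to_bytes(8, "little")).digest()
                counter += 1
            # Ratchet the state forward so a later compromise doesn't
            # reveal past outputs.
            self.state = hashlib.sha256(self.state + b"ratchet").digest()
            return out[:n]

    pool = ToyPool()
    pool.mix(b"interrupt timestamp: 1718000001")  # every drop of randomness goes in
    pool.mix(b"disk seek latency: 137us")
    print(pool.read(16).hex())

Predicting read() output means either knowing every event ever mixed in or inverting the hash, which is the parent's point.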


Which is why an entropy seed file on disk gets updated periodically and fed back in on reboot as extra seed material.

Cloned/snapshotted VMs might have entropy problems, but a proper VM manager will also inject entropy into the VM on boot.
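The seed-file dance itself is simple; sketched here in the same toy terms as the pool upthread (the path and helper names are made up; in real life this is e.g. systemd-random-seed writing /var/lib/systemd/random-seed):

    SEED_FILE = "/var/lib/toy-random-seed"  # hypothetical path

    def save_seed(pool: "ToyPool", n: int = 512) -> None:
        # Periodically, and at clean shutdown, persist fresh pool output.
        with open(SEED_FILE, "wb") as f:
            f.write(pool.read(n))

    def load_seed(pool: "ToyPool") -> None:
        # At boot, mix the stored bytes back in as *extra* seed material.
        # Mixing can't hurt even if the file was duplicated along with a
        # VM image (the cloning hazard above); it just can't be trusted
        # as the only source in that case.
        try:
            with open(SEED_FILE, "rb") as f:
                pool.mix(f.read())
        except FileNotFoundError:
            pass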


I think this one is among the most significant findings: https://factorable.net/

I also believe there were some Android ASLR issues based on the same weakness (i.e., low early boot-time entropy).

But this is all quite old, and there've been massive improvements. Basically, "don't use a very old Linux kernel" is your mitigation for these issues.


Some of the paranoia has been proven correct. For example, both Intel and AMD have had RDRAND bugs, so not relying on it as the sole source was the correct choice.

There was a Bitcoin key generation flaw on Android, and AFAIK people lost money.

I don't think anything a computer can do is more random than a dice roll.

Assuming you are talking about real physical dice and not an imaginary function that generates perfectly random die rolls.

They are actually pretty poor random number generators. For starters, dice are chaotic, not random: the outcome is determined entirely by the initial conditions. For humans rolling dice, the space of initial conditions can become surprisingly constrained, especially if the human wants to achieve specific outcomes.


> For starters, dice are chaotic, not random

Unless you're doing something at the quantum level (maybe) this is true of every random number generator.


You can analyze it much like you'd analyze a password. If you construct a password from four words taken from a list of 1024 words, that's 40 bits of entropy. On average, a brute force attacker would have to try 2^39 (half the possibilities) random passwords before cracking your account. You can then apply that number to the time/money required for one attempt, and see if it's sufficiently secure for your tastes. If the answer comes back as 10 minutes, maybe it's not good enough. If it's 10 quadrillion years, you're probably OK.
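Back-of-the-envelope version of that arithmetic (the guess rate is just an assumption to plug in):

    import math

    # Four words from a 1024-word list: 1024**4 equally likely passwords.
    possibilities = 1024 ** 4
    bits = math.log2(possibilities)      # 40.0 bits of entropy
    avg_tries = possibilities // 2       # 2**39 expected guesses

    rate = 1_000_000                     # assumed: one million guesses/second
    days = avg_tries / rate / 86_400
    print(f"{bits:.0f} bits -> ~{days:.0f} days at {rate:,} guesses/s")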

If you have a bad PRNG, you should be able to quantify it in terms of bits. The Debian bug resulted in 15 bits of randomness, since all inputs to the PRNG were erased except for the PID, which was 15 bits at the time.

Another real-world example, albeit not Linux. I once worked on a program that had the option of encrypting save files. The encryption was custom (not done by me!) and had a bit of an issue. The encryption itself was not bad, but the save file's master encryption key was generated from the current time. This reduced the number of bits of randomness to well within brute-force range, especially if you could guess at roughly when the key was created. This was convenient for users who had lost their passwords, but somewhat less convenient for users who wanted to actually protect their data.
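A sketch of the attack shape (all names here are made up; I'm assuming second-granularity timestamps and some way to recognize a correct key, e.g. a known file header):

    import hashlib

    def key_from_timestamp(ts: int) -> bytes:
        # Stand-in for the flawed scheme: the master key depends only on
        # the (second-granularity) creation time.
        return hashlib.sha256(str(ts).encode()).digest()

    def crack(looks_plausible, approx_ts: int, window: int = 30 * 86_400):
        # Guess the creation time to within a month either side: about 5M
        # candidates (~22 bits), trivially enumerable.
        for ts in range(approx_ts - window, approx_ts + window):
            key = key_from_timestamp(ts)
            if looks_plausible(key):  # e.g. does decrypting yield a sane header?
                return ts, key
        return None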

An attacker isn't going to spontaneously try breaking your PRNG, but if you do have an issue, it's a real concern. It'll be far down the list of things to try just because any modern system will hopefully have very good randomness.


I never truly appreciated Marx and his theory of alienation until working for 18 months at a big co. on two projects, the first of which was canned halfway through; the second might see the same fate, but I have preempted it by giving my notice - these are my last two weeks. I delivered on time, the client is very happy with the quality of the work, and I solved some challenges in what I feel was a brilliant manner; I was paid well and the workload wasn't that heavy, apart from dealing with layers of bureaucracy and managers; yet, at the end of the day, all I feel is emptiness.

My effort, my work, gone to waste, nothing tangible left behind other than an incrementing number in my bank's database. Is my purpose on earth to run on a hamster wheel in exchange for the right to food and a roof over my head, until the day I'm too old to be useful to the machine?

I keep hearing of younger developers wanting to stay in tech because "at least the pay is good"; if you're like me, there'll come a time when the emptiness within can't ever be filled with money alone.

---

"I exist to provide labor to my employer. In return I receive just enough money to provide for basic shelter, food and transportation so I can return the next day and labor for my employer again. I will do this day after day and be thankful for the privilege. I will do this week after week. Month after month and year after year until my health and body can no longer provide value to my employer. Then I will die."

— Anonymous


My theory is that Microsoft is paying Adobe billions never to release their tools on Linux. It's Windows' last stronghold.

I have stopped playing native ports and just prefer Proton when I have the choice. Many devs using Unity & co. just tick the "export to Linux" option and never test the build, which is often much slower or bug-ridden.

I was playing Project: Gorgon recently and was about to refund it because it ran terribly on my machine (despite the low-end graphics), when I noticed it was using the native build; I switched to Proton and got a 200% FPS boost.

As long as I can play on Linux, I don't care what translation layer it goes through.


AI: Actual Indians^WMalagasy

Don’t get offended then.

They needed to make room for more AI posts

Underrated comment

Imagine writing this comment on 2013 Hacker News with a straight face.

AI;dr: keyword arguments would be great in all languages, not just Smalltalk

Also, obviously bot/bought account.


Swift also has them. It's probably the one thing I sometimes wish Rust would copy from them.

Thinking more about this, I realize Swift was probably transitively inspired by Smalltalk. Although I don't know much about Objective-C, my vague understanding is that it's more inspired by Smalltalk's view of object-oriented programming via message passing than by what's commonly considered OO nowadays (reflected somewhat in that it doesn't use the typical dot operator for method calls). I'm guessing the feature was kept in Swift as one of the things people liked about Objective-C (and maybe a little to make interop more direct).

While I wish every language had them in a way, they do tend to enshrine the argument names in the ABI, so you can't change those in public APIs. (The main reason I think names shouldn't be part of the ABI is that only the things necessary to identify and correctly use a contract should be in the ABI.)

I have been thinking about whether there is a way to have this work without enshrining the names in the ABI, and my only real idea is: allow arbitrary names at the call site (e.g. `my_function(some_var=1)` or `my_function(some_var: 1)`) but don't enforce the naming, just have a linter emit a warning when it doesn't match.
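For illustration, here's how the naming contract already plays out in Python, where keyword arguments are checked at call time rather than baked into an ABI (toy function, but the renaming hazard is the point):

    def resize(image: bytes, *, width: int, height: int) -> tuple:
        # Keyword-only parameters: callers must spell out the names, so
        # the names are part of the public contract.
        return (image, width, height)

    resize(b"...", width=800, height=600)   # fine
    # resize(b"...", w=800, h=600)          # TypeError: unexpected keyword argument

    # Rename `width` to `w` in a later release and every caller breaks.
    # Swift bakes the labels into the ABI, so the break happens against
    # old binaries; Python only notices when the call runs. Either way,
    # renaming is a breaking change.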


I don't feel like anyone finds it onerous to not be able to rename functions as a non-breaking change, so it's not obvious to me that this is much of a deal-breaker. If anything, I'd be thrilled for it to force people to spend more time carefully picking the names of their parameters in public APIs so that the autocompletion/documentation popup in my editor has higher quality information!
