Jobs had won complete cultural dominance of desktop PCs with the 27" iMac. If you saw a desktop on a TV show in the past 20 years, it was a 27" iMac. Tim saw they could cancel it, go against their policy of minimal cords, and sell separate Mac minis and Studio Displays.
I got one recently for a room that was already wired with speakers, and man, the ability to control the volume from the Apple TV remote app on your phone is amazing. For whatever reason none of my other Apple TVs will allow that (could be the TV's fault, but obviously somewhere along the line they at least expect a sound bar). I'm sure it's a fault of the HDMI spec somewhere that you can't easily change the volume on the TV itself but you can on downstream devices.
I have an Apple TV on two different TVs, and changing the volume works on both of them.
Sometimes it stops working, but a reboot of the remote fixes it. The idea that I need to reboot a remote hurts my soul a little, but at least it works. Hold the TV and Volume Down buttons until the LED on the Apple TV turns off. Wait a bit and you’ll see a notice that it disconnects. Wait a while longer and press a button (volume seems safest) and it will connect again.
There are also some settings around the remote and volume. It can be set to use HDMI, the TV’s IR, and there is a learn option. The TV I’m currently on uses IR direct to the TV… I guess this is why it doesn’t work when I try to use the app, but I almost never use that anyway.
The difference is with the app. With the Apple Remote it works fine, but in the app it only works with the HDMI magic (HDMI-CEC; I'm sure some TVs allow you to use it with no sound bar / receiver).
Just wait till these robot maximalists figure out that a pile of oxygen, carbon, hydrogen, and nitrogen is much cheaper than robots made out of steel and carbon fiber.
I mean, they haven't glommed onto the fact that giving a kid a Snickers bar and asking them a question is cheaper than building a nuclear reactor to power GPT-4o levels of LLM...
If we could directly convert the food energy of a Snickers bar to electricity, we could easily power AI. A Snickers bar has 250 kcal, which is about 1,000 kJ, or roughly the energy content of 250 grams of TNT.[https://www.wolframalpha.com/input?i=250+kcal+in+joule] GPT-4 uses an estimated 3.6 kJ to 36 kJ per query, so you could potentially get hundreds of queries out of a single Snickers bar.
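The back-of-the-envelope math works out. A quick sketch (the per-query figures are the rough estimates above, not measured values):

```python
KCAL_PER_BAR = 250      # nutrition label for a Snickers bar
KJ_PER_KCAL = 4.184     # 1 kcal = 4.184 kJ

bar_kj = KCAL_PER_BAR * KJ_PER_KCAL   # ~1046 kJ per bar

# rough per-query energy estimates for a GPT-4-class model (from the comment)
low_kj, high_kj = 3.6, 36.0

print(f"Energy per bar: {bar_kj:.0f} kJ")
print(f"Queries per bar: {bar_kj / high_kj:.0f} to {bar_kj / low_kj:.0f}")
```

So even at the pessimistic end of the estimate you get about 30 queries per bar, and close to 300 at the optimistic end.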
We only need a way to harness the power of the human body. Maybe we put people in VR for fun while using their body heat to power the AI.
TNT and other explosives have relatively little energy per kg compared to, e.g., petrol or Snickers.
That's because explosives are chemicals selected / designed to release their chemical energy really quickly and without needing any external oxidizer (because harvesting atmospheric oxygen would be too slow). That focus obviously leads to compromises in other areas, like energy density.
Temporarily, on the margin. A human would need multiple Snickers bars per day to survive, and can't survive on Snickers bars alone for more than a couple of days or weeks.
Also no human is anywhere close to being as knowledgeable and skilled as LLMs at all the things at the same time, so it hardly even compares.
Only if you also ate some random other stuff you found lying around. Doesn't even have to provide much in the way of energy, just enough 'dirt' to round out your diet with whatever other essentials you need.
Human bodies have evolved to survive for a long time on relatively little, yes. But not to survive for a very long time on a single source of very 'clean' food like Snickers bars. 'Clean' in the sense that, chemically, a Snickers bar has relatively well-defined inputs, whereas hungry humans would eat just about anything, including insects, grass, bark, or leather.
This isn’t true. There are countless cases of people surviving for months on nearly no food at all.
I’m not talking about what it takes to stay alive long term. I was refuting the silly idea that you would die after a couple of days/weeks of Snickers.
How much vitamin C is in a snickers bar? I think you'd get scurvy within a month or two if that's all you had.
How much vitamin A? Night blindness. Vitamin B? Neurological issues, confusion.
That's the thing with mono-diets, your body needs a diverse range of things that it can't synthesise itself.
But to the core point: in cases where the output of an LLM is good enough, many already have much lower energy requirements than humans. o4-mini is currently priced at $1.1 per million input tokens and $4.4 per million output tokens; if that were all being spent on electricity at $0.1/kWh, that's a maximum of 11 kWh per million tokens in and 44 kWh per million tokens out. How many calories would a human have to burn to read, write, hear, speak, and internally monologue the equivalent of a million tokens?
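The implied ceiling from the pricing, plus a very rough human comparison. The words-per-token, reading-speed, and metabolic figures below are illustrative assumptions, not measurements:

```python
# Upper-bound electricity implied by API pricing, if the entire price went to power
price_in = 1.1           # $ per million input tokens (o4-mini, per the comment)
price_out = 4.4          # $ per million output tokens
elec = 0.10              # $ per kWh

kwh_in = price_in / elec     # 11 kWh ceiling per million tokens in
kwh_out = price_out / elec   # 44 kWh ceiling per million tokens out

# Rough human comparison (illustrative assumptions):
words_per_token = 0.75       # common rule of thumb for English text
reading_wpm = 250            # typical adult reading speed
kcal_per_hour = 80           # resting metabolic rate, order of magnitude

hours = 1_000_000 * words_per_token / reading_wpm / 60   # ~50 h of reading
human_kwh = hours * kcal_per_hour * 4.184 / 3600         # kcal -> kJ -> kWh

print(f"Model ceiling: {kwh_in:.0f} kWh in, {kwh_out:.0f} kWh out")
print(f"Human reading 1M tokens: ~{hours:.0f} h, ~{human_kwh:.1f} kWh")
```

Under these assumptions a human spends roughly 50 hours and about 5 kWh of food energy just reading a million tokens, which is in the same ballpark as the model's worst-case electricity ceiling, and the model does it in minutes.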
They're fully aware of the obvious fact that LLMs are getting better than humans at reasoning at scale in general, and this includes power efficiency too. Meanwhile, what is not improving comparably is robotics. This leads to an obvious conclusion about the natural order of things and the division of labor: computers are for thinking, humans are for manual labor.
> the obvious fact that LLMs are getting better at reasoning than humans
I wanted to say that you were wrong, that LLMs can't reason and so it certainly isn't an obvious truth that they do it better than humans, but when I asked AI if LLMs can reason it told me that they can't which (while still not being reasoned by the LLM) seems to support the spirit of your claim since it gave a correct answer while you (a presumed human that can reason) got it wrong.
We might be elevating the importance of reasoning too much because us humans need to use it to solve many difficult problems. But if intuition were stronger, conscious/explicit/logical reasoning might not be needed. Didn't the famous mathematician Ramanujan say that God gave him his answers in his dreams? That sounds like really powerful intuition, like an LLM. Us humans can already solve a lot of incredibly complex problems intuitively, but they're quite domain-specific, like spatial navigation and social interaction.
Anthropologist Gregory Bateson predicted we'll know machines are conscious when we ask a question and the computer responds, "That reminds me of a story."
That seems to be the hangup. I have to use a definition that would put it on equal footing to what we do as humans since that's the comparison being made.
Computers and software can be said to "understand", "think", and "reason" in their own way and informally people have always used those words in that context. Recently, software which has been trained on human-reasoned output is producing text that mimics reasoning well enough that it can be confused for the real thing, but nobody has been able to show that any reasoning (as a human reasons) is what's occurring.
If the output it produces is as useful to me as the output produced by a human with the magical and expensive capability to 'reason', why should I care?
There are several that would apply. Let's use this one as an example: Reason is the capacity of consciously applying logic by drawing valid conclusions from new or existing information, with the aim of seeking the truth.
I don't think you need consciousness to reason. I don't see why repeated application of rewrite rules to extrapolate logical conclusions from antecedents shouldn't be considered reasoning. LLMs are perfectly able to match and apply rewrite rules, while using fuzzy concepts rather than being bound to crisp ontologies that make symbolic reasoning impractical to scale up. And for better or worse, LLMs can also apply simplified heuristics and rules of thumb, and end up making the same mistakes that humans make.
If you think "consciously" is a loaded term, wait until you get to "truth"!
Maybe it'd be easier to try another definition:
> 2 a(1): the power of comprehending, inferring, or thinking especially in orderly rational ways : intelligence
The same source defined intelligence as:
> a(1): the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason
And here we get the core of the issue. AI doesn't "think". It doesn't comprehend or understand what it does. There is no actual "I" in AI that didn't come from the people whose works were used to train it. At least not yet. I question if LLMs will ever be capable of anything more than producing a convincing affectation of the process used to produce the material it was trained on. I suspect that AGI will have to come from elsewhere. That doesn't mean that what passes for AI these days can't be useful, but I don't think it's capable of reason and as far as I know, nobody has proved otherwise.
Comprehend, from com- ("together" or "with") and prehendere ("to seize" or "grasp"). To take a hold of.
Can a calculator comprehend arithmetic? Can it take a hold of a number (in a register, for example), and a second number, and add them together to get a hold of the result?
What is computation, really? When we design machines to do arithmetic, do the machines actually do arithmetic, or do they just coincidentally come up with states that we humans can interpret as a correspondence with arithmetic?
More importantly, would a rose by any other name smell as sweet?
If you put a problem into text, and give it to an LLM, and an LLM applied a series of higher order pattern matching to it to produce more text, and you read the resulting text and interpret it as reasoning about the solution to a problem, has the LLM reasoned? Does the calculator calculate? Or does it really matter?
To all of you complaining about LLMs hallucinating, do try to give the same prompt to a kid on a sugar rush and let me know if you're getting more reliable responses.
Seems like the obvious solution to this: if you collect driving data on public highways, the data has to be made available to the public. If you collect the data on private highways, you are free to keep it private. If you don't intend to use it in a product on public highways, it can remain private.
Doesn't even seem that crazy when you consider the government is already licensing them to be able to use their private data anyway. Biggest issue is someone didn't set it up this way from the start.
Changing the color of a door will cost you a few hundred dollars or less. Changing all the windows may cost tens or hundreds of thousands of dollars. Rotating the foundation five degrees may be more expensive than building a new building.
In Astoria, Oregon, on Wireless Road, you can find nearly 100 in a tree. I'm not sure why they are in such high numbers, but you can often see them scavenging fields where seafood waste (shells) is dumped.
That’s amazing! I’d love to see it. Looking at a map, the road doesn’t look too long so we could hunt for it, but if you see this could you describe how to find the specific tree?
I moved back to the south after living out west for 20 years and it is insane the amount of trash dropped by trash workers while they are dumping bins. Part of it is cultural in that trash is literally piled at the street in bins of varying condition vs out west where you know if it doesn't fit in your 90 gal bin it ain't getting picked up by the robot arm.
The transition from "pile everything in a heap" to "if it's not in your wheelie bin it's not getting picked up" happens pretty quickly. Just need the garbage company to specify that in their contract with the municipality. I honestly miss being able to pile up oddly shaped pieces of trash though. Now if you have something weird, it's just not going to get picked up and you have to figure out how to get it to the dump.
If VC investors have high income, they are the class of people that get audited the most. By a lot, like 25x more likely than someone with median income.