I haven't seen the associated talk, but (a) I would imagine the author chuckled while reading this, because it's sort of a joke among scholars, and (b) the point is likely focused much more on the context of presenting research (e.g., at conferences) rather than a blanket ironclad rule for all presentations you ever make ever.
While I think there's some validity to your point that the author's presentation suffers from excess verbosity, I'm not too worried about it because the linked slides seem more meant to act as a reference document than an example of a good presentation, and the level of text is just fine for that purpose.
Yeah, I'm the author; this was a joke. I also wanted to convey to the students in the room that this was not a high-quality presentation, more text just converted into presentation form.
There are presentations that you actually present to an audience, and for those this point is valid.
But lots of presentations, including this one I think, are merely used as a means of conveying information (yeah, not my favorite way of doing so, but being a contrarian doesn't do anybody's career any good). Those are indeed intended to be read, and they need to explicitly include all the information that you would otherwise be speaking and addressing.
I always thought this was funny. We were taught this in grad school, but hardly anybody followed this guideline. "If there's too much text on the slide, the audience will be busy reading the slide, and not paying attention to you". I try to have just the main point on the slide, then I can talk around it. The details should be in the slide notes, if you need reminders.
Your family is starving and your dog died of radiation poisoning from the fallout but at least your local LLM can browse this and recommend a good software stack for your automated booby traps.
I agree. This is language evolving. If someone from the 16th century could hear a modern well-educated person speak English today they would likely be horrified at how degenerate it would sound to them.
So I don't think current English is in some perfect state that should not change.
They're a customer already if they're opening the home screen and they probably already mounted it on their wall so fuck them. Show them ads. Also turn on the microphone in the background (what my Hisense tv does).
Your Amazon links are broken. But I think you're missing the point of this thing. This isn't for people that really even care about performance. It's for people that want a laptop that works with their iPhone, does all the things their school needs them to do in a browser, and doesn't come with a complete dogsh*t OS, and isn't of dubious quality like an HP or a "NIAKUN", whatever that is.
>This isn't for people that really even care about performance. It's for people that want a laptop that works with their iPhone
That was the conclusion of my original comment. The title of "no other budget laptop can compete" is not just sensationalized, it is factually wrong. It should have been "the least expensive MacBook yet comes with a catch".
"No other budget laptop can compete on offering MacOS" is certainly a correct statement, but it's not a particularly interesting one. If they're missing the point, it's because it was exaggerated to the point of not being recognizable.
And for their kids, sick and tired of trying to help them fix Windows' incompetence. You're into Dell for at least $800 for anything approaching an actually usable laptop. This is definitely my mom's next laptop.
> but the Neo may be proof that decades old macs could run Tahoe, and maybe as well or better than the Neo
The A18 Pro is going to outperform many "decades old" processors. Which ones are you referring to?
I wouldn't conflate "affordable" with "low-end" in terms of processing speed. Apple is able to get the price to this point because of decisions that the rest of the market did not make.
I think an old Mac Pro quad-Xeon with 32GB of RAM and an SSD of that era could do it. I agree that most CPUs from 20 years ago could not. I understand that doing it and doing it well aren't the same.
I've just spent two weeks vibe-coding a pretty complex Python + Next.js app. I forced Codex into TDD, so everything(!) has to be tested.
So far, it is really really stable and type errors haven't been a thing yet.
Not wanting to disagree, I am sure with Rust, it would be even more stable.
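For anyone curious what "forcing an agent into TDD" looks like in practice, here's a minimal sketch of the write-the-failing-test-first loop in plain pytest. The `slugify` function and its tests are hypothetical illustrations, not code from the app described above.

```python
# Hypothetical TDD example: the tests below are written first, fail,
# and then the minimal implementation is added to make them pass.

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    # split() with no arguments collapses runs of whitespace.
    assert slugify("  Foo   Bar ") == "foo-bar"
```

The agent is told it may only change implementation code in response to a failing test, which keeps the feedback loop tight.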
What will you use for dependent types, Idris 2? Lean? None are as popular as Rust, especially counting the number of production-level packages available.
It is quite sad to see someone react to a comment they disagree with by assuming that the differing opinion is paid for. I'd love it if you dug into my comment history and found even a shred of evidence that I'm being paid to talk positively about my programming language of choice.
All comments are paid for in some way, even if only in "warm fuzzies". If that is sad, why are you choosing to be sad? But outlandish comments usually require greater payment to justify someone putting in the effort. If you're not being paid well, what's the motivation to post things you know don't make any sense to try and sell a brand?
No, unless you mean the problem of over-engineering? In which case, yes, that is a realistic concern. In the real world, tests are quite often more than good enough. And since they are good enough they end up covering all the same cases a half-assed type system is able to assert anyway by virtue of the remaining logic needing to be tested, so the type system doesn't become all that important in the first place.
A half-assed type system is helpful for people writing code by hand. Then you get things like the squiggly lines in your editor and automated refactoring tools, which are quite beneficial for productivity. However, when an LLM is writing code, none of that matters. It doesn't care one bit whether the failure report comes from the compiler or the test suite. It is all the same to it.
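To make the point above concrete, here's a hypothetical sketch of a contract violation that a static type checker would flag at compile time, but that an ordinary unit test surfaces just as reliably at test time; either way the agent sees a failure report and fixes it.

```python
# Sketch: the intended contract is that `prices` is a list of ints
# (cents). Passing strings is the kind of mistake a type checker
# catches statically -- but a plain test catches it at runtime too.

def total_cents(prices):
    return sum(prices)

def test_total_cents():
    assert total_cents([100, 250]) == 350
    # Violating the contract raises TypeError during the test run,
    # the same failure signal a compiler would have given earlier.
    try:
        total_cents(["100", "250"])
        raised = False
    except TypeError:
        raised = True
    assert raised
```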
I suspect a more general and much more clever learning algorithm will emerge by then and will require less training data to get to a competent problem solving state faster even with dirty data. Something able to discriminate between novel information and junk. Until then I think there will be a quality decline after a few more years.
How will it emerge? In the past we've been told that the A(G)I will write itself, rapidly iterating itself into a superintelligence that handily solves all our current and future problems, but it's beginning to look like a chicken-or-the-egg scenario.
Living systems were able to brute-force their way to the human brain, but it took billions of years and access to parallel processes that make the entire collective history of human computation seem like a mote next to a star.
What novel spark do you see accelerating this process to such a hyperbolic extreme?
I would imagine a trajectory similar to AlphaGo's: it starts out trying to replicate humans and then at a certain point pivots to entirely self-play. I think the main hurdle with LLMs is that there isn't a strong reward target to go after. It seems like the current target is to simply replicate humans, but to go beyond that they will need a different target.
I agree in general, but defining an appropriate target seems intractable at the moment. Perhaps it is something the AIs will have to define for themselves.
I think real intelligences are working with myriad such targets, but an adversarial environment seems essential for developing intelligence along this axis.
I do think if there's a path to AGI from current efforts it will be through game play, but that could just be the impressionable kid who watched Wargames in the 80s speaking through me.
It took a billion years to get to the tool-making stage, then less than a thousandth of that time to make CPUs, then a thousandth of that time to make LLMs. We are at a parabolic extreme.
This is begging the question. What evidence is there that this is all the same "stuff" driving towards some future apex? What does it mean to "get to" the tool making state outside of a Civ-style video game?
Sorry but for $5 in credits you can have an agent port over all your bullshit to the next fad. I'll have one port over all my bullshit when the time comes too.
> Slides should have maybe a sentence of text at most
Proceeds to have slides with many bullet points and more than a few sentences of text per slide.
I don't find issue with the slides as they are but if you're going to make arbitrary rules why not follow them yourself?