When I talk about fundamental limitations, I mean limitations that can't be eliminated, even if they can be mitigated.
We have reduced hallucinations significantly, and yet it seems clear that they are inherent to the technology and so will always exist to some extent.
As a general architecture, an LLM also has limitations that can't be removed unless we switch to a fundamentally different, non-LLM-based AI design.
There are also limitations due to maths and/or physics that aren't fixable under any design. Outside science fiction, there is no technology whose limitations are all fixable.
Am I misreading that paper? They define hallucinations as anything other than the correct answer and prove that there are infinitely many questions an LLM can't answer correctly. But that's true of any architecture: there are infinitely many problems a team of geniuses with supercomputers can't answer. If an LLM can be made to reliably say "I don't know" when it doesn't know, hallucinations are solved. They contend that this doesn't matter because you can keep drawing from your pile of infinitely many unanswerable questions and the LLM will either never answer or will make something up. That seems like a technically true result that isn't usefully true.
    def visit_bf(g):
        # Traverse a tree given as nested (node, children) tuples.
        n, children = g
        yield n
        if children:
            iterators = [iter(visit_bf(c)) for c in children]
            while iterators:
                try:
                    yield next(iterators[0])
                except StopIteration:
                    iterators.pop(0)
                iterators = iterators[1:] + iterators[:1]  # round-robin across subtrees
The difference between DFS and BFS is literally just the last line that rotates the list of child trees.
Python is a pretty mainstream language, and even though the DFS case can be simplified by using `yield from` while BFS cannot, I consider that just syntactic sugar on top of this base implementation.
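For contrast, here is a sketch of the `yield from` DFS counterpart the comment alludes to, assuming the same nested (node, children) tuple representation — each subtree is drained fully before moving to the next, instead of rotating among them:

```python
def visit_df(g):
    # Depth-first traversal of a tree given as nested (node, children) tuples.
    n, children = g
    yield n
    if children:
        for c in children:
            yield from visit_df(c)  # drain each subtree completely before the next

# Example: node 1 has children 2 and 3; node 2 has child 4.
tree = (1, [(2, [(4, None)]), (3, None)])
print(list(visit_df(tree)))  # -> [1, 2, 4, 3]
```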
Well, the article says that the effect of the impact was much larger than the scientists expected. That doesn't really give a lot of confidence in how good we are at predicting these things.
One of the ways the classics can be improved is not to take the ideal analytic coefficients and round each to the closest floating-point number, but rather to take those ideal coefficients as the starting point for a search for slightly better ones.
The SLEEF Vectorized Math Library [1] does this and therefore can usually provide accuracy guarantees for the whole floating point range with a polynomial order lower than theory would predict.
Its asinf function [2] is accurate to 1 ULP for all single-precision floats and is similar to the `asin_cg` from the article, with the main difference being that the sqrt is applied to the input of the polynomial instead of the output.
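The idea can be illustrated with a toy search — this is not SLEEF's actual algorithm, just a minimal sketch assuming a degree-7 odd polynomial for sin(x) on [-π/2, π/2]: start from the textbook Taylor coefficients, then greedily nudge each coefficient and keep any change that lowers the observed maximum error over a grid of test points.

```python
import math

def max_err(coeffs, xs):
    """Max abs error of x * sum(c_k * (x^2)^k) against sin(x) over xs."""
    def poly(x):
        acc = 0.0
        for c in reversed(coeffs):          # Horner's rule in x^2
            acc = acc * (x * x) + c
        return acc * x
    return max(abs(poly(x) - math.sin(x)) for x in xs)

# Truncated Taylor coefficients of sin(x) as the starting point.
coeffs = [1.0, -1/6, 1/120, -1/5040]
xs = [-math.pi/2 + math.pi * i / 500 for i in range(501)]

best, step = max_err(coeffs, xs), 1e-2
for _ in range(300):                        # bounded greedy coordinate descent
    improved = False
    for i in range(len(coeffs)):
        for s in (1 + step, 1 - step):
            trial = list(coeffs)
            trial[i] *= s
            e = max_err(trial, xs)
            if e < best:
                best, coeffs, improved = e, trial, True
    if not improved:
        step *= 0.5                         # refine once no nudge helps
        if step < 1e-7:
            break

print(best)  # well below the ~1.6e-4 Taylor truncation error
```

Real libraries search in a more principled way (e.g. Remez-style minimax plus a search over nearby representable coefficients, evaluated in the target precision), but the payoff is the same: coefficients tuned for the actual floating-point evaluation rather than rounded ideals.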
That seems to be due to the microcontroller using its pins in duplex.
There is indeed no radiation being emitted in that case, just the lamp and rotation.
Ah, nice catch noticing that d!=0 is d>0. Not sure how I missed the multiplication to get rid of the vector form; I guess I was too obsessed with the x-x trick...
I added your changes to the Shadertoy version with your HN nickname. I'll integrate it to the original later.
I saw that you used `float z;` so you could later use `z` in place of the constant `0.`.
You can also apply that to get a zero vector: `vec3 y;` and use `y` in place of `p-p`.
It seems that letting go of that obsession a bit more can save another byte.
Especially in HPC there are lots of workloads that do not benefit from SMT. Such workloads are almost always bottlenecked on either memory bandwidth or vector execution ports. These are exactly the resources that are shared between the sibling threads.
So now you have a choice: either disable SMT in the BIOS, or make sure the application correctly interprets the CPU topology and only spawns one thread per physical core. The former is often the easier option, from both a software development and a system administration perspective.
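A sketch of the second option, assuming Linux, where each logical CPU's hyperthread siblings are listed in `/sys/devices/system/cpu/cpu*/topology/thread_siblings_list` (in formats like "0,4" or "0-1"): keep only one logical CPU per sibling group and pin workers to those. The function below takes the file contents as a dict so it stays self-contained.

```python
def physical_cpus(sibling_lists):
    """Pick one logical CPU id per physical core.

    sibling_lists maps a logical CPU id to the contents of its
    thread_siblings_list file, e.g. "0,4" or "0-1" on Linux.
    """
    def parse(s):
        ids = set()
        for part in s.split(","):
            if "-" in part:                 # range form, e.g. "0-1"
                lo, hi = map(int, part.split("-"))
                ids.update(range(lo, hi + 1))
            else:                           # single id, e.g. "4"
                ids.add(int(part))
        return frozenset(ids)

    seen, picked = set(), []
    for cpu in sorted(sibling_lists):
        group = parse(sibling_lists[cpu])
        if group not in seen:               # first logical CPU of each core wins
            seen.add(group)
            picked.append(min(group))
    return picked

# 4 physical cores exposed as 8 logical CPUs, siblings paired as (n, n+4):
topo = {n: f"{n % 4},{n % 4 + 4}" for n in range(8)}
print(physical_cpus(topo))  # -> [0, 1, 2, 3]
```

On Linux you could then pin the process with `os.sched_setaffinity(0, set(picked))`; note that both the sysfs path and the file format are Linux-specific.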
>Especially in HPC there are lots of workloads that do not benefit from SMT...So now you have a choice of either disabling SMT in the bios
That's madness. They're cheaper than their all-core equivalent. Why even buy one in the first place if HT slows down the CPU? You're still better off with them enabled.
In addition to needing SMT to get full performance, there were a lot of other small details you needed to get right on Xeon Phi to get close to the advertised performance: think AVX-512 and the HBM.
For practical applications, it never really delivered.