Hacker News

I think the Searle/Aaronson thing is sorta dead (we're arguing over what, 3 paragraphs?) But...

You touch on a very interesting point in your reply here:

"You'd need something "outside of" the lookup table that could mutate its memory, or hold state of its own while reading from the lookup table. The complexity class of doing any of this would be irrelevant to Turing completeness."

I don't know if this is irrelevant to Turing completeness... I suspect you are right, but I'm not sure how to write a proof of that.
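To make the "outside of" idea concrete, here's a minimal sketch (my own illustration, not anything from the thread): the lookup table stays a fixed, pure mapping, and all mutation lives in an external component holding the tape, head, and state. That pairing is essentially a Turing machine. The toy rules below just append a 1 to a unary string; the names are illustrative.

```python
# A fixed lookup table (the transition function) plus an "outside"
# component that holds mutable state (tape, head, current state).
# The table itself is never modified; only the external state is.

TABLE = {
    # (state, symbol) -> (new_state, symbol_to_write, head_move)
    ("scan", "1"): ("scan", "1", +1),   # walk right over the 1s
    ("scan", "_"): ("halt", "1", 0),    # append one more 1, then halt
}

def run(tape, state="scan", head=0, max_steps=100):
    tape = dict(enumerate(tape))        # sparse tape; blank cells read "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, write, move = TABLE[(state, symbol)]  # pure lookup, no mutation
        tape[head] = write               # mutation happens *outside* TABLE
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))  # -> 1111
```

The point is only that the complexity of the outside machinery doesn't matter for Turing completeness; a finite table plus unbounded external memory is already enough.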

However, it is the basic problem I have faced writing code that can make representations (write its own programs). My solution requires abandoning the symbolic route completely. We still do computations, and the computations interact and change each other, much as molecules interact and change each other. But it's a whole level down from the symbolic computation problem you bring up.


I tried doing cellular automata that had interacting mechanisms to alter their rule sets, which isn't that far from neural nets that alter their functions. And I tried approaching neural nets that could alter their structure based on their responses to input. Not merely back propagation of values, but back propagation of structure. But like all computation, it's the problem of syntax and semantics all over again!
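For what that kind of experiment can look like, here's a hedged sketch (my own toy, not the system described above): a 1-D cellular automaton where each cell carries its *own* 8-bit elementary-CA rule, and a second update step lets neighbouring rules exchange bits, so the rule set evolves alongside the cell states.

```python
import random

random.seed(0)  # deterministic run for the sake of the example

def step(states, rules):
    """One synchronous update of both cell states and per-cell rules."""
    n = len(states)
    new_states, new_rules = [], []
    for i in range(n):
        # State update: each cell applies its OWN rule to its neighbourhood.
        l, c, r = states[i - 1], states[i], states[(i + 1) % n]
        idx = (l << 2) | (c << 1) | r
        new_states.append((rules[i] >> idx) & 1)
        # Rule update: inherit one random bit from a random neighbour's rule,
        # so the rule sets interact and drift instead of staying fixed.
        donor = rules[(i + random.choice((-1, 1))) % n]
        bit = random.randrange(8)
        new_rules.append((rules[i] & ~(1 << bit)) | (donor & (1 << bit)))
    return new_states, new_rules

states = [random.randint(0, 1) for _ in range(16)]
rules = [random.randint(0, 255) for _ in range(16)]
for _ in range(5):
    states, rules = step(states, rules)
print(states)
print(rules)
```

The interesting (and frustrating) part is exactly the syntax/semantics gap: the rules mutate, but nothing ties a rule change to what the change *means* for the cell's behavior.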

I just decided to go down a level and make no associations between syntax and semantics at all, and instead build up a syntactically homeostatic system that would hopefully then be able to create semantic behavior, by forming "semantic" structures out of intrinsic syntactic behavior. So my approach is not "outside of" but rather from the "inside out".

If you have any suggestions about how to code a solution to the "outside of" problem, in any kind of symbolic system, I would be very interested in your ideas. [that would be some cool voodoo!]


