Hacker News
Researchers create “neuromorphic chip” that is modeled after the brain (yale.edu)
67 points by jonbaer on Nov 30, 2017 | hide | past | favorite | 28 comments


I honestly think that focusing on asynchronicity is almost completely the wrong way to go about designing neuromorphic systems. One thing that engineers almost never focus on, but is a well-researched, vital aspect of the brain, is the concept of firing frequency and synchronicity: "cells that fire together wire together". This aspect of synchronized, timed firing is so strong that it creates neural oscillations on a large scale that are consistently characteristic of different brain states (brain waves) and widely theorized to be important to how information is encoded in the brain.

It's also very commonly observed that, for instance, the intensity of a stimulus is encoded not in the amplitude of a particular neuron's response but rather in the frequency of the neuron's firing. This oscillatory activity has been shown to be active at all different levels of organization, too.

There's basically an entire dimension to how the brain works (in terms of positively and negatively interfering wave patterns localized in different parts of the brain, with neurons that serve to speed up or slow down a continuous pattern of firing) that people looking to reproduce its function usually totally ignore. They think of neuronal firings as discrete calculations, when really they're transformations on an ongoing pattern of activity that evolves over time. I feel this way of looking at the brain is far more consistent with how biological processes generally function: the point is not that each neuron is accomplishing some precise operation independent of the other neurons, but rather that with a large enough population of imprecise neurons firing individually, an overall pattern of useful activity emerges.
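To make the rate-coding point concrete, here's a toy leaky integrate-and-fire sketch (all constants are illustrative, not fitted to any real neuron): every spike has the same amplitude, but a stronger stimulus drives a higher firing frequency.

```python
def lif_spike_count(input_current, steps=1000, dt=1.0, tau=20.0, threshold=1.0):
    """Count spikes over a fixed time window for a constant input current."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        # Membrane potential leaks toward rest while integrating the input.
        v += dt * (-v / tau + input_current)
        if v >= threshold:
            spikes += 1  # all-or-nothing event: amplitude carries no information
            v = 0.0      # reset after firing
    return spikes

weak = lif_spike_count(0.06)
strong = lif_spike_count(0.12)  # doubling the stimulus raises the firing rate
```

The stimulus intensity shows up only in how often the neuron crosses threshold, never in how big the spike is.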


> It's also very commonly observed that, for instance, the intensity of a stimulus is encoded not in the amplitude of a particular neuron's response but rather in the frequency of a neuron's firing.

This was actually mentioned by Geoff Hinton in his deep learning Coursera lectures, and it's the reason his RBMs output binary signals rather than float values -- the timing was more important than the values. It made a lot of sense when he talked about it. I saw this back in 2012. I assumed this was commonly understood by now, but I guess not?
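For anyone curious what that looks like mechanically, here's a rough sketch of the hidden-unit step in an RBM (the weights and inputs are made up for illustration): the sigmoid activation is treated as a firing probability and sampled down to a binary state, rather than passed along as a float.

```python
import math
import random

def sample_hidden(visible, weights, bias, rng):
    """Return binary hidden states sampled from sigmoid firing probabilities."""
    states = []
    for j, b in enumerate(bias):
        activation = b + sum(v * weights[i][j] for i, v in enumerate(visible))
        p = 1.0 / (1.0 + math.exp(-activation))      # probability of firing
        states.append(1 if rng.random() < p else 0)  # stochastic binary "spike"
    return states

rng = random.Random(0)
h = sample_hidden([1, 0, 1],
                  [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.8]],
                  [0.0, 0.0], rng)
# h is a list of 0/1 states, not floats
```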


After reading a paper that basically said neuron pathways are both digital and analog, I've hypothesised the optimal approach would be to use both binary and float values at the same time. Something about the analog (float) component acting as a weight/pathway indicator. I wish I knew more about neuroscience.


hmm interesting. I wonder where I could find out more.


Caveat: I'm jumping out of my depth and just throwing my non-neural-researcher understanding out there.

At the individual neuron level, everything is asynchronous, right?

The firing/pulsing behaviors happen at a network level and this implementation should still see harmonic effects. In fact, you might even see richer harmonic effects because the neurons are asynchronous and not all clocked together.

So while you're probably right that firing/pulsing/synchronicity behaviors are a vital aspect of the brain, at the network level the best way for us to see more brain-like harmonics will be to have asynchronous neurons.
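A toy way to see that emergence is a Kuramoto-style simulation (a standard physics model, nothing from the article; all parameters here are illustrative): oscillators with no shared clock, coupled only through one another's phases, fall into sync on their own.

```python
import math

def kuramoto(n=10, coupling=2.0, steps=2000, dt=0.01):
    """Euler-integrate coupled phase oscillators; return the sync order parameter."""
    freqs = [1.0 + 0.1 * i for i in range(n)]   # heterogeneous "clocks"
    phases = [0.3 * i for i in range(n)]        # scattered starting phases
    for _ in range(steps):
        new = []
        for i in range(n):
            # Each oscillator is nudged only by the others' phases -- no scheduler.
            pull = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
            new.append(phases[i] + dt * (freqs[i] + coupling * pull))
        phases = new
    # Order parameter r in [0, 1]: near 1 means the population is phase-locked.
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

r = kuramoto()  # ends up close to 1: synchrony without a global clock
```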


yeah logically it seems like it would be impossible to have patterns of synchronicity without asynchronous neurons.


I agree that most computer scientists working on anything "neural" would benefit from a more biology-based understanding of the brain. But I think the article was using "synchronicity" in a different way: the drawback of synchronous hardware is that it forces all computations to fit into a set clock period, even when many of them finish sooner. Removing that barrier is an efficiency gain.

And I think the "fire together wire together" thing is more about synaptic plasticity and doesn't really relate to the concept of synchronicity as described in the article (to my understanding). Fire together wire together means: when a neuron fires as a result of a signal from another one, certain molecules are created to strengthen the neurons' connection. There is nothing timed or synchronous about this, using "timing" in the sense of requiring a third-party "scheduler" to regulate firing.

Individual neurons are not regulated by this constraint. But yes, there are so many other amazing features of neurons that we don't fully understand and that aren't used in computer design because of a lack of basic knowledge about the biology and chemistry.
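To illustrate just how local that rule is, here's a minimal Hebbian sketch (learning rate and activity values are made up): each synapse updates from only the two activities it connects, with no scheduler or global clock anywhere.

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen weights[i][j] when pre-neuron i and post-neuron j are co-active."""
    return [[w + lr * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

w = [[0.0, 0.0], [0.0, 0.0]]
# Pre-neuron 0 and post-neuron 1 fire together; the other two stay silent.
w = hebbian_update(w, pre=[1, 0], post=[0, 1])
# Only the co-active pair's synapse grows: w[0][1] becomes 0.1
```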


I think directly emulating the current understanding of biological systems isn’t necessarily the best way. It’s a losing proposition to emulate hardware with other hardware that’s fundamentally different from the original. I think the better way would be emulating what we understand of the ‘software’: memory, concepts, language/communication, etc.

(I say that all with great humility. This is by no means a field I can speak about with authority.)


That's the problem, though: the idea behind a neuromorphic system is that it works like the brain, but what these "neuromorphic" chips do relies on an extremely vague relationship between a type of computing we know how to do (parallel processing), and what the brain actually does. Furthermore, I think that the type of deep learning that actually works well (like recurrent neural nets) relies more on these aspects of the brain (synchronized and oscillatory activity) than a lot of engineers working in the field realize as they chase after other concepts of how neurons might work. I feel like there's a lot of breakthroughs that could be made in the field of neuromorphic computing (and honestly in our understanding of how the brain works) if there were simply more people who were equally well-trained enough in neuroscience and computing to see the deeper relationships. As it is, I feel like the majority of experts that have deep knowledge in one of these fields are hampered by a superficial understanding of what's going on in the other. We're really only scratching the surface of how we could design a system that approximates what the brain does.


To compound the problem I think a lot of people suffer from a lack of understanding of their lack of understanding.

Being ignorant of how nervous tissue really works is largely a transitory problem if one stays with it long enough, but "knowing" that it's just like an artificial neural net, or standard electrical circuit design with funky clocking and some memristor gates, is potentially devastating.

And if I may be allowed a personal remark, it's refreshing in that context to see your humility about the matter. BSc + some years in the lab may not be a researcher's career, but it's substantial, and more than most have to show. Liked your comments. I'll be looking for more of them whenever topics like these come up.


If you don't mind, could you share some resources on existing models that approximate the neuron behaviour more accurately? If you're talking about feedback-driven models like RNNs/LSTMs/GRUs/etc, then I can safely say that there are people working on realising these models on custom computing architectures. The simple feed-forward neural network could be thought of as the low-hanging fruit in this regard, which, nevertheless, is still an impressive feat.


This is just a smattering of recent articles from the Journal of Computational Neuroscience, but I think something more along the lines of models like these would give us closer approximations of how neurons really behave:

[1] https://link.springer.com/article/10.1007/s10827-017-0667-3

[2] https://link.springer.com/article/10.1007/s10827-017-0668-2

[3] https://link.springer.com/article/10.1007/s10827-017-0655-7

[4] https://link.springer.com/article/10.1007/s10827-017-0646-8

[5] https://link.springer.com/article/10.1007/s10827-017-0658-4


There's a lot of good information on modeling spiking neurons here: http://www.scholarpedia.org/article/Category:Spiking_Network...

Also, I like the papers linked here by Eugene Izhikevich: http://www.izhikevich.org/publications/index.htm

The new links discussing the capsule neural networks are somewhat reminiscent of this polychronous computation paper: http://www.izhikevich.org/publications/polychronous_wavefron....

https://hackernoon.com/what-is-a-capsnet-or-capsule-network-...
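As a flavor of what those spiking models look like, here's a toy Euler-integration run of Izhikevich's simple model (these are his published "regular spiking" parameters; the input current and step size are arbitrary choices on my part):

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, steps=1000, dt=0.5):
    """Simulate the two-variable Izhikevich model; return spike times in ms."""
    v, u = -65.0, b * -65.0   # membrane potential and recovery variable
    spike_times = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                   # spike peak reached
            spike_times.append(t * dt)
            v, u = c, u + d             # reset, as in the paper
    return spike_times

spikes = izhikevich()  # a regular train of spike times
```

Two coupled equations plus a reset rule reproduce a surprising range of cortical firing patterns just by varying a, b, c, d.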


Out of curiosity, given your seemingly strong sentiments: what is your background, and which authors/researchers would you say have influenced your thinking the most?


I'm just a frontend engineer who dabbles in ML and cognitive science; my neuroscience experience is limited to a bachelor's degree and a couple years of lab work so I'm admittedly a lot more passionate than I am personally authoritative on the subject. The biggest influence on my thinking regarding intelligent systems generally is Douglas Hofstadter, but the notions of brain function I alluded to above are prevalent throughout a lot of computational neuroscience.


I sympathize with this being in a similar professional situation and having once been introduced to this field via Hofstadter. His musings are usually on a distinctly symbolic level however, and though it is not my intention to defend a kind of results-only pragmatic approach, I do concern myself with strategies for progressively reaching better models of intelligence and their realization in contemporary hardware in particular. What, for instance, is your opinion about Hinton's recently published capsules idea?

https://arxiv.org/abs/1710.09829 https://openreview.net/forum?id=HJWLfGWRb


I haven't had the opportunity to dive deeply into capsule networks yet, but I have been fairly excited by what little I've read so far. I think that Hinton et al are brilliant and definitely taking neural nets in the right direction by trying to approximate more of the mid-level organization of the brain, but I feel like recent attempts to integrate things like attention and memory into nns are still at a very elementary level. This isn't to denigrate recent work at all: this is a stage we have to cross on our way to more powerful intelligent systems, and you're right that pragmatically one of the best ways to further the field is to focus on making models that are useful in their current state even if they poorly approximate more complex systems. On the whole, I'd wager the state of the field is still at the Bohr model level, but progressing rapidly.
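One concrete piece from the capsule paper worth sketching is its "squash" nonlinearity: instead of a scalar activation, it rescales a whole vector so its length lands in [0, 1) (read as the probability that an entity is present) while preserving its direction (the entity's pose).

```python
import math

def squash(vec, eps=1e-9):
    """Squash a vector: length maps into [0, 1), direction is preserved."""
    sq_norm = sum(x * x for x in vec)
    norm = math.sqrt(sq_norm)
    scale = sq_norm / (1.0 + sq_norm) / (norm + eps)
    return [scale * x for x in vec]

short = squash([0.1, 0.0])   # tiny input -> length pushed toward 0
long_ = squash([10.0, 0.0])  # large input -> length approaches 1
```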


This is like saying that all flying birds have feathers therefore we must have something similar to feathers on a flying machine.

Simply emulating things just because it feels like they should be important is not a particularly promising approach when you're designing circuits.


To torture your analogy: in my mind, if the brain is like a bird, then modern approaches to machine learning are like helicopters. Yes, these things can both fly, but they don't fly in nearly the same way, and if your helicopter isn't getting off the ground then looking at the design of a bird isn't going to help you very much with many of the challenges you're going to face in getting it to work. You wouldn't claim that a helicopter is bird-shaped, even though there's some basic relationships between the aerodynamic principles of a bird's wings and a helicopter rotor.

If we're having trouble getting our helicopter to work, maybe we should be trying to make a working airplane first, since then we can base more of it off the design of a bird and use this to help us to better understand the basic principles of aerodynamics.

Do you get where I'm coming from here?


To be clear: only once we understand the principles behind the neocortex will it make sense to ask questions about the importance of oscillations, how it relates to asynchronous vs synchronous systems, and the many other phenomena we have observed but do not really understand.

Without an underlying theory to tie it all together, it's difficult to make sense of it.

I personally think that oscillations are more likely to be an emergent property which is a common theme in nature.


Both are operating according to principles of thermodynamics. That’s what we lack and that’s what a biologically defensible model of AI can teach us.


TL;DR Researcher and team at Yale invent a chip that is similar to a neural network. It's good at object recognition and operates asynchronously.


1 million “neurons” that communicate via 256 million “synapses.”

Human brain = an estimated 100 billion neurons and over 100 trillion synapses, according to Wikipedia.

So that's a 5-orders-of-magnitude difference in neuron count, but less than one order of magnitude in average synapses per neuron (256 in the former, ~1,000 in the latter).

I wish science news were better at framing the numbers they quote from press releases.

It’s interesting to keep track of these sort of numbers, because while we clearly won’t have a chip that can do things remotely near what a human brain can when the numbers are at parity, at least it’ll help focus people more on exploring changes in architecture, rather than throwing more and more computational units at the problem.
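Spelling out the arithmetic (the brain figures are the rough Wikipedia estimates quoted above, not precise counts):

```python
import math

chip_neurons, chip_synapses = 1_000_000, 256_000_000
brain_neurons, brain_synapses = 100e9, 100e12

neuron_gap = math.log10(brain_neurons / chip_neurons)   # 5.0 orders of magnitude
fanout_chip = chip_synapses / chip_neurons              # 256 synapses per neuron
fanout_brain = brain_synapses / brain_neurons           # ~1000 synapses per neuron
fanout_gap = math.log10(fanout_brain / fanout_chip)     # ~0.6 orders of magnitude
```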


> while we clearly won’t have a chip that can do things remotely near what a human brain can when the numbers are at parity

That seems hard to say at this point. Certainly quantity won't be the sole determining factor; but quantity combined with behavior and interactions might mean that on-a-chip neurons function better than human/animal brains.


I find that neuromorphic chips are in this awkward valley where they aren’t trying to be state-of-the-art machine learning systems (TPUs and GPUs do ML much better), but they’re also not trying to emulate biology faithfully. I think experimenting with new hardware and technologies is great, and I hope we discover something really useful from these. Still, I can’t help but wonder whether they should be more focused on either practicality or pure science.


But there are people working on the pure science of learning how the brain works.

Taking bits and pieces of strategies, chemicals, or structures found in nature and adapting them to other technologies has a long history of success.

If I had some bitcoins to bet, I'd put them on these emulated brain/neuron-on-a-chip techniques when it comes to our pursuit of general purpose AI.


I guess one focus would be "extreme energy efficiency". Start from that point and solve the performance aspect later.


I wonder how they train it? The network is built into the silicon, so I don't see how training could occur after its creation... But maybe they build a simulated network, train it, and then construct a faithful hardware representation.
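One plausible answer (purely my speculation about how such chips could be trained, not anything from the article): train a network offline in floating point, then quantize the learned weights down to whatever discrete values the fixed hardware supports.

```python
def quantize_weights(weights, levels=16, w_min=-1.0, w_max=1.0):
    """Map float weights onto `levels` evenly spaced hardware-supported values."""
    step = (w_max - w_min) / (levels - 1)
    out = []
    for w in weights:
        clipped = max(w_min, min(w_max, w))              # hardware range limit
        out.append(w_min + round((clipped - w_min) / step) * step)
    return out

# Weights learned offline get snapped to the chip's representable values;
# out-of-range values (like 2.0 here) are clipped first.
hw = quantize_weights([0.03, -0.52, 0.98, 2.0])
```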



