Is “OI” The New AI? Biocomputers Could One Day Run On Human Brain Cells

Could computers of the future run on human brain cells? A team of researchers at Johns Hopkins University certainly think so. In a paper published in the journal Frontiers in Science, the team outline their plans for ‘organoid intelligence’, an emerging multidisciplinary field looking to develop biocomputers that operate with human brain cells. Such a development could not only massively expand the capabilities of modern computing but also open up new fields of study.

Organoids are tiny, self-organizing 3D tissues that are typically derived from stem cells, and mimic the main functional and architectural complexity of an organ. It is possible there could be as many types of organoids as there are tissues and organs in the body. To date, scientists have produced organoid cultures for intestines, liver, pancreas, kidneys, prostate, lung, optic cup, and the brain, and it seems more may be on the way. 

These tissues provide unique opportunities for scientists to study human diseases without relying on traditional animal models. That reliance has historically created a bottleneck in treatment discovery, as some biological processes are specific to the human body and cannot be modeled in animals. The development of organoids promises to overcome these limitations. Yet the team at Johns Hopkins University are taking organoid research in a completely different direction.

“Computing and artificial intelligence have been driving the technology revolution but they are reaching a ceiling,” explained Thomas Hartung, a professor of environmental health sciences at the Johns Hopkins Bloomberg School of Public Health and Whiting School of Engineering, in a statement. “Biocomputing is an enormous effort of compacting computational power and increasing its efficiency to push past our current technological limits.”


Emergent dynamics of neuromorphic nanowire networks

The human brain is a product of evolution, tuned and reshaped by an ever-changing environment. The brain’s neuronal system is able to recognize, conceptualize and memorize objects in the physical world. Using environmental information, we establish logical associations that ultimately allow us not only to survive but also to solve highly complex problems1. However, in an increasingly connected and interactive world, the volume of information to process has increased exponentially, and in order to extract and synthesize meaningful information, computerized approaches such as machine learning and its various incarnations have gained tremendous popularity2.

Typically, Artificial Neural Networks (ANNs) attain this goal through a very delicate and case-selective combination of learning strategies3. Data containing complex or contextual associations between objects normally requires heuristic sampling, which limits their ability to synthesize information. Conventional CMOS architectures also restrict the amount of data that can be efficiently processed with ANNs, owing to power-consumption bottlenecks.
Interest in the creation of synthetic neurons that could increase the processing abilities of ANNs has grown considerably with the discovery of nanomaterials with memristive properties4. A memristive device is a non-linear two-terminal device whose resistance shows resilience to change (i.e. memory), manifested as hysteretic behavior when the energy change is reversed or reduced, also termed resistive switching. The memristor thus has two important neurosynapse-like properties: plasticity and retention. Traditional integrate-and-fire models, which emulate the electrical behavior of neurons using passive circuit elements, can be simulated exclusively with these elements5,6,7. Memristive devices have been successfully embedded into various CMOS architectures, enabling the realization of synthetic neural networks (SNNs). SNNs imitate the topology of an ANN in a physical layout, typically stacking memristive terminals in cross-bar configurations8,9. By using voltage pulses to configure the internal state, or weight, of individual memristors, memorization, learning and classification abilities have been achieved10,11,12,13. However promising, this approach remains reliant upon CMOS technology and inherits some of its limitations: a large cost-efficiency ratio, high power consumption, and subpar performance with respect to computerized ANNs…
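The plasticity and retention described above can be illustrated by simulating the classic linear ion-drift memristor model in a few lines. This is only a sketch: the parameters (`r_on`, `r_off`, the drift gain `k`, and the pulse lengths) are normalized, illustrative assumptions, not values from the paper.

```python
def step(w, v, dt=1e-3, r_on=1.0, r_off=10.0, k=10.0):
    """Advance the internal state w (0..1) one step under applied voltage v."""
    r = r_off + (r_on - r_off) * w          # resistance interpolates between limits
    i = v / r                               # instantaneous current (Ohm's law)
    w = min(max(w + k * i * dt, 0.0), 1.0)  # state drifts with the charge that flows
    return w, r

w = 0.2
# Positive pulse: the state (and hence conductance) increases -> plasticity
for _ in range(2000):
    w, r = step(w, +1.0)
r_after_set = r
# Zero bias: no current flows, the state is held -> retention (the "memory")
for _ in range(2000):
    w, r = step(w, 0.0)
r_retained = r
# Negative pulse drives the state back up toward the high-resistance limit
for _ in range(2000):
    w, r = step(w, -1.0)
r_after_reset = r
```

Sweeping the voltage back and forth through such a device traces the pinched hysteresis loop that the text calls resistive switching: the set pulse lowers the resistance, zero bias preserves it, and the reset pulse restores it.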

Figure 1

Morphological and structural properties of PVP-coated Ag nanowires and the nanowire network. (a) Optical micrograph of the nanowire network layout after drop-cast deposition on a SiO2 substrate. (b) SEM image of nanowire interconnectivity in a selected area of the network. (c) HR-TEM image showing the atomic planes of the [100] facet of a Ag nanowire, with the nanometric PVP layer embedded on the lateral surface of the nanowire. (d,e) Sketches detailing the insulating junctions formed by the polymeric PVP layer between the Ag surfaces of overlapping nanowires. (f) Scheme of the measurement system: two tungsten probes, separated by distance d = 500 μm, act as electrodes contacting the nanowire network deposited on SiO2. The scale bars for (a–c) are 100 μm, 10 μm and 2 nm, respectively.

Read full post

Preana: Game Theory Based Prediction with Reinforcement Learning

In this article, we have developed a game-theory-based prediction tool, named Preana, based on a promising model developed by Professor Bruce Bueno de Mesquita. The first part of this work is dedicated to exploring the specifics of Mesquita’s algorithm and reproducing the factors and features that have not been revealed in the literature. In addition, we have developed a learning mechanism to model the players’ reasoning ability when it comes to taking risks. Preana can predict the outcome of any issue with multiple stakeholders who have conflicting interests in the economic, business, and political sciences. We have utilized game theory, expected utility theory, median voter theory, probability distributions and reinforcement learning. We were able to reproduce Mesquita’s reported results, have included two case studies from his publications, and have compared his results to those of Preana. We have also applied Preana to Iran’s 2013 presidential election to verify the accuracy of its prediction.
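The abstract does not disclose Preana’s internals, but the expected-utility component it cites can be sketched generically. Everything below — the probabilities, utilities, and the challenge-vs-status-quo framing — is invented for illustration, not taken from the tool.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

# A player weighs a risky challenge (win with utility 10, lose with -5)
# against a certain status quo worth 2.
challenge = expected_utility([(0.6, 10.0), (0.4, -5.0)])
status_quo = expected_utility([(1.0, 2.0)])

# In expected-utility models of this kind, the player acts
# when the risky option's expected payoff exceeds the status quo.
print(challenge > status_quo)  # -> True
```

A learning mechanism like the one the abstract describes would then adjust how heavily each player discounts the risky branch after observing outcomes, rather than treating the probabilities as fixed.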


Blue Brain team discovers a multi-dimensional universe in brain networks

For most people, it is a stretch of the imagination to understand the world in four dimensions, but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
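The clique-to-dimension correspondence can be made concrete with a small sketch. (The study actually analyses directed cliques in simplicial complexes; this simplified undirected version, with an invented four-neuron network, only illustrates how a clique of n all-to-all connected neurons corresponds to an (n−1)-dimensional geometric object.)

```python
from itertools import combinations

def is_clique(nodes, edges):
    """True if every pair of the given nodes is connected."""
    return all(frozenset(pair) in edges for pair in combinations(nodes, 2))

def clique_dimensions(nodes, edges):
    """Simplex dimension (clique size minus one) of every clique of size >= 2."""
    dims = []
    for k in range(2, len(nodes) + 1):
        for group in combinations(nodes, k):
            if is_clique(group, edges):
                dims.append(len(group) - 1)
    return dims

# Four neurons, fully connected: cliques of sizes 2, 3 and 4,
# i.e. edges (1D), triangles (2D) and one tetrahedron (3D).
nodes = [0, 1, 2, 3]
edges = {frozenset(pair) for pair in combinations(nodes, 2)}
print(max(clique_dimensions(nodes, edges)))  # -> 3
```

Adding one more fully connected neuron would raise the maximum dimension to 4, which is the sense in which larger cliques build higher-dimensional objects.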

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in Blue Brain’s wet lab in Lausanne, confirming that the earlier discoveries in the virtual tissue are biologically relevant and suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”
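A one-dimensional analogue of these cavities can be computed directly: in an undirected graph, the number of independent loops (the first Betti number) is E − V + C, where C is the number of connected components. The paper counts far higher-dimensional cavities in directed simplicial complexes; the toy network below is an assumption made purely for illustration.

```python
def betti_1(num_vertices, edges):
    """First Betti number of an undirected graph: E - V + (connected components)."""
    parent = list(range(num_vertices))

    def find(x):
        # union-find with path halving to track connected components
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    components = len({find(v) for v in range(num_vertices)})
    return len(edges) - num_vertices + components

# Four neurons wired in a ring enclose exactly one 1-dimensional hole...
print(betti_1(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 1
# ...while a chain of the same neurons encloses none.
print(betti_1(4, [(0, 1), (1, 2), (2, 3)]))  # -> 0
```

In the study’s setting, the transient assembly of cliques around such holes — in dimensions well beyond one — is what the researchers track as the network processes a stimulus.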

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.