Blue Brain team discovers a multi-dimensional universe in brain networks

For most people, it is a stretch of the imagination to understand the world in four dimensions, but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.


The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
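To make the counting concrete: a clique of n neurons corresponds to an (n-1)-dimensional simplex, so two connected neurons form a 1D edge, three a 2D triangle, and so on. The study itself analyzes directed cliques in simulated cortical microcircuits; the minimal Python sketch below uses an undirected toy random graph with the networkx library only to illustrate how clique sizes map to dimensions.

```python
# Minimal sketch: clique sizes in a toy network and the simplex
# dimension each clique corresponds to. The Blue Brain analysis uses
# *directed* cliques in simulated cortical microcircuits; this
# undirected random graph is only a stand-in for the counting idea.
import networkx as nx
from collections import Counter

# Toy random network standing in for a (tiny) neuronal graph.
G = nx.gnp_random_graph(n=60, p=0.25, seed=42)

# Count maximal cliques by size; a clique of k nodes forms a
# (k-1)-dimensional simplex (2 nodes -> 1D edge, 3 -> 2D triangle, ...).
sizes = Counter(len(c) for c in nx.find_cliques(G))
for k in sorted(sizes):
    print(f"{sizes[k]:4d} maximal cliques of {k} neurons "
          f"-> {k - 1}-dimensional simplices")
```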

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in Blue Brain’s wet lab in Lausanne, confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”
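One coarse way to summarize such structure across dimensions numerically is the Euler characteristic of the clique complex: the alternating sum of simplex counts over all dimensions. This is a drastic simplification of the homology computations in the study (which track the cavities themselves), but the hypothetical sketch below, again using networkx on a toy graph, shows how the raw counts combine:

```python
# Hypothetical sketch: Euler characteristic of a graph's clique
# complex, chi = sum over dimensions k of (-1)^k * (# of k-simplices).
# The study computes full homology (tracking cavities directly); this
# alternating sum is only a coarse summary of the same simplex counts.
import networkx as nx
from collections import Counter

G = nx.gnp_random_graph(n=40, p=0.3, seed=1)

# enumerate_all_cliques yields every clique (not just maximal ones);
# a clique of k+1 nodes is a k-simplex of the clique complex.
counts = Counter(len(c) - 1 for c in nx.enumerate_all_cliques(G))
chi = sum((-1) ** k * n for k, n in counts.items())
print("simplices per dimension:", dict(sorted(counts.items())))
print("Euler characteristic:", chi)
```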

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.

Source: https://medicalxpress.com/news/2017-06-blue-brain-team-multi-dimensional-universe.html

How to predict the side effects of millions of drug combinations

Doctors have no idea, but Stanford University computer scientists have figured it out, using artificial intelligence

July 11, 2018

An example graph of polypharmacy side effects derived from genomic and patient population data, protein–protein interactions, drug–protein targets, and drug–drug interactions encoded by 964 different polypharmacy side effects. The graph representation is used to develop Decagon. (credit: Marinka Zitnik et al./Bioinformatics)

Millions of people take five or more medications a day, but doctors have no idea what side effects might arise from adding another drug.*

Now, Stanford University computer scientists have developed a deep-learning system (a kind of AI modeled after the brain) called Decagon** that could help doctors make better decisions about which drugs to prescribe. It could also help researchers find better combinations of drugs to treat complex diseases.

The problem is that with so many drugs currently on the U.S. pharmaceutical market, “it’s practically impossible to test a new drug in combination with all other drugs, because just for one drug, that would be five thousand new experiments,” said Marinka Zitnik, a postdoctoral fellow in computer science and lead author of a paper presented July 10 at the 2018 meeting of the International Society for Computational Biology.

With some new drug combinations (“polypharmacy”), she said, “truly we don’t know what will happen.”

How proteins interact and how different drugs affect these proteins

So Zitnik and associates created a network describing how the more than 19,000 proteins in our bodies interact with each other and how different drugs affect these proteins. Using more than 4 million known associations between drugs and side effects, the team then designed a method to identify patterns in how side effects arise, based on how drugs target different proteins, and also to infer patterns about drug-interaction side effects.***

Based on that method, the system could predict the consequences of taking two drugs together.
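Decagon’s published model is a graph convolutional neural network trained on the multimodal graph described above. As a loose, heavily simplified illustration of the scoring idea only (rating a candidate drug–side effect–drug triple from per-drug embeddings and a per-side-effect relation matrix), consider the hypothetical numpy sketch below; all names, shapes, and the random embeddings are illustrative stand-ins, not the trained model:

```python
# Hypothetical, heavily simplified sketch of the decoder idea:
# score a candidate (drug_i, side_effect_r, drug_j) triple from
# per-drug embeddings and a per-side-effect relation matrix.
# The real model *learns* these from a graph of protein-protein,
# drug-protein and drug-drug edges via a graph convolutional encoder.
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_side_effects, dim = 100, 964, 16

drug_emb = rng.normal(size=(n_drugs, dim))         # encoder output (stand-in)
rel = rng.normal(size=(n_side_effects, dim, dim))  # one matrix per side effect

def score(i, r, j):
    """Estimated probability that drugs i and j together cause side effect r."""
    logit = drug_emb[i] @ rel[r] @ drug_emb[j]
    return 1.0 / (1.0 + np.exp(-logit))

print(f"P(side effect 5 | drugs 3 + 7) ~ {score(3, 5, 7):.3f}")
```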

To evaluate the system, the group looked to see if its predictions came true. In many cases, they did. For example, there was no indication in the original data that the combination of atorvastatin (marketed under the trade name Lipitor, among others), a cholesterol drug, and amlodipine (Norvasc), a blood-pressure medication, could lead to muscle inflammation. Yet Decagon predicted that it would, and it was right.

In the future, the team members hope to extend their results to interactions involving more than two drugs. They also hope to create a more user-friendly tool that gives doctors guidance on whether it’s a good idea to prescribe a particular drug to a particular patient, and that helps researchers develop drug regimens for complex diseases with fewer side effects.

Ref.: Bioinformatics (open access). Source: Stanford University.

* More than 23 percent of Americans took three or more prescription drugs in the past 30 days, according to a 2017 CDC estimate. Furthermore, 39 percent of those over age 65 take five or more, a number that has increased three-fold over the last several decades. There are about 1,000 known side effects and 5,000 drugs on the market, making for nearly 125 billion possible side effects across all possible pairs of drugs. Most of these pairs have never been prescribed together, let alone systematically studied, according to the Stanford researchers.

** In geometry, a decagon is a ten-sided polygon.

*** The research was supported by the National Science Foundation, the National Institutes of Health, the Defense Advanced Research Projects Agency, the Stanford Data Science Initiative, and the Chan Zuckerberg Biohub.

Source: KurzweilAI.net, Stanford.edu

Spotting Image Manipulation with AI

Twenty-eight years ago, Adobe Photoshop brought the analog photograph into the digital world, reshaping the human relationship with the image. Today, people edit images to achieve new heights of artistic expression, to preserve our history, and even to find missing children. On the flip side, some people use these powerful tools to “doctor” photos for deceptive purposes. Like any technology, it’s an extension of human intent, and can be used for both the best and the worst of our imaginations.

In 1710 Jonathan Swift wrote, “Falsehood flies, and the truth comes limping after it.” Even today, as a society, we’ve struggled to understand the way perception and belief are shaped between authenticity, truth, falsehood and media. Add newer social media technologies to the mix, and those falsehoods fly faster than ever.

That’s why, in addition to creating new capabilities and features for the creation of digital media, Adobe is exploring the boundaries of what’s possible using new technologies, such as artificial intelligence, to increase trust and authenticity in digital media.

AI: a new solution for an old problem

Vlad Morariu, senior research scientist at Adobe, has been working on technologies related to computer vision for many years. In 2016, he started applying his talents to the challenge of detecting image manipulation as part of the DARPA Media Forensics program.

Vlad explains that a variety of tools already exist to help document and trace the digital manipulation of photos. “File formats contain metadata that can be used to store information about how the image was captured and manipulated. Forensic tools can be used to detect manipulation by examining the noise distribution, strong edges, lighting and other pixel values of a photo. Watermarks can be used to establish original creation of an image.”

Of course, none of these tools provides a definitive picture of a photo’s authenticity, nor are they practical for every situation. Some are easily defeated; others require deep expertise, or lengthy execution and analysis, to use properly.
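For a sense of what such manual forensic techniques involve, here is a hypothetical sketch of one classic approach, error level analysis (not named in the article): recompressing a JPEG and differencing it against the original can highlight regions whose compression history differs, which can indicate splicing. The file path is a placeholder, and the output map still demands expert interpretation, illustrating exactly the kind of labor-intensive workflow described above:

```python
# Hypothetical error-level-analysis (ELA) sketch: one classic manual
# forensic technique. Resaving a JPEG and differencing it against the
# original highlights regions whose compression history differs,
# which *can* indicate splicing. "photo.jpg" is a placeholder path;
# the resulting map requires expert interpretation.
from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")
original.save("resaved.jpg", quality=90)     # recompress once
resaved = Image.open("resaved.jpg")

# Per-pixel difference; brighter areas responded differently to
# recompression and deserve a closer look.
ela = ImageChops.difference(original, resaved)
ela.save("ela_map.png")
print("ELA extrema per channel:", ela.getextrema())
```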

Vlad suspected that technologies such as artificial intelligence and machine learning could be used to detect, more easily, reliably and quickly, whether any part of a digital image had been manipulated and, if so, which aspects were modified.

Building on research he started fourteen years ago and continued as a Ph.D. student in computer science at the University of Maryland, Vlad describes some of these new techniques in a recent paper — Learning Rich Features for Image Manipulation Detection.

“We focused on three common tampering techniques—splicing, where parts of two different images are combined; copy-move, where objects in a photograph are moved or cloned from one place to another; and removal, where an object is removed from a photograph and the gap filled in,” he notes.

Every time an image is manipulated, it leaves behind clues that can be studied to understand how it was altered. “Each of these techniques tends to leave certain artifacts, such as strong contrast edges, deliberately smoothed areas, or different noise patterns,” he says. Although these artifacts are not usually visible to the human eye, they are much more easily detectable through close analysis at the pixel level, or by applying filters that help highlight these changes.

Now, what used to take a forensic expert hours can be done in seconds. The project shows that AI can successfully identify which images have been manipulated, determine the type of manipulation used, and highlight the specific area of the photograph that was altered.

“Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network to recognize image manipulation, fusing two distinct techniques together in one network to benefit from their complementary detection capabilities,” Vlad explains.

The first technique uses an RGB stream (changes to the red, green and blue color values of pixels) to detect tampering. The second uses a noise stream filter. Image noise is random variation of color and brightness in an image, produced by the sensor of a digital camera or as a byproduct of software manipulation; it looks a little like static. Many photographs and cameras have unique noise patterns, so it is possible to detect noise inconsistencies between authentic and tampered regions, especially if imagery has been combined from two or more photos.
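The paper’s noise stream is built from SRM (steganalysis rich model) filter kernels; as a simplified stand-in, even a single fixed high-pass kernel suppresses image content and exposes the noise residual. A minimal sketch of that idea (the file path is a placeholder, and the hand-picked kernel is only an approximation of the learned filters):

```python
# Minimal sketch of the "noise stream" idea: a high-pass filter
# suppresses image content and leaves the noise residual, where
# spliced regions often stand out. The published model uses SRM
# filter kernels; this single hand-picked kernel is only a stand-in.
# "photo.jpg" is a placeholder path.
import numpy as np
from PIL import Image
from scipy.signal import convolve2d

gray = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)

# A simple 3x3 high-pass (Laplacian-like) kernel: output is ~0 in
# smooth regions and large wherever local pixel statistics change.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float) / 8.0

residual = convolve2d(gray, kernel, mode="same", boundary="symm")
out = np.uint8(np.clip(np.abs(residual) * 4, 0, 255))
Image.fromarray(out).save("noise_map.png")
```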

An example of authentic images, manipulated images, the RGB and noise streams used to detect manipulation, and the results of AI analysis. Source: the NC2016 dataset

While these techniques are still being perfected, and do not necessarily settle the “absolute truth” of a photo, they provide more options for managing the impact of digital manipulation, and they can potentially answer questions of authenticity more effectively.

Vlad notes that future work might explore ways to extend the algorithm to include other artifacts of manipulation, such as differences in illumination throughout a photograph or compression introduced by repeated saving of digital files.

The human factor

Technology alone is not enough to solve an age-old challenge that increasingly confronts us in today’s news environment: What media, if any, can we treat as authentic versions of the truth?

Jon Brandt, senior principal scientist and director for Adobe Research, says that answering that question often comes down to trust and reputation rather than technology. “The Associated Press and other news organizations publish guidelines for the appropriate digital editing of photographs for news media,” he explains.

In other words, when you see a photo on a news site or newspaper, at some level you must trust the chain of custody for that photo, and rely on the ethics of the publisher to refrain from improper manipulation of the image.

The same will be true of newer techniques that are democratizing the ability to manipulate voice and video, he adds. “I think one of the important roles Adobe can play is to develop technology that helps them monitor and verify authenticity as part of their process.

“It’s important to develop technology responsibly, but ultimately these technologies are created in service to society.  Consequently, we all share the responsibility to address potential negative impacts of new technologies through changes to our social institutions and conventions.”


Source: https://theblog.adobe.com/spotting-image-manipulation-ai/

Microsoft researchers build a bot that draws what you tell it to

If you’re handed a note that asks you to draw a picture of a bird with a yellow body, black wings and a short beak, chances are you’ll start with a rough outline of a bird, then glance back at the note, see the yellow part and reach for a yellow pen to fill in the body, read the note again and reach for a black pen to draw the wings and, after a final check, shorten the beak and define it with a reflective glint. Then, for good measure, you might sketch a tree branch where the bird rests.

Now, there’s a bot that can do that, too.

The new artificial intelligence technology under development in Microsoft’s research labs is programmed to pay close attention to individual words when generating images from caption-like text descriptions. This deliberate focus produced a nearly three-fold boost in image quality compared to the previous state-of-the-art technique for text-to-image generation, according to results on an industry standard test reported in a research paper posted on arXiv.org.
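While the published architecture is more elaborate, the core word-level attention idea can be sketched in a few lines: each image region computes soft weights over the caption’s words, so that “yellow” can drive one part of the image and “black” another. The toy Python example below uses random embeddings purely for illustration; in a real system both sets of vectors are learned:

```python
# Hypothetical sketch of word-level attention: each image region
# attends over caption words, so different words can drive different
# parts of the generated image. Real text-to-image models *learn*
# these embeddings; here they are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
words = ["bird", "yellow", "body", "black", "wings", "short", "beak"]
d = 8
word_emb = rng.normal(size=(len(words), d))    # one vector per caption word
region_emb = rng.normal(size=(4, d))           # features for 4 image regions

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Attention weights: how strongly each region "reads" each word.
attn = softmax(region_emb @ word_emb.T)        # shape (regions, words)
for r, row in enumerate(attn):
    print(f"region {r} attends most to '{words[row.argmax()]}'")
```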

The technology, which the researchers simply call the drawing bot, can generate images of everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus. Each image contains details that are absent from the text descriptions, indicating that this artificial intelligence contains an artificial imagination.

Continue reading: https://blogs.microsoft.com/ai/drawing-ai/