Google Launches an AI Platform That Looks Like a Raspberry Pi

Google has promised us new hardware products for machine learning at the edge, and now they’re finally out. The thing you’re going to take away from this is that Google built a Raspberry Pi with machine learning. This is Google’s Coral, a dev board built around the Edge TPU, a custom-made ASIC designed to run machine learning algorithms ‘at the edge’. And the board itself looks remarkably like a Raspberry Pi.

This new hardware was launched ahead of the TensorFlow Dev Summit, an event revolving around machine learning and ‘AI’ in embedded applications, specifically power- and computationally-limited environments. This is ‘the edge’ in marketing speak, and already we’ve seen a few products designed from the ground up to run ML algorithms and inference in embedded applications. There are RISC-V microcontrollers with machine learning accelerators available now, and Nvidia has been working on this for years. Now Google is throwing their hat into the ring with a custom-designed ASIC that accelerates TensorFlow. It just so happens that the board looks like a Raspberry Pi.

WHAT’S ON THE BOARD

On board the Coral dev board is an NXP i.MX 8M SoC with a quad-core Cortex-A53 and a Cortex-M4F. The GPU is listed as ‘Integrated GC7000 Lite Graphics’. RAM is 1 GB of LPDDR4, flash storage is 8 GB of eMMC, and WiFi and Bluetooth 4.1 are included. Connectivity is provided through USB: a Type-C OTG port, a Type-C power connection, a Type-A 3.0 host port, and a micro-B serial console. There are also Gigabit Ethernet, a 3.5 mm audio jack, a microphone, full-size HDMI, a 4-lane MIPI-DSI display interface, and 4-lane MIPI-CSI2 camera support. The GPIO pins are exactly, and I mean exactly, like the Raspberry Pi GPIO pins: they provide the same signals in the same places, although due to the different SoCs you will need to change a line or two of code defining the pin numbers.
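Porting a Pi GPIO script to a pin-compatible board with a different SoC mostly means remapping pin identifiers. Here is a minimal sketch of that idea; all the pin numbers below are illustrative placeholders, not the Coral’s actual mapping:

```python
# Hypothetical pin-number translation between a Raspberry Pi script and a
# pin-compatible board whose SoC driver numbers the pins differently.
# Every number here is illustrative, not taken from real hardware docs.

# Physical header position -> BCM number used in existing Pi code (subset)
PI_BCM_BY_HEADER = {7: 4, 11: 17, 13: 27, 15: 22}

# Physical header position -> made-up GPIO number on the new SoC
NEW_SOC_GPIO_BY_HEADER = {7: 73, 11: 138, 13: 140, 15: 141}

def translate_pin(bcm_pin):
    """Map a BCM pin number from Pi code to the equivalent pin on the new SoC.

    Both tables are keyed by physical header position, which is the one
    thing the two boards actually share.
    """
    for header_pos, bcm in PI_BCM_BY_HEADER.items():
        if bcm == bcm_pin:
            return NEW_SOC_GPIO_BY_HEADER[header_pos]
    raise KeyError(f"no mapping for BCM pin {bcm_pin}")

# BCM 17 sits at header position 11 on both boards, so only its number changes.
print(translate_pin(17))
```

In practice this is exactly the “line or two of code” the article mentions: the physical wiring stays put, and only the lookup table changes.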

You might be asking why Google would build a Raspberry Pi clone. The answer comes in the form of the machine learning accelerator chip planted on the board. Machine learning and AI chips were popular in the 80s, and everything old is new again, I guess. The Google Edge TPU coprocessor supports TensorFlow Lite, or ‘machine learning at the edge’. The point of TensorFlow Lite isn’t to train a model, but to run an existing one: inference. It’ll do facial recognition.
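The train/inference split is the whole point of edge hardware: the expensive part (training) happens elsewhere, and the device only evaluates a frozen model. A toy illustration in plain Python, where the single-neuron “model” and its weights are entirely made up:

```python
import math

# Inference only: the weights arrive pre-trained and the device never
# updates them. This toy "model" is one logistic-regression neuron with
# made-up weights standing in for a real TensorFlow Lite model.
WEIGHTS = [0.8, -0.5, 0.3]   # produced by training somewhere else
BIAS = 0.1

def infer(features):
    """Run the frozen model forward: no gradients, no weight updates."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

score = infer([1.0, 2.0, 0.5])
print(round(score, 3))
```

A real deployment would hand this job to the TFLite interpreter with the heavy math offloaded to the Edge TPU, but the shape of the workload is the same: fixed weights in, a prediction out.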

The Coral dev board is available for $149.00, and you can order it on Mouser. As of this writing, there are 1320 units on order at Mouser, with a delivery date of March 6th (search for Mouser part number 212-193575000077).

source: https://hackaday.com/2019/03/05/google-launches-ai-platform-that-looks-remarkably-like-a-raspberry-pi/ 

SpiNNaker, the Million-Core Supercomputer, Finally Switched On

After 12 years in the making, the “brain computer” designed at the University of Manchester has finally been switched on. What does this computer do? How is it made? And who is Steve Furber?

AI systems have been rapidly developed in the past decade with the use of deep learning, neural networks, and large computers to try and simulate neurons. But AI is not the only area of interest when using such techniques; scientists and engineers alike are also keen to try and simulate the human brain to better understand how it works and why.

Simulating the brain is no trivial task. The complexity of the human brain is difficult to replicate, which is part of why the SpiNNaker computer is important.

The Challenges of Simulating a Brain

One of the first fundamental differences between the brain and computers is how their “smallest units” function. A neuron can have many connections and react to impulses in a range of different ways. A transistor, by comparison, is a switch: while it can be connected to other transistors, it can only be in one of two states.

Neurons are also able to forge links between other neurons and react to stimuli differently (which is one definition of “learning”), whereas transistor connections are fixed.

Because of these differences, scientists have to “simulate” neurons and connections in software rather than in hardware, which severely impacts the number of neurons and links that can be simulated simultaneously.

What about simulating neurons in hardware?

Neurons and transistors share little in common; a better comparison is with simple microcontrollers and FPGAs. Microcontrollers are akin to neurons in that they can process outside signals quickly while remaining architecturally simple, while FPGAs provide the ability to break and create connections between those microcontrollers.

Could hardware simulation be the key? One team of researchers believes so and has spent the last 12 years on the idea.

The SpiNNaker

A research team at the University of Manchester has spent the last 12 years building a computer that simulates neurons and their connections using many simple cores interconnected in a massively parallel system. That computer, called SpiNNaker, has finally been turned on.

The million-core computer is designed to simulate up to a billion neurons in real-time to allow scientists to study neural networks and pathways in a realistic manner by using hardware as opposed to software.

Unlike traditional methods for simulating neurons, SpiNNaker has individual processors that each simulate up to 1000 neurons that transmit and receive small packets of data to and from many other neurons simultaneously.
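Those “small packets” map naturally onto event-driven spiking neurons: a core only sends traffic when one of its neurons fires. A minimal leaky integrate-and-fire sketch of a single such neuron (the leak, threshold, and input values are illustrative, not SpiNNaker’s actual parameters):

```python
def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    Returns (new_potential, spiked). A spike is the small packet the
    neuron would transmit to its downstream peers; between spikes the
    neuron generates no traffic at all.
    """
    v = v * leak + input_current   # decay toward rest, then integrate input
    if v >= threshold:
        return 0.0, True           # fire and reset the membrane potential
    return v, False

# Drive one neuron with a constant input and record its spike train.
v, spikes = 0.0, []
for t in range(10):
    v, fired = lif_step(v, 0.4)
    spikes.append(fired)
print(spikes)
```

With these made-up constants the potential charges over three steps, fires, and resets, producing a regular spike train; on SpiNNaker each core would run on the order of a thousand such neurons and route their spikes through the hardware fabric.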

Hexagonal topology between processors and a 48-processor SpiNNaker computer (image courtesy of the University of Manchester)

The Spiking Neural Network Architecture (SpiNNaker) system consists of 10 19-inch racks, with each rack containing 100,000 ARM cores. This core density is achieved with a custom IC that contains up to 18 cores. Each board in a rack carries 48 chips, which results in each board containing 864 cores.

Unlike conventional machines, the cores are arranged in a hexagonal topology with data transmission handled entirely in hardware, and it is this topology that allows the system to simulate one billion neurons in real time. The system’s ARM9 processors contain a total of 7 TB of RAM across 57K nodes; each node has 128 MB of off-die SDRAM, and each core has 32 KB of ROM and 64 KB of data tightly-coupled memory (DTCM) …
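The headline figures are internally consistent, which is easy to check with some quick arithmetic on the numbers quoted above:

```python
# Sanity-check the quoted SpiNNaker figures against each other.
racks = 10
cores_per_rack = 100_000
cores_per_chip = 18
chips_per_board = 48

total_cores = racks * cores_per_rack                 # 10 racks -> 1,000,000 cores
cores_per_board = chips_per_board * cores_per_chip   # 48 chips x 18 cores = 864
neurons = total_cores * 1000                         # up to 1000 neurons per core

# 57K nodes with 128 MB of SDRAM each lands close to the quoted 7 TB total.
sdram_tb = 57_000 * 128 / (1024 * 1024)

print(total_cores, cores_per_board, neurons, round(sdram_tb, 2))
```

A million cores at a thousand neurons apiece gives exactly the billion-neuron target, and 57K nodes of 128 MB each comes out just under 7 TB.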

https://www.allaboutcircuits.com/news/simulate-human-brain-spinnaker-million-core-computer-switched-on/

https://www.research.manchester.ac.uk/portal/files/60826558/FULL_TEXT.PDF

3D-printed Deep Learning neural network uses light instead of electrons

It’s a novel idea, using light diffracted through numerous plates instead of electrons. And to some, it might seem a little like replacing a computer with an abacus, but researchers at UCLA have high hopes for their quirky, shiny, speed-of-light artificial neural network.

Coined by Rina Dechter in 1986, Deep Learning is one of the fastest-growing methodologies in the machine learning community and is often used in face, speech and audio recognition, language processing, social network filtering and medical image analysis as well as addressing more specific tasks, such as solving inverse imaging problems.

Traditionally, deep learning systems are implemented on a computer to learn data representation and abstraction and to perform tasks on par with, or better than, human performance. However, the team led by Dr. Aydogan Ozcan, Chancellor’s Professor of electrical and computer engineering at UCLA, didn’t use a traditional computer setup, instead forgoing all those energy-hungry electrons in favor of light waves. The result is its all-optical Diffractive Deep Neural Network (D2NN) architecture.

The setup uses 3D-printed translucent sheets, each with thousands of raised pixels that deflect light passing through the panel in order to perform set tasks. Notably, these tasks are performed without any power source other than the input light beam.
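Conceptually, each printed plate multiplies the incoming light field by a fixed per-pixel phase pattern, and free-space diffraction between plates mixes neighboring pixels; chaining plates composes those passive linear operations. A heavily simplified numerical sketch of that idea, with a tiny 1D field, made-up phase values, and diffraction reduced to a crude fixed mixing step:

```python
import cmath

# Toy all-optical "network": each layer applies fixed per-pixel phase shifts
# (the raised pixels of a printed plate), then a fixed linear mixing step
# stands in for free-space diffraction between plates. Everything is passive;
# the only input energy is the light field itself.

def phase_layer(field, phases):
    """Per-pixel phase modulation by one printed plate."""
    return [a * cmath.exp(1j * p) for a, p in zip(field, phases)]

def propagate(field):
    """Crude stand-in for diffraction: each pixel leaks into its neighbors."""
    n = len(field)
    return [sum(field[j] * 0.5 ** abs(i - j) for j in range(n))
            for i in range(n)]

# Two plates with made-up phase patterns.
plates = [[0.0, 1.0, 2.0, 3.0], [3.0, 2.0, 1.0, 0.0]]

field = [1 + 0j, 0j, 0j, 0j]          # input beam hits the first pixel
for phases in plates:
    field = propagate(phase_layer(field, phases))

# A detector at the output sees intensity, the squared magnitude of the field.
intensities = [abs(a) ** 2 for a in field]
print([round(x, 3) for x in intensities])
```

The real D2NN differs in essentially every detail (2D fields, proper diffraction models, and plates whose pixel heights are learned by training in simulation before printing), but the structure is the same: fixed passive layers, computation carried entirely by the propagating light.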

The UCLA team’s all-optical deep neural network – which looks like the guts of a solid gold car battery – literally operates at the speed of light, and will find applications in image analysis, feature detection and object classification. Researchers on the team also envisage possibilities for D2NN architectures performing specialized tasks in cameras. Perhaps your next DSLR might identify your subjects on the fly and post the tagged image to your Facebook timeline.


“Using passive components that are fabricated layer by layer, and connecting these layers to each other via light diffraction created a unique all-optical platform to perform machine learning tasks at the speed of light,” said Dr. Ozcan.

For now, though, this is a proof of concept, but it shines a light on some unique opportunities for the machine learning industry.

The research has been published in the journal Science.

[Sources]
https://newatlas.com/diffractive-deep-neural-network-uses-light-to-learn/55718/
http://innovate.ee.ucla.edu/
https://arxiv.org/abs/1804.08711

Volta Tensor Core GPU Achieves New AI Performance Milestones

Artificial intelligence powered by deep learning now solves challenges once thought impossible, such as computers understanding and conversing in natural speech and autonomous driving. Inspired by the effectiveness of deep learning to solve a great many challenges, the exponentially growing complexity of algorithms has resulted in a voracious appetite for faster computing. NVIDIA designed the Volta Tensor Core architecture to meet these needs.

NVIDIA and many other companies and researchers have been developing both computing hardware and software platforms to address this need. For instance, Google created their TPU (tensor processing unit) accelerators which have generated good performance on the limited number of neural networks that can run on TPUs.

In this blog, we share some of our recent advancements which deliver dramatic performance gains on GPUs to the AI community. We have achieved record-setting ResNet-50 performance for a single chip and single server with these improvements. Recently, fast.ai also announced their record-setting performance on a single cloud instance.

Our results demonstrate that:

  • A single V100 Tensor Core GPU achieves 1,075 images/second when training ResNet-50, a 4x performance increase compared to the previous generation Pascal GPU.
  • A single DGX-1 server powered by eight Tensor Core V100s achieves 7,850 images/second, almost 2x the 4,200 images/second from a year ago on the same system.
  • A single AWS P3 cloud instance powered by eight Tensor Core V100s can train ResNet-50 in less than three hours, 3x faster than a TPU instance.
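The claimed speedups follow directly from the raw throughput figures, which is easy to verify:

```python
# Derive the quoted speedups from the raw ResNet-50 throughput figures.
v100_single = 1075                       # images/sec, one V100
dgx1_now, dgx1_last_year = 7850, 4200    # images/sec, 8x V100 DGX-1

print(round(dgx1_now / dgx1_last_year, 2))   # year-over-year software gain
print(round(dgx1_now / v100_single, 1))      # scaling across 8 GPUs
```

The first ratio lands at about 1.87, matching the “almost 2x” claim, and the second shows the 8-GPU server delivering roughly 7.3x the single-GPU rate, i.e. scaling efficiency above 90%.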

 

Figure 1. Volta Tensor Core GPU Achieves Speed Records In ResNet-50 (AWS P3.16xlarge instance consists of 8x Tesla V100 GPUs).

Massively parallel processing performance across a diversity of algorithms makes NVIDIA GPUs naturally great for deep learning. We didn’t stop there. Tapping our years of experience and close collaboration with AI researchers all over the world, we created a new architecture optimized for the many models of deep learning: the NVIDIA Tensor Core GPU.

Combined with the high-speed NVLink interconnect and deep optimizations within all current frameworks, we achieve state-of-the-art performance. NVIDIA CUDA GPU programmability ensures performance for the large diversity of modern networks and provides a platform to bring up emerging frameworks and tomorrow’s deep network inventions …