Orange Pi AI Stick Lite packs 5.6-TOPS Gyrfalcon NPU

Shenzhen Xunlong Software’s $19.90 “Orange Pi AI Stick Lite” USB stick features a GTI Lightspeeur SPR2801S NPU that delivers up to 5.6 TOPS at 100MHz. It’s supported with free, Linux-based AI model transformation tools.

Shenzhen Xunlong Software’s Orange Pi project has released an AI accelerator with a USB stick form factor equipped with Gyrfalcon Technology, Inc.’s Lightspeeur SPR2801S CNN accelerator chip. The Orange Pi AI Stick Lite is designed to accelerate AI inferencing using the Caffe and PyTorch frameworks, with TensorFlow support coming soon. It’s optimized for use with Allwinner-based Orange Pi SBCs, but the SDK appears to be adaptable to any Linux-driven x86 or Arm-based computer with a USB port.


 

[Image: Orange Pi AI Stick Lite]


The Orange Pi AI Stick Lite is a relaunch of the almost identical Orange Pi AI Stick 2801 that was announced in Nov. 2018, according to a CNXSoft post. The previous model cost $69 and required purchasing GTI’s PLAI (People Learning Artificial Intelligence) model transformation tools for $149 to do anything more than run a demo. The new device is not only much cheaper at $19.90, but the PLAI training tools are now free. There’s no download button, however; you must contact the company to get the download link.

GTI’s Lightspeeur SPR2801S, rated at up to 9.3 TOPS per Watt, is a lower-end sibling to the up-to-24-TOPS/W Lightspeeur 2803S NPU, which is built into SolidRun’s i.MX 8M Mini SOM. Peak performance of the 2801S is 5.6 TOPS at 100MHz, and it can also run in an “ultra low power” mode of 2.8 TOPS at 300mW. GTI also offers a mid-range Lightspeeur 2802 model rated at up to 9.9 TOPS/W.
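
As a sanity check, the quoted figures are self-consistent: 2.8 TOPS at 300mW works out to about 9.3 TOPS per Watt, and the 5.6-TOPS peak implies a draw of roughly 0.6W at the rated efficiency. A back-of-the-envelope sketch (the function name is ours, not GTI's):

```python
def implied_watts(tops: float, tops_per_watt: float) -> float:
    """Power draw implied by a throughput figure and an efficiency figure."""
    return tops / tops_per_watt

# Peak mode: 5.6 TOPS at the rated 9.3 TOPS/W implies roughly 0.6 W.
peak_watts = implied_watts(5.6, 9.3)

# Ultra-low-power mode: 2.8 TOPS at 0.3 W comes out to ~9.3 TOPS/W,
# matching the headline efficiency figure.
ulp_efficiency = 2.8 / 0.3
```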

 

The 28nm-fabricated, 7 x 7mm Lightspeeur SPR2801S has an SDIO 3.0 interface and eMMC 4.5 storage, offering read bandwidth of 68MB/s and write bandwidth of 84.69MB/s. The NPU includes a 2-dimensional Matrix Processing Engine (MPE) featuring APiM (AI Processing in Memory) technology that uses magnetoresistive random access memory (MRAM) …

source: http://linuxgizmos.com/orange-pi-ai-stick-lite-taps-5-6-tops-gryfalcon-gpu/

Deep neural network chip from Intel®

Prototype and deploy deep neural network (DNN) applications smarter and more efficiently with a tiny, fanless, deep learning development kit designed to enable a new generation of intelligent devices.

The new, improved Intel® Neural Compute Stick 2 (Intel® NCS 2) features Intel’s latest high-performance vision processing unit: the Intel® Movidius™ Myriad™ X VPU. With more compute cores and a dedicated hardware accelerator for deep neural network inference, the Intel® NCS 2 delivers up to eight times the performance boost compared to the previous generation Intel® Movidius™ Neural Compute Stick (NCS).

Technical Specifications

  • Processor: Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU)
  • Supported frameworks: TensorFlow* and Caffe*
  • Connectivity: USB 3.0 Type-A
  • Dimensions: 2.85 in. x 1.06 in. x 0.55 in. (72.5 mm x 27 mm x 14 mm)
  • Operating temperature: 0°C to 40°C
  • Compatible operating systems: Ubuntu* 16.04.3 LTS (64 bit), CentOS* 7.4 (64 bit), and Windows® 10 (64 bit)

source: https://software.intel.com/en-us/neural-compute-stick

Google AI platform like a Raspberry Pi

Google has promised us new hardware products for machine learning at the edge, and now it’s finally out. The takeaway is that Google built a Raspberry Pi with machine learning. This is Google’s Coral, an Edge TPU platform built around a custom-made ASIC that is designed to run machine learning algorithms ‘at the edge’, on a board that looks like a Raspberry Pi.

This new hardware was launched ahead of the TensorFlow Dev Summit, which revolves around machine learning and ‘AI’ in embedded applications, specifically power- and computationally-limited environments. This is ‘the edge’ in marketing speak, and already we’ve seen a few products designed from the ground up to run ML algorithms and inference in embedded applications. There are RISC-V microcontrollers with machine learning accelerators available now, and Nvidia has been working on this for years. Now Google is throwing their hat into the ring with a custom-designed ASIC that accelerates TensorFlow. It just so happens that the board looks like a Raspberry Pi.

WHAT’S ON THE BOARD

On board the Coral dev board is an NXP i.MX 8M SoC with a quad-core Cortex-A53 and a Cortex-M4F. The GPU is listed as ‘Integrated GC7000 Lite Graphics’. RAM is 1 GB of LPDDR4, Flash is provided with 8GB of eMMC, and WiFi and Bluetooth 4.1 are included. Connectivity is provided through USB, with Type-C OTG, a Type-C power connection, a Type-A 3.0 host, and a micro-B serial console. Also included are Gigabit Ethernet, a 3.5mm audio jack, a microphone, full-size HDMI, 4-lane MIPI-DSI, and 4-lane MIPI-CSI2 camera support. The GPIO pins are exactly — and I mean exactly — like the Raspberry Pi GPIO pins. The GPIO pins provide the same signals in the same places, although due to the different SoCs, you will need to change a line or two of code defining the pin numbers.

You might be asking why Google would build a Raspberry Pi clone. That answer comes in the form of a machine learning accelerator chip implanted on the board. Machine learning and AI chips were popular in the 80s and everything old is new again, I guess. The Google Edge TPU coprocessor has support for TensorFlow Lite, or ‘machine learning at the edge’. The point of TensorFlow Lite isn’t to train a system, but to run an existing model. It’ll do facial recognition.
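
The training/inference split is worth spelling out: a TensorFlow Lite model arrives on the device with its weights already frozen, and the Edge TPU only ever runs the forward pass. A toy illustration in plain Python (the model and its numbers are invented for illustration; this is not the Edge TPU API):

```python
import math

# Weights fixed ("frozen") at deployment time, as in a converted TFLite model.
WEIGHTS = [0.8, -0.4]
BIAS = 0.1

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def infer(features):
    """Forward pass only: no gradients, no training loop."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return sigmoid(z)

score = infer([1.0, 2.0])  # a single frozen logistic unit
```

The accelerator's job is simply to make that fixed forward pass fast and low-power; learning the weights happened offline, long before deployment.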

The Coral dev board is available for $149.00, and you can order it on Mouser. As of this writing, there are 1320 units on order at Mouser, with a delivery date of March 6th (search for Mouser part number 212-193575000077).

source: https://hackaday.com/2019/03/05/google-launches-ai-platform-that-looks-remarkably-like-a-raspberry-pi/ 

SpiNNaker, the Million-Core Supercomputer, Finally Switched On

After 12 years in the making, the “brain computer” designed at the University of Manchester is finally switched on. What does this computer do? How is it made? And who is Steve Furber?

AI systems have been rapidly developed in the past decade with the use of deep learning, neural networks, and large computers to try and simulate neurons. But AI is not the only area of interest when using such techniques; scientists and engineers alike are also keen to try and simulate the human brain to better understand how it works and why.

Simulating the brain is no trivial task. The complexity of the human brain is difficult to replicate, which is part of why the SpiNNaker computer is important.

The Challenges of Simulating a Brain

One of the first fundamental differences between the brain and computers is how their “smallest units” function. Brain neurons can have multiple connections and react to impulses in a range of different ways. Computer transistors, by comparison, are switches that, while they can be connected to other transistors, can only be in one of two states.

Neurons are also able to forge links between other neurons and react to stimuli differently (which is one definition of “learning”), whereas transistor connections are fixed.

Because of these differences, scientists have to “simulate” neurons and connections in software rather than in hardware, which severely impacts the number of neurons and links that can be simulated simultaneously.
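
To make the software cost concrete, here is a minimal leaky integrate-and-fire neuron of the kind brain simulators step in software, once per neuron per timestep (the parameters are illustrative, not taken from any particular simulator):

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Step one leaky integrate-and-fire neuron over a list of input
    currents; return the timesteps at which it spiked."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current      # leak the membrane, then integrate input
        if v >= threshold:          # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

spikes = simulate_lif([0.4] * 10)   # constant drive -> regular spiking
```

Running such an update loop for billions of neurons, every timestep, is exactly what limits software-only simulation.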

What about simulating neurons in hardware?

Neurons and transistors share little in common; a better comparison is with simple microcontrollers and FPGAs. Microcontrollers are akin to neurons in that they can process outside signals quickly while being comparatively simple in architecture, while FPGAs provide the ability to break and create connections between microcontrollers.

Could hardware simulation be the key? One team of researchers believes so and has spent the last 12 years on the idea.

The SpiNNaker

A research team at the University of Manchester has spent the last 12 years creating a computer that simulates neurons and connections using many simple cores interconnected in a massively parallel system. That computer, called SpiNNaker, was finally turned on.

The million-core computer is designed to simulate up to a billion neurons in real-time to allow scientists to study neural networks and pathways in a realistic manner by using hardware as opposed to software.

Unlike traditional methods for simulating neurons, SpiNNaker has individual processors that each simulate up to 1,000 neurons, transmitting and receiving small packets of data to and from many other neurons simultaneously.
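
In packet terms, a spike can be modeled as a tiny message that carries only the ID of the neuron that fired, fanned out by a routing table to every core that listens to that source. A toy sketch of the idea (the table and names here are invented; real SpiNNaker routing is done in hardware):

```python
from collections import defaultdict

# Invented routing table: which cores subscribe to which source neurons.
routing_table = {
    "n0": ["core1", "core2"],   # neuron n0's spikes fan out to two cores
    "n1": ["core2"],
}

def deliver(spikes):
    """Route a batch of spike packets; return each core's inbox."""
    inbox = defaultdict(list)
    for source_id in spikes:                      # a packet is just an ID
        for core in routing_table.get(source_id, []):
            inbox[core].append(source_id)
    return dict(inbox)

inboxes = deliver(["n0", "n1", "n0"])
```

Because each packet is so small, a core can exchange spikes with thousands of peers without ever moving full neuron state across the machine.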

[Image: Hexagonal topology between processors and a 48-processor SpiNNaker computer - courtesy University of Manchester]

The Spiking Neural Network Architecture system (SpiNNaker) consists of 10 19-inch computer racks, with each rack containing 100,000 ARM cores. This core density is achieved with the use of a custom IC that contains up to 18 cores. Each board in a rack has 48 chips, which results in each board containing 864 processors.

Unlike typical computer systems, the cores are arranged in a hexagonal pattern with data transmission handled entirely in hardware. It is this topology that allows the system to simulate one billion neurons in real-time. The system’s ARM9 processors span 57K nodes and contain a total of 7TB of RAM; each processor has an off-die 128MB SDRAM, and each core has 32KB of ROM and 64KB of data tightly-coupled memory (DTCM) …
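
The quoted counts hang together arithmetically: 48 chips of 18 cores give the 864 processors per board, ten racks of 100,000 cores give the million-core total, and 57K nodes with 128MB each lands close to the 7TB RAM figure:

```python
# Sanity checks on the article's figures.
cores_per_board = 48 * 18            # chips per board x cores per chip
total_cores = 10 * 100_000           # racks x cores per rack

# 57K nodes x 128 MB of off-die SDRAM each, in terabytes (1 TB = 1024^2 MB).
ram_tb = 57_000 * 128 / 1024**2
```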

source: https://www.allaboutcircuits.com/news/simulate-human-brain-spinnaker-million-core-computer-switched-on/

https://www.research.manchester.ac.uk/portal/files/60826558/FULL_TEXT.PDF