Vision processing unit

A vision processing unit (VPU) is (as of 2023) an emerging class of microprocessor; it is a specific type of AI accelerator, designed to accelerate machine vision tasks.[1][2]

Overview

Vision processing units are distinct from video processing units (which are specialised for video encoding and decoding) in their suitability for running machine vision algorithms such as convolutional neural networks (CNNs), the scale-invariant feature transform (SIFT), and similar algorithms.

They may include direct interfaces that take data from cameras (bypassing any off-chip buffers), and place a greater emphasis on on-chip dataflow between many parallel execution units with scratchpad memory, like a manycore DSP. Like video processing units, however, they may favour low-precision fixed-point arithmetic for image processing.
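The low-precision fixed-point style of arithmetic described above can be sketched in software. The following is an illustrative example only, not any vendor's API: a 3×3 box blur over 8-bit pixels in which the 1/9 weight is quantised to Q8.8 fixed point (round(256/9) = 28) and the final division is replaced by a right shift, as a VPU's integer datapath might do.

```python
def box_blur_fixed(img, w, h):
    """3x3 box blur over a flat, row-major list of 8-bit pixels.

    Illustrative fixed-point sketch: the 1/9 kernel weight is held in
    Q8.8 format, so dividing by 256 becomes a cheap right shift.
    """
    W = 28                      # round(256 / 9): 1/9 in Q8.8 fixed point
    out = list(img)             # borders are left unfiltered
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += W * img[(y + dy) * w + (x + dx)]
            out[y * w + x] = min(255, acc >> 8)  # shift back to 8-bit
    return out
```

Note the quantisation error inherent in this style: because 9 × 28 = 252 rather than 256, a uniform image of value 90 blurs to 88; real designs trade such error against silicon cost and power.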

Contrast with GPUs

They are distinct from GPUs, which contain specialised hardware for rasterization and texture mapping (for 3D graphics), and whose memory architecture is optimised for manipulating bitmap images in off-chip memory (reading textures, and modifying frame buffers, with random access patterns). VPUs are optimized for performance per watt, while GPUs mainly focus on absolute performance.

Target markets include robotics, the internet of things (IoT), new classes of digital cameras for virtual and augmented reality, smart cameras, and the integration of machine vision acceleration into smartphones and other mobile devices.

Examples

Broader category

Some processors are not described as VPUs but are equally applicable to machine vision tasks. These may form a broader category of AI accelerators (to which VPUs may also belong); however, as of 2016 there was no consensus on the name.

See also

  • Adapteva Epiphany, a manycore processor with similar emphasis on on-chip dataflow, focussed on 32-bit floating point performance
  • Cell, a multicore processor with features fairly consistent with vision processing units (SIMD instructions & datatypes suitable for video, and on-chip DMA between scratchpad memories)
  • Coprocessor
  • Graphics processing unit, also commonly used to run vision algorithms. Nvidia's Pascal architecture includes FP16 support, to provide a better precision/cost tradeoff for AI workloads
  • MPSoC
  • OpenCL
  • OpenVX
  • Physics processing unit, a past attempt to complement the CPU and GPU with a high throughput accelerator
  • Tensor Processing Unit, a chip used internally by Google for accelerating AI calculations
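The FP16 precision/cost tradeoff noted for Nvidia's Pascal architecture above can be illustrated with Python's standard struct module, which supports the IEEE 754 half-precision ('e') format: a half-precision value occupies 2 bytes instead of 4, but carries only about 11 significand bits, so many values round.

```python
import struct

def to_fp16_and_back(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

half_bytes = struct.calcsize('<e')    # 2 bytes per FP16 value
single_bytes = struct.calcsize('<f')  # 4 bytes per FP32 value

# 0.1 is not exactly representable in FP16; only roughly three
# decimal digits of precision survive the round trip.
approx = to_fp16_and_back(0.1)
```

Halving the storage (and memory bandwidth) per value is exactly the tradeoff that makes FP16 attractive for AI workloads where full single precision is unnecessary.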

References
