Technology

Machine/Computer Vision Challenge

Conventional cameras reconstruct images for human consumption by measuring the intensity of light on a pixel array at a fixed frame rate (typically 10-60 frames per second). As a result, these cameras have a single output mode: a continuous stream of full-frame images. While this architecture has historically served accurate communication and reproduction in electronic and print media, it makes these cameras inefficient and limiting for computer vision, because it continually captures redundant or irrelevant information while introducing a combination of latency, blur, and signal inaccuracy.

Redundant information is continuously captured even when the scene remains static. Consider a scene where only 1% of the pixels see motion: the other 99% contain no new information, yet they are read off the sensor and blindly processed, frame after frame.
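
To put rough numbers on that redundancy, here is a minimal sketch in Python, assuming a hypothetical 1080p monochrome sensor at 30 fps (all parameters are assumptions, not Oculi specifications):

    # Hypothetical illustration: readout bandwidth of a frame-based sensor
    # when only 1% of the scene changes between frames.
    width, height, fps = 1920, 1080, 30   # assumed sensor parameters
    bytes_per_pixel = 1                   # assumed 8-bit monochrome readout
    changed_fraction = 0.01               # motion in only 1% of the scene

    total_Bps = width * height * bytes_per_pixel * fps   # bytes/s read off the sensor
    useful_Bps = total_Bps * changed_fraction            # bytes/s carrying new information
    print(f"read out: {total_Bps / 1e6:.1f} MB/s, useful: {useful_Bps / 1e6:.2f} MB/s")
    # -> read out: 62.2 MB/s, useful: 0.62 MB/s (99% of the bandwidth is redundant)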

Latency is incurred because a tremendous amount of data must be read off the sensor, and the readout must complete within one frame time. Consider a camera operating at 30 fps: waiting for the previous frame to finish transferring can account for up to 33 ms of delay, depending on the exposure time, and that does not include the current frame's own transfer or processing delays.
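
A minimal sketch of that worst-case wait, using the 30 fps figure from the text (the framing of the delay stages is an assumption):

    # Hypothetical illustration: worst-case wait before a change in the scene
    # even begins to leave a 30 fps frame-based sensor.
    fps = 30
    frame_period_ms = 1000 / fps   # ≈ 33.3 ms between successive frame readouts
    # A change that occurs just after a readout starts waits up to one full
    # frame period before its frame is transferred, on top of the transfer
    # and processing delays that follow.
    print(f"up to {frame_period_ms:.1f} ms of delay before readout even begins")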

Blurring occurs when either an object is in motion or the camera itself is in motion (even when the scene is static). This happens because a point in the scene ends up being captured by multiple pixels during the exposure time.
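
A back-of-the-envelope sketch of the blur extent, under assumed motion and exposure values (both hypothetical):

    # Hypothetical illustration: how far a moving point smears during exposure.
    exposure_ms = 10.0        # assumed exposure time
    speed_px_per_s = 2000.0   # assumed apparent speed of the point on the image plane
    blur_px = speed_px_per_s * exposure_ms / 1000
    print(f"the point smears across ~{blur_px:.0f} pixels during the exposure")
    # -> ~20 pixels of blur; a shorter exposure shrinks it proportionally,
    #    which leads to the trade-off discussed next.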


While the exposure time can be shortened to limit the effects of motion blur, doing so negatively affects the accuracy of the signal. For example, with an exposure time of 1 ms at a fixed frame rate of 30 fps, there are roughly 32 ms between frames during which no data is captured, leaving the sensor blind for about 97% of the time. Signal inaccuracy arises because the sensor cannot capture anything that happens while it is not measuring light, so signal fidelity suffers.
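
The 97% figure follows directly from the numbers above; a minimal sketch of the arithmetic:

    # Blind-time calculation from the figures above (1 ms exposure, 30 fps).
    fps = 30
    exposure_ms = 1.0
    frame_period_ms = 1000 / fps                # ≈ 33.3 ms
    blind_ms = frame_period_ms - exposure_ms    # ≈ 32.3 ms with no light being measured
    print(f"blind for {blind_ms:.1f} ms per frame "
          f"({blind_ms / frame_period_ms:.0%} of the time)")
    # -> blind for 32.3 ms per frame (97% of the time)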

When used in computer or machine vision, these fundamental limitations result in substantial signal inaccuracies and substantial inefficiencies in processing, power, bandwidth, and latency.

Oculi SPU™: Integrated Neuromorphic Sensing & Processing

Oculi makes the SPU™ (Sensing and Processing Unit), a complete single-chip vision solution that delivers real-time Vision Intelligence (VI) at the edge.

The SPU™ features a disruptive architecture that mimics the eye:

Integrated sensing + processing

Parallel sensing + processing

Saliency/features (smart events) output

Sparse processing

Bi-directional communication

[Image: Oculi S11 SPU, front view]

Oculi SPU™ is the first practical silicon to closely mimic biology in selectivity, parallel processing, and efficiency while outperforming it in speed.

Oculi SPU: Advantages

Captures high-speed dynamics

e.g., the muzzle flash of a firearm


Lower total power, size & weight

mW vs. 100s of mW


Wavelength and color agnostic

Demonstrated in both visible and IR


Manages extreme lighting

140 dB dynamic range


Intellipixel

Parallelized in-pixel processing


Low latency from signal to information

µs vs. 10s of ms

Low signal bandwidth

Smart events are typically < 1% of full-frame bandwidth (see the sketch below)
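
For a sense of scale, here is a minimal sketch comparing a smart-event stream against a full-frame baseline; the event rate and per-event size are assumptions, and only the sub-1% claim comes from the text:

    # Hypothetical illustration: smart-event bandwidth vs. full-frame readout.
    width, height, fps = 1920, 1080, 30   # assumed frame-based baseline
    frame_Bps = width * height * 1 * fps  # 8-bit pixels -> ~62.2 MB/s

    events_per_s = 100_000    # assumed smart-event rate for a typical scene
    bytes_per_event = 4       # assumed payload: pixel address + timestamp
    event_Bps = events_per_s * bytes_per_event

    print(f"full frame: {frame_Bps / 1e6:.1f} MB/s, "
          f"smart events: {event_Bps / 1e6:.2f} MB/s "
          f"({event_Bps / frame_Bps:.2%} of full-frame bandwidth)")
    # -> full frame: 62.2 MB/s, smart events: 0.40 MB/s (0.64%)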
