Wednesday, March 9, 2016

ADAS framework challenges

After studying the ADAS solution providers based on computer vision, one thing stands out to me. A lot of vendors have excellent demos of individual algorithms, such as lane detection, pedestrian detection, etc. But when asked if they're putting it all together, a common answer is either "we're working on it" or "we have a fixed set of algorithms running together." I believe that putting algorithms together in an efficient manner will be a fundamental challenge to resolve for mass-market, low-cost, low-power ADAS.

Here are what I see as the primary challenges that need to be solved:

  1. Syntactic expression of pipelines that enables re-use of processing blocks
  2. There are too many technologies for a developer to learn:
    1. Algorithms
    2. Operating systems (for scheduling the algorithms to take advantage of multi-core architectures)
    3. SIMD / Vectorization (for taking advantage of special architectures for vision processing)
    4. GPU usage, either for simple parallelism or with a deep learning framework
  3. The need to iterate quickly, as algorithms get updated and workloads shift with the addition of custom hardware
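
To make the first challenge concrete, here is a minimal sketch of what expressing a pipeline as composable, reusable blocks might look like. All names here are hypothetical illustrations, not any vendor's API; real blocks would wrap optimized CV kernels rather than toy functions.

```python
from typing import Callable, Dict, List

# A processing block: takes a frame context dict, returns it enriched.
Block = Callable[[Dict], Dict]

class Pipeline:
    """An ordered chain of reusable processing blocks."""

    def __init__(self, blocks: List[Block]):
        self.blocks = blocks

    def run(self, frame: Dict) -> Dict:
        # Each block consumes the outputs of the blocks before it.
        for block in self.blocks:
            frame = block(frame)
        return frame

# Toy stand-ins for real CV algorithms.
def lane_detect(frame: Dict) -> Dict:
    frame["lanes"] = ["left", "right"]  # placeholder result
    return frame

def pedestrian_detect(frame: Dict) -> Dict:
    frame["pedestrians"] = []  # placeholder result
    return frame

# The same blocks are re-used across pipeline variants
# without rewriting or duplicating them.
highway = Pipeline([lane_detect])
urban = Pipeline([lane_detect, pedestrian_detect])

result = urban.run({"pixels": None})
```

The point of the sketch is the re-use: adding a new use case means composing a new list of existing blocks, not touching the blocks themselves.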

I built the audio concurrency framework that has been running on all of Qualcomm's 7K (from the first Android phone onwards), 8K, and 9K platforms. The challenges there were audio and voice concurrency, the need to handle different sampling rates, and a long list of post-processing blocks. I believe we solved them elegantly, with the right balance of re-configurability and simplicity, on resource-constrained (MHz and memory) platforms. I see similar challenges in CV-based solutions for ADAS.

It will be a fun ride.
Kuntal.