Image Sensors at 2016 EI Symposium

The 2016 IS&T International Symposium on Electronic Imaging, to be held on Feb. 14–18 in San Francisco, CA, has published its preliminary program. There are many image sensor-related short courses and papers:

EI13: Introduction to CMOS Image Sensor Technology

Instructor: Arnaud Darmont, APHESA

A time-of-flight CMOS range image sensor using 4-tap output pixels with lateral-electric-field control,

Taichi Kasugai, Sang-Man Han, Hanh Trang, Taishi Takasawa, Satoshi Aoyama, Keita Yasutomi, Keiichiro Kagawa, and Shoji Kawahito;
Shizuoka Univ. and Brookman Technology (Japan)

Design, implementation and evaluation of a TOF range image sensor using multi-tap lock-in pixels with cascaded charge draining and modulating gates,

Trang Nguyen, Taichi Kasugai, Keigo Isobe, Sang-Man Han, Taishi Takasawa, De Xing Lioe, Keita Yasutomi, Keiichiro Kagawa, and Shoji Kawahito;
Shizuoka Univ. and Brookman Technology (Japan)

A high dynamic range linear vision sensor with event asynchronous and frame-based synchronous operation,
Juan A. Leñero-Bardallo, Ricardo Carmona-Galán, and Angel Rodríguez-Vázquez,
Universidad de Sevilla (Spain)

A dual-core highly programmable 120dB image sensor,

Benoit Dupont,
Pyxalis (France)

Analog current mode implementation of global and local tone mapping algorithm for wide dynamic range image display,
Peng Chen, Kartikeya Murari, and Orly Yadid-Pecht,
Univ. of Calgary (Canada)

High dynamic range challenges
Short presentation by Arnaud Darmont, APHESA SPRL (Belgium)

Image sensor with organic photoconductive films by stacking the red/green and blue components,
Tomomi Takagi, Toshikatu Sakai, Kazunori Miyakawa, and Mamoru Furuta;
NHK Science & Technology Research Laboratories and Kochi University of Technology (Japan)

High-sensitivity CMOS image sensor overlaid with Ga2O3/CIGS heterojunction photodiode,
Kazunori Miyakawa, Shigeyuki Imura, Hiroshi Ohtake, Misao Kubota, Kenji Kikuchi, Tokio Nakada, Toru Okino, Yutaka Hirose, Yoshihisa Kato, and Nobukazu Teranishi;
NHK Science and Technology Research Laboratories, NHK Sapporo Station, Tokyo University of Science, Panasonic Corporation, University of Hyogo, and Shizuoka University (Japan)

Sub-micron pixel CMOS image sensor with new color filter patterns,
Biay-Cheng Hseih, Sergio Goma, Hasib Siddiqui, Kalin Atanassov, Jiafu Luo, RJ Lin, Hy Cheng, Kuoyu Chou, JJ Sze, and Calvin Chao;
Qualcomm Technologies Inc. (United States) and TSMC (Taiwan)

A CMOS image sensor with variable frame rate for low-power operation,
Byoung-Soo Choi, Sung-Hyun Jo, Myunghan Bae, Sang-Hwan Kim, and Jang-Kyoo Shin, Kyungpook National University (South Korea)

ADC techniques for optimized conversion time in CMOS image sensors,
Cedric Pastorelli and Pascal Mellot; ANRT and STMicroelectronics (France)

Miniature lensless computational infrared imager,
Evan Erickson, Mark Kellam, Patrick Gill, James Tringali, and David Stork,
Rambus (United States)

Focal-plane scale space generation with a 6T pixel architecture,
Fernanda Oliveira, José Gabriel Gomes, Ricardo Carmona-Galán, Jorge Fernández-Berni, and Angel Rodríguez-Vázquez;
Universidade Federal do Rio de Janeiro (Brazil) and Instituto de Microelectrónica de Sevilla (Spain)

Development of an 8K full-resolution single-chip image acquisition system,
Tomohiro Nakamura, Ryohei Funatsu, Takahiro Yamasaki, Kazuya Kitamura, and Hiroshi Shimamoto,
Japan Broadcasting Corporation (NHK) (Japan)

A 1.12-um pixel CMOS image sensor survey,
Clemenz Portmann, Lele Wang, Guofeng Liu, Ousmane Diop, and Boyd Fowler,
Google Inc (United States)

A comparative noise analysis and measurement for n-type and p-type pixels with CMS technique,
Xiaoliang Ge, Bastien Mamdy, and Albert Theuwissen;
Technische Univ. Delft (Netherlands), STMicroelectronics, Universite Claude Bernard Lyon 1 (France), and Harvest Imaging (Belgium)

Increases in hot pixel development rates for small digital pixel sizes,
Glenn Chapman, Rahul Thomas, Rohan Thomas, Klinsmann Meneses, Tony Yang, Israel Koren, and Zahava Koren;
Simon Fraser Univ. (Canada) and Univ. of Massachusetts Amherst (United States)

Correlation of photo-response blooming metrics with image quality in CMOS image sensors,
Pulla Reddy Ailuri, Orit Skorka, Ning Li, Radu Ispasoiu, and Vladi Koborov;
ON Semiconductor (United States)

Light Camera Demo

Mashable got a chance to see Light Co's L16 52MP array camera prototype at CES 2016. A few quotes from Mashable's impressions:

"On the prototype, the photo stitching took a little while to work and froze. In the end, I didn't get to see how fast it was. When I asked Dr. Rajiv Laroia, Light's co-founder and Chief Technology Officer, how long it will take to generate a 52-megapixel image on the final product, he told me they're shooting for under a minute.

That's a long time to wait for a complete image. The Light team is going to try to make the processing as fast and instantaneous as possible, but the company's not promising anything faster than under a minute right now.
"

"I have to admit, the sample image taken by the L16 looked pretty good with lots of details when zoomed in, but it also looked like it had a lot of image noise."


NVIDIA Presents Deep Learning Automotive Imaging Platform

NVIDIA launches NVIDIA DRIVE PX 2, said to be the world’s most powerful engine for in-vehicle artificial intelligence. DRIVE PX 2 can process the inputs of 12 video cameras, plus lidar, radar and ultrasonic sensors. It fuses them to accurately detect objects, identify them, determine where the car is relative to the world around it, and then calculate its optimal path for safe travel.
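As a rough illustration of the kind of camera/lidar fusion described above (a minimal sketch over assumed data structures, not NVIDIA's DRIVE PX 2 API), one can attach a range estimate to each camera detection by matching it to the nearest lidar return in bearing:

# Minimal late-fusion sketch: attach a range to each camera detection by
# matching it to the nearest lidar return in bearing. All data structures,
# values and the association gate are illustrative assumptions.
camera_detections = [{"label": "car", "bearing_deg": 12.0},
                     {"label": "pedestrian", "bearing_deg": -3.5}]
lidar_returns = [{"bearing_deg": 11.2, "range_m": 27.4},
                 {"bearing_deg": -3.1, "range_m": 9.8}]
GATE_DEG = 2.0  # assumed maximum bearing mismatch for an association

def fuse(detections, returns):
    fused = []
    for det in detections:
        best = min(returns, key=lambda r: abs(r["bearing_deg"] - det["bearing_deg"]))
        if abs(best["bearing_deg"] - det["bearing_deg"]) <= GATE_DEG:
            fused.append({**det, "range_m": best["range_m"]})
        else:
            fused.append({**det, "range_m": None})  # no supporting lidar return
    return fused

print(fuse(camera_detections, lidar_returns))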

The company's YouTube promo video presents the new processor.


Socionext Shipping Dual Camera Image Processor

PRNewswire: Socionext (Fujitsu + Panasonic Semi) introduces the “M-12MO” (MBG967) Milbeaut image processor. The MBG967, which will be available in volume shipments starting in January, is mainly targeted at smartphones and other mobile applications. It supports dual cameras, the latest trend in mobile applications, along with functions such as low light shot and depth map generation. Dual camera capabilities have been highly anticipated in the mobile camera market because they enable functions previously considered difficult with mobile cameras: low light shot, which integrates images from color and monochrome sensors, and depth map generation, which can create background blur comparable to that of SLR cameras.
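
To make the depth-map use case concrete, here is a minimal sketch of depth-guided background blur (an illustration under assumed values, not Socionext's pipeline): pixels whose depth exceeds an assumed focus limit are taken from a blurred copy of the frame.

# Illustrative depth-guided background blur, not the MBG967 implementation.
# Pixels farther than an assumed focus limit come from a blurred copy of the
# image, mimicking SLR-style background blur driven by a depth map.
import numpy as np
from scipy.ndimage import gaussian_filter

rgb = np.random.rand(240, 320, 3)                 # stand-in color frame
depth = np.random.uniform(0.5, 5.0, (240, 320))   # stand-in depth map, meters

FOCUS_LIMIT_M = 1.5                               # assumed in-focus distance
blurred = gaussian_filter(rgb, sigma=(4, 4, 0))   # blur spatial axes only
background = (depth > FOCUS_LIMIT_M)[..., np.newaxis]
bokeh = np.where(background, blurred, rgb)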

Main features of the MBG967 include:

Low light shot by dual camera: By integrating the images from color and monochrome image sensors, the MBG967 produces high-sensitivity, low-noise pictures (see the sketch after this feature list).

High-speed, high-accuracy auto focus: The MBG967 supports high-speed “Phase Detect AF” in addition to conventional “Contrast AF”. It also supports “Laser AF”, which has an advantage in low light conditions. Its “Super Hybrid AF” combines these three AF methods, allowing faster and more accurate AF in varying conditions.
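
A minimal sketch of the color/monochrome fusion idea from the first feature (an illustration that assumes the two frames are already registered, not the MBG967 algorithm): take luminance from the monochrome sensor, which collects more light, and chroma from the color sensor.

# Illustrative luma/chroma fusion for a color + monochrome camera pair,
# not the MBG967 algorithm. Registration between the two sensors is assumed.
import numpy as np

def fuse_color_mono(rgb, mono):
    # BT.601 luma of the color frame, used only to isolate its chroma
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    chroma = rgb - luma[..., np.newaxis]      # color offsets around the luma
    fused = mono[..., np.newaxis] + chroma    # substitute the cleaner mono luma
    return np.clip(fused, 0.0, 1.0)

rgb = np.random.rand(240, 320, 3)   # noisy low-light color frame (stand-in)
mono = np.random.rand(240, 320)     # cleaner monochrome frame (stand-in)
out = fuse_color_mono(rgb, mono)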


14nm Ambarella Camera SoC Consumes 2W in 4K 60fps Mode

BusinessWire: Ambarella introduces the H2 and H12 camera SoCs for sports and flying cameras. The 14nm-process H2 targets high-end camera models with 4K Ultra HD H.265/HEVC video at 60 fps and 4K AVC video at 120 fps, and includes 10-bit HDR video processing. The 28nm-process H12 targets mainstream cameras and offers 4K Ultra HD HEVC video at 30 fps.

“With the introduction of H2 and H12 we now provide a complete portfolio of 4K Ultra HD HEVC solutions for sports and flying cameras,” said Fermi Wang, President and CEO of Ambarella.


Intel Unveils R200 and ZR300 RealSense 3D Cameras

Intel announces the R200 RealSense 3D camera, said to be the company's first long-range depth camera for 2-in-1s and tablets. The new camera is aimed at:
  • 3D Scanning: Scan people and objects in 3D to share on social media or print on a 3D printer.
  • Immersive Gaming: Scan oneself into a game and be the character in top-rated games.
  • Enhanced Photography/Video: Create live video with depth-enabled special effects, remove/change backgrounds, or enhance the focus and color of photographs on the fly.
  • Immersive Shopping: Capture body shape and measurements as depth data that is transformed into a digital model, enabling people to virtually try on clothes.
The RealSense R200 camera is capable of capturing VGA-resolution depth information at 60 fps. The camera uses dual infrared imagers to calculate depth with stereoscopic techniques. By leveraging IR technology, the camera provides reliable depth information even in darker areas and shadows, as well as when capturing flat or textureless surfaces. The operating range of the Intel RealSense Camera R200 is between 0.5 m and 3.5 m indoors. The RGB sensor delivers 1080p resolution at 30 fps.
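
For context, stereo depth from two imagers follows Z = f·B/d (focal length times baseline over disparity); the focal length and baseline below are illustrative assumptions, not R200 calibration data.

# Depth from stereo disparity, Z = f * B / d — the relation a dual-IR stereo
# camera relies on. Focal length and baseline are made-up illustration values,
# not Intel's R200 calibration data.
FOCAL_LENGTH_PX = 600.0   # assumed focal length, in pixels
BASELINE_M = 0.07         # assumed spacing between the two IR imagers

def depth_from_disparity(disparity_px):
    if disparity_px <= 0:
        return float("inf")   # zero disparity: no match or point at infinity
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

for d_px in (84, 21, 12):     # example disparities
    print(f"disparity {d_px:>2} px -> depth {depth_from_disparity(d_px):.2f} m")

With these assumed numbers, disparities of 84, 21 and 12 pixels map to 0.5 m, 2.0 m and 3.5 m, which happens to span the quoted indoor operating range.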

A number of OEMs feature the RealSense R200, including the HP Spectre x2, Lenovo Ideapad Miix 700, Acer Aspire Switch 12 S, NEC LaVie Hybrid Zero11 and Panasonic. The Intel RealSense Camera R200 is supported on all Windows 10 systems that run on 6th Generation Intel Core processors.

The RealSense ZR300 camera is an integrated unit within the new RealSense Smartphone Developer Kit. The Intel RealSense Camera ZR300 provides high-quality, high-density depth data at VGA resolution and 60 fps. The ZR300 supports the Google Project Tango spec for feature tracking and synchronization via time stamping between sensors.

Source: The Inquirer

Himax WLO Adopted in 3D Structured Light Camera

Himax announces that its Wafer Level Optics (“WLO”) laser diode collimator with integrated Diffractive Optical Element (“DOE”) has been integrated into laser projectors for next-generation structured light cameras. Himax's WLO system has a height of less than two millimeters. The WLO component is then stacked on top of a laser diode to reduce the overall height of a coded laser projector assembly to five millimeters.

Jordan Wu, President and CEO of Himax Technologies says "We are currently collaborating with several major OEMs' product developments using our WLO as our expertise in WLO design and manufacturing enables significant size and cost reduction of coded laser projectors. For example, in an active sensing 3D camera projector, our technology can reduce the size of the incumbent laser projector module by a factor of 9, actually making it smaller than conventional camera modules. This breakthrough allows our WLO collimator to be easily integrated into next-generation smartphones, tablets, automobiles, wearable devices, IoT applications, consumer electronics accessories and several other products to enable new applications in the consumer, medical, and industrial marketplaces."

The WLO laser collimator and DOE will be manufactured by Himax’s Wafer Optics production facility in Taiwan. The first production run for 3D camera applications is scheduled for delivery and sampling by Himax's partners and select customers in Q1 2016.

Update: GlobeNewsWire: Himax reports "higher-than-expected engineering fees from AR/VR project engagements with both current and new customers."

Heptagon Introduces Next Generation OLIVIA ToF 3DRanger

BusinessWire: Heptagon introduces OLIVIA, a complete ToF system module with an integrated microprocessor, adaptive algorithms, advanced optics, a ToF sensor and a light source. OLIVIA can accurately measure distances up to 2 meters in normal lighting conditions, whereas other solutions only reach similar distances in lower lighting. OLIVIA also requires 40% less power when ranging than alternative solutions.
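
As background on continuous-wave ToF ranging (a generic sketch, not OLIVIA internals), distance follows from the phase shift of the returned modulated light, d = c·Δφ / (4π·f_mod); the 20 MHz modulation frequency below is an assumed example.

# Generic continuous-wave ToF ranging: d = c * phase / (4 * pi * f_mod).
# The 20 MHz modulation frequency is an assumed example, not an OLIVIA spec.
import math

C_M_PER_S = 299_792_458.0
F_MOD_HZ = 20e6                               # assumed modulation frequency

def distance_from_phase(phase_rad):
    return C_M_PER_S * phase_rad / (4 * math.pi * F_MOD_HZ)

print(distance_from_phase(math.pi / 2))       # ~1.87 m for a 90-degree shift
print(C_M_PER_S / (2 * F_MOD_HZ))             # ~7.5 m unambiguous range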

“We’re excited about the advancements OLIVIA brings to the market,” says René Kromhof, SVP of Sales and Marketing at Heptagon. “Our team is moving fast: in less than 3 months from the release of LAURA, our first product, we have introduced OLIVIA, our next generation sensor. With over 20 years’ experience in highly accurate distance mapping and 3D imaging technology, Heptagon is uniquely positioned to innovate and rapidly bring world-class products to market.”