Single exposure passive three-dimensional information reconstruction based on an ordinary imaging system
Shen-Cheng Dou(窦申成)1,2, Fan Liu(刘璠)1,2, Hu Li(李虎)3, Xu-Ri Yao(姚旭日)4,5,†, Xue-Feng Liu(刘雪峰)1,2,‡, and Guang-Jie Zhai(翟光杰)1,2
1 Key Laboratory of Electronics and Information Technology for Space Systems, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Laboratory of Satellite Mission Operation, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
4 Center for Quantum Technology Research and Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurements (MOE), School of Physics, Beijing Institute of Technology, Beijing 100081, China
5 Beijing Academy of Quantum Information Sciences, Beijing 100193, China
Abstract Existing three-dimensional (3D) imaging technologies suffer from drawbacks such as the need for active illumination, multiple exposures, or coded modulation. We propose a passive, single-exposure 3D imaging method based on an ordinary imaging system. The point spread function of the imaging system acts as a non-coded measurement of the target, so that all-in-focus images and depth information of the 3D scene can be extracted from a single two-dimensional (2D) image with a compressed sensing algorithm. Simulations and experiments show that this approach achieves passive 3D imaging with an ordinary imaging system and without any coding operations. The method attains millimeter-level depth (vertical) resolution from a single exposure and has the potential for real-time dynamic 3D imaging. It improves the efficiency of 3D information acquisition, reduces the complexity of the imaging system, and may be of considerable value to computer vision and other related applications.
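As a rough illustration of the idea summarized above (not the authors' actual algorithm), the sketch below models a single 2D exposure as the sum of depth layers, each blurred by a depth-dependent point spread function (PSF), and then recovers the layer stack with a basic compressed-sensing style solver (ISTA with an l1 prior). The Gaussian defocus PSFs, the three depth planes, the sparse synthetic scene, and the regularization settings are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# a single 2D exposure is modeled as the sum of depth layers blurred by
# depth-dependent PSFs; the layers are recovered with plain ISTA (l1 prior).
import numpy as np
from numpy.fft import fft2, ifft2

def gaussian_psf(shape, sigma):
    """Unit-sum Gaussian PSF, standing in for a calibrated defocus PSF."""
    h, w = shape
    y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return np.fft.ifftshift(g / g.sum())

def forward(layers, otfs):
    """y = sum_d  h_d * x_d  (convolutions done in the Fourier domain)."""
    return np.real(sum(ifft2(fft2(x) * H) for x, H in zip(layers, otfs)))

def adjoint(residual, otfs):
    """Adjoint of the forward operator: correlate the residual with each PSF."""
    R = fft2(residual)
    return np.stack([np.real(ifft2(R * np.conj(H))) for H in otfs])

def reconstruct(y, otfs, lam=1e-2, iters=300):
    """ISTA: least-squares data term plus l1 sparsity on the depth layers."""
    step = 1.0 / len(otfs)                 # safe step size (|OTF| <= 1 per layer)
    layers = np.zeros((len(otfs),) + y.shape)
    for _ in range(iters):
        grad = adjoint(forward(layers, otfs) - y, otfs)
        layers = layers - step * grad
        layers = np.sign(layers) * np.maximum(np.abs(layers) - step * lam, 0.0)
    return layers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (64, 64)
    sigmas = [1.0, 2.5, 4.0]               # assumed blur widths for three depth planes
    otfs = [fft2(gaussian_psf(shape, s)) for s in sigmas]

    # Sparse synthetic scene: a few point-like reflectors on each depth plane.
    truth = np.zeros((len(sigmas),) + shape)
    for d in range(len(sigmas)):
        truth[d][tuple(rng.integers(8, 56, size=(2, 5)))] = 1.0

    y = forward(truth, otfs)               # the single 2D exposure
    est = reconstruct(y, otfs)

    all_in_focus = est.sum(axis=0)         # fused in-focus intensity image
    depth_map = est.argmax(axis=0)         # per-pixel depth-plane index
    print("relative reconstruction error:",
          np.linalg.norm(est - truth) / np.linalg.norm(truth))
```

The all-in-focus image and the per-pixel depth index are read off the recovered layer stack; in a real system the PSFs would be calibrated rather than assumed Gaussian, and a more elaborate compressed sensing solver could replace the ISTA loop.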
Fund: Project supported by the National Key Research and Development Program of China (Grant No. 2018YFB0504302) and the Beijing Institute of Technology Research Fund Program for Young Scholars (Grant No. 202122012).
Corresponding Authors:
Xu-Ri Yao, Xue-Feng Liu
E-mail: yaoxuri@bit.edu.cn; liuxuefeng@nssc.ac.cn
Cite this article:
Shen-Cheng Dou(窦申成), Fan Liu(刘璠), Hu Li(李虎), Xu-Ri Yao(姚旭日), Xue-Feng Liu(刘雪峰), and Guang-Jie Zhai(翟光杰) 2023 Single exposure passive three-dimensional information reconstruction based on an ordinary imaging system Chin. Phys. B 32 114204