Image segmentation of exfoliated two-dimensional materials by generative adversarial network-based data augmentation
Xiaoyu Cheng(程晓昱)1, Chenxue Xie(解晨雪)1, Yulun Liu(刘宇伦)1, Ruixue Bai(白瑞雪)1, Nanhai Xiao(肖南海)1, Yanbo Ren(任琰博)1, Xilin Zhang(张喜林)1, Hui Ma(马惠)2,†, and Chongyun Jiang(蒋崇云)1,‡
1 College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
2 School of Physical Science and Technology, Tiangong University, Tianjin 300387, China
Abstract Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient real training images. Here we report the generation of synthetic images of two-dimensional materials using StyleGAN3 to complement the dataset. A DeepLabv3Plus network trained with these synthetic images shows reduced overfitting and a recognition accuracy above 90%. A semi-supervised technique for labeling the images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that depend crucially on the interlayer twist angle. This feasible and efficient method enables rapid, high-quality fabrication of atomically thin materials and devices.
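As a rough illustration of the workflow summarized above, the sketch below trains a DeepLabv3Plus segmentation network on real optical micrographs pooled with StyleGAN3-generated synthetic flake images. This is a minimal, hypothetical sketch, not the authors' released code: the FlakeSegDataset class, directory layout, class count, and hyperparameters are assumptions, and the DeepLabV3Plus model comes from the third-party segmentation_models_pytorch package.

# Hypothetical sketch of GAN-based data augmentation for flake segmentation.
# Assumed names: FlakeSegDataset, data/real, data/stylegan3 (not from the paper).
import torch
from torch.utils.data import ConcatDataset, DataLoader
import segmentation_models_pytorch as smp  # third-party implementation of DeepLabV3Plus

from flake_data import FlakeSegDataset  # hypothetical dataset yielding (image, mask) pairs

real = FlakeSegDataset("data/real")            # manually / semi-automatically labeled images
synthetic = FlakeSegDataset("data/stylegan3")  # synthetic images generated by StyleGAN3
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=8, shuffle=True)

model = smp.DeepLabV3Plus(
    encoder_name="resnet50",    # backbone choice is an assumption
    encoder_weights="imagenet",
    in_channels=3,
    classes=2,                  # simplified: background vs. atomically thin flake
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(50):
    for images, masks in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), masks.long())  # per-pixel classification loss
        loss.backward()
        optimizer.step()

Pooling GAN-generated samples with real ones in the loader is the augmentation step the abstract credits with reducing overfitting; the real-to-synthetic ratio would be tuned in practice.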
Received: 02 January 2024
Revised: 26 January 2024
Accepted manuscript online: 30 January 2024
PACS:
07.05.Pj (Image processing)
68.65.-k (Low-dimensional, mesoscopic, nanoscale and other related systems: structure and nonelectronic properties)
84.35.+i (Neural networks)
87.64.M- (Optical microscopy)
Fund: Project supported by the National Key Research and Development Program of China (Grant No. 2022YFB2803900), the National Natural Science Foundation of China (Grant Nos. 61974075 and 61704121), the Natural Science Foundation of Tianjin Municipality (Grant Nos. 22JCZDJC00460 and 19JCQNJC00700), the Tianjin Municipal Education Commission (Grant No. 2019KJ028), and the Fundamental Research Funds for the Central Universities (Grant No. 22JCZDJC00460).
Corresponding Authors:
Hui Ma, Chongyun Jiang
E-mail: mahuimoving@163.com; jiang.chongyun@nankai.edu.cn
Cite this article:
Xiaoyu Cheng(程晓昱), Chenxue Xie(解晨雪), Yulun Liu(刘宇伦), Ruixue Bai(白瑞雪), Nanhai Xiao(肖南海), Yanbo Ren(任琰博), Xilin Zhang(张喜林), Hui Ma(马惠), and Chongyun Jiang(蒋崇云) 2024 Image segmentation of exfoliated two-dimensional materials by generative adversarial network-based data augmentation Chin. Phys. B 33 030703