Deep learning for phase recovery
In a new review paper published in Light: Science & Applications, scientists from The University of Hong Kong, Northwestern Polytechnical University, The Chinese University of Hong Kong, Guangdong University of Technology, and the Massachusetts Institute of Technology have reviewed deep learning phase recovery methods from the following four perspectives:
• Deep-learning-pre-processing for phase recovery: the neural network performs some pre-processing on the intensity measurement before phase recovery, such as pixel super-resolution, noise reduction, hologram generation, and autofocusing.
• Deep-learning-in-processing for phase recovery: the neural network directly performs phase recovery, or participates in the recovery process together with a physical model or physics-based algorithm, in supervised or unsupervised learning modes.
• Deep-learning-post-processing for phase recovery: the neural network performs post-processing after phase recovery, such as noise reduction, resolution enhancement, aberration correction, and phase unwrapping.
• Deep learning for phase processing: the neural network uses the recovered phase for specific applications, such as segmentation, classification, and imaging modality transformation.
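As context for the physics-based algorithms these learning schemes build on or replace, the sketch below implements the classic Gerchberg-Saxton iteration, one of the best-known conventional phase recovery methods. It is an illustrative toy (flat source amplitude, FFT as the propagation model, all names our own), not code from the review.

```python
import numpy as np

def gerchberg_saxton(src_amp, tgt_amp, n_iter=200, seed=0):
    """Estimate a source-plane phase whose propagated field (modeled by a
    2-D FFT) reproduces the measured target-plane amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, src_amp.shape)
    field = src_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft2(field)                        # propagate to measurement plane
        far = tgt_amp * np.exp(1j * np.angle(far))      # enforce measured amplitude
        field = np.fft.ifft2(far)                       # propagate back
        field = src_amp * np.exp(1j * np.angle(field))  # enforce known source amplitude
    return np.angle(field)

# Consistent toy problem: build the target amplitude from a known phase,
# so an exact solution is guaranteed to exist.
rng = np.random.default_rng(1)
src = np.ones((32, 32))
true_phase = rng.uniform(0, 2 * np.pi, (32, 32))
target = np.abs(np.fft.fft2(src * np.exp(1j * true_phase)))

rec_phase = gerchberg_saxton(src, target)
err = np.linalg.norm(np.abs(np.fft.fft2(src * np.exp(1j * rec_phase))) - target)
print(f"residual amplitude error: {err:.3e}")
```

Each iteration alternately enforces the known amplitude constraint in the two planes; the amplitude mismatch is non-increasing across iterations, which is what makes such physics-based methods reliable baselines.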
To help readers learn more about phase recovery, the authors also maintain a live-updating resource (https://github.com/kqwang/phase-recovery).
When deep learning is applied to the various stages of phase recovery, it not only brings unprecedented performance but also introduces some unpredictable risks. Moreover, methods that look the same on the surface can differ in subtle, hard-to-detect ways. The authors point out the differences and connections between similar methods and offer suggestions on how to make the most of deep learning and physical models for phase recovery:
“It should be noted that the uPD (untrained physics-driven) scheme is free from numerous intensity images as a prerequisite, but requires numerous iterations for each inference; while the tPD (trained physics-driven) scheme completes the inference only passing through the trained neural network once, but requires a large number of intensity images for pretraining.”
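The uPD tradeoff quoted above can be illustrated with a deliberately simplified toy: a randomly initialized "network" (here just a linear layer W applied to a fixed input z) is optimized from scratch against a single intensity-like measurement through a known forward model, so no training data is needed but many iterations are run per sample. The linear model, dimensions, and all names below are our own toy assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 32, 8                         # phase size, measurements, latent size
A = rng.normal(size=(m, n)) / np.sqrt(m)   # known forward operator (the "physics")
phi_true = rng.normal(size=n)
I_meas = (A @ phi_true) ** 2               # intensity-like measurement: sign is lost

z = rng.normal(size=k)                     # fixed input, independent of the sample
W = 0.1 * rng.normal(size=(n, k))          # untrained weights, optimized from scratch

def data_loss(W):
    return float(np.sum(((A @ (W @ z)) ** 2 - I_meas) ** 2))

loss0 = data_loss(W)
lr = 5e-3
for _ in range(10000):                     # many iterations for this one measurement
    phi = W @ z                            # "network" output = phase estimate
    u = A @ phi
    r = u ** 2 - I_meas                    # physics-model mismatch
    grad_phi = A.T @ (2 * r * u)           # chain rule through the forward model
    W -= lr * np.outer(grad_phi, z)        # gradient step on the weights only

loss = data_loss(W)
print(f"data-consistency loss: {loss0:.3e} -> {loss:.3e}")
```

A tPD scheme would instead pretrain the weights on many measurement/phase pairs, after which inference is a single forward pass; this toy shows why uPD needs no such dataset but pays for it in per-sample iterations.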
“zf is a fixed vector, which means that the input of the neural network is independent of the sample, and therefore the neural network cannot be pre-trained like the PD approach,” they said when introducing the structural-prior network-in-physics strategy.
“Learning-based deep neural networks have enormous potential and efficiency, while conventional physics-based methods are more reliable. We thus encourage the incorporation of physical models with deep neural networks, especially for those well modeling from the real world, rather than letting the deep neural network perform all tasks as a black box,” the scientists suggest.
