Poster
Stabilizing and Accelerating Autofocus with Expert Trajectory Regularized Deep Reinforcement Learning
Shouhang Zhu · Chenglin Li · Yuankun Jiang · Li Wei · Nuowen Kan · Ziyang Zheng · Wenrui Dai · Junni Zou · Hongkai Xiong
Autofocus is a crucial component of modern digital cameras. While recent learning-based methods achieve state-of-the-art in-focus prediction accuracy, they ignore the potential focus hunting phenomenon, i.e., back-and-forth lens movement during the multi-step focusing procedure. To address this, we propose an expert-regularized deep reinforcement learning (DRL) approach to autofocus that exploits the sequential information in the lens movement trajectory to both enhance multi-step in-focus prediction accuracy and reduce the occurrence of focus hunting. Our method follows an actor-critic framework. To accelerate DRL training with higher sample efficiency, we initialize the policy with a pre-trained single-step prediction network, which we further improve by changing its output from a distribution over absolute in-focus positions to a distribution over relative lens movements, establishing a better mapping between input images and lens movement. To further stabilize DRL training and suppress focus hunting in the resulting lens trajectories, we generate offline trajectories that, by construction from prior knowledge, avoid focus hunting; these serve as an offline dataset of expert trajectories that regularizes the actor network's training. Empirical evaluations show that our method outperforms existing learning-based methods on public benchmarks, with higher single- and multi-step prediction accuracies and a significantly lower focus hunting rate.
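The expert regularization described above can be pictured as a behavior-cloning term added to the actor's policy-gradient objective. Below is a minimal PyTorch sketch of such an update, assuming discrete relative lens movements as actions; the module names, the exact loss form, and the `bc_weight` coefficient are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: actor update with an expert-trajectory regularizer.
# Assumptions (not from the paper): discrete relative lens movements,
# a cross-entropy behavior-cloning term, and the names used below.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Policy head mapping image features to a distribution over
    relative lens movements (rather than absolute in-focus positions)."""
    def __init__(self, feat_dim: int, n_moves: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_moves))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)  # logits over relative lens movements

def actor_loss(actor: Actor,
               states: torch.Tensor,       # on-policy features, (B, feat_dim)
               actions: torch.Tensor,      # sampled moves, (B,)
               advantages: torch.Tensor,   # critic-derived advantages, (B,)
               exp_states: torch.Tensor,   # expert features, (B_e, feat_dim)
               exp_actions: torch.Tensor,  # hunting-free expert moves, (B_e,)
               bc_weight: float = 0.5) -> torch.Tensor:
    # Policy-gradient term: reinforce moves with positive advantage.
    logp = F.log_softmax(actor(states), dim=-1)
    pg = -(logp.gather(1, actions.unsqueeze(1)).squeeze(1) * advantages).mean()
    # Expert regularizer: behavior cloning toward offline trajectories
    # constructed to avoid back-and-forth focus hunting.
    bc = F.cross_entropy(actor(exp_states), exp_actions)
    return pg + bc_weight * bc
```

In this reading, the cross-entropy term pulls the policy toward hunting-free expert behavior, while the advantage-weighted term still lets it improve on the pre-trained single-step initialization.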