Life is like a train bound for the grave: there are many stops along the way, and hardly anyone can accompany you from start to finish. When the person keeping you company has to get off, be grateful even if you are reluctant to part, and then wave goodbye. 😃😃😃
—— Hayao Miyazaki, Spirited Away
Passionate about life and research, willing to try, daring to take on challenges, and never giving up
Ph.D. student, 2019-present
Huazhong University of Science and Technology
B.S., 2015-2019
Huazhong Agricultural University
Deep learning-based methods have achieved excellent performance on image deraining tasks. Unfortunately, most existing deraining methods incorrectly assume a uniform rain streak distribution and a fixed fine-grained level, and this uncertainty in rain streaks leaves the model unable to restore rain streaks at every fine-grained level. In addition, some existing convolution-based methods enlarge the receptive field mainly by stacking convolution kernels, which frequently results in inaccurate feature extraction. In this work, we propose MOONLIT, a momentum-contrast and large-kernel network for multi-fine-grained deraining. To address the problem that a single model cannot handle all fine-grained levels, we use an unsupervised dictionary contrastive learning method that treats rainy images of different fine-grained levels as different degradation tasks. Then, to address the problem of inaccurate feature extraction, we carefully construct a restoration network based on large-kernel convolution, which provides a larger and more accurate receptive field. In addition, we design a data augmentation method that weakens features other than rain streaks so that the different degradation tasks can be classified more reliably. Extensive experiments on synthetic and real-world deraining datasets show that the proposed MOONLIT achieves state-of-the-art performance on some datasets. Code is available at https://github.com/awhitewhale/moonlit.
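For the actual MOONLIT implementation, see the linked repository. As a rough illustration only, the sketch below shows the general idea of a large-kernel depthwise convolution block of the kind the abstract describes, which widens the receptive field without stacking many small kernels; the block structure, channel count, and 31x31 kernel size are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the authors' code): a large-kernel depthwise conv block.
import torch
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 31):
        super().__init__()
        # Depthwise convolution with a large kernel covers a wide spatial
        # context while keeping the parameter count manageable.
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        # Pointwise convolution mixes information across channels.
        self.pw = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection so the block refines rather than replaces features.
        return x + self.pw(self.act(self.dw(x)))

if __name__ == "__main__":
    block = LargeKernelBlock(channels=64)
    feats = torch.randn(1, 64, 128, 128)   # dummy feature map from a rainy image
    print(block(feats).shape)              # torch.Size([1, 64, 128, 128])
```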
To alleviate the above issue, we propose a new architecture that incorporates cross-modal knowledge transfer from the visual to the audio modality into our semi-supervised learning method with consistency regularization. We posit that introducing visual emotional knowledge through cross-modal transfer can increase the diversity and accuracy of pseudo-labels and improve the robustness of the model. To combine the knowledge from cross-modal transfer with semi-supervised learning, we design two fusion algorithms, i.e., weighted fusion and consistent & random.
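As a rough illustration of the weighted fusion idea mentioned above, the sketch below blends the class distributions predicted by an audio (semi-supervised) branch and a visual (cross-modal transfer) branch into fused pseudo-labels; the mixing weight, confidence threshold, and six-class emotion setup are assumptions for illustration, not the paper's actual algorithm.

```python
# Minimal sketch (not the paper's implementation) of weighted pseudo-label fusion.
import torch
import torch.nn.functional as F

def weighted_fusion(audio_logits: torch.Tensor,
                    visual_logits: torch.Tensor,
                    alpha: float = 0.6,
                    threshold: float = 0.8):
    """Fuse two emotion predictions; return pseudo-labels and a confidence mask."""
    audio_probs = F.softmax(audio_logits, dim=-1)
    visual_probs = F.softmax(visual_logits, dim=-1)
    # Weighted average of the two branches' class distributions.
    fused = alpha * audio_probs + (1.0 - alpha) * visual_probs
    confidence, pseudo_labels = fused.max(dim=-1)
    # Keep only samples confident enough to be used for consistency training.
    mask = confidence >= threshold
    return pseudo_labels, mask

if __name__ == "__main__":
    a = torch.randn(4, 6)   # 4 unlabeled clips, 6 emotion classes (assumed)
    v = torch.randn(4, 6)
    labels, mask = weighted_fusion(a, v)
    print(labels, mask)
```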