Underwater Image Enhancement and Object Detection Using Edge Preserving and Multiscale Contextual Neural Network

Authors:

M. Meenakumari, Balaji S, John Paul Praveen A, S. Ramya

DOI NO:

https://doi.org/10.26782/jmcms.spl.2019.08.00043

Keywords:

Underwater enhancement, salient object detection, edge preserving, multi-scale context, RGB-D saliency detection, object mask

Abstract

Underwater observation conditions pose great challenges to the problem of object detection in low-resolution underwater images. In this paper, we introduce an effective method to enhance images that are captured underwater and degraded by medium scattering and absorption. It builds on the blending of two images that are directly derived from a color-corrected and white-balanced version of the original degraded image. After enhancing the underwater image, we detect the objects present in it using a novel edge-preserving and multi-scale contextual neural network. We concentrate mainly on detecting an underwater object and separating it from the background using a combination of automatic contrast stretching followed by an image arithmetic operation, global thresholding, and a minimum filter. Our method is a single-image approach that requires neither specialized hardware nor prior knowledge of the underwater conditions or scene structure. The enhanced images are characterized by better exposure of dark regions, improved global contrast, and sharper edges, and our salient object detection achieves both a clear detection boundary and multi-scale contextual robustness simultaneously, thereby delivering improved performance.
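As a rough illustration of the enhancement stage, the following is a minimal sketch using OpenCV and NumPy. The gray-world white balance, the gamma value of 0.7, and the unsharp-mask sharpening are our illustrative assumptions; the paper's fusion blends its two derived inputs with per-pixel weight maps rather than the equal-weight blend shown here.

```python
import cv2
import numpy as np

def white_balance(img):
    # Gray-world white balance: scale each channel so its mean
    # matches the global mean intensity.
    img = img.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0, 255).astype(np.uint8)

def enhance_underwater(img):
    wb = white_balance(img)
    # Derived input 1: gamma correction to expose dark regions.
    gamma = np.clip(((wb / 255.0) ** 0.7) * 255, 0, 255).astype(np.uint8)
    # Derived input 2: unsharp masking to recover edge sharpness.
    blur = cv2.GaussianBlur(wb, (5, 5), 0)
    sharp = cv2.addWeighted(wb, 1.5, blur, -0.5, 0)
    # Naive equal-weight blend of the two derived images (a stand-in
    # for the weighted multi-scale fusion described in the paper).
    return cv2.addWeighted(gamma, 0.5, sharp, 0.5, 0)

# "underwater.jpg" is a placeholder input path.
enhanced = enhance_underwater(cv2.imread("underwater.jpg"))
cv2.imwrite("enhanced.jpg", enhanced)
```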
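The object-separation stage can be sketched in the same spirit. In the code below, the percentile limits of the automatic contrast stretch, background subtraction as the image arithmetic operation, Otsu's method as the global threshold, and a 3x3 erosion standing in for the minimum filter are all assumptions for illustration, not parameters stated in the paper.

```python
import cv2
import numpy as np

def detect_object(enhanced_bgr):
    gray = cv2.cvtColor(enhanced_bgr, cv2.COLOR_BGR2GRAY)
    # Automatic contrast stretching: map the 1st-99th percentile
    # intensity range onto the full [0, 255] range.
    lo, hi = np.percentile(gray, (1, 99))
    stretched = np.clip((gray - lo) * 255.0 / (hi - lo + 1e-6),
                        0, 255).astype(np.uint8)
    # Image arithmetic: subtract a heavily blurred background estimate
    # so large-scale illumination variation does not dominate.
    background = cv2.GaussianBlur(stretched, (51, 51), 0)
    diff = cv2.absdiff(stretched, background)
    # Global threshold (Otsu) to separate the object from the background.
    _, mask = cv2.threshold(diff, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Minimum filter (grayscale erosion) to suppress isolated noise.
    return cv2.erode(mask, np.ones((3, 3), np.uint8))
```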

