
Sekineh Asadi Amiri

Academic rank: Assistant Professor
ORCID:
Education: PhD.
ScopusId:
Faculty: Faculty of Technology and Engineering
Address: University of Mazandaran
Phone: 011-35302901

Research

Title
A Saliency Detection Model via Fusing Extracted Low-Level and High-Level Features from an Image
Type
Journal Paper
Keywords
Saliency region detection, connected components, low-level feature, high-level feature
Year
2019
Journal
AUT Journal of Electrical Engineering
DOI
Researchers
Sekineh Asadi Amiri, Hamid Hassanpour

Abstract

Salient regions attract more human attention than other regions in an image. Both low-level and high-level features are used in saliency region detection. Low-level features convey primitive information such as color or texture, while high-level features typically reflect properties of the human visual system. Recently, several salient region detection methods have been proposed that rely on either low-level or high-level features alone. Considering both low-level and high-level features is necessary to overcome their individual limitations. In this paper, a novel saliency detection method is proposed that uses both low-level and high-level features. Color difference and texture difference are considered as low-level features, while modeling human attention toward the center of the image is considered as a high-level feature. In this approach, color saliency maps are extracted from each channel of the Lab color space, and texture saliency maps are extracted using the wavelet transform and the local variance of each channel. These feature maps are then fused to construct the final saliency map. In the post-processing step, morphological operators and a connected-components technique are applied to the final saliency map to produce more contiguous saliency regions. We compared the proposed method with four state-of-the-art methods on the MSRA (Microsoft Research Asia) database. The average F-measure over 1000 images of the MSRA dataset is 0.7824. Experimental results demonstrate that the proposed method outperforms the existing methods in saliency region detection.
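
The sketch below is a rough Python illustration (using OpenCV, PyWavelets, and SciPy) of the pipeline described in the abstract; it is not the authors' implementation. The function names (color_saliency, texture_saliency, center_bias, saliency_map) are hypothetical, and the specific choices of contrast measure (deviation from the channel mean), Haar wavelet, 7x7 local-variance window, Gaussian center prior, and 1.5x-mean threshold are assumptions made only for illustration.

# Illustrative sketch of the described pipeline: per-channel color saliency in
# Lab space, wavelet/variance-based texture saliency, a center-bias map as the
# high-level cue, fusion, and morphological post-processing with connected
# components. Exact formulas and parameters are assumptions, not the paper's.
import numpy as np
import cv2
import pywt
from scipy.ndimage import uniform_filter

def color_saliency(channel):
    # Low-level color cue: absolute deviation from the channel's mean value
    # (a common global-contrast heuristic; the paper's exact measure may differ).
    return np.abs(channel - channel.mean())

def texture_saliency(channel):
    # Low-level texture cue: high-frequency wavelet detail energy plus local variance.
    _, (cH, cV, cD) = pywt.dwt2(channel, 'haar')
    detail = cv2.resize(np.abs(cH) + np.abs(cV) + np.abs(cD),
                        (channel.shape[1], channel.shape[0]))
    local_var = uniform_filter(channel**2, 7) - uniform_filter(channel, 7)**2
    return detail + local_var

def center_bias(shape, sigma=0.3):
    # High-level cue: Gaussian weighting that favors the image center.
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-(((x - w / 2) / (sigma * w))**2 + ((y - h / 2) / (sigma * h))**2))

def saliency_map(bgr):
    # bgr: 8-bit BGR image as read by cv2.imread.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    maps = []
    for c in range(3):
        ch = lab[..., c]
        maps.append(color_saliency(ch))
        maps.append(texture_saliency(ch))
    # Fuse the normalized feature maps and apply the center prior.
    fused = sum(cv2.normalize(m, None, 0, 1, cv2.NORM_MINMAX) for m in maps)
    fused = cv2.normalize(fused * center_bias(fused.shape), None, 0, 1, cv2.NORM_MINMAX)
    # Post-processing: threshold, morphological closing, keep large connected components.
    binary = (fused > fused.mean() * 1.5).astype(np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    keep = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > 0.01 * binary.size:
            keep[labels == i] = 1
    return fused, keep

A typical use would be fused, mask = saliency_map(cv2.imread('image.jpg')), where fused is the continuous saliency map and mask is the post-processed binary salient region.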