
Image classification algorithm based on split channel attention network

  • Abstract: The channel attention mechanism makes effective use of different feature channels: by weighting and adjusting the channels of feature maps, it lets a convolutional neural network focus on the important channels, improving its classification ability. However, when global average pooling is used to obtain the global feature of each channel, different channels in a feature map are very likely to share the same mean value, so the pooled features lack diversity, which in turn degrades classification performance. To address this problem, a split channel attention mechanism is proposed and built into a module. The module extends the output dimension of global average pooling, reducing the information loss caused by pooling and enhancing the feature diversity of the global average pooling layer in channel attention; multiple one-dimensional convolutions then compute the attention weight of each region along the channel dimension. The split channel attention mechanism was combined with several image classification networks, and image classification experiments were conducted on the CIFAR-100 and ImageNet datasets. The results show that the split channel attention mechanism effectively improves model accuracy while remaining lightweight and compares favorably with other attention mechanisms.

    Abstract: The channel attention mechanism can make effective use of different feature channels. By weighting and adjusting the channels of feature maps, it allows a convolutional neural network to focus on the important channels, thus improving its classification ability. The first step of the mechanism compresses the feature map of each channel to obtain that channel's global feature, and global average pooling is the usual choice because it is simple and efficient. However, a problem arises when global average pooling is used to obtain the global features of channels: different channels in a feature map have a high probability of sharing the same mean value. Moreover, measuring the importance of an entire feature map with a single scalar cannot accurately reflect the complexity and diversity of its features, so the features produced by global average pooling lack diversity, which further degrades the classification performance of the network. To solve this problem, a split channel attention mechanism is proposed and built into a module. This module extends the output dimension of global average pooling, which reduces the information loss caused by pooling and enhances the diversity of the pooled features in channel attention, and it then uses multiple one-dimensional convolutions to compute the attention weight of each region along the channel dimension. By splitting the output of the global average pooling layer into multiple regions, the module preserves the variation among different regions of the feature map while still compressing the global information of each channel. The importance of the different regional features is weighed jointly, giving a more comprehensive and fine-grained way to evaluate and exploit feature-map information than plain global average pooling and effectively improving the capability and performance of the model. Image classification experiments were performed on the CIFAR-100 and ImageNet datasets by combining the split channel attention mechanism with multiple image classification networks. Experimental results show that the split channel attention mechanism effectively improves model accuracy while remaining lightweight and compares favorably with other attention mechanisms. Furthermore, Grad-CAM is used to visually analyze the model's predictions. The visualizations show that, when integrated with the split channel attention mechanism, the network fits the features of the target object region better and has stronger feature extraction and classification capabilities, underscoring the potential of the split channel attention mechanism to improve network performance.
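The abstract describes the module only at a high level: an enlarged pooling output, split into regions along the channel axis, with one one-dimensional convolution per region. The PyTorch sketch below is therefore an interpretation, not the authors' implementation; the class name `SplitChannelAttention` and the hyperparameters `pool_size`, `num_splits`, and `kernel_size` are assumptions, and the per-region 1-D convolutions follow the ECA style of computing channel weights.

```python
import torch
import torch.nn as nn


class SplitChannelAttention(nn.Module):
    """Sketch of a split channel attention block (hypothetical design).

    Plain global average pooling reduces each channel to one scalar; here
    each channel is instead pooled to a small s x s grid (s*s values per
    channel), the pooled tensor is split into regions along the channel
    axis, and each region gets its own 1-D convolution that produces the
    per-channel attention weights for that region.
    """

    def __init__(self, channels, pool_size=2, num_splits=4, kernel_size=3):
        super().__init__()
        assert channels % num_splits == 0, "channels must split evenly"
        self.pool = nn.AdaptiveAvgPool2d(pool_size)  # extended GAP: s x s, not 1 x 1
        self.num_splits = num_splits
        # One 1-D conv per region: its input channels are the s*s pooled
        # values of each feature channel; its output is one weight per channel.
        self.convs = nn.ModuleList([
            nn.Conv1d(pool_size * pool_size, 1, kernel_size,
                      padding=kernel_size // 2, bias=False)
            for _ in range(num_splits)
        ])
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.shape
        pooled = self.pool(x).flatten(2)                 # (b, c, s*s)
        regions = pooled.chunk(self.num_splits, dim=1)   # split channel axis
        weights = torch.cat(
            [conv(r.transpose(1, 2))                     # (b, 1, c/num_splits)
             for conv, r in zip(self.convs, regions)],
            dim=2,
        )                                                # (b, 1, c)
        w = self.sigmoid(weights).view(b, c, 1, 1)
        return x * w                                     # reweight channels
```

The block is used like any other channel attention module, e.g. `y = SplitChannelAttention(256)(features)` after a convolutional stage; note that with `pool_size=1` and `num_splits=1` the sketch degenerates to ECA-style attention over an ordinary global-average-pooled vector, which matches the abstract's framing of the method as an extension of plain GAP.

The abstract also mentions Grad-CAM for visual analysis. Grad-CAM itself is standard: weight a convolutional layer's activations by the spatially averaged gradients of the class score, apply ReLU, and normalize. A minimal self-contained version follows; the helper name `grad_cam` and the hook-based plumbing are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM: weight a layer's activations by the spatial mean
    of the class-score gradients, then apply ReLU and normalize."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))

    logits = model(image)                        # (1, num_classes)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the top prediction
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove()
    h2.remove()

    a, g = acts[0], grads[0]                     # both (1, c, h, w)
    weights = g.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * a).sum(dim=1))       # (1, h, w)
    return cam / (cam.max() + 1e-8), class_idx   # normalized to [0, 1]
```

For a torchvision ResNet one would call, for example, `cam, cls = grad_cam(model, img, model.layer4[-1])` and upsample `cam` to the input resolution before overlaying it on the image.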

     
