Paper Title

Fast Fourier Transformation for Optimizing Convolutional Neural Networks in Object Recognition

Paper Authors

Varsha Nair, Moitrayee Chatterjee, Neda Tavakoli, Akbar Siami Namin, Craig Snoeyink

Paper Abstract

This paper proposes a Fast Fourier Transformation-based U-Net (a refined fully convolutional network) to perform image convolution in neural networks. Leveraging the Fast Fourier Transformation, the approach reduces the cost of image convolution in Convolutional Neural Networks (CNNs) and thus the overall computational cost. The proposed model identifies object information from images. We apply the Fast Fourier Transform algorithm to an image dataset to obtain more accessible information about the image data before segmenting it through the U-Net architecture. More specifically, we implement an FFT-based convolutional neural network to improve the training time of the network. The proposed approach was applied to the publicly available Broad Bioimage Benchmark Collection (BBBC) dataset. Our model reduced training time during convolution from $600-700$ ms/step to $400-500$ ms/step. We evaluated the accuracy of our model using the Intersection over Union (IoU) metric, which showed significant improvement.
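To illustrate the core idea behind FFT-based convolution referenced in the abstract, the following is a minimal NumPy sketch of the convolution theorem: a spatial convolution is replaced by a pointwise product in the frequency domain. This is only an illustrative example under assumed array shapes and a hypothetical `fft_conv2d` helper, not the authors' actual U-Net implementation.

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Convolve a 2-D image with a kernel via the convolution theorem:
    pointwise multiplication in the frequency domain replaces spatial convolution."""
    # Pad both operands to the full linear-convolution size to avoid circular wrap-around.
    out_shape = (image.shape[0] + kernel.shape[0] - 1,
                 image.shape[1] + kernel.shape[1] - 1)
    image_f = np.fft.rfft2(image, out_shape)
    kernel_f = np.fft.rfft2(kernel, out_shape)
    # Multiply in the frequency domain, then transform back to the spatial domain.
    return np.fft.irfft2(image_f * kernel_f, out_shape)

# Illustrative usage: a random 256x256 "image" convolved with a 3x3 edge-detection kernel.
image = np.random.rand(256, 256)
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])
result = fft_conv2d(image, kernel)
print(result.shape)  # (258, 258) -- full linear-convolution output
```

For small kernels the gain is modest, but as kernel or feature-map sizes grow, the frequency-domain product scales better than direct spatial convolution, which is the cost reduction the paper targets.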
