Chinese Academy of Sciences Institutional Repositories Grid
A Lightweight Hybrid Model with Location-Preserving ViT for Efficient Food Recognition

Document Type: Journal Article

Authors: Sheng, Guorui (3); Min, Weiqing (1,2); Zhu, Xiangyi (3); Xu, Liang (3); Sun, Qingshuo (3); Yang, Yancun (3); Wang, Lili (3); Jiang, Shuqiang (1,2)
Journal: NUTRIENTS
Publication Date: 2024
Volume: 16; Issue: 2; Pages: 16
Keywords: food recognition; lightweight; global feature; ViT; nutrition management
DOI: 10.3390/nu16020200
Abstract: Food-image recognition plays a pivotal role in intelligent nutrition management, and lightweight recognition methods based on deep learning are crucial for enabling mobile deployment. This capability empowers individuals to manage their daily diet and nutrition effectively on devices such as smartphones. In this study, we propose the Efficient Hybrid Food Recognition Net (EHFR-Net), a novel neural network that integrates Convolutional Neural Networks (CNNs) and the Vision Transformer (ViT). We find that in food-image recognition tasks, while ViT excels at extracting global information, its disregard for the initial spatial information hampers its efficacy. We therefore design a ViT variant termed the Location-Preserving Vision Transformer (LP-ViT), which retains positional information during global information extraction. To keep the model lightweight, we employ an inverted residual block on the CNN side to extract local features. Global and local features are integrated by summing and concatenating the outputs of the convolutional and ViT structures, yielding a unified Hybrid Block (HBlock). Moreover, we optimize the hierarchical layout of EHFR-Net to suit the characteristics of the HBlock, effectively reducing the model size. Extensive experiments on three well-known food-image recognition datasets demonstrate the superiority of our approach. For instance, on the ETHZ Food-101 dataset, our method achieves a recognition accuracy of 90.7%, which is 3.5% higher than the state-of-the-art ViT-based lightweight network MobileViTv2 (87.2%) at an equivalent number of parameters and computations.
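To make the hybrid design described in the abstract concrete, the sketch below shows one plausible way to pair a MobileNet-style inverted residual block (local CNN branch) with a self-attention block that keeps the feature map's spatial layout (a stand-in for LP-ViT), merging the two branches by element-wise summation followed by channel concatenation. This is a minimal PyTorch sketch under our own assumptions: the class names (InvertedResidual, LPViTBlock, HBlock), the channel dimensions, and the exact fusion scheme are illustrative guesses, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """Local branch: MobileNetV2-style inverted residual block (assumed design)."""

    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        hidden = dim * expansion
        self.block = nn.Sequential(
            nn.Conv2d(dim, hidden, 1, bias=False),  # pointwise expansion
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),  # depthwise
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, dim, 1, bias=False),  # pointwise projection
            nn.BatchNorm2d(dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)  # residual connection keeps dimensions intact


class LPViTBlock(nn.Module):
    """Global branch: self-attention over the H*W token grid while the tensor
    keeps its spatial ordering, so positional structure is preserved.
    Illustrative stand-in for the paper's LP-ViT, not the authors' code."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C), spatial order kept
        y = self.norm(tokens)
        y, _ = self.attn(y, y, y)
        y = (tokens + y).transpose(1, 2).reshape(b, c, h, w)  # back to feature map
        return y


class HBlock(nn.Module):
    """Hybrid block: sum the two branch outputs, then concatenate with the input
    and fuse via a 1x1 conv (one plausible reading of 'summing and concatenating')."""

    def __init__(self, dim: int):
        super().__init__()
        self.local_branch = InvertedResidual(dim)
        self.global_branch = LPViTBlock(dim)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        merged = self.local_branch(x) + self.global_branch(x)  # element-wise sum
        return self.fuse(torch.cat([x, merged], dim=1))        # concat + 1x1 fusion


if __name__ == "__main__":
    block = HBlock(dim=64)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Run as a script, the sketch prints torch.Size([1, 64, 32, 32]), confirming that the block preserves both channel count and spatial resolution, which is what allows several HBlocks to be stacked into a hierarchical backbone.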
WOS Research Area: Nutrition & Dietetics
Language: English
WOS Record ID: WOS:001151224800001
Publisher: MDPI
Source URL: http://119.78.100.204/handle/2XEOYT63/38395
Collection: Institute of Computing Technology, Chinese Academy of Sciences, Journal Articles (English)
Corresponding Author: Yang, Yancun
Affiliations:
1. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100190, Peoples R China
2. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
3. Ludong Univ, Sch Informat & Elect Engn, Yantai 264025, Peoples R China
Recommended Citations
GB/T 7714: Sheng, Guorui, Min, Weiqing, Zhu, Xiangyi, et al. A Lightweight Hybrid Model with Location-Preserving ViT for Efficient Food Recognition[J]. NUTRIENTS, 2024, 16(2): 16.
APA: Sheng, Guorui, Min, Weiqing, Zhu, Xiangyi, Xu, Liang, Sun, Qingshuo, ... & Jiang, Shuqiang. (2024). A Lightweight Hybrid Model with Location-Preserving ViT for Efficient Food Recognition. NUTRIENTS, 16(2), 16.
MLA: Sheng, Guorui, et al. "A Lightweight Hybrid Model with Location-Preserving ViT for Efficient Food Recognition." NUTRIENTS 16.2 (2024): 16.

Ingest Method: OAI Harvesting

Source: Institute of Computing Technology

