Electronics, Vol. 13, Pages 4552: Multimodal Food Image Classification with Large Language Models
Electronics doi: 10.3390/electronics13224552
Authors: Jun-Hwa Kim, Nam-Ho Kim, Donghyeok Jo, Chee Sun Won
In this study, we leverage advancements in large language models (LLMs) for fine-grained food image classification. We achieve this by integrating textual features, generated from images by an LLM, into a multimodal learning framework. Specifically, semantic textual descriptions produced by the LLM are encoded and combined with image features obtained from a transformer-based architecture to improve food image classification. Our approach employs a cross-attention mechanism to fuse the visual and textual modalities, enhancing the model’s ability to extract discriminative features beyond what visual features alone can provide.
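The cross-attention fusion described in the abstract can be illustrated with a minimal sketch: image patch tokens serve as queries that attend over encoded text tokens, so each visual token is enriched with relevant textual context. This is a generic scaled dot-product cross-attention in NumPy, not the authors' implementation; the projection matrices, dimensions, and token counts below are all hypothetical placeholders for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_feats, txt_feats, d_k, seed=0):
    """Fuse image tokens (queries) with text tokens (keys/values).

    img_feats: (n_img, d_img) visual tokens from a vision transformer
    txt_feats: (n_txt, d_txt) encoded LLM description tokens
    Returns (n_img, d_k) text-conditioned visual features.
    Random projections stand in for learned weight matrices.
    """
    rng = np.random.default_rng(seed)
    d_img, d_txt = img_feats.shape[-1], txt_feats.shape[-1]
    W_q = rng.standard_normal((d_img, d_k)) / np.sqrt(d_img)
    W_k = rng.standard_normal((d_txt, d_k)) / np.sqrt(d_txt)
    W_v = rng.standard_normal((d_txt, d_k)) / np.sqrt(d_txt)
    Q = img_feats @ W_q          # queries from the visual modality
    K = txt_feats @ W_k          # keys from the textual modality
    V = txt_feats @ W_v          # values from the textual modality
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_img, n_txt) weights
    return attn @ V              # image tokens enriched with text context

# Toy example: 49 patch tokens (7x7 grid) attending over 12 text tokens
img = np.random.default_rng(1).standard_normal((49, 768))
txt = np.random.default_rng(2).standard_normal((12, 512))
fused = cross_attention(img, txt, d_k=64)
print(fused.shape)  # (49, 64)
```

In practice the fused features would be passed to a classification head; the point of the sketch is only that queries come from one modality and keys/values from the other, which is what distinguishes cross-attention from self-attention.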