Graduate School of Information Science, University of Hyogo, Kobe, Japan
Email: af24o008@guh.u-hyogo.ac.jp
Manuscript received February 26, 2025; accepted April 2, 2025; published April 17, 2025
Abstract—This study proposes a novel framework for predicting stock price movements using a prompt-based approach with the Large Language Model Meta AI (LLaMA) model, where candlestick charts serve as the primary input. Unlike traditional deep learning models that process images through convolutional or transformer-based architectures, the proposed method leverages LLaMA’s prompt-driven reasoning to interpret financial chart patterns. In addition, a teacher-student model incorporating LLaMA and Qwen is explored. To assess the effectiveness of this prompt-based Large Language Model (LLM) approach, its performance is compared with established models, including Convolutional Neural Network (CNN), ResNet, and Vision Transformer. Experimental results demonstrate that the proposed method consistently outperforms these deep learning models, highlighting the potential of prompt-based LLM techniques for financial time series forecasting using visual inputs.
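For readers who want a concrete picture of the prompt-based setup summarized above, the minimal sketch below shows one way a candlestick chart image could be passed to a vision-capable LLaMA checkpoint through Hugging Face transformers and its one-word answer mapped to an up/down label. The checkpoint name, chart file, and prompt wording are illustrative assumptions; the paper's actual prompts, preprocessing, and LLaMA–Qwen teacher-student configuration may differ.

```python
# Illustrative sketch: querying a vision-capable LLaMA checkpoint with a
# candlestick chart and a directional prompt. The model ID, file name, and
# prompt text are assumptions, not the paper's exact configuration.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed checkpoint
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# A pre-rendered candlestick chart for the look-back window (hypothetical file).
chart = Image.open("candlestick_30d.png")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {
                "type": "text",
                "text": (
                    "This candlestick chart covers the last 30 trading days. "
                    "Based on the visible pattern, will the next close be "
                    "UP or DOWN? Answer with a single word."
                ),
            },
        ],
    }
]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    chart, prompt, add_special_tokens=False, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=8)

# Keep only the newly generated tokens and map the answer to a binary label.
answer = processor.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
).strip().upper()
label = 1 if "UP" in answer else 0
print(answer, label)
```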
Keywords—Convolutional Neural Network (CNN), Large Language Model Meta AI (LLaMA), Residual Neural Network (ResNet), Vision Transformer, stock price prediction
Cite: Qizhao Chen, "Image-Driven Stock Price Prediction with LLaMA: A Prompt-Based Approach," International Journal of Modeling and Optimization, vol. 15, no. 1, pp. 17-24, 2025.
Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.