Comparative Analysis of Performance of Deep Learning Classification Approach based on LSTM-RNN for Textual and Image Datasets

Alaa Sahl Gaafar, Jasim Mohammed Dahr, Alaa Khalaf Hamoud


Deep learning approaches can be applied to large amounts of data to simplify and improve the engineering practice of automated decision-making, rather than relying on human-encoded heuristics. The need for faster and more effective decisions about systems, processes, and applications has given rise to many artificial intelligence-motivated approaches such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and fuzzy analytics. Deep learning deploys multiple cascaded layers of processing elements to enable feature extraction and transformation; these layers yield multiple levels of representation corresponding to distinct levels of abstraction. Applications of deep learning algorithms include weather forecasting, object recognition, stock market forecasting, medical diagnosis, and emergency warning systems. This paper investigates the performance of the deep learning approach on the basis of processing components, data representation, and data types. To this end, a deep learning algorithm based on a long short-term memory recurrent neural network (LSTM-RNN) was used to learn hidden patterns and features in textual and image datasets, respectively. The outcomes reveal that the image-based deep learning model was faster than the sentiment (text)-based model, 3.49 minutes versus 18.25 minutes, owing to the well-defined patterns of the image data representation. The LSTM-RNN on images also achieved better classification accuracy, 96.50% versus 85.69%, due to the complex network architecture, processing elements, and features of the underlying datasets.
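The paper's implementation details are not given in the abstract, but the core mechanism it relies on, the gated recurrence of an LSTM cell that lets the network learn hidden patterns across a sequence, can be illustrated with a minimal sketch. The code below is a plain NumPy forward pass of one LSTM layer over a toy sequence; all dimensions, weight initializations, and the gate ordering are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Gates are stacked in the (assumed) order [input, forget, cell, output]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])            # input gate: how much new info to admit
    f = sigmoid(z[H:2 * H])        # forget gate: how much old state to keep
    g = np.tanh(z[2 * H:3 * H])    # candidate cell state
    o = sigmoid(z[3 * H:4 * H])    # output gate
    c = f * c_prev + i * g         # updated long-term cell state
    h = o * np.tanh(c)             # updated hidden state
    return h, c

def lstm_forward(xs, W, U, b, H):
    """Run the cell over a whole sequence; the final hidden state
    would be fed to a dense softmax layer for classification."""
    h = np.zeros(H)
    c = np.zeros(H)
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

rng = np.random.default_rng(0)
D, H, T = 8, 4, 5                        # input dim, hidden dim, sequence length
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
xs = rng.normal(size=(T, D))             # toy sequence, e.g. word embeddings
h_final = lstm_forward(xs, W, U, b, H)
print(h_final.shape)                     # (4,)
```

For text, each `x` would typically be a word embedding; for images, rows or patches of the image can be fed as the sequence steps, which is one way an LSTM-RNN is applied to both data types compared in the paper.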






This work is licensed under a Creative Commons Attribution 3.0 License.