Fine-Tuning BERT for Aspect Extraction in Multi-domain ABSA
Abstract
Aspect extraction plays a crucial role in understanding the fine-grained nuances of text data, allowing businesses and researchers to gain deeper insight into customer opinions, sentiment distributions, and preferences. This study presents a BERT-based framework for aspect extraction in aspect-based sentiment analysis (ABSA) and evaluates its performance. We test our method on SemEval benchmark datasets of consumer reviews spanning diverse domains, including laptops, restaurants, and Twitter. By fine-tuning BERT on a large annotated corpus, we aim to overcome the limitations of traditional approaches and improve both the accuracy and the efficiency of aspect extraction in ABSA. The experimental results demonstrate the efficacy of our method, with a notable aspect extraction accuracy of 0.99, highlighting its capacity to extract aspects correctly and consistently. The article also examines the transferability of the approach to new domains and its potential applications in real-world scenarios.
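To make the task concrete: fine-tuning BERT for aspect extraction is typically framed as sequence labeling, where each token is tagged with a BIO label marking aspect-term boundaries. The sketch below (an assumed, minimal illustration of that formulation, not the authors' exact code; the label names `B-ASP`/`I-ASP` and the helper `bio_labels` are hypothetical) shows how gold aspect spans are converted into per-token training labels before being fed to a token-classification head.

```python
def bio_labels(tokens, aspect_spans):
    """Assign BIO labels: B-ASP marks the first token of an aspect term,
    I-ASP marks subsequent tokens inside it, O marks all other tokens.

    aspect_spans is a list of (start, end) token indices, end exclusive.
    """
    labels = ["O"] * len(tokens)
    for start, end in aspect_spans:
        labels[start] = "B-ASP"
        for i in range(start + 1, end):
            labels[i] = "I-ASP"
    return labels


# Example review sentence with the aspect term "battery life" (tokens 1-2).
tokens = ["The", "battery", "life", "is", "great"]
print(bio_labels(tokens, [(1, 3)]))
# ['O', 'B-ASP', 'I-ASP', 'O', 'O']
```

In a full pipeline these labels would additionally be aligned to BERT's WordPiece subtokens (conventionally, only the first subpiece of each word keeps its label and the rest are masked out of the loss), but the span-to-BIO step above is the core of casting aspect extraction as token classification.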
DOI: https://doi.org/10.31449/inf.v47i9.5217
This work is licensed under a Creative Commons Attribution 3.0 License.