Pre-trained language models fine-tuned with SVM for legal textual entailment recognition

Quan Van Nguyen, Anh Trong Nguyen, Huy Quang Pham, Kiet Van Nguyen

Authors

  • Quan Van Nguyen, University of Information Technology, VNU-HCM, Ho Chi Minh City, Vietnam
  • Anh Trong Nguyen, Ho Chi Minh City University of Technology, VNU-HCM, Vietnam
  • Huy Quang Pham, University of Information Technology, VNU-HCM, Ho Chi Minh City, Vietnam
  • Kiet Van Nguyen, University of Information Technology, VNU-HCM, Ho Chi Minh City, Vietnam, https://orcid.org/0000-0002-8456-2742

DOI:

https://doi.org/10.15625/1813-9663/20618

Keywords:

Legal textual entailment recognition, support vector machine (SVM), transformer.

Abstract

Breakthroughs in natural language processing (NLP) are not only a crucial step in technological evolution but also deliver significant benefits across fields demanding high intelligence and precision. One notable NLP application is the analysis and processing of legal texts. Capitalizing on this trend, the 10th Workshop on Vietnamese Language and Speech Processing (VLSP 2023) hosted a new challenge: legal textual entailment recognition (LTER). The task is to determine whether a given statement is logically entailed by the relevant legal passage. Our proposed method leverages a novel layer based on Support Vector Machine (SVM) kernel formulations, which effectively captures nuanced relationships in the input data. In addition, it capitalizes on natural language inference (NLI) datasets, which are closely related to recognizing textual entailment (RTE), to improve performance and generalization. Our approach not only yields accurate results but also uses data resources efficiently, helping our A3N1 team achieve an accuracy of 0.7194 on the test set and rank third on the leaderboard.
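The overall idea can be approximated in a short sketch: encode each (legal passage, statement) pair with a pre-trained transformer and train an SVM on the pooled representations in place of a standard softmax head. The Python sketch below is an illustration only, not the authors' exact architecture; the checkpoint name xlm-roberta-base, the first-token pooling, the RBF kernel, and the toy example pairs are assumptions made for demonstration.

# Minimal sketch: pooled embeddings from a pre-trained transformer encoder
# fed to an SVM classifier for textual entailment.
# Illustrative approximation only, not the paper's exact SVM-kernel layer.
import torch
from sklearn.svm import SVC
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # assumption; any suitable encoder could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME).eval()

def encode_pairs(passages, statements, max_length=256):
    """Return one pooled vector per (legal passage, statement) pair."""
    batch = tokenizer(passages, statements, padding=True, truncation=True,
                      max_length=max_length, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    # First-token embedding as the pair representation (a common pooling choice).
    return out.last_hidden_state[:, 0, :].numpy()

# Hypothetical toy pairs; 1 = entailed, 0 = not entailed.
passages = [
    "Employees are entitled to 12 days of annual leave.",
    "Employees are entitled to 12 days of annual leave.",
]
statements = [
    "An employee may take 12 days of paid leave per year.",
    "Employees receive no annual leave at all.",
]
labels = [1, 0]

# Fit an SVM with an RBF kernel on the pooled embeddings.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(encode_pairs(passages, statements), labels)

# Predict entailment for a new statement against the same passage.
pred = clf.predict(encode_pairs(
    ["Employees are entitled to 12 days of annual leave."],
    ["Annual leave for employees is 12 working days."],
))
print("entailed" if pred[0] == 1 else "not entailed")

Note that the abstract describes the SVM as a kernel-based layer inside the fine-tuned model, trained together with the encoder; the sketch instead fits a separate scikit-learn SVM on frozen encoder outputs so that the example stays short and self-contained.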


Published

24-09-2025

How to Cite

[1] Q. V. Nguyen, A. T. Nguyen, H. Q. Pham, and K. V. Nguyen, “Pre-trained language models fine-tuned with SVM for legal textual entailment recognition”, J. Comput. Sci. Cybern., vol. 41, no. 3, pp. 305–321, Sep. 2025.

