
Parameter-efficient feature-based transfer for paraphrase identification

Published online by Cambridge University Press:  19 December 2022

Xiaodong Liu*
Affiliation:
Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan
Rafal Rzepka
Affiliation:
Faculty of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan
Kenji Araki
Affiliation:
Faculty of Information Science and Technology, Hokkaido University, Sapporo, Hokkaido, Japan
*Corresponding author. E-mail: xiaodongliu@ist.hokudai.ac.jp

Abstract

Paraphrase Identification (PI), the NLP task of determining whether two sentences are semantically equivalent, has been approached in many ways. Traditional approaches, based mainly on unsupervised learning and feature engineering, are computationally inexpensive, but their task performance is now only moderate. Seeking a method that preserves the low computational cost of traditional approaches while yielding better task performance, we investigate neural network-based transfer learning and find that this goal can be achieved by using parameters more efficiently in feature-based transfer. To this end, we propose a pre-trained task-specific architecture whose fixed parameters can be shared by multiple classifiers, each adding only a small number of parameters. As a result, the only remaining parameter-update cost comes from classifier tuning: the features output by the architecture, combined with lexical overlap features, are fed into a single classifier for tuning. Furthermore, the pre-trained task-specific architecture can also be applied to natural language inference and semantic textual similarity tasks. This design consumes little computational and memory resource per task and is conducive to power-efficient continual learning. The experimental results show that our proposed method is competitive with adapter-BERT (a parameter-efficient fine-tuning approach) on some tasks while using only 16% of the trainable parameters and saving 69–96% of the parameter-update time.
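The transfer pattern the abstract describes can be illustrated with a toy sketch: a frozen feature extractor shared across tasks, simple lexical overlap features (here, Jaccard overlap and a length ratio, chosen for illustration), and a small classifier that is the only trainable component. The fixed random projection below merely stands in for the paper's pre-trained task-specific architecture; none of these details are from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled sentence pairs (1 = paraphrase, 0 = not).
pairs = [
    ("the cat sat on the mat", "a cat was sitting on the mat", 1),
    ("he bought a new car", "she sold her old bike", 0),
    ("the movie was great", "the film was excellent", 1),
    ("it is raining outside", "the stock market fell today", 0),
]

vocab = sorted({w for a, b, _ in pairs for w in (a + " " + b).split()})
idx = {w: i for i, w in enumerate(vocab)}

def bow(sent):
    """Bag-of-words count vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in sent.split():
        v[idx[w]] += 1
    return v

# Frozen "pre-trained" encoder: a fixed random projection that is never
# updated, standing in for the shared task-specific architecture.
W_frozen = rng.normal(size=(len(vocab), 8))

def encode(sent):
    return np.tanh(bow(sent) @ W_frozen)

def lexical_overlap(a, b):
    """Two lexical overlap features: Jaccard similarity and length ratio."""
    t1, t2 = set(a.split()), set(b.split())
    return np.array([len(t1 & t2) / len(t1 | t2),
                     min(len(t1), len(t2)) / max(len(t1), len(t2))])

def features(a, b):
    """Frozen encoder features combined with lexical overlap features."""
    ea, eb = encode(a), encode(b)
    return np.concatenate([ea, eb, np.abs(ea - eb), lexical_overlap(a, b)])

X = np.stack([features(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

# The only trainable component: a logistic-regression classifier,
# fitted by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

preds = ((X @ w + b) > 0).astype(int).tolist()
```

Because `W_frozen` is never updated, adding another task (e.g. natural language inference) would only require training another lightweight classifier head on top of the same shared features, which is the source of the parameter savings the abstract reports.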

Information

Type
Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press
