Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand Rare Biomedical Words

Figure: Examples for different paradigms.

Abstract

Prompt-based fine-tuning of pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain. However, prompt-based tuning in the biomedical domain has not been investigated thoroughly. Biomedical words are often rare in general text but ubiquitous in biomedical contexts, which dramatically degrades the performance of pre-trained models on downstream biomedical applications even after fine-tuning, especially in low-resource scenarios. We propose a simple yet effective approach that helps models learn rare biomedical words during prompt-based tuning. Experimental results show that our method achieves up to a 6% improvement on a biomedical natural language inference task, without any extra parameters or training steps, under few-shot vanilla prompt settings.
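
For illustration, below is a minimal sketch (not the paper's released code) of the general idea: a cloze-style NLI prompt in which a rare biomedical word is paired with a plain-language paraphrase before a masked language model predicts a label word. The model name, prompt template, label words, and the PARAPHRASES dictionary are all illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: cloze-style NLI prompting where rare biomedical words are
# accompanied by plain-language paraphrases. All names below are
# illustrative assumptions, not the paper's released implementation.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-uncased"  # assumption: any masked LM could stand in here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)

# Hypothetical paraphrase dictionary for rare biomedical terms.
PARAPHRASES = {"dyspnea": "shortness of breath"}

def add_paraphrase(text: str) -> str:
    """Append a bracketed paraphrase after each known rare word."""
    for word, gloss in PARAPHRASES.items():
        text = text.replace(word, f"{word} ({gloss})")
    return text

def nli_prompt(premise: str, hypothesis: str) -> str:
    # Cloze template: the [MASK] position is mapped to label words below.
    return f"{add_paraphrase(premise)} ? [MASK] , {add_paraphrase(hypothesis)}"

# Assumed verbalizer: one label word per NLI class.
LABEL_WORDS = {"yes": "entailment", "maybe": "neutral", "no": "contradiction"}

def classify(premise: str, hypothesis: str) -> str:
    inputs = tokenizer(nli_prompt(premise, hypothesis), return_tensors="pt")
    # Locate the [MASK] token and read off its vocabulary logits.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    label_ids = tokenizer.convert_tokens_to_ids(list(LABEL_WORDS))
    best = int(torch.argmax(logits[label_ids]))
    return list(LABEL_WORDS.values())[best]

print(classify("The patient reports dyspnea on exertion.",
               "The patient has trouble breathing."))
```

In a prompt-based fine-tuning setup, the same template and verbalizer would be used during training as well, so the paraphrase is seen by the model every time the rare word appears; the sketch above only shows the inference step.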

Publication
In The 29th International Conference on Computational Linguistics (COLING 2022)
Nuwa Xi
Graduate Student

Hi there. This is Nova 😊, currently a master's student at the HIT-SCIR lab. My research interests include natural language processing and its applications in science.