Biomedical pre-trained language models (BioPLMs) have achieved state-of-the-art results on various biomedical text mining tasks. However, prevailing fine-tuning approaches naively train BioPLMs on target datasets without considering class distributions. This is problematic, especially when dealing with imbalanced biomedical gold-standard datasets for named entity recognition (NER). Despite their high performance, state-of-the-art fine-tuned NER models are biased towards the other (O) tag and misclassify biomedical entities. To fill this gap, we propose WELT, a cost-sensitive BERT that handles class imbalance for the task of biomedical NER. We investigate the impact of WELT against traditional fine-tuning approaches on mixed-domain and domain-specific BioPLMs. In addition, we examine the effect of handling class imbalance on another downstream task, named entity linking (NEL). Experimental results from the NER task on five gold-standard biomedical datasets show that WELT outperforms the corresponding original fine-tuned models that do not address the class imbalance problem. Our analysis of the results shows a positive impact on the NEL task for four out of five datasets when using the entities recognized by WELT.
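The exact WELT weighting scheme is not detailed in this summary; the minimal sketch below only illustrates the general cost-sensitive idea the abstract describes: a weighted cross-entropy loss for token classification in which class weights down-weight the dominant O tag relative to rare entity tags. The inverse-frequency weighting and the helper names class_weights_from_counts and weighted_token_loss are illustrative assumptions, not the published method.

```python
# A sketch of cost-sensitive token classification, assuming an
# inverse-class-frequency weighting scheme (hypothetical; the
# published WELT weighting may differ).
import torch
from torch import nn

def class_weights_from_counts(tag_counts: torch.Tensor) -> torch.Tensor:
    """Weight each tag inversely to its corpus frequency, so the
    dominant O tag contributes less to the loss than entity tags."""
    freq = tag_counts.float() / tag_counts.sum()
    weights = 1.0 / freq
    # Normalize so the average weight is 1, keeping the loss scale stable.
    return weights / weights.sum() * len(tag_counts)

def weighted_token_loss(logits: torch.Tensor, labels: torch.Tensor,
                        weights: torch.Tensor, ignore_index: int = -100) -> torch.Tensor:
    """Weighted cross-entropy over token logits.
    logits: (batch, seq_len, num_tags); labels: (batch, seq_len)."""
    loss_fn = nn.CrossEntropyLoss(weight=weights, ignore_index=ignore_index)
    return loss_fn(logits.view(-1, logits.size(-1)), labels.view(-1))

# Example: three tags (O, B-Entity, I-Entity) with a heavily skewed corpus.
counts = torch.tensor([95_000, 3_000, 2_000])
weights = class_weights_from_counts(counts)
logits = torch.randn(2, 16, 3)           # stand-in for model output
labels = torch.randint(0, 3, (2, 16))    # stand-in for gold tags
print(weighted_token_loss(logits, labels, weights))
```

In a BERT fine-tuning loop this loss would simply replace the model's default unweighted cross-entropy, leaving the rest of the training procedure unchanged.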
SEEK ID: https://fairdomhub.org/studies/1116
Biomedical named entity recognition
Projects: PoLiMeR - Polymers in the Liver: Metabolism and Regulation
Created: 9th Nov 2022 at 16:30
Last updated: 30th Aug 2023 at 09:02