The number of biomedical documents is growing rapidly, and with it the demand for extracting knowledge from large-scale biomedical texts. BERT-based models are known for their high performance on various tasks; however, they are often computationally expensive, and a high-end GPU environment is not available in many settings. To attain both high accuracy and fast extraction, we propose combinations of simpler pre-trained models. Our method outperforms the latest state-of-the-art model and BERT-based models on the GAD corpus. In addition, our method achieves approximately three times faster extraction than the BERT-based models on the ChemProt corpus and reduces memory usage to one sixth of theirs.
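The abstract does not specify which simpler pre-trained models are combined or how. As a rough illustrative sketch only, the Python snippet below follows the general pattern described: features from two lightweight pre-trained embedding models are concatenated and fed to a linear relation classifier. The embedding tables `emb_a` and `emb_b`, the toy vocabulary, and the labels are hypothetical stand-ins, not the paper's actual models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical pre-trained embedding tables (stand-ins for the paper's
# unspecified "simpler pre-trained models"); in practice these would be
# loaded from disk, e.g. word2vec- or fastText-style vectors.
rng = np.random.default_rng(0)
vocab = {"aspirin": 0, "inhibits": 1, "cox1": 2, "binds": 3, "gene": 4}
emb_a = rng.normal(size=(len(vocab), 100))  # model A: 100-dim vectors
emb_b = rng.normal(size=(len(vocab), 50))   # model B: 50-dim vectors

def featurize(tokens):
    """Average each model's vectors over the sentence, then concatenate,
    so the classifier sees features from both pre-trained models."""
    ids = [vocab[t] for t in tokens if t in vocab]
    return np.concatenate([emb_a[ids].mean(axis=0), emb_b[ids].mean(axis=0)])

# Toy relation-extraction data: 1 = relation present, 0 = absent.
X = np.stack([featurize(s) for s in
              [["aspirin", "inhibits", "cox1"], ["cox1", "gene"]]])
y = np.array([1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict([featurize(["aspirin", "binds", "cox1"])]))
```

Because the combined feature vector is small and the classifier is linear, inference avoids the transformer forward passes that make BERT-based extraction slow, which is consistent with the speed and memory gains the abstract reports.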