
2022.emnlp.org/blog/TACL-Paper-1
Preview meta tags from the 2022.emnlp.org website.
Linked Hostnames (9)
- 18 links to 2022.emnlp.org
- 2 links to twitter.com
- 2 links to www.facebook.com
- 1 link to aclanthology.org
- 1 link to direct.mit.edu
- 1 link to github.com
- 1 link to jekyllrb.com
- 1 link to mademistakes.com
Search Engine Appearance
TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models
Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes, which may be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task compared to single-task selection. Our results suggest that MT-AL can be effectively used in order to minimize annotation efforts for multi-task NLP models (Full Text).
Bing
(Same title and description as in the preview above.)
DuckDuckGo
(Same title and description as in the preview above.)
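The abstract above outlines the active learning loop (iteratively picking the unlabeled examples whose annotation should help the model most) and its multi-task variant, in which one selection has to serve several tasks at once. The following is a minimal, hypothetical sketch of one such selection round, using mean per-task predictive entropy as the multi-task criterion; the paper itself evaluates a range of selection criteria, and this example only illustrates the general mechanism.

```python
# Hypothetical sketch of one multi-task active learning (MT-AL) selection round.
# It is not the paper's exact method: it uses mean per-task predictive entropy
# as a simple multi-task selection criterion over assumed model outputs.
import numpy as np


def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of each example's predicted class distribution.

    probs: array of shape (n_examples, n_classes) with rows summing to 1.
    """
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)


def select_for_annotation(per_task_probs: dict[str, np.ndarray], budget: int) -> np.ndarray:
    """Return indices of the `budget` unlabeled examples with the highest
    uncertainty averaged over all participating tasks."""
    scores = np.mean([predictive_entropy(p) for p in per_task_probs.values()], axis=0)
    return np.argsort(-scores)[:budget]


# Toy usage: two hypothetical tasks over the same 1,000 unlabeled sentences.
rng = np.random.default_rng(seed=0)
probs_task_a = rng.dirichlet(np.ones(5), size=1000)    # e.g. a 5-label tagging task
probs_task_b = rng.dirichlet(np.ones(40), size=1000)   # e.g. a 40-label parsing task
to_annotate = select_for_annotation({"task_a": probs_task_a, "task_b": probs_task_b}, budget=100)
# In a full MT-AL loop, these examples would be annotated for both tasks,
# the multi-task model retrained, and the selection repeated until the
# annotation budget is exhausted.
```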
General Meta Tags (9)
- title: TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models - emnlp 2022
- charset: utf-8
- description: identical to the abstract shown in the preview above
- author: Website Chairs
- article:published_time: 2022-12-01T00:00:00+00:00
Open Graph Meta Tags (6)
- og:type: article
- og:locale: en_US
- og:site_name: emnlp 2022
- og:title: TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models
- og:url: https://balhafni.github.io/EMNLP_2022//blog/TACL-Paper-1/
Twitter Meta Tags (5)
- twitter:site: @emnlpmeeting
- twitter:title: TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models
- twitter:description: identical to the abstract shown in the preview above
- twitter:url: https://balhafni.github.io/EMNLP_2022//blog/TACL-Paper-1/
- twitter:card: summary
Item Prop Meta Tags (3)
- headline: TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models
- description: identical to the abstract shown in the preview above
- datePublished: December 01, 2022
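The flattened key-value pairs listed above correspond to <meta> elements in the page's HTML head. The short sketch below rebuilds the Open Graph entries in that form; the render_meta helper is hypothetical and not code from the EMNLP 2022 site (Open Graph metadata uses the property attribute, while the twitter:* and item prop tags use name and itemprop respectively).

```python
# Hypothetical helper that renders the Open Graph values listed above as HTML
# <meta> elements; an illustration only, not code from the EMNLP 2022 site.
og_tags = {
    "og:type": "article",
    "og:locale": "en_US",
    "og:site_name": "emnlp 2022",
    "og:title": ("TACL Paper (published in the MIT Press): Multi-task Active "
                 "Learning for Pre-trained Transformer-based Models"),
    "og:url": "https://balhafni.github.io/EMNLP_2022//blog/TACL-Paper-1/",
}


def render_meta(tags: dict[str, str]) -> str:
    # Open Graph metadata is keyed by the `property` attribute; name-based
    # tags (e.g. twitter:card) would use `name` instead.
    return "\n".join(f'<meta property="{key}" content="{value}" />' for key, value in tags.items())


print(render_meta(og_tags))
```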
Link Tags (14)
- alternate: _pages/home.md
- apple-touch-icon: /assets/images/apple-touch-icon-57x57.png
- apple-touch-icon: /assets/images/apple-touch-icon-60x60.png
- apple-touch-icon: /assets/images/apple-touch-icon-72x72.png
- apple-touch-icon: /assets/images/apple-touch-icon-76x76.png
Links (28)
- https://2022.emnlp.org
- https://2022.emnlp.org/blog
- https://2022.emnlp.org/blog/Discounted-Student-Accommodations
- https://2022.emnlp.org/blog/TACL-Paper-2
- https://2022.emnlp.org/calls/main_conference_papers