2022.emnlp.org/blog/TACL-Paper-1

Preview meta tags from the 2022.emnlp.org website.

Linked Hostnames

9

Search Engine Appearance

Google

https://2022.emnlp.org/blog/TACL-Paper-1

TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models

Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes, which may be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task compared to single-task selection. Our results suggest that MT-AL can be effectively used in order to minimize annotation efforts for multi-task NLP models (Full Text).
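The abstract describes an iterative selection loop: train the multi-task model, score the unlabeled pool with a multi-task criterion, send the highest-scoring examples for annotation, and repeat. As a rough illustration only (not the paper's implementation), the sketch below scores a pool by summed prediction entropy across hypothetical task heads; all function and variable names are invented for this example.

```python
# Hypothetical sketch of one multi-task active-learning (MT-AL) selection step:
# pool examples are ranked by summed prediction entropy across task heads, and
# the most uncertain ones are sent for annotation. The aggregation rule and all
# names are illustrative assumptions, not the paper's exact criteria.
import math
from typing import Dict, List, Sequence


def entropy(probs: Sequence[float]) -> float:
    """Shannon entropy of a single predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)


def mt_al_select(
    pool_predictions: Dict[str, List[Sequence[float]]],  # task -> per-example class probabilities
    budget: int,
) -> List[int]:
    """Pick `budget` unlabeled examples whose annotation looks most valuable.

    "Valuable" is approximated here by the sum of per-task entropies, so an
    example that confuses several task heads at once is ranked highest.
    """
    n_examples = len(next(iter(pool_predictions.values())))
    scores = []
    for i in range(n_examples):
        total = sum(entropy(preds[i]) for preds in pool_predictions.values())
        scores.append((total, i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:budget]]


if __name__ == "__main__":
    # Toy pool of 4 examples scored by two hypothetical task heads.
    predictions = {
        "task_a": [[0.9, 0.1], [0.5, 0.5], [0.6, 0.4], [0.99, 0.01]],
        "task_b": [[0.8, 0.2], [0.4, 0.6], [0.5, 0.5], [0.95, 0.05]],
    }
    print(mt_al_select(predictions, budget=2))  # indices of the 2 most uncertain examples
```

In the paper's setting the probabilities would come from task-specific heads on a shared pre-trained Transformer encoder, and other aggregation rules are possible; this sketch only fixes the shape of the selection loop.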



Bing

TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models

https://2022.emnlp.org/blog/TACL-Paper-1

(Same description as in the Google appearance above.)



DuckDuckGo

https://2022.emnlp.org/blog/TACL-Paper-1

TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models

(Same description as in the Google appearance above.)

  • General Meta Tags

    9
    • title
      TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models - emnlp 2022
    • charset
      utf-8
    • description
      (Same abstract as shown in the search engine appearances above.)
    • author
      Website Chairs
    • article:published_time
      2022-12-01T00:00:00+00:00
  • Open Graph Meta Tags

    6
    • og:type
      article
    • og:locale
      en_US
    • og:site_name
      emnlp 2022
    • og:title
      TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models
    • og:url
      https://balhafni.github.io/EMNLP_2022//blog/TACL-Paper-1/
  • Twitter Meta Tags

    5
    • twitter:site
      @emnlpmeeting
    • twitter:title
      TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models
    • twitter:description
      (Same as the description meta tag above.)
    • twitter:url
      https://balhafni.github.io/EMNLP_2022//blog/TACL-Paper-1/
    • twitter:card
      summary
  • Item Prop Meta Tags

    3
    • headline
      TACL Paper (published in the MIT Press): Multi-task Active Learning for Pre-trained Transformer-based Models
    • description
      (Same as the description meta tag above.)
    • datePublished
      December 01, 2022
  • Link Tags

    14
    • alternate
      _pages/home.md
    • apple-touch-icon
      /assets/images/apple-touch-icon-57x57.png
    • apple-touch-icon
      /assets/images/apple-touch-icon-60x60.png
    • apple-touch-icon
      /assets/images/apple-touch-icon-72x72.png
    • apple-touch-icon
      /assets/images/apple-touch-icon-76x76.png
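
For reference, a minimal sketch of how a tag listing like the one above could be regenerated from the live page, assuming the third-party requests and beautifulsoup4 packages are installed; grouping and pretty formatting are left out.

```python
# Hypothetical sketch: fetch the blog page and list its meta and link tags,
# roughly reproducing the General / Open Graph / Twitter / Item Prop and
# Link Tags listings above. Assumes `requests` and `beautifulsoup4`.
import requests
from bs4 import BeautifulSoup

URL = "https://2022.emnlp.org/blog/TACL-Paper-1"

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for tag in soup.find_all("meta"):
    # Meta tags are keyed by `name` (general/Twitter), `property` (Open Graph),
    # or `itemprop` (item props); fall back through them in order.
    key = tag.get("name") or tag.get("property") or tag.get("itemprop") or "charset"
    value = tag.get("content") or tag.get("charset") or ""
    print(f"{key}: {value}")

# Link tags (alternate, apple-touch-icon, ...) can be listed the same way.
for link in soup.find_all("link"):
    print(f"{link.get('rel')}: {link.get('href')}")
```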

Links

28