
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

  • Paper
  • Aug 28, 2020
  • #MachineLearning #NaturalLanguageProcessing
Colin Raffel
@colinraffel
(Author)
arxiv.org
1 Recommender
1 Mention

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
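To make the text-to-text idea concrete: every task is expressed as an input string (with a task prefix) and an output string. Below is a minimal sketch using the released T5 checkpoints through the Hugging Face transformers library; the library, the "t5-small" checkpoint name, and the example sentence are assumptions for illustration and are not part of this page, while the task-prefix convention ("translate English to German:", "summarize:", etc.) is the one described in the paper.

```python
# Minimal sketch of the text-to-text format (assumes `transformers` and a
# released T5 checkpoint such as "t5-small" are available locally).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as text in, text out: the prefix tells the model
# which task to perform (translation here; summarization, classification,
# and QA use the same pattern with different prefixes).
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```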

Mentions
Sören Mindermann @sorenmind · Oct 24, 2019
  • Post
  • From Twitter
Great thread and paper about NLP transfer learning!