BibSonomy

The blue social bookmark and publication sharing system.


bookmarks (3)
• RoBERTa: An optimized method for pretraining self-supervised NLP systems
  https://ai.facebook.com/blog/roberta-an-optimized-method-for-pretraining-self-supervised-nlp-systems/
  6 years ago by @straybird321
  tags: nlp, bert
• The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning) – Jay Alammar – Visualizing machine learning one concept at a time
  http://jalammar.github.io/illustrated-bert/
  7 years ago by @straybird321
  tags: nlp, Transformer, ELMo, transfer_learning, BERT
• Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences | bioRxiv
  https://www.biorxiv.org/content/10.1101/622803v1.full
  7 years ago by @straybird321
  tags: nlp, protein, BERT

publications (1)
• Reducing BERT Pre-Training Time from 3 Days to 76 Minutes
  Y. You, J. Li, J. Hseu, X. Song, J. Demmel, and C. Hsieh (2019). arXiv:1904.00962.
  7 years ago by @straybird321
  tags: nlp, BERT

ai_dl_club (@ai_dl_club)

AI paper discussion


browse

• BERT as tag from all users

related tags: nlp, Transformer, protein, ELMo, transfer_learning, bert

tags: nlp, nas, pruning, adversarial, Federated_Learning, unsupervised-learning, AutoML, BERT, Adversarial, NLP, large_model, Invariance, SGD, Randomly_Wired_Neural_Networks, disentanglement, semi-supervised-learning, embeddings, i-ResNets, gradient_descent, review, ELMo, sgd, protein, ml, transformers, soat, basic, graphs, tensorflow, transfer_learning, Wasserstein, GAN, Data_Augmentation, Distributed, Reinforcement_Learning, NAS, compression, self-supervised-learning, Introspective_Neural_Networks, Invertible, computing, Robustness, deployment, benchmarking, machine_learning, blog, GPU, sequencing, PyTorch

BibSonomy is offered by the Data Science Chair of the University of Würzburg, the Information Processing and Analytics Group of the Humboldt-Universität zu Berlin, the KDE Group of the University of Kassel, and the L3S Research Center.