This is the 23rd article in my series of articles on Python for NLP. In the previous article of this series, I explained how to perform neural machine translation using a seq2seq architecture with Python's Keras library for deep learning.

In this article we will study BERT, which stands for Bidirectional Encoder Representations from Transformers, and its application to text classification. BERT is a text representation technique, like word embeddings. If you have no idea how word embeddings work, take a look at my article on word embeddings. Unlike classic word embeddings, however, BERT fuses several state-of-the-art ideas from deep learning, most notably the bidirectional Transformer encoder. BERT was developed by researchers at Google in 2018 and has been shown to achieve state-of-the-art results on a variety of natural language processing tasks such as text classification, text summarization, and text generation.
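Before diving in, here is a minimal sketch of what BERT-based text classification looks like in practice. It uses the Hugging Face `transformers` library and a publicly available DistilBERT checkpoint, which are assumptions for illustration only and not necessarily the tools used later in this article:

```python
# A minimal sketch of text classification with a BERT-family model, using the
# Hugging Face "transformers" library. The library and the model checkpoint
# below are illustrative assumptions, not part of this article's own approach.
from transformers import pipeline

# Load a DistilBERT checkpoint fine-tuned for binary sentiment classification.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("The movie was surprisingly good.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The key idea this snippet illustrates is that a pretrained bidirectional encoder produces contextual representations of the input text, and a small classification head on top maps those representations to labels.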