ICLR 2018 Conference Blind Submission

Abstract: In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations.
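Since the abstract only sketches the classification objective, here is a minimal, hypothetical Python/PyTorch sketch of such a contrastive context classifier. Everything concrete here is an assumption rather than the paper's exact setup: the encoder (a mean of word embeddings), the use of the other sentences in a minibatch of consecutive sentences as contrastive candidates, and all names and hyperparameters are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SentenceEncoder(nn.Module):
    """Toy encoder: mean of word embeddings (an assumption; the
    framework described above admits different encoding functions,
    e.g. RNNs)."""

    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> (batch, dim)
        return self.emb(token_ids)


def context_classification_loss(enc_f, enc_g, batch_tokens):
    """Score every sentence in the batch against every other one and
    train the encoders to assign high probability to the true context
    sentences -- here assumed to be the immediate neighbours, since the
    batch is taken to hold consecutive sentences from the corpus."""
    u = enc_f(batch_tokens)           # input-sentence representations
    v = enc_g(batch_tokens)           # candidate-context representations
    scores = u @ v.t()                # (batch, batch) inner-product scores
    n = scores.size(0)
    # A sentence is not a candidate context for itself; use a large
    # negative score (rather than -inf) to keep the loss finite.
    scores = scores - torch.eye(n) * 1e9
    # Target distribution: uniform over the adjacent (context) sentences.
    targets = torch.zeros(n, n)
    idx = torch.arange(n)
    targets[idx[:-1], idx[1:]] = 1.0  # next sentence is a context
    targets[idx[1:], idx[:-1]] = 1.0  # previous sentence is a context
    targets = targets / targets.sum(dim=1, keepdim=True)
    log_probs = F.log_softmax(scores, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()


# Usage sketch with random token ids standing in for real sentences.
enc_f = SentenceEncoder(vocab_size=10_000)
enc_g = SentenceEncoder(vocab_size=10_000)
batch = torch.randint(0, 10_000, (8, 20))  # 8 consecutive sentences, 20 tokens each
loss = context_classification_loss(enc_f, enc_g, batch)
loss.backward()
```

One plausible reason this classification reformulation is efficient, consistent with the abstract's framing: the classifier only compares fixed-size sentence vectors by inner product, so no words are ever decoded, unlike generation-based context-prediction objectives.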