Can BERT embeddings be used to reproduce the original content of the text?
From what I understand, BERT provides contextualized embeddings that are not deterministic in the way Word2Vec embeddings are (e.g. the word "Queen" doesn't always produce the same vector; it differs depending on the context).
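To illustrate what I mean, here is a minimal sketch (assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; the two sentences are just arbitrary examples) that pulls out the last-layer vector for "queen" in two different contexts and compares them:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the last-layer hidden state for the first occurrence of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

v1 = word_vector("the queen addressed the parliament", "queen")
v2 = word_vector("the queen is the strongest chess piece", "queen")

# Cosine similarity is high but below 1.0: same word, different vectors.
print(torch.cosine_similarity(v1, v2, dim=0).item())
```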
Is there a way to "reverse" these contextualized embeddings to produce an output related to the original content of the text? For instance, how would I do machine translation or style transfer?
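To make the question concrete, the closest thing to a "reversal" I can imagine is greedily decoding each contextual vector back to a vocabulary token through BERT's masked-language-model head, as in the rough sketch below (again assuming Hugging Face transformers; I'm not sure this counts as a true inversion):

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("the queen addressed the parliament", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0]  # (seq_len, vocab_size)

# Greedily map each position's contextual representation back to a token.
recovered = tokenizer.convert_ids_to_tokens(logits.argmax(dim=-1).tolist())
print(recovered)  # often largely reproduces the input tokens
```

But this just recovers (roughly) the same tokens; it doesn't obviously help with translation or style transfer, where the output text should differ from the input.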
Tags: bert, neural-style-transfer, deep-learning, machine-translation, nlp
Category: Data Science