Is there an encoder–decoder model that allows editing latent embeddings and regenerating text?

I have a sentence embedding created by a semantic encoder such as:

embedding = model.encode("I am happy")

I then compute an emotion direction vector in the same embedding space, for example:

emotion_vec = embedding_happy - embedding_neutral
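
For reference, the first two steps look roughly like this with sentence-transformers (the model name all-MiniLM-L6-v2 and the "neutral" reference sentence are just placeholder choices on my side):

    from sentence_transformers import SentenceTransformer

    # Any sentence encoder would do; all-MiniLM-L6-v2 is a small CPU-friendly example.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    embedding_happy = model.encode("I am happy")    # numpy array, shape (384,)
    embedding_neutral = model.encode("I am here")   # an arbitrary "neutral" reference

    # Direction in embedding space that should point from "neutral" toward "happy".
    emotion_vec = embedding_happy - embedding_neutral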

My goal is:

Encode an input sentence into the embedding space.

Add the emotion direction:

shifted = original_embedding + emotion_vec

Decode shifted back to text with emotional content added (see the sketch after this list).
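
Put together, the API I am looking for would be something like the following, where decoder is hypothetical and is exactly the missing piece:

    import numpy as np

    def add_emotion(text: str, emotion_vec: np.ndarray) -> str:
        # Encode text into the continuous semantic space.
        original_embedding = model.encode(text)
        # Apply the latent edit.
        shifted = original_embedding + emotion_vec
        # Decode the edited vector back to text. `decoder` does not exist
        # for Sentence-BERT-style encoders; this is the part I am missing.
        return decoder.decode(shifted)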

However, typical sentence encoders such as Sentence-BERT provide only an encoder; there is no decoder that maps a (modified) embedding back to text.

For example, I considered BART, but BART is primarily a sequence-to-sequence model: its encoder produces token-level hidden states rather than a single sentence vector, and it does not expose a simple encode → modify → decode API over a continuous vector space.
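
The closest workaround I found is to run BART's encoder manually, edit the token-level hidden states, and feed them back into generation. A rough sketch, assuming transformers' generate accepts precomputed encoder_outputs (which appears to be version-dependent):

    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer
    from transformers.modeling_outputs import BaseModelOutput

    tok = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    inputs = tok("I am here", return_tensors="pt")
    with torch.no_grad():
        enc = model.get_encoder()(**inputs)  # last_hidden_state: (1, seq_len, 768)

    # Hypothetical edit: shift every token state by some direction vector.
    # A zero tensor is a placeholder; a real emotion direction would have to
    # live in this token-level space, which is not the space Sentence-BERT uses.
    direction = torch.zeros_like(enc.last_hidden_state)
    edited = BaseModelOutput(last_hidden_state=enc.last_hidden_state + direction)

    with torch.no_grad():
        out = model.generate(encoder_outputs=edited, max_new_tokens=32)
    print(tok.decode(out[0], skip_special_tokens=True))

Even if this runs, the edit happens per token rather than on one sentence-level vector, which is why I am asking whether a dedicated architecture exists.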

My question

Is there a model architecture that:

Provides an encoder to map text to a continuous semantic space,

Allows latent editing in that space,

Provides a decoder to reconstruct text from modified latent vectors,

and that ideally can run reasonably on a CPU (no GPU required)?
