
Embedding

#ai

An embedding is a vector of floating-point numbers that a model produces from text (or images, or audio), such that semantically similar inputs end up close together in vector space. The distance between two vectors measures how similar the inputs are — this is the backbone of semantic search and RAG.
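Similarity between two embeddings is usually measured with cosine similarity (1.0 means pointing the same way, 0 means unrelated). A minimal sketch in plain Python — the vectors here are made-up 3-dimensional toys; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings.
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.15]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, kitten))  # close to 1.0: semantically similar
print(cosine_similarity(cat, car))     # much lower: dissimilar
```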

Main kinds:

  • Word embeddings — Word2Vec, GloVe, FastText. One word → one fixed vector, regardless of context.
  • Sentence embeddings — Sentence-BERT, Universal Sentence Encoder, OpenAI's text-embedding-*. A whole sentence or paragraph → one vector that accounts for context.
  • Image embeddings — similar idea but for images.
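Semantic search then reduces to embedding every document once, embedding the query, and ranking by similarity. A toy sketch of that loop — the vectors below are invented for illustration; in practice they would come from one of the sentence-embedding models above:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Made-up vectors standing in for real sentence embeddings.
documents = {
    "How to withdraw cash from an ATM": [0.9, 0.1, 0.2],
    "Best fishing spots along the river": [0.1, 0.9, 0.3],
    "Opening a savings account": [0.8, 0.2, 0.1],
}

def search(query_vector, k=2):
    # Rank all documents by cosine similarity to the query vector.
    ranked = sorted(documents.items(),
                    key=lambda item: cosine_similarity(query_vector, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

query = [0.85, 0.15, 0.15]  # stands in for the embedding of a banking query
print(search(query))  # the two finance documents rank above the fishing one
```

Real systems use a vector database or approximate nearest-neighbor index instead of the linear scan shown here, but the ranking idea is the same.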

The difference matters: the word "bank" always gets the same word-embedding vector whether it means a financial institution or a riverbank. Sentence embeddings of "I'm going to the bank to withdraw money" and "I opened a jar of jam by the bank" will land in very different places.

Before you can embed documents you need to split them — see chunking.