Model Selection for Text Search
Marqo supports models that can be loaded via the sentence_transformers API. There is a large list of compatible models; we provide a number of good choices out of the box, but there is also the flexibility to load other models.
In this guide we will provide some recommendations for selecting a model for text search.
Recommended Text Retrieval Models
For most use cases we recommend models from the E5 family: they benchmark consistently well across a range of tasks and are available in three sizes, each with an English-only and a multilingual version.
English-only Models
The V2 E5 models are available in three sizes:
- hf/e5-small-v2 (384-dimensional embeddings)
- hf/e5-base-v2 (768-dimensional embeddings)
- hf/e5-large-v2 (1024-dimensional embeddings)
The models benchmark better on retrieval tasks as the size increases; however, the larger models are slower to encode text, and their larger embedding dimensions require more memory to store and process.
If you are using a CPU then the small or base models are recommended. If you are using a GPU then the base or large model can be used.
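For example, the model is selected when the index is created. The sketch below assumes a Marqo instance running locally on the default port; the index name is illustrative only:

# create an index that encodes text with the base English-only E5 model
import marqo

mq = marqo.Client(url="http://localhost:8882")
mq.create_index("my-e5-index", model="hf/e5-base-v2")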
Multilingual Models
The multilingual E5 models are available in three sizes:
- hf/multilingual-e5-small (384-dimensional embeddings)
- hf/multilingual-e5-base (768-dimensional embeddings)
- hf/multilingual-e5-large (1024-dimensional embeddings)
As with the English-only models, the larger models benchmark better on retrieval tasks but are slower to encode text and require more memory, both for inference and within the index. The multilingual models are all larger than the English-only models, despite producing embeddings of the same dimensions.
If you are using a CPU then the small or base models are recommended. If you are using a GPU then the base or large model can be used.
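Selecting a multilingual model works the same way. As a sketch, with an illustrative index name and query:

# create an index backed by the base multilingual E5 model
import marqo

mq = marqo.Client(url="http://localhost:8882")
mq.create_index("my-multilingual-index", model="hf/multilingual-e5-base")

# queries in different languages are embedded into the same space
results = mq.index("my-multilingual-index").search("gatos y perros")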
Model Prefixing and Best Practices
Some models are trained with specific prefixes for different tasks. Some notable examples are:
- The E5 model family, which expects queries to be prefixed with "query:" and documents to be prefixed with "passage:" for asymmetric retrieval.
- The E5 instruct models, which expect queries to be prefixed with "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:".
- The BGE model family, which expects the prefix "Represent this sentence for searching relevant passages:".
Fortunately, Marqo handles this for you: all models included in Marqo are automatically set up with the correct prefixing for asymmetric retrieval tasks. Asymmetric retrieval is where you have a query and a set of documents and you want to find the most relevant documents for the query, whereas symmetric retrieval is where you have a document and you intend to find similar documents.
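Conceptually, the prefixing applied for an E5-style model looks like the following. This is only an illustration of the convention described above, not Marqo's internal code:

# illustration only: how E5-style prefixes are attached before encoding
query_text = "cats and dogs"
document_text = "Dogs and cats are common household pets."

prefixed_query = "query: " + query_text          # applied to queries at search time
prefixed_passage = "passage: " + document_text   # applied to document chunks at indexing time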
If you need to override the prefixing, you can do so at search time or when adding documents, like so:
# override search prefix
mq.index("my-first-index").search(
    "cats and dogs", text_query_prefix="my custom prefix:"
)

# override add documents text chunk prefix
mq.index("my-first-index").add_documents(
    documents, text_chunk_prefix="my custom prefix:"
)
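A common reason to override the defaults is symmetric retrieval, where the query and the documents can be embedded with the same prefix so that similar documents are matched directly.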