Changelog
A blazing fast inference solution for text embeddings models
Qwen3 was not working correctly on CPU / MPS when serving batched requests at FP16 precision, for two reasons: the FP32 minimum value was downcast to FP16 (it is now manually set to the FP16 minimum value instead), which led to null values, and a `to_dtype` call was missing on the `attention_bias` when working with batches.
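A minimal NumPy sketch of the first issue (an illustration, not the actual TEI/candle code): the FP32 minimum overflows to -inf when downcast to FP16, so a fully masked padding row, as occurs with batched requests, produces NaNs after softmax, whereas the FP16 minimum stays finite.

```python
import numpy as np

def masked_softmax(scores: np.ndarray, mask_value: np.float16) -> np.ndarray:
    # Simulate a padded row where every position is masked with `mask_value`.
    x = (scores + mask_value).astype(np.float32)
    x = x - x.max()  # numerically stabilized softmax
    e = np.exp(x)
    return e / e.sum()

scores = np.zeros(4, dtype=np.float16)

bad = np.float16(np.finfo(np.float32).min)   # FP32 min overflows to -inf in FP16
good = np.float16(np.finfo(np.float16).min)  # -65504.0, still finite

print(np.isinf(bad))                                 # True
print(np.isnan(masked_softmax(scores, bad)).any())   # True: NaNs propagate
print(np.isnan(masked_softmax(scores, good)).any())  # False: well-defined output
```

With the FP16 minimum, the fully masked row still yields finite probabilities instead of NaNs, which is the behavior the fix restores.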
fmt by re-running pre-commit by @alvarobartt in https://github.com/huggingface/text-embeddings-inference/pull/671
Bump version to 1.7.4 by @alvarobartt in https://github.com/huggingface/text-embeddings-inference/pull/677
Full Changelog: https://github.com/huggingface/text-embeddings-inference/compare/v1.7.3...v1.7.4