ModelCache for LLMs v0.2.0
Fixed an issue in the remove module. Added CLIP embedding capabilities as an initial step toward multi-modal scenarios.
An LLM semantic caching system that aims to enhance user experience by reducing response time through cached query-result pairs.
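The idea behind semantic caching is to return a stored answer when a new query is similar enough in meaning to a previously seen one. The sketch below illustrates the concept with a toy bag-of-words embedding and cosine similarity; all names here are hypothetical and not ModelCache's actual API, and a real deployment would use a neural embedding model and a vector store.

```python
import math

def embed(text: str) -> dict:
    """Toy embedding: bag-of-words term frequencies (stand-in for a real model)."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Caches query-result pairs and serves hits by embedding similarity."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer)

    def put(self, query: str, answer: str) -> None:
        self.entries.append((embed(query), answer))

    def get(self, query: str):
        """Return a cached answer if a stored query is similar enough, else None."""
        q = embed(query)
        best, best_sim = None, 0.0
        for emb, answer in self.entries:
            sim = cosine(q, emb)
            if sim > best_sim:
                best, best_sim = answer, sim
        return best if best_sim >= self.threshold else None

cache = SemanticCache(threshold=0.6)
cache.put("what is the capital of France", "Paris")
# A near-identical rephrasing hits the cache instead of reaching the LLM.
print(cache.get("what is the capital of France?"))
```

On a hit, the cached answer is returned without invoking the LLM, which is the response-time reduction the project description refers to; the similarity threshold trades hit rate against the risk of serving an answer to a semantically different question.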