v0.1.6
What's Changed
- feat: implement chat feature to rwkv by @yorkzero831 in https://github.com/Atome-FE/llama-node/pull/63
- feat: update cuda dynamic compiling by @hlhr202 in https://github.com/Atome-FE/llama-node/pull/66
- feat: optimize llama.cpp loading, fix llama.cpp tokenizer, unify logger by @hlhr202 in https://github.com/Atome-FE/llama-node/pull/75
- update: refactor onnx by @hlhr202 in https://github.com/Atome-FE/llama-node/pull/87
- update: upgrade llm to 0.2.0-dev by @fardjad in https://github.com/Atome-FE/llama-node/pull/86
New Contributors
- @fardjad made their first contribution in https://github.com/Atome-FE/llama-node/pull/86
Full Changelog: https://github.com/Atome-FE/llama-node/compare/v0.1.4...v0.1.6