0.18.0: RoAd, ALoRA, Arrow, WaveFT, DeLoRA, OSF, and more
Highlights
New Methods
RoAd
@ppetrushkov added RoAd: 2D Rotary Adaptation to PEFT in #2678. RoAd learns 2D rotation matrices that are applied using only element-wise multiplications, promising very fast inference with adapters in the unmerged state.
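To see why element-wise multiplication suffices, note that rotating a pair of features (x1, x2) by an angle only needs products of the features with the cosine and sine of that angle. The following is a simplified plain-Python sketch of this idea, not PEFT's actual implementation (the real method learns the angles during training and may pair up dimensions differently):

```python
import math

def road_rotate(x, thetas):
    """Rotate consecutive pairs of features by learned angles.

    x: flat list of features (even length), thetas: one angle per pair.
    Only element-wise multiplications and additions are needed at
    inference time, which is what makes unmerged RoAd adapters fast.
    """
    out = []
    for i, theta in enumerate(thetas):
        x1, x2 = x[2 * i], x[2 * i + 1]
        c, s = math.cos(theta), math.sin(theta)
        out.extend([c * x1 - s * x2, s * x1 + c * x2])
    return out

x = [1.0, 0.0, 3.0, 4.0]
# rotate the first pair by 90 degrees, leave the second pair unchanged
y = road_rotate(x, [math.pi / 2, 0.0])
```

Since each pair undergoes a pure rotation, the transformation preserves the norm of the input, which is a nice stability property of rotation-based adapters.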
Remarkably, besides LoRA, RoAd is the only PEFT method that supports mixed adapter batches. This means that when you have loaded a model with multiple RoAd adapters, you can use all of them for different samples in the same batch, which is much more efficient than switching adapters between batches:
```python
model = PeftModel.from_pretrained(base_model, <path-to-road-adapter-A>, adapter_name="adapter-A")
model.load_adapter(<path-to-road-adapter-B>, adapter_name="adapter-B")
inputs = ...  # input with 3 samples
# apply adapter A to sample 0, adapter B to sample 1, and use the base model for sample 2:
adapter_names = ["adapter-A", "adapter-B", "__base__"]
output_mixed = model(**inputs, adapter_names=adapter_names)
gen_mixed = model.generate(**inputs, adapter_names=adapter_names)
```
ALoRA
Activated LoRA is a technique added by @kgreenewald in #2609 for causal language models. It allows selectively enabling LoRA adapters depending on a specific token invocation sequence in the input. This has the major benefit that most of the KV cache can be re-used during inference when the adapter is only needed to generate part of the response, after which the base model takes over again.
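The mechanism can be illustrated with the scan for the invocation sequence: the adapter is only activated from the position right after an occurrence of the invocation tokens, so the KV cache for everything before that position is still valid for the base model. A simplified sketch, not PEFT's actual implementation:

```python
def find_invocation(input_ids, invocation_ids):
    """Return the index right after the last occurrence of the invocation
    sequence, or None if it never appears. The adapter would only be
    active for positions from that index onward; the KV cache for all
    earlier positions can be reused from the base model.
    """
    n, m = len(input_ids), len(invocation_ids)
    start = None
    for i in range(n - m + 1):
        if input_ids[i:i + m] == invocation_ids:
            start = i + m
    return start

# toy token ids with the invocation sequence [42, 99] at positions 2-3
start = find_invocation([5, 7, 42, 99, 11, 3], [42, 99])
```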
Arrow & GenKnowSub
In #2644, @TheTahaaa contributed not only support for Arrow, a dynamic routing algorithm between multiple loaded LoRAs, but also GenKnowSub, a technique built on top of Arrow in which the 'library' of LoRAs available to Arrow is first modified by subtracting general knowledge adapters (e.g., trained on subsets of Wikipedia) to enhance task-specific performance.
WaveFT
Thanks to @Bilican, Wavelet Fine-Tuning (WaveFT) was added to PEFT in #2560. This method trains sparse updates in the wavelet domain of residual matrices, which is especially parameter efficient. It is very interesting for image generation, as it promises to generate diverse outputs while preserving subject fidelity.
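To illustrate what training "sparse updates in the wavelet domain" means, here is a toy one-level Haar transform in plain Python: only a few wavelet coefficients are modified, yet the inverse transform yields a dense weight update. This is a simplified illustration, not PEFT's implementation, which operates on whole residual matrices:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_forward(x):
    """One level of the Haar wavelet transform (even-length input)."""
    half = len(x) // 2
    approx = [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(half)]
    detail = [(x[2 * i] - x[2 * i + 1]) / SQRT2 for i in range(half)]
    return approx + detail

def haar_inverse(c):
    """Invert haar_forward exactly."""
    half = len(c) // 2
    out = []
    for a, d in zip(c[:half], c[half:]):
        out.extend([(a + d) / SQRT2, (a - d) / SQRT2])
    return out

# WaveFT-style update: touch a single coefficient in the wavelet domain,
# then transform back to obtain the dense update applied to the weights.
coeffs = haar_forward([0.0, 0.0, 0.0, 0.0])
coeffs[0] += 0.5
delta = haar_inverse(coeffs)
```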
DeLoRA
Decoupled Low-rank Adaptation (DeLoRA) was added by @mwbini in #2780. This new PEFT method is similar to DoRA insofar as it decouples the angle and magnitude of the learned adapter weights. However, DeLoRA implements this in a way that promises to better prevent divergence. Moreover, it constrains the deviation of the learned weight by imposing an upper limit on its norm, which can be adjusted via the delora_lambda parameter.
OSF
Orthogonal Subspace Fine-tuning (OSF) was added by @NikhilNayak-debug in #2685. By freezing the high-rank subspace of the targeted weight matrices and projecting gradient updates onto a low-rank subspace, OSF achieves good performance on continual learning tasks. While it is a bit memory intensive for standard fine-tuning, it is definitely worth checking out on tasks where performance degradation on previously learned tasks is a concern.
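The projection idea can be sketched with plain vectors: remove from the gradient every component that lies in the frozen subspace, so updates only move the weights within the complementary subspace. A simplified illustration assuming orthonormal frozen directions, not the actual OSF implementation:

```python
def project_out(grad, frozen_dirs):
    """Remove from grad the components lying in the frozen subspace.

    frozen_dirs: orthonormal basis vectors of the subspace to protect.
    The returned gradient is orthogonal to all of them, so a step along
    it leaves the frozen subspace untouched.
    """
    g = list(grad)
    for u in frozen_dirs:
        dot = sum(gi * ui for gi, ui in zip(g, u))
        g = [gi - dot * ui for gi, ui in zip(g, u)]
    return g

# freeze the first coordinate axis; the gradient loses that component
g = project_out([1.0, 2.0, 3.0], [[1.0, 0.0, 0.0]])
```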
Enhancements
Text generation benchmark
In #2525, @ved1beta added the text generation benchmark to PEFT. This is a framework to measure and compare text generation metrics, such as runtime and memory usage, across different PEFT methods. Right now, this benchmark still lacks experimental settings and a visualization, analogous to what we have in the MetaMathQA benchmark. If this is something that interests you, we encourage you to let us know or, even better, contribute to this benchmark.
Reliable interface for integrations
PEFT has integrations with other libraries like Transformers and Diffusers. To facilitate this integration, PEFT now provides a stable interface of functions that should be used where applicable. For example, the set_adapter function can be used to switch between PEFT adapters on the model, even if the model is not a PeftModel instance. We commit to keeping these functions backwards compatible, so it's safe for other libraries to build on top of them.
Handling of weight tying
Some Transformers models can have tied weights. This is especially prevalent when it comes to the embedding and the LM head. Currently, the way that this is handled in PEFT is not obvious. We thus drafted an issue to illustrate the intended behavior in #2864. This shows what our goal is, although not everything is implemented yet.
In #2803, @romitjain added the ensure_weight_tying argument to LoraConfig. If set to True, this argument enforces weight tying of the modules targeted with modules_to_save. Thus, if embedding and LM head are tied, they will share weights, which is important to allow, for instance, weight merging. Therefore, for most users, we recommend enabling this setting if they want to fully fine-tune the embedding and LM head. For backwards compatibility, the setting is off by default though.
Note that in accordance with #2864, the functionality of ensure_weight_tying=True will be expanded to also include trainable tokens (#2870) and LoRA (tbd.) in the future.
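Conceptually, weight tying just means that the embedding and the LM head reference the same underlying tensor, so an update to one is automatically visible in the other. A toy illustration in plain Python, not PEFT code:

```python
class Embedding:
    def __init__(self, weight):
        self.weight = weight

class LMHead:
    def __init__(self, weight):
        self.weight = weight

# Tied: both modules hold a reference to the *same* weight object.
# A fine-tuning update through the embedding is therefore reflected
# in the LM head as well, which keeps merging consistent.
shared = [[0.1, 0.2], [0.3, 0.4]]
emb, head = Embedding(shared), LMHead(shared)
emb.weight[0][0] = 9.0
```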
Support Conv1d and 1x1 Conv2d layers in LoHa and LoKr
@grewalsk extended LoHa and LoKr to support nn.Conv1d layers, as well as nn.Conv2d with 1x1 kernels, in #2515.
New prompt tuning initialization
Thanks to @macmacmacmac, we now have a new initialization option for prompt tuning, random discrete initialization (#2815). This option should generally work better than random initialization, as corroborated on our PEFT method comparison suite. Give it a try if you use prompt tuning.
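The idea behind random discrete initialization is to initialize the soft prompt with the embeddings of actual tokens sampled at random from the vocabulary, rather than with continuous random noise. A simplified sketch; the function and parameter names here are made up for illustration and do not reflect PEFT's API:

```python
import random

def random_discrete_init(embedding_matrix, num_virtual_tokens, seed=0):
    """Build soft-prompt vectors by copying randomly sampled rows of the
    token embedding matrix, so every virtual token starts at a point the
    model has already learned to process.
    """
    rng = random.Random(seed)
    vocab_size = len(embedding_matrix)
    ids = [rng.randrange(vocab_size) for _ in range(num_virtual_tokens)]
    return [list(embedding_matrix[i]) for i in ids]

# toy embedding matrix with 3 tokens of dimension 2
emb = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
prompt = random_discrete_init(emb, num_virtual_tokens=4)
```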
Combining LoRA adapters with negative weights
If you use multiple LoRA adapters, you can merge them into a single adapter using model.add_weighted_adapter. However, so far, this only worked with positive weights per adapter. Thanks to @sambhavnoobcoder and @valteu, it is now possible to pass negative weights too.
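Under the hood, a linear merge boils down to a weighted sum of the per-adapter weight updates, and a negative weight simply subtracts an adapter's contribution (e.g., to remove a behavior). A minimal sketch of that combination step; this is illustrative only and not the actual add_weighted_adapter implementation, which also supports other combination types:

```python
def weighted_merge(deltas, weights):
    """Linearly combine per-adapter weight updates (shown here as flat
    vectors). A negative weight subtracts that adapter's contribution
    from the merged result.
    """
    out = [0.0] * len(deltas[0])
    for delta, w in zip(deltas, weights):
        for i, v in enumerate(delta):
            out[i] += w * v
    return out

# merge adapter A at full strength and *subtract* adapter B
merged = weighted_merge([[1.0, 2.0], [0.5, 0.5]], [1.0, -1.0])
```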
Changes
Transformers compatibility
At the time of writing, the Transformers v5 release is imminent. This Transformers version will be incompatible with PEFT < 0.18.0. If you plan to use Transformers v5 with PEFT, please upgrade PEFT to 0.18.0+.
Python version
This PEFT version no longer supports Python 3.9, which has reached its end of life. Please use Python 3.10+.
Updates to OFT
The OFT method has been updated in #2805 to make it slightly faster and to stabilize its numerics. This means, however, that existing checkpoints may give slightly different results after upgrading to PEFT 0.18.0. Therefore, if you use OFT, we recommend retraining the adapter.
All Changes
- add xpu support for boft/controlnet example by @kaixuanliu in https://github.com/huggingface/peft/pull/2674
- enabe boft_dreambooth on XPU by @yao-matrix in https://github.com/huggingface/peft/pull/2679
- Add XPU support for dna_language_model example by @kaixuanliu in https://github.com/huggingface/peft/pull/2689
- validated lora dreambooth on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2696
- validated lorafa on xpu, passed by @yao-matrix in https://github.com/huggingface/peft/pull/2697
- enable corda finetuning on xpu by @yao-matrix in https://github.com/huggingface/peft/pull/2687
- validated cpt, ephemeral_gpu_offloading and eva finetuning on XPU by @yao-matrix in https://github.com/huggingface/peft/pull/2694
- validated PISSA on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2703
- validated MISS on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2704
- fix bug for feature_extraction example by @kaixuanliu in https://github.com/huggingface/peft/pull/2706
- Use hub_online_once in trainable token tests by @githubnemo in https://github.com/huggingface/peft/pull/2701
- Bump version to 0.17.1.dev0 after release by @BenjaminBossan in https://github.com/huggingface/peft/pull/2707
- validated multi_adapter on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2711
- verified mlp on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2712
- use CPU instead of XPU for face_alignment by @kaixuanliu in https://github.com/huggingface/peft/pull/2713
- Add conditional_generation example xpu support by @kaixuanliu in https://github.com/huggingface/peft/pull/2684
- validated POLY on XPU, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2702
- add XPU support for hra_dreambooth example by @kaixuanliu in https://github.com/huggingface/peft/pull/2717
- enable xpu device for causal_language_modeling example by @kaixuanliu in https://github.com/huggingface/peft/pull/2680
- add xpu support for fp4_finetuing example by @kaixuanliu in https://github.com/huggingface/peft/pull/2714
- bench mark scripts by @ved1beta in https://github.com/huggingface/peft/pull/2525
- enable oft-dreambooth on xpu, and fix example bugs, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2718
- enable qalora on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2719
- enabled randlora on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2720
- validated semantic-segmentation peft on xpu, pass by @yao-matrix in https://github.com/huggingface/peft/pull/2721
- add xpu support for image-classification example by @kaixuanliu in https://github.com/huggingface/peft/pull/2722
New Contributors
- @ved1beta made their first contribution in https://github.com/huggingface/peft/pull/2525
- @Apurro12 made their first contribution in https://github.com/huggingface/peft/pull/2708
- @3outeille made their first contribution in https://github.com/huggingface/peft/pull/2741
- @ppetrushkov made their first contribution in https://github.com/huggingface/peft/pull/2678
- @rojagtap made their first contribution in https://github.com/huggingface/peft/pull/2744
- @grewalsk made their first contribution in https://github.com/huggingface/peft/pull/2515
- @kgreenewald made their first contribution in https://github.com/huggingface/peft/pull/2609
- @TheTahaaa made their first contribution in https://github.com/huggingface/peft/pull/2644
- @tanuj-rai made their first contribution in https://github.com/huggingface/peft/pull/2775
- @JamesSand made their first contribution in https://github.com/huggingface/peft/pull/2812
- @Bilican made their first contribution in https://github.com/huggingface/peft/pull/2560
- @macmacmacmac made their first contribution in https://github.com/huggingface/peft/pull/2815
- @Che-Xu made their first contribution in https://github.com/huggingface/peft/pull/2793
- @sambhavnoobcoder made their first contribution in https://github.com/huggingface/peft/pull/2811
- @shantanugupta2004 made their first contribution in https://github.com/huggingface/peft/pull/2837
- @nirbo made their first contribution in https://github.com/huggingface/peft/pull/2810
- @mwbini made their first contribution in https://github.com/huggingface/peft/pull/2780
- @romitjain made their first contribution in https://github.com/huggingface/peft/pull/2803
- @NikhilNayak-debug made their first contribution in https://github.com/huggingface/peft/pull/2685
- @aflueckiger made their first contribution in https://github.com/huggingface/peft/pull/2863
- @DargorAbraxas made their first contribution in https://github.com/huggingface/peft/pull/2871
- @YangKai0616 made their first contribution in https://github.com/huggingface/peft/pull/2843
Full Changelog: https://github.com/huggingface/peft/compare/v0.17.1...v0.18.0