GraLoRA: Boosting Fine-Tuning Accuracy Without Extra Cost
LoRA excels at parameter-efficient fine-tuning, but its performance degrades at higher ranks due to gradient entanglement. We introduce GraLoRA, which addresses this issue through finer-grained, block-wise updates, significantly enhancing expressivity and performance without overhead. GraLoRA outperforms LoRA across tasks, achieving up to a +8.5% gain in HumanEval+ Pass@1.
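For intuition, here is a minimal sketch of what a block-wise low-rank update can look like. It assumes a k × k grid of independent adapter pairs over the frozen weight, each cell with rank r/k so the total adapter parameter count matches a rank-r LoRA; the names (GraLoRALinear, num_blocks) are illustrative, not the paper's reference implementation.

```python
# Minimal sketch of block-wise low-rank adaptation in the spirit of GraLoRA.
# Assumptions (not from the post): the weight is split into a k x k grid,
# each cell (i, j) gets its own (A, B) pair with rank r/k, so total adapter
# parameters match a rank-r LoRA. All names here are illustrative.
import math
import torch
import torch.nn as nn

class GraLoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 16, num_blocks: int = 4):
        super().__init__()
        assert in_features % num_blocks == 0 and out_features % num_blocks == 0
        assert rank % num_blocks == 0
        self.k = num_blocks
        sub_in = in_features // num_blocks
        sub_out = out_features // num_blocks
        sub_rank = rank // num_blocks
        # Frozen base weight (placeholder init; in practice loaded from the pretrained model).
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        # One (A, B) pair per grid cell: each cell's gradient depends only on
        # its own input/output slice rather than the full activation.
        self.A = nn.Parameter(torch.empty(num_blocks, num_blocks, sub_rank, sub_in))
        self.B = nn.Parameter(torch.zeros(num_blocks, num_blocks, sub_out, sub_rank))
        nn.init.kaiming_uniform_(self.A.view(-1, sub_in), a=math.sqrt(5))  # B = 0 => zero update at init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x @ self.weight.T              # frozen base path
        xs = x.chunk(self.k, dim=-1)         # input column blocks
        deltas = []
        for i in range(self.k):              # output row block i
            d = sum((xs[j] @ self.A[i, j].T) @ self.B[i, j].T for j in range(self.k))
            deltas.append(d)
        return out + torch.cat(deltas, dim=-1)

layer = GraLoRALinear(1024, 1024, rank=16, num_blocks=4)
y = layer(torch.randn(2, 1024))  # equals the frozen path at init, since B starts at zero
```

Because each cell's gradient touches only its own input and output slice, updates stay block-local, which is how finer-grained updates counteract the gradient entanglement mentioned above.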
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks
A brief review of our team's research paper, published at ICML 2024.
Feb 17, 2025