
Knowledge Base Resources

These resources are contributed by researchers, facilitators, engineers, and HPC admins. Please upvote resources you find useful!

Fine-tuning LLMs with PEFT and LoRA
As LLMs grow larger, full fine-tuning becomes difficult to run on consumer hardware, and storing and deploying a separate fully tuned copy of the model for each task is expensive. PEFT (parameter-efficient fine-tuning) instead trains only a small subset of parameters while freezing most of the weights of the pretrained LLM, often matching, and sometimes exceeding, the performance of full fine-tuning with only a small number of trainable parameters. This resource explains the approach, goes over LoRA diagrams, and includes a code walkthrough.
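The linked video has its own code walkthrough; as a rough illustration of the general pattern it describes, the sketch below wraps a pretrained Hugging Face model with a LoRA adapter using the peft library. The model name, rank, and target modules here are placeholder choices for illustration, not values taken from the video.

    # Minimal LoRA sketch with Hugging Face peft (illustrative settings only)
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model, TaskType

    # Load a small pretrained model; gpt2 is just a stand-in example
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Configure LoRA: low-rank adapters are injected into the attention
    # projections while the original weights stay frozen
    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                        # rank of the low-rank update matrices
        lora_alpha=16,              # scaling factor for the adapter output
        lora_dropout=0.05,
        target_modules=["c_attn"],  # GPT-2's fused attention projection
    )

    # Wrap the base model; only the adapter weights are trainable
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # shows the small trainable fraction

After this wrapping step, the model can be trained with a standard training loop or the Hugging Face Trainer; only the adapter weights need to be saved, which is what keeps storage and deployment cheap compared with full fine-tuning.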
Tags: faster, optimization, performance-tuning, tuning
Type: video_link
Level: Intermediate, Advanced