
Flux LoRA can't load anymore after 0.32.0 #10512

Open
Mino1289 opened this issue Jan 9, 2025 · 7 comments
Labels: bug (Something isn't working)

Comments


Mino1289 commented Jan 9, 2025

Describe the bug

When using this LoRA: https://civitai.com/models/796382?modelVersionId=1026423, loading fails with an error starting with version 0.32.0.
It still works with diffusers 0.31.0.

Reproduction

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from optimum.quanto import freeze, qfloat8, quantize
from transformers import T5EncoderModel

# Pick the best available device.
device = torch.device(
    "cuda"
    if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available() else "cpu"
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    cache_dir="/home/user/genAI/models/FLUX.1-dev/",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# Quantize the transformer to qfloat8 with optimum.quanto and freeze it.
quantize(transformer, weights=qfloat8)
freeze(transformer)

text_encoder_2 = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    cache_dir="/home/user/genAI/models/FLUX.1-dev/",
    subfolder="text_encoder_2",
    torch_dtype=torch.bfloat16,
)

quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    cache_dir="/home/user/genAI/models/FLUX.1-dev/",
    transformer=transformer,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.bfloat16,
)

pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

pipe.to(device)
if device.type != "cpu":
    pipe.enable_model_cpu_offload(device=device)

adapter_names = []
adapter_values = []

# This call raises the KeyError shown in the logs below on diffusers >= 0.32.0.
pipe.load_lora_weights(
    "/home/user/genAI/models/LoRAs/Flux/dl/UltraRealistic_Lora_v2.safetensors",  # downloaded LoRA path from civitAI.
    adapter_name="realism",
)

adapter_names.append("realism")
adapter_values.append(1.2)

pipe.set_adapters(adapter_names, adapter_values)

image = pipe(
    prompt="a cat holding a sign that says hello, world",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=24,
    max_sequence_length=512,
).images[0]

image.save("flux.1-dev.png")

Logs

ERROR:
Traceback (most recent call last):
  File "/home/user/genAI/test.py", line 56, in <module>
    pipe.load_lora_weights(
  File "/home/user/miniconda3/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 1867, in load_lora_weights
    transformer_lora_state_dict = self._maybe_expand_lora_state_dict(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/miniconda3/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 2490, in _maybe_expand_lora_state_dict
    base_weight_param = transformer_state_dict[base_param_name]
                        ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'single_transformer_blocks.0.attn.to_k.weight'

System Info

Python 3.12
diffusers 0.32.0 (I also tested 0.32.1 and an install from git)

Who can help?

@sayakpaul

Mino1289 added the bug label on Jan 9, 2025
sayakpaul (Member) commented

Can you try with diffusers installation from main?

pip uninstall diffusers -y
pip install git+https://github.com/huggingface/diffusers

lhjlhj11 commented

> Can you try with diffusers installation from main?
>
> pip uninstall diffusers -y
> pip install git+https://github.com/huggingface/diffusers

The problem still exists after this operation.

sayakpaul (Member) commented

Do you have a minimal reproducible snippet? The provided one isn't minimal and self-contained. I keep asking for that because we have an integration test for Kohya LoRAs here:

def test_flux_kohya(self):

It was run yesterday, too, and it worked fine.


tyyff commented Jan 10, 2025

> Do you have a minimal reproducible snippet? The provided one isn't minimal and self-contained. I keep asking for that because we have an integration test for Kohya LoRAs here:
>
> def test_flux_kohya(self):
>
> It was run yesterday, too, and it worked fine.

This issue only occurs when loading a LoRA after quantizing the FLUX transformer with optimum.quanto. If the model is not quantized, the LoRA loads normally. In diffusers 0.31, the LoRA loaded successfully even after quantization.
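A minimal sketch of what I mean (the LoRA path is a placeholder; any Flux LoRA should reproduce it). My guess is that after quanto's freeze() the transformer's state-dict keys change (the quantized weights serialize with ._data / ._scale suffixes), so the plain ...weight key that _maybe_expand_lora_state_dict looks up no longer exists:

import torch
from diffusers import FluxPipeline
from optimum.quanto import freeze, qfloat8, quantize

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Quantize only the transformer; the LoRA targets its attention layers.
quantize(pipe.transformer, weights=qfloat8)
freeze(pipe.transformer)

# On diffusers >= 0.32.0 this raises:
# KeyError: 'single_transformer_blocks.0.attn.to_k.weight'
pipe.load_lora_weights("any_flux_lora.safetensors", adapter_name="lora")  # placeholder path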

lhjlhj11 commented

> Do you have a minimal reproducible snippet? The provided one isn't minimal and self-contained. I keep asking for that because we have an integration test for Kohya LoRAs here:
>
> def test_flux_kohya(self):
>
> It was run yesterday, too, and it worked fine.

import os

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from optimum.quanto import freeze, qfloat8, quantize
from safetensors.torch import load_file
from transformers import T5EncoderModel

bfl_repo = "black-forest-labs/FLUX.1-dev"  # assumed value, defined elsewhere in my code
dtype = torch.bfloat16  # assumed, as above

transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=dtype)
quantize(transformer, weights=qfloat8)
freeze(transformer)
text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)
pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
# this is an 8-step LoRA; self.model_root / self.config come from my own wrapper class
pipe.load_lora_weights(load_file(os.path.join(self.model_root, self.config["8steps_lora"]), device=self.device), adapter_name="8steps")
pipe.set_adapters(["8steps"], adapter_weights=[0.125])

sayakpaul (Member) commented

@tyyff if you could help me with a minimal reproducible snippet, that would be great, ideally with a supported quantization backend like bitsandbytes.
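For reference, something along these lines would do; a sketch only, with a placeholder LoRA path and an illustrative 8-bit config:

import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# Quantize the transformer with the supported bitsandbytes backend instead of quanto.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("any_flux_lora.safetensors", adapter_name="test")  # placeholder path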


Mino1289 (Author) commented Jan 10, 2025

I used the script and quantization method from here:
https://gist.github.com/sayakpaul/b664605caf0aa3bf8585ab109dd5ac9c
(the script by AmericanPresidentJimmyCarter).
