Flux LoRA can't load anymore after 0.32.0 #10512
Comments
Can you try with:

```shell
pip uninstall diffusers -y
pip install git+https://github.com/huggingface/diffusers
```

The problem still exists after this operation.
Do you have a minimal reproducible snippet? The provided one isn't minimal and self-contained. I keep asking for that because we have an integration test for Kohya LoRAs here: `diffusers/tests/lora/test_lora_layers_flux.py`, line 847 at 83ba01a.

It was run yesterday, too, and it worked fine.
This issue only occurs when loading a LoRA after quantizing the FLUX transformer with optimum.quanto. If the model is not quantized, the LoRA loads normally. In diffusers 0.31, the LoRA loaded successfully even after quantization.
@tyyff if you could help me with a minimal reproducible snippet, that would be great, ideally with a supported quantization backend.
I used the script and quantization method here:
Describe the bug
When using this LoRA: https://civitai.com/models/796382?modelVersionId=1026423, loading fails starting with version 0.32.0.
It works with diffusers 0.31.0.
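Since the regression is version-specific, a quick stdlib check can confirm which build is active before running the repro; nothing here is diffusers-specific except the package name.

```python
from importlib.metadata import PackageNotFoundError, version


def parse_release(v: str):
    """Turn a version string like "0.32.1.dev0" into an int tuple (0, 32, 1)."""
    parts = []
    for p in v.split("+")[0].split("."):
        if not p.isdigit():
            break
        parts.append(int(p))
    return tuple(parts)


def release_tuple(pkg: str):
    """Return the installed release of `pkg` as an int tuple, or None if absent."""
    try:
        return parse_release(version(pkg))
    except PackageNotFoundError:
        return None


# The report says loading works at 0.31.0 and breaks from 0.32.0 onward.
print(release_tuple("diffusers"))
```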
Reproduction
```python
import torch
import os
from diffusers import (
    FluxTransformer2DModel,
    FluxPipeline,
)
from transformers import T5EncoderModel
from optimum.quanto import freeze, qfloat8, quantize

device = torch.device(
    "cuda"
    if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available() else "cpu"
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    cache_dir="/home/user/genAI/models/FLUX.1-dev/",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
quantize(transformer, weights=qfloat8)
freeze(transformer)

text_encoder_2 = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    cache_dir="/home/user/genAI/models/FLUX.1-dev/",
    subfolder="text_encoder_2",
    torch_dtype=torch.bfloat16,
)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    cache_dir="/home/user/genAI/models/FLUX.1-dev/",
    transformer=transformer,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.bfloat16,
)
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
pipe.to(device)
if device.type != "cpu":
    pipe.enable_model_cpu_offload(device=device)

adapter_names = []
adapter_values = []
pipe.load_lora_weights(
    "/home/user/genAI/models/LoRAs/Flux/dl/UltraRealistic_Lora_v2.safetensors",  # downloaded LoRA path from civitAI
    adapter_name="realism",
)
adapter_names.append("realism")
adapter_values.append(1.2)
pipe.set_adapters(adapter_names, adapter_values)

image = pipe(
    prompt="a cat holding a sign that says hello, world",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=24,
    max_sequence_length=512,
).images[0]
image.save("flux.1-dev.png")
```
Logs
System Info
Python 3.12
diffusers 0.32.0 (I also tested 0.32.1 and an install from git)
Who can help?
@sayakpaul