How to use almost/my_first_lora_v1-lora with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("almost/my_first_lora_v1-lora")

prompt = "Turn this cat into a dog"
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)
image = pipe(image=input_image, prompt=prompt).images[0]
```
my_first_lora_v1-lora
Model trained with AI Toolkit by Ostris
Trigger words
No trigger words defined.
Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
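The weights can also be fetched programmatically with `huggingface_hub` instead of the web UI. A minimal sketch, assuming the `.safetensors` filename used in the diffusers example on this page:

```python
from huggingface_hub import hf_hub_download

# Downloads the LoRA weights into the local Hugging Face cache and
# returns the local file path.
path = hf_hub_download(
    repo_id="almost/my_first_lora_v1-lora",
    filename="my_first_lora_v1_000002500.safetensors",
)
print(path)
```

The returned path can be pointed at directly from ComfyUI, AUTOMATIC1111, or the other apps listed above.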
Use it with the 🧨 diffusers library
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights(
    "almost/my_first_lora_v1-lora",
    weight_name="my_first_lora_v1_000002500.safetensors",
)
image = pipeline("a beautiful landscape").images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
Model tree for almost/my_first_lora_v1-lora
Base model: Qwen/Qwen-Image-Edit