🤗 Diffusers is the state-of-the-art diffusion model library developed by Hugging Face, specialized in generating images, audio, and even 3D structures of molecules. Whether you are looking for a simple inference solution or want to train your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both.
Project page: https://github.com/huggingface/diffusers
🤗 Diffusers provides three core components:

- State-of-the-art diffusion pipelines (Diffusion Pipelines)
- Interchangeable noise schedulers (Noise Schedulers)
- Pretrained models (Pretrained Models)
# Official package (PyTorch)
pip install --upgrade diffusers[torch]

# Community-maintained conda package
conda install -c conda-forge diffusers

# Flax version
pip install --upgrade diffusers[flax]
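As a quick optional check that the installation worked (a minimal sketch, assuming the PyTorch flavor above was installed), you can print the installed version and confirm that a GPU is visible:

```python
# Minimal sanity check after installation.
import diffusers
import torch

print(diffusers.__version__)       # installed Diffusers version
print(torch.cuda.is_available())   # True if a CUDA GPU is visible (needed for the examples below)
```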
With just a few lines of code you can download a pretrained pipeline and generate an image from a text prompt:

from diffusers import DiffusionPipeline
import torch

# Load the Stable Diffusion text-to-image pipeline in half precision and move it to the GPU.
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline.to("cuda")

# Generate an image from a text prompt.
pipeline("An image of a squirrel in Picasso style").images[0]
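Because the noise schedulers are interchangeable, you can also swap the scheduler of an already loaded pipeline. The following is a small sketch that reuses the `pipeline` object from the snippet above and switches it to `EulerDiscreteScheduler`, one of the schedulers shipped with the library:

```python
from diffusers import EulerDiscreteScheduler

# Swap in a different scheduler while reusing the configuration
# of the scheduler that was loaded with the checkpoint.
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

# Generate again with the new scheduler.
image = pipeline("An image of a squirrel in Picasso style").images[0]
```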
You can also assemble your own denoising system directly from a model and a scheduler:

from diffusers import DDPMScheduler, UNet2DModel
from PIL import Image
import torch

# Load a scheduler and a UNet trained on cat images.
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler.set_timesteps(50)

# Start from pure Gaussian noise at the model's expected sample size.
sample_size = model.config.sample_size
noise = torch.randn((1, 3, sample_size, sample_size), device="cuda")
input = noise

# Denoise step by step: predict the noise residual, then compute the previous (less noisy) sample.
for t in scheduler.timesteps:
    with torch.no_grad():
        noisy_residual = model(input, t).sample
    prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample
    input = prev_noisy_sample

# Rescale from [-1, 1] to [0, 1] and convert to a PIL image.
image = (input / 2 + 0.5).clamp(0, 1)
image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
image = Image.fromarray((image * 255).round().astype("uint8"))
image
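If you want the loop above to produce the same result on every run, you can seed the initial noise with a `torch.Generator`; the seed value below is an arbitrary example, not part of the original snippet:

```python
# Seed the random number generator so the same starting noise (and final image) is produced each run.
generator = torch.Generator(device="cuda").manual_seed(0)
noise = torch.randn((1, 3, sample_size, sample_size), generator=generator, device="cuda")
input = noise
```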
Popular tasks, the pipelines that implement them, and recommended checkpoints:

| Task | Pipeline | Recommended model |
|---|---|---|
| Unconditional image generation | DDPMPipeline | google/ddpm-ema-church-256 |
| Text-to-image | StableDiffusionPipeline | stable-diffusion-v1-5/stable-diffusion-v1-5 |
| Text-to-image (unCLIP) | UnCLIPPipeline | kakaobrain/karlo-v1-alpha |
| Text-to-image (DeepFloyd IF) | IFPipeline | DeepFloyd/IF-I-XL-v1.0 |
| Text-to-image (Kandinsky) | KandinskyPipeline | kandinsky-community/kandinsky-2-2-decoder |
| Controllable generation | StableDiffusionControlNetPipeline | lllyasviel/sd-controlnet-canny |
| Image editing | StableDiffusionInstructPix2PixPipeline | timbrooks/instruct-pix2pix |
| Image-to-image | StableDiffusionImg2ImgPipeline | stable-diffusion-v1-5/stable-diffusion-v1-5 |
| Inpainting | StableDiffusionInpaintPipeline | runwayml/stable-diffusion-inpainting |
| Image variation | StableDiffusionImageVariationPipeline | lambdalabs/sd-image-variations-diffusers |
| Image super-resolution | StableDiffusionUpscalePipeline | stabilityai/stable-diffusion-x4-upscaler |
| Latent space super-resolution | StableDiffusionLatentUpscalePipeline | stabilityai/sd-x2-latent-upscaler |
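As an example of picking a pipeline from this table, the sketch below loads the image-to-image pipeline; the prompt, the input-image URL, and the `strength` value are illustrative placeholders, not values from the original document:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Load the image-to-image pipeline listed in the table above.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.to("cuda")

# Hypothetical input image URL; replace with your own starting image.
init_image = load_image("https://example.com/sketch.png")

# strength controls how strongly the input image is transformed (closer to 1 = more change).
image = pipe(
    prompt="A fantasy landscape, detailed, vivid colors",
    image=init_image,
    strength=0.75,
).images[0]
```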
The documentation is organized as follows:

| Documentation | What you can learn |
|---|---|
| Tutorial | Learn the library's basic skills, such as building a diffusion system from models and schedulers, or training your own diffusion model |
| Loading | How to load and configure all of the library's components (pipelines, models, and schedulers), and how to use the different schedulers |
| Pipelines for inference | How to use pipelines for various inference tasks, batched generation, and controlling generated outputs and randomness |
| Optimization | How to run pipelines on memory-constrained hardware and how to optimize pipelines for faster inference |
| Training | How to train your own diffusion model for different tasks |
🤗 Diffusers is one of the most complete and easy-to-use diffusion model libraries available today. It not only offers a rich collection of pretrained models and pipelines, but also supports custom training and optimization. Whether you are an AI researcher, a developer, or a creator, you will find the tools you need in this library to build a wide range of generative AI applications.