Calling the Stable Diffusion API Locally
2024-06-28
Deploy Stable Diffusion locally and call its API: configure authentication, tune request parameters, and generate images from a Python script.
- Deploy the local WebUI:
  - Make sure the Stable Diffusion WebUI is already deployed on your machine.
  - Add the `--api` flag when starting the project to enable the API endpoints. If you are using the GitHub project, add this flag when launching `webui.bat`.
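Before sending generation requests, it can help to confirm the API is actually reachable. A minimal sketch using only the standard library (the `/sdapi/v1/sd-models` endpoint lists available checkpoints; the helper name `is_api_up` is my own):

```python
import urllib.request
import urllib.error

def is_api_up(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if the WebUI API answers at base_url."""
    try:
        # Any HTTP response (even a 401 from an auth-protected API)
        # means the server itself is up.
        urllib.request.urlopen(f"{base_url}/sdapi/v1/sd-models", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded, just not with 200
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timed out

# Example: is_api_up("http://127.0.0.1:7860")
```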
- Configure API authentication:
  - If you are using the Qiuye (秋叶) all-in-one package, check the relevant option at launch and set a username and password.
  - After the WebUI starts, confirm it launched successfully and note down the username and password.
- Send requests:
  - Use the `requests` library in a Python script to send a POST request and generate an image.

Sample code:

```python
import requests
import io
import base64
from PIL import Image

# API URL
url = "http://127.0.0.1:7860"

# Request payload
payload = {
    "prompt": "puppy dog",
    "steps": 20
}

# Send the POST request
response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload)

# Handle the response
r = response.json()
for i in r['images']:
    image = Image.open(io.BytesIO(base64.b64decode(i.split(",", 1)[0])))
    image.save('output.png')
print("Image generated and saved as output.png")
```
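Depending on the WebUI version, the base64 strings in `images` may or may not carry a `data:image/png;base64,` prefix. A decode helper that handles both forms (a sketch; the helper name is mine):

```python
import base64

def decode_image_b64(b64_str: str) -> bytes:
    """Decode an API image string, with or without a data-URI prefix."""
    # "abc" -> ["abc"]; "data:image/png;base64,abc" -> [prefix, "abc"]
    # [-1] picks the payload in both cases.
    return base64.b64decode(b64_str.split(",", 1)[-1])
```

The returned bytes can be fed straight to `io.BytesIO` and `PIL.Image.open` as in the example above.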
- Handle authentication:
  - If you get a `{'detail': 'Not authenticated'}` error, the API requires authentication.
  - Open `http://localhost:7860/docs`, find the relevant endpoint, click `Try it out`, and perform the login.
  - After logging in, find the `Authorization` value in the browser's developer tools and add it to your Python request.

Authenticated request example:

```python
headers = {
    "Authorization": "Basic YWFhYTphYWFh"
}

response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload, headers=headers)

r = response.json()
for i in r['images']:
    image = Image.open(io.BytesIO(base64.b64decode(i.split(",", 1)[0])))
    image.save('output.png')
print("Image generated and saved as output.png")
```
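The `Authorization` value is just `Basic ` followed by the base64 encoding of `username:password` (the token in the example decodes to `aaaa:aaaa`), so you can build it yourself instead of copying it out of the developer tools:

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build a Basic auth header from the WebUI username and password."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# The example token in this article:
# basic_auth_header("aaaa", "aaaa") -> {"Authorization": "Basic YWFhYTphYWFh"}
```

Alternatively, `requests` accepts `auth=(username, password)` directly and constructs the same header for you.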
- Adjust request parameters:
  - Tune the parameters in `payload` as needed. Common parameters:

```python
payload = {
    "prompt": "puppy dog",
    "negative_prompt": "wrong hands",
    "steps": 20,
    "width": 512,
    "height": 512,
    "cfg_scale": 7,
    "sampler_index": "Euler"
}
```
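Stable Diffusion works in a latent space downsampled by a factor of 8, so `width` and `height` should be multiples of 8. A small guard before sending the payload (the helper name is mine):

```python
def snap_to_multiple(value: int, base: int = 8) -> int:
    """Round a dimension to the nearest multiple of `base` (at least one `base`)."""
    return max(base, round(value / base) * base)

payload = {"prompt": "puppy dog", "width": 634, "height": 360}
payload["width"] = snap_to_multiple(payload["width"])    # 634 -> 632
payload["height"] = snap_to_multiple(payload["height"])  # 360 -> 360
```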
Stable Diffusion API Call Examples
Below are detailed examples of calling the Stable Diffusion API locally, including the ControlNet and Segment Anything APIs.
1. txt2img endpoint (without ControlNet)
```python
import requests
import base64
import io
from PIL import Image

url = "http://127.0.0.1:7860/sdapi/v1/txt2img"
data = {
    "prompt": "your prompt",
    "negative_prompt": "your negative prompt",
    "seed": -1,
    "sampler_name": "Euler a",
    "cfg_scale": 7.5,
    "width": 640,
    "height": 360,
    "batch_size": 4,
    "n_iter": 4,
    "steps": 30,
    "return_grid": True,
    "restore_faces": True,
    "send_images": True,
    "save_images": False,
    "do_not_save_samples": False,
    "do_not_save_grid": False,
    "enable_hr": True,
    "denoising_strength": 0.5,
    "override_settings": {
        "sd_model_checkpoint": "sd_xl_base_1.0.safetensors [31e35c80fc]",
        "sd_vae": "Automatic",
    },
    "override_settings_restore_afterwards": True
}

response = requests.post(url, json=data)
result = response.json()
# [-1] strips an optional "data:image/png;base64," prefix; enumerate so a
# batch of images does not overwrite a single output file.
for idx, img_data in enumerate(result['images']):
    image = Image.open(io.BytesIO(base64.b64decode(img_data.split(",", 1)[-1])))
    image.save(f'output_{idx}.png')
print("Images generated and saved as output_*.png")
```
2. txt2img endpoint (with ControlNet)
```python
import requests
import base64
import io
from PIL import Image

url = "http://127.0.0.1:7860/sdapi/v1/txt2img"
data = {
    "prompt": "your prompt",
    "negative_prompt": "your negative prompt",
    "seed": -1,
    "sampler_name": "Euler a",
    "cfg_scale": 7.5,
    "width": 640,
    "height": 360,
    "batch_size": 4,
    "n_iter": 4,
    "steps": 30,
    "return_grid": True,
    "restore_faces": True,
    "send_images": True,
    "save_images": False,
    "do_not_save_samples": False,
    "do_not_save_grid": False,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "control_mode": 0,
                "model": "t2i-adapter_diffusers_xl_lineart [bae0efef]",
                "module": "lineart_standard (from white bg & black line)",
                "weight": 0.45,
                "resize_mode": "Crop and Resize",
                "threshold_a": 200,
                "threshold_b": 245,
                "guidance_start": 0,
                "guidance_end": 0.7,
                "pixel_perfect": True,
                "processor_res": 512,
                "save_detected_map": False,
                "input_image": ""
            }]
        }
    },
    "enable_hr": True,
    "denoising_strength": 0.5,
    "override_settings": {
        "sd_model_checkpoint": "sd_xl_base_1.0.safetensors [31e35c80fc]",
        "sd_vae": "Automatic",
    },
    "override_settings_restore_afterwards": True
}

response = requests.post(url, json=data)
result = response.json()
# [-1] strips an optional "data:image/png;base64," prefix; enumerate so a
# batch of images does not overwrite a single output file.
for idx, img_data in enumerate(result['images']):
    image = Image.open(io.BytesIO(base64.b64decode(img_data.split(",", 1)[-1])))
    image.save(f'output_{idx}.png')
print("Images generated and saved as output_*.png")
```
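The ControlNet `input_image` field (left empty above) expects the reference image as a base64 string. A small helper to fill it from a file on disk (a sketch; the helper name is mine):

```python
import base64

def file_to_b64(path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Example:
# data["alwayson_scripts"]["controlnet"]["args"][0]["input_image"] = file_to_b64("ref.png")
```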
3. Generating a mask with Segment Anything
```python
import requests

url = "http://127.0.0.1:7860/sam/sam-predict"
data = {
    "sam_model_name": "sam_vit_b_01ec64.pth",
    "input_image": "base64_str",
    "sam_positive_points": [
        [317.7521, 174.7542],
        [304.25, 174.75],
        [295.25, 152.25],
        [292.25, 174.75],
        [284.75, 168.75]
    ],
    "sam_negative_points": [],
    "dino_enabled": True,
    "dino_model_name": "GroundingDINO_SwinT_OGC (694MB)",
    "dino_text_prompt": ""
}

response = requests.post(url, json=data)
result = response.json()
print("Mask generated:", result)
```
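The Segment Anything extension returns its results as lists of base64-encoded images; in the versions I have seen the response carries keys such as `masks`, `masked_images`, and `blended_images` (treat these key names as an assumption and check your own response). A sketch that writes each returned mask to disk:

```python
import base64

def save_masks(result: dict, prefix: str = "mask") -> list:
    """Write each base64 mask in the response to <prefix>_<i>.png.

    Assumes the response carries a "masks" list of base64 strings;
    verify the key name against your extension version.
    """
    paths = []
    for i, b64 in enumerate(result.get("masks", [])):
        path = f"{prefix}_{i}.png"
        with open(path, "wb") as f:
            # [-1] strips an optional data-URI prefix before decoding
            f.write(base64.b64decode(b64.split(",", 1)[-1]))
        paths.append(path)
    return paths
```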
4. img2img endpoint
```python
import requests
import base64
import io
from PIL import Image

url = "http://127.0.0.1:7860/sdapi/v1/img2img"
data = {
    "init_images": ["base64"],
    "mask": "base64",
    "prompt": "",
    "negative_prompt": "",
    "batch_size": 4,
    "n_iter": 4,
    "seed": -1,
    "sampler_name": "Euler a",
    "mask_blur": 4,
    "inpaint_full_res": True,
    "inpaint_full_res_padding": 4,
    "inpainting_mask_invert": 0,
    "cfg_scale": 7,
    "send_images": True,
    "save_images": False,
    "width": 640,
    "height": 360,
    "denoising_strength": 0.5,
    "steps": 30,
    "override_settings": {
        "sd_model_checkpoint": "sd_xl_base_1.0.safetensors [31e35c80fc]",
        "sd_vae": "Automatic",
    },
    "override_settings_restore_afterwards": True
}

response = requests.post(url, json=data)
result = response.json()
# [-1] strips an optional "data:image/png;base64," prefix; enumerate so a
# batch of images does not overwrite a single output file.
for idx, img_data in enumerate(result['images']):
    image = Image.open(io.BytesIO(base64.b64decode(img_data.split(",", 1)[-1])))
    image.save(f'output_{idx}.png')
print("Images generated and saved as output_*.png")
```
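Most img2img calls only vary the input image, the mask, and a few parameters, so it can be convenient to wrap the defaults in a small builder. This wrapper is my own convenience, not part of the WebUI API:

```python
def build_img2img_payload(init_image_b64: str, mask_b64: str = None, **overrides) -> dict:
    """Merge per-call overrides into a set of img2img defaults."""
    payload = {
        "init_images": [init_image_b64],
        "prompt": "",
        "negative_prompt": "",
        "seed": -1,
        "sampler_name": "Euler a",
        "cfg_scale": 7,
        "denoising_strength": 0.5,
        "steps": 30,
        "width": 640,
        "height": 360,
        "send_images": True,
        "save_images": False,
    }
    if mask_b64 is not None:
        payload["mask"] = mask_b64
        payload["mask_blur"] = 4
    payload.update(overrides)  # caller-supplied values win
    return payload

# Example: build_img2img_payload(b64_img, steps=20, denoising_strength=0.35)
```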
These examples show how to call the different Stable Diffusion APIs locally, including the ControlNet and Segment Anything APIs. Adjust the parameters in `payload` as needed to get the best results.