Stable Diffusion: ModuleNotFoundError: No module named 'optimum.onnxruntime'

[Bug]: Starting Stable Diffusion WebUI fails with "ModuleNotFoundError: No module named 'optimum.onnxruntime'" (#637, opened by DX-Pig on Oct 10, 2025, now Closed). Reports of the same failure come from several setups:

- Issue Description: "I launched the web interface and found that no theme is applied, and the language switcher is nowhere to be found."
- Stability Matrix: Package: AMDGPU Forge. When did the issue occur? While installing the package. What GPU / hardware type are you using? AMD RX 6800. What happened? The package does not start.
- stable-diffusion-webui-directml: ModuleNotFoundError: No module named 'optimum'.
- Outside the webui entirely: a user taking a Microsoft PyTorch course on Kaggle Notebooks kept getting the same "ModuleNotFoundError: No module ..." message over and over again.

Checklist (from the bug report template):
- The issue exists after disabling all extensions.
- The issue exists on a clean installation of webui.
- The issue is caused by an extension, but I believe it is caused by a bug in the webui.

Console output (excerpts; paths and messages are truncated as in the original reports):
  File "C:\Program Files\StabilityMatrix\Packages\Stable Diffusion
    from modules.onnx_impl.execution_providers import get_default_execution_provider, available_execution_providers
  File "C:\Users\user\stable-diffusion-webui
    class OnnxStableDiffusionXLPipeline(CallablePipelineBase, optimum.onnxruntime.ORTStableDiffusionXLPipeline):
  ModuleNotFoundError: No module named 'optimum'
  Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and
  To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
  D:\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning:

Background: 🤗 Optimum and ONNX Runtime
Optimum is a utility package for building and running inference with accelerated runtimes like ONNX Runtime. It provides a Stable Diffusion pipeline compatible with ONNX Runtime and can be used to load optimized models from the Hugging Face Hub and create pipelines for accelerated inference. This covers diffusion models such as Stable Diffusion, Stable Diffusion XL, and Latent Consistency Models. With the onnxruntime-gpu package, it is possible to work with PyTorch without manual installation of CUDA or cuDNN; refer to Compatibility with PyTorch for more information. On an A100 GPU, running SDXL for 30 denoising steps to generate a 1024 x 1024 image can be as fast as 2 seconds.

Two pipeline parameters that come up in this context:
- config_name (str) — The configuration filename that stores the class and module names of all the diffusion pipeline's components.
- task (str) — A string that identifies the pipeline's task.

Installation
Install 🤗 Optimum with ONNX Runtime support; the failing import of optimum.onnxruntime in the tracebacks above typically points to this package (or its ONNX Runtime extra) being missing from the webui's environment. See the command sketch below.

Stable Diffusion Inference
To load an ONNX model and run inference with ONNX Runtime, you need to replace StableDiffusionPipeline with ORTStableDiffusionPipeline. A usage sketch follows the installation command below.
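The original snippet cuts off before the actual command. A minimal sketch, assuming the extras names (onnxruntime, onnxruntime-gpu) that the Optimum documentation uses; check the current docs for your platform:

```bash
# CPU-only ONNX Runtime support
pip install "optimum[onnxruntime]"

# GPU build of ONNX Runtime; per the note above, this can work with
# PyTorch without a manual CUDA/cuDNN installation
pip install "optimum[onnxruntime-gpu]"
```

For the webui cases above, run the command inside the webui's own virtual environment (the venv folder that appears in the traceback paths), not the system Python, otherwise the webui will still not find the module.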
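A minimal sketch of the pipeline swap described under Stable Diffusion Inference, assuming the package from the previous step is installed. The model ID, prompt, and output filename are illustrative choices, not taken from the issue:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

# export=True converts the PyTorch checkpoint to ONNX at load time;
# drop it if the repository already ships ONNX weights.
pipeline = ORTStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID
    export=True,
)

image = pipeline("sailing ship in a storm, oil painting").images[0]
image.save("ship.png")
```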
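For SDXL, which is the class the failing webui code subclasses (ORTStableDiffusionXLPipeline), the same pattern applies. This sketch additionally passes an ONNX Runtime execution provider; the provider name and model ID are my assumptions about how the A100 timing quoted above would be reproduced, not something stated in the original:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# CUDAExecutionProvider requires the onnxruntime-gpu build; on AMD hardware
# (like the RX 6800 in the report) a different provider would be needed,
# e.g. ROCm or DirectML, where available.
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # illustrative model ID
    export=True,
    provider="CUDAExecutionProvider",
)

image = pipeline(
    "an astronaut riding a horse, high detail",
    num_inference_steps=30,  # matches the 30-step timing quoted above
).images[0]
```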