MI25, Stable Diffusion's $100 hidden beast

Someone is selling cooling shrouds:

I haven't tested these, and I don't know the seller or anything about them; I just found them by googling…

Thank you! I will look into these.

My next PC build is going to have one or two of these as compute and some dingus card as a display out. I’m kinda excited to play with them.

Xidax looked at me really weird, though, when I asked if they could throw two in a box with an A380 for display and waterloop it. Oh well.

Shrouds for other FirePro/Instinct cards can also work, since they often have similar or identical intake dimensions. I found this:

Radeon Pro V540 120mm Fan Duct by TechAmbrosia - Thingiverse

It looks similar enough that it could work; it might need some minor modifications though.

Personally, I just used cardboard and tape for mine.

Funny that there is a shroud for the V540 engineering samples, but not the MI25 :person_shrugging:

As you already mentioned, some numbers here seem odd. FP16 on Vega 10… a CDNA 2 card should be much faster…

ROCm 5.5 just got released…

Maybe updating the software and getting rid of the database bug (“could degrade performance”) resolves some issues?

There are some GFX900 improvements mentioned - so maybe…

It’ll all depend on what version of pytorch supports it

Are there lower-TDP WX 9100 BIOSes available? The little blower fan and my 300 W PSU can't keep up with the default 220 W TDP.

I believe it's limited to 170 W by default.
You can lower the wattage with rocm-smi.
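Something like this should do it (a sketch, assuming a recent ROCm install; the flag names have changed across rocm-smi versions, so verify with `rocm-smi --help` first):

```shell
# Show the current max power cap for GPU 0
rocm-smi -d 0 --showmaxpower

# Lower the power cap to 130 W for GPU 0
# (needs root; the setting resets on reboot)
sudo rocm-smi -d 0 --setpoweroverdrive 130
```

Note this only caps the power draw at runtime; unlike a reflashed BIOS, it has to be reapplied after every boot (e.g. from a systemd unit or rc script).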

Little gotcha I ran into using a Ryzen CPU with Vega integrated graphics: I needed to disable the integrated graphics before ROCm would work on my card. Once I was running only the MI25, ROCm picked it up nicely.
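For anyone hitting the same thing: a quick way to see what ROCm is picking up, plus a possible workaround if disabling the iGPU in the BIOS isn't an option (the device index here is an assumption; check the rocminfo output to see which agent is which):

```shell
# List the GPU agents ROCm can see; an iGPU and an MI25 will both
# show up as gfx9xx entries
rocminfo | grep -i gfx

# Restrict ROCm/HIP to a single device as an alternative to disabling
# the iGPU in firmware (index 0 is assumed to be the MI25 here)
export HIP_VISIBLE_DEVICES=0
```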

Has anyone reflashed the BIOS of the card on Windows 11? Just curious if ATIFlash 2.9.3 and Windows 10 are a requirement.

amdvbflash.exe -i
Adapter not found

EDIT: I just used a CH341A Programmer to flash it. 1000x easier imo

Which one did you use? I still have one of those old ISA EEPROM flashers - time to upgrade, I think :slight_smile:

Let's see if Blender support will improve; I just browsed through the latest render meeting logs:

  • AMD HIP-RT code was merged, but is not yet enabled due to issues found in testing. Brian will send an updated HIP-RT SDK and mention the right drivers to use for testing the pull request. Brecht will then make a new build and test.
  • AMD ROCm 5.5 was released, which should enable us to re-enable HIP on Linux by upgrading to this compiler version. Brecht will test it. This driver release should also fix viewport crashes with RDNA 2 graphics cards. It may take a bit for Linux distributions to upgrade to this version.

You might be interested in this newer, upgraded model: ch341a programmer v1.7 1.8v level shift.
I just ordered one; the reviews look good, and it has built-in voltage selection to directly read and write 5 V, 3.3 V, 2.5 V, and 1.8 V chips.

Thank you for the link - it leads to a "sorry, product not available" message.

They are all the same

KOOBOOK 1Set CH341A 24 25 Series EEPROM Flash BIOS USB Programmer+SOIC8 SOP8 Test Clip+SPI Flash 1.8V Adapter+SOP8 SOIC8 to DIP8 Adapter Socket Converter

I have a Radeon VII and I yolo'd installing SD on Debian sid with Python 3.11 and kernel 6.1.0. Unfortunately the GUI didn't launch, and the process stops at:

Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0): 
Model loaded in 5.8s (calculate hash: 2.3s, load weights from disk: 0.2s, create model: 2.0s, apply weights to model: 0.3s, load VAE: 0.2s, move model to device: 0.7s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 70.5s (import torch: 0.9s, import gradio: 1.1s, import ldm: 3.6s, other imports: 0.4s, list SD models: 57.9s, load scripts: 0.3s, load SD checkpoint: 6.0s, create ui: 0.1s).

This happened during the first run after installation with

TORCH_COMMAND='pip install --pre torch torchvision --extra-index-url https://download.pytorch.org/whl/nightly/rocm5.4.2' python launch.py --skip-torch-cuda-test --precision full --no-half

I hoped that the first run might just take longer and the GUI would eventually launch, but I terminated it after 40 minutes of waiting.
Does anyone have any clue whether SD can work on this setup?
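One thing worth checking when the console prints the local URL but nothing seems to happen: whether the Gradio server is actually listening. A minimal stdlib sketch (port 7860 taken from the log above):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # True: the web UI is actually serving and the "hang" is elsewhere
    # (browser, reverse proxy, slow model listing, ...).
    # False: the launch process really is stuck before binding the port.
    print(is_port_open("127.0.0.1", 7860))
```

If this prints True, the UI is up and the problem is on the client side; if False, the process is genuinely stuck before the web server starts.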

Make sure you follow the steps exactly

I've done a fresh install and downloaded PyTorch for ROCm 5.2, and the process still hangs after Startup time: ...

The only thing I have changed was adding --skip-torch-cuda-test. But the installer still looks for a CUDA device:

Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
No module 'xformers'. Proceeding without it.

I'm running the ROCm stack from the unstable Debian repo (packages vary between versions 5.2 and 5.4), so maybe some libraries that are not packaged are missing. I suspect this might be the source of the problem.

It might be better to use a Docker container or a fresh install.
I only know how to use bare-metal Ubuntu, sorry.


The link still shows as good, but that might have been because I was auto-logged-in there.
https://www.aliexpress.us/item/3256804279436549.html?spm=a2g0o.order_detail.order_detail_item.3.3537f19cGhPr9e&gatewayAdapt=glo2usa
I signed out, so that link should be good now. This version is newer: there is no need for mods for voltage changes via soldering wires or buying extra adapters.