The AMD RX 580 was released in April 2017. AMD had already dropped official ROCm/HIP support for it by 2021. They only supported the card for about four years. Four years. It's so lame it's bordering on fraudulent, even if it isn't legally fraud.
I know CUDA, which they chase via HIP, is a moving target they don't control, making it hard and expensive for them to prevent bit rot, but this is still an AMD-caused problem, because OpenCL isn't getting any love from anyone anymore. AMD included.
Also, while AMD's OpenCL implementation has more features on paper, the runtime is frequently broken, whereas NVIDIA's claimed features actually all work. Everyone I've heard from who has used it ended up with so much vendor-specific code to patch around AMD's bugs and deficiencies that they might as well have written CUDA in the first place.
This is an old article but the old "vendor B" stuff still rings incredibly true with at least AMD's OpenCL stack as well.
Thus NVIDIA actually has even less of a lock-in than people think. If you want to build a better oneAPI ecosystem and run it on an OpenCL runtime... go hog wild! NVIDIA is best at that too. You just don't get the benefit of NVIDIA's engineers writing libraries for you.
I think Intel is still pushing OpenCL on GPUs, maybe with other layers on top: SYCL, oneAPI, or similar. AMD mostly shares one implementation between HIP and OpenCL, so the base plumbing should work about as well on either, though I can believe the user experience is challenging.
I wrote some code that compiles as OpenCL and found it an intensely annoying experience. There's a C++ extension to it now (C++ for OpenCL) which might help, but it was still missing function pointers last time I looked. My lasting impression was that I didn't want to write OpenCL code again.
Sticking to an old kernel and old libraries is a pain if your hardware is not purpose-specific. Newer downstream dependencies change and become incompatible: e.g. TensorFlow 2 (IIRC) was incompatible with the ROCm versions that work with the 580. New models on places like Hugging Face tend to require recent libraries, so not moving to a new toolchain locks you into a state of the art from a few years in the past. In my case, the benchmarking I did for my workloads showed comparable perf between my RX 580 and Google Colab, so I chose to upgrade my kernel and break ROCm.
Yeah, that's fair. Staying in the past doesn't work forever.
There are scars in the implementation which suggest the HSA model was really difficult to implement on the hardware at the time.
It doesn't look like old hardware gets explicitly disabled; the code that runs it is still there. However, writing new things that only work on newer hardware seems likely, as does prioritising testing on the current gen. So in practice the older stuff is likely to rot unless someone takes an interest in fixing it.