Comments on: Import GPU: Python Programming with CUDA
https://hackaday.com/2025/02/25/import-gpu-python-programming-with-cuda/

By: Doktoreq (Wed, 26 Feb 2025 09:18:57 +0000)
https://hackaday.com/2025/02/25/import-gpu-python-programming-with-cuda/#comment-8103162

In reply to Miles.

I vaguely remember there being BIOS hacks on older AMD platforms that let you go way over AMD's established limits. I'd have to rummage through my bookmark pile to find the source for that.

By: Miles (Wed, 26 Feb 2025 06:26:18 +0000)
https://hackaday.com/2025/02/25/import-gpu-python-programming-with-cuda/#comment-8103140

The question is whether the next generation of AMD graphics built into their processors will support allocating 64 GB or more of VRAM from system RAM, and whether that will help with large datasets that have moderate compute requirements.

By: azeem (Wed, 26 Feb 2025 06:24:34 +0000)
https://hackaday.com/2025/02/25/import-gpu-python-programming-with-cuda/#comment-8103139

In reply to azeem.

Plus, you can do things like run code on FPGAs 😊 via OpenCL, or even build algorithm-specific hardware accelerators.
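A minimal sketch of what that device discovery looks like from Python, assuming the pyopencl bindings and at least one OpenCL driver are installed (an FPGA vendor's runtime would show up here as its own platform):

    import pyopencl as cl

    # Walk every OpenCL platform the runtime knows about and list its devices.
    for platform in cl.get_platforms():
        print(platform.name, platform.version)
        for device in platform.get_devices():
            print("   ", cl.device_type.to_string(device.type), "-", device.name)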

By: azeem (Wed, 26 Feb 2025 06:23:04 +0000)
https://hackaday.com/2025/02/25/import-gpu-python-programming-with-cuda/#comment-8103138

This is cool, but just learn OpenCL. It works on a lot more compute devices and is much more performant. I like Python because it's simple, but OpenCL is faster.
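For comparison with the CUDA examples in the article, here is a minimal vector-add driven from Python through the pyopencl bindings (a sketch, assuming pyopencl and a working OpenCL driver; the kernel source and array sizes are just illustrative):

    import numpy as np
    import pyopencl as cl

    # Pick any available OpenCL device and create a command queue on it.
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # Host-side input data.
    a = np.random.rand(50_000).astype(np.float32)
    b = np.random.rand(50_000).astype(np.float32)

    # Copy the inputs to the device and allocate an output buffer.
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # Compile an elementwise-add kernel from OpenCL C source at runtime.
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    # Launch one work-item per element, then read the result back.
    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)
    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, a + b)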
