Comments on: Learn GPU Programming With Simple Puzzles
https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/

By: JBaker (https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/#comment-8044402) | Fri, 27 Sep 2024 09:35:53 +0000 | In reply to JBaker.

Can’t reply to you directly for some reason, Dave, but I’d say it’s somewhat similar, except it splits complex operations across cores rather than splitting a word into sets of bits.

I think of it like hyperthreading taken to the extreme: instead of letting the unused sections of a core work on a separate operation, the unused portions are minimized and the silicon is devoted to more cores entirely.
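
For what it’s worth, that’s roughly the bet GPUs already make: lots of simple in-order lanes, with stalls hidden by keeping far more threads in flight than there are lanes. A minimal CUDA sketch of that style (the kernel, names, and sizes are my own, purely illustrative):

#include <cstdio>

// Many tiny "cores": launch far more threads than the hardware has lanes
// and let the scheduler hide memory latency by swapping warps in and out.
__global__ void axpy(float a, const float *x, float *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                     // a million elements, one thread each
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    axpy<<<(n + 255) / 256, 256>>>(3.0f, x, y, n);
    cudaDeviceSynchronize();
    printf("%f\n", y[0]);                      // 5.0
    cudaFree(x); cudaFree(y);
    return 0;
}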

By: Dave Rowntree (https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/#comment-8043925) | Thu, 26 Sep 2024 09:15:22 +0000 | In reply to JBaker.

I think you’re reinventing bitslice design. It’s a thing with a long history. I designed a hybrid structure of an FPGA (for routing and data-munging tasks) and bitslice ‘cores’ back in college 25 years ago. Well, I wrote it down, anyway; it never made it to reality. I did come across an EE researcher who was studying 1-bit CPU cores optimised for super-high clock rates. I didn’t hear much more about that either.
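
For anyone who hasn’t met the term: the bit-slice idea is to build an N-bit datapath by chaining N identical narrow ALU slices (classic parts like the AMD Am2901 did 4 bits per slice). A toy sketch, purely illustrative, of a 1-bit slice chained into a 4-bit adder:

#include <cstdio>

// Toy 1-bit ALU "slice": a full adder. A real bit-slice design chains
// several of these side by side in hardware to build a wider datapath.
static void full_adder(unsigned a, unsigned b, unsigned cin,
                       unsigned *sum, unsigned *cout)
{
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (cin & (a ^ b));
}

// Chain four slices into a 4-bit ripple-carry adder.
static unsigned add4(unsigned a, unsigned b)
{
    unsigned sum = 0, carry = 0;
    for (int i = 0; i < 4; i++) {
        unsigned s;
        full_adder((a >> i) & 1u, (b >> i) & 1u, carry, &s, &carry);
        sum |= s << i;
    }
    return sum & 0xFu;
}

int main()
{
    printf("%u\n", add4(5, 6));  // 11
    return 0;
}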

By: JBaker (https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/#comment-8043913) | Thu, 26 Sep 2024 07:57:38 +0000 | In reply to clancydaenlightened.

This reminds me of a thought I’ve had brewing for a while now. I was imagining a CPU with very basic, stripped-down cores, but loads of them. Use a segment of them for software-defined fetching and scheduling.
You could strip them down really far and, for example, do multiplication as addition spread across a bunch of cores (see the sketch below).
I wonder if you could make them small enough to fit enough of them, and clock them high enough, that with compiler optimization targeting them it would be performant.
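
As a hedged illustration of the multiplication-as-addition part (the kernel, names, and numbers are my own, not anything from the article): give each tiny core one bit of the multiplier and sum the shifted partial products. In CUDA, with threads standing in for the cores:

#include <cstdio>

// Toy: compute a*b by giving each thread one bit of b, i.e.
// multiplication as a sum of shifted partial products across "cores".
__global__ void mul_as_add(unsigned int a, unsigned int b, unsigned long long *out)
{
    unsigned int bit = threadIdx.x;                  // one thread per bit of b
    unsigned long long partial =
        ((b >> bit) & 1u) ? ((unsigned long long)a << bit) : 0ull;
    atomicAdd(out, partial);                         // sum the partial products
}

int main()
{
    unsigned long long *out;
    cudaMallocManaged(&out, sizeof(*out));
    *out = 0;
    mul_as_add<<<1, 32>>>(1234, 5678, out);          // 32 threads, one per bit
    cudaDeviceSynchronize();
    printf("%llu\n", *out);                          // 7006652
    cudaFree(out);
    return 0;
}

Thirty-two one-bit partial products plus an adder tree is roughly how hardware multipliers are built anyway; here the “cores” are CUDA threads and the adder tree is an atomicAdd.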

By: Zai1208 (https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/#comment-8043821) | Thu, 26 Sep 2024 00:50:21 +0000 | In reply to clancydaenlightened.

Well, someone was dedicated: https://bford.info/pub/os/gpufs-cacm.pdf

By: clancydaenlightened (https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/#comment-8043701) | Wed, 25 Sep 2024 18:08:24 +0000 | In reply to clancydaenlightened.

But you technically can run an entire OS on a GPU.

Like my old AMD Fiji with third-party overclock support and 4 GB of HBM RAM.

1.4 GHz and a slight memory overclock.

Use a few shaders, some floating point, and the MMU.

You could get Windows running on a GPU inside a Windows PC, both natively and in real time.

Won’t be playing GTA V though.

By: clancydaenlightened (https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/#comment-8043699) | Wed, 25 Sep 2024 18:04:22 +0000 | In reply to clancydaenlightened.

And the instructions are tailored more toward algebra-, calculus-, and trigonometry-style math.
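
For instance, in CUDA (my example, not from the comment), the fast-math intrinsics below map to the GPU’s special-function hardware rather than to long software routines like a CPU libm call:

#include <cstdio>

// __sinf, __expf and rsqrtf are hardware-approximated intrinsics;
// the kernel name and data here are just illustrative.
__global__ void mathy(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __sinf(in[i]) * __expf(-in[i]) * rsqrtf(in[i] + 1.0f);
}

int main()
{
    const int n = 256;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; i++) in[i] = 0.5f;
    mathy<<<1, n>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("%f\n", out[0]);
    cudaFree(in); cudaFree(out);
    return 0;
}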

By: clancydaenlightened (https://hackaday.com/2024/09/25/learn-gpu-programming-with-simple-puzzles/#comment-8043698) | Wed, 25 Sep 2024 18:03:09 +0000

The biggest thing with a GPU

is that, unlike a CPU, one instruction can modify more than one memory location, or you can apply different instructions to different memory locations at the same time.

In parallel, not serially like a CPU.
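A minimal CUDA sketch of that (mine, purely illustrative): one line of kernel code, but every thread applies it to a different memory location in the same step.

#include <cstdio>

// One instruction stream, many addresses: each of the 256 threads
// below writes a different element of the array in parallel.
__global__ void scale(float *data, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;   // executed across all threads at once
}

int main()
{
    const int n = 256;
    float *d;
    cudaMallocManaged(&d, n * sizeof(float));
    for (int i = 0; i < n; i++) d[i] = 1.0f;
    scale<<<1, n>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    printf("%f %f\n", d[0], d[n - 1]);   // 2.0 2.0
    cudaFree(d);
    return 0;
}

That’s the SIMT model: the instruction stream is shared, while the data and addresses differ per thread.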

So things like branching, conditionals, and boolean logic may work differently.
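
One concrete way it works differently, sketched in CUDA (again my own toy example): when threads in the same warp take different sides of a branch, the hardware runs the two sides one after the other, so both paths cost time.

#include <cstdio>

// Warp divergence: even lanes run the first branch while odd lanes
// wait, then the roles swap; the warp pays for both paths.
__global__ void divergent(int *out)
{
    int i = threadIdx.x;
    if (i % 2 == 0)
        out[i] = i * 2;    // even lanes execute here
    else
        out[i] = i + 100;  // then odd lanes execute here
}

int main()
{
    int *out;
    cudaMallocManaged(&out, 32 * sizeof(int));
    divergent<<<1, 32>>>(out);           // a single warp
    cudaDeviceSynchronize();
    printf("%d %d\n", out[0], out[1]);   // 0 101
    cudaFree(out);
    return 0;
}

This is why GPU code tends to prefer branch-free math or predication over heavy conditionals.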

Though GPU instructions are simpler than CPU instructions, GPUs also have hardware support for floating-point operations.

You can think of a GPU instruction set as RISC-like, except the instructions work in parallel and in a somewhat non-linear fashion.
