Comments on: Hackaday Links: February 23, 2025 https://hackaday.com/2025/02/23/hackaday-links-february-23-2025/ Fresh hacks every day Tue, 25 Feb 2025 22:03:41 +0000 hourly 1 https://wordpress.org/?v=6.7.2 By: irox https://hackaday.com/2025/02/23/hackaday-links-february-23-2025/#comment-8103028 Tue, 25 Feb 2025 22:03:41 +0000 https://hackaday.com/?p=759166&preview=true&preview_id=759166#comment-8103028 In reply to Bob.

But the before and after results would be the same: since the model is static, there is not going to be a before and after. There is no retraining or updating of the model (other than giving it a prompt to define its behavior) that will change its behavior, so there can be no cognitive decline. Any AI slop it picked up was there from the very start.

What the article really claims (counter to the HaD clickbait pseudo-science sound bites) is that more signs of cognitive impairment were detected in older models and fewer in newer models. That implies we are getting better at building models; it does not imply that models show increasing signs of cognitive impairment over time. This is roughly the opposite of what Truth is implying.

]]>
By: Bob https://hackaday.com/2025/02/23/hackaday-links-february-23-2025/#comment-8102856 Tue, 25 Feb 2025 11:12:28 +0000 https://hackaday.com/?p=759166&preview=true&preview_id=759166#comment-8102856 In reply to irox.

Not that hard to do: you just compare results from the original launch to now.

That’s assuming they’ve been picking up AI slop as @Truth mentions. Or maybe it was from trawling Reddit.

]]>
By: irox https://hackaday.com/2025/02/23/hackaday-links-february-23-2025/#comment-8102633 Mon, 24 Feb 2025 21:40:09 +0000 https://hackaday.com/?p=759166&preview=true&preview_id=759166#comment-8102633 In reply to Truth.

Understood. The part that I have issues with is “older LLMs are already showing signs of cognitive *decline*”, which implies they were not showing signs earlier, but have somehow “declined”.

If the claim were “cognitive impairment”, this would make sense. Otherwise it comes across as pseudo-science-y clickbait.

]]>
By: Truth https://hackaday.com/2025/02/23/hackaday-links-february-23-2025/#comment-8102334 Mon, 24 Feb 2025 06:08:30 +0000 https://hackaday.com/?p=759166&preview=true&preview_id=759166#comment-8102334 In reply to irox.

The problem is garbage in, garbage out.
Originally, models were trained with human-generated data only; now they are being trained on a mixture of LLM-generated hallucinations and human-generated content. It is a bit like a photocopy of a photocopy of a photocopy of a photocopy of a …. of a half-remembered dream. It may look OK at a cursory glance, but eventually you realize it is not.
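The photocopy effect described above is often called “model collapse”: when each generation of a model is trained on the previous generation’s output, diversity in the data shrinks. A minimal sketch of the idea (a toy stand-in using bootstrap resampling, not an actual LLM training loop):

```python
import random

random.seed(42)

# Start with a fully diverse "human-written" corpus: 1000 distinct items.
corpus = list(range(1000))
diversity = [len(set(corpus))]

# Each generation, the "model" is just the empirical distribution of the
# previous corpus; generating new training data = sampling with replacement.
# Anything the model failed to reproduce is gone for all later generations.
for generation in range(10):
    corpus = random.choices(corpus, k=len(corpus))
    diversity.append(len(set(corpus)))

# Distinct items can never increase, and roughly a third vanish each round.
print(diversity)
```

The distinct-item count falls sharply in the first few generations and can only stay flat or shrink afterwards, which is the “copy of a copy” loss the comment describes.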

]]>
By: SparkyGSX https://hackaday.com/2025/02/23/hackaday-links-february-23-2025/#comment-8102300 Mon, 24 Feb 2025 00:58:01 +0000 https://hackaday.com/?p=759166&preview=true&preview_id=759166#comment-8102300 The older the model, the worse it performed? So you mean they are getting better, since the newer models perform better?

Now if the newer models performed worse, that would be news! I would expect that to happen at some point, if they don’t find a way to stop inbreeding the models.

]]>
By: irox https://hackaday.com/2025/02/23/hackaday-links-february-23-2025/#comment-8102294 Mon, 24 Feb 2025 00:20:54 +0000 https://hackaday.com/?p=759166&preview=true&preview_id=759166#comment-8102294 The LLM cognitive decline stuff just seems like pseudo-science clickbait. I highly doubt there is any real data that would support LLM degradation over time… I wish HaD would pick it apart rather than just passing it on. Using the excuse that the test is used incorrectly on humans to justify promoting bad science deserves a big eye roll.

]]>