Hacker News story: Ask HN: Are past LLM models getting dumber?

I’m curious whether others have observed this or if it’s just perception or confirmation bias on my part. I’ve seen discussion on X suggesting that older models (e.g., Claude 4.5) appear to degrade over time — possibly due to increased quantization, throttling, or other inference-cost optimizations after newer models are released. Is there any concrete evidence of this happening, or technical analysis that supports or disproves it? Or are we mostly seeing subjective evaluation without controlled benchmarks?
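
For anyone wanting to move beyond anecdotes, one way to get a controlled signal would be to re-run the same fixed, answer-keyed prompt set against the same pinned model version at regular intervals and log accuracy over time. Below is a minimal sketch of that idea; the `query_model` function is a placeholder for whatever provider API you use, and the prompts and scoring are illustrative assumptions, not a real benchmark:

```python
import json
import datetime

# Fixed prompt set with known answers (illustrative examples only).
# Re-using the exact same prompts on every run is what makes the
# comparison controlled rather than anecdotal.
BENCHMARK = [
    {"prompt": "What is 17 * 24? Reply with only the number.", "answer": "408"},
    {"prompt": "Spell 'necessary' backwards. Reply with only the word.", "answer": "yrassecen"},
]

def query_model(prompt: str) -> str:
    """Placeholder: call your provider's API here with a pinned model
    version and temperature 0 so runs stay comparable over time."""
    raise NotImplementedError

def run_benchmark() -> dict:
    correct = 0
    for item in BENCHMARK:
        reply = query_model(item["prompt"]).strip()
        if reply == item["answer"]:
            correct += 1
    return {
        "date": datetime.date.today().isoformat(),
        "accuracy": correct / len(BENCHMARK),
        "n": len(BENCHMARK),
    }

if __name__ == "__main__":
    # Append one record per run; a downward accuracy trend on an
    # unchanged prompt set would point to real degradation rather
    # than perception or confirmation bias.
    with open("benchmark_log.jsonl", "a") as f:
        f.write(json.dumps(run_benchmark()) + "\n")
```

A log like this wouldn't tell you *why* quality changed (quantization, throttling, or something else), but it would at least separate measurable drift from subjective impressions.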


