I low-key wish there were a separate AI leaderboard. It would be really interesting to see how fast bots can actually solve a problem as soon as it goes up, and it’d be nice to compare that to last year.
Honestly, I’d be very surprised if AI could even solve those problems.
The only AI I’ve heard of that might eventually solve such problems is OpenAI’s rumored Q* project, and that one isn’t public yet. The publicly available AI tools can only repeat things they have already seen online (with a rather large chance of getting it wrong, given their lossy nature). So unless the riddles exist somewhere online in a reasonably similar form, I’d expect the chatbots to fail at solving them.
(They can, however, help a human developer solve them quicker than the developer could without AI assistance.)
Whoever makes the Advent of Code problems should test them all on GPT-4 and other LLMs, and try to make it so the AI can’t solve them. A rough sketch of what that batch-testing could look like is below.
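Not the AoC team, but here’s a minimal sketch of one such test, using the OpenAI Python SDK. The puzzle text, sample input, and expected answer are placeholders, and the prompt and model choice are just assumptions, not anything AoC actually does:

```python
# Minimal sketch: feed one puzzle to an LLM and check its answer against
# the known answer for the worked example. All-caps values are placeholders.
from openai import OpenAI

PUZZLE_TEXT = "..."      # placeholder: the puzzle statement
SAMPLE_INPUT = "..."     # placeholder: the example input from the statement
EXPECTED_ANSWER = "..."  # placeholder: the known answer for that example

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm_attempt(puzzle_text: str, puzzle_input: str, model: str = "gpt-4") -> str:
    """Ask the model for the final answer to one puzzle, as plain text."""
    prompt = (
        "Solve this programming puzzle. Reply with only the final answer.\n\n"
        f"{puzzle_text}\n\nInput:\n{puzzle_input}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

answer = llm_attempt(PUZZLE_TEXT, SAMPLE_INPUT)
print("solved by the LLM:", answer == EXPECTED_ANSWER)
```

One run per puzzle wouldn’t be conclusive, though; since outputs vary between samples, you’d want several attempts per model before calling a problem “AI-proof.”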
That class of problems is small and shrinking, and restricting AoC to it would block newer programmers from entering.
Is it, though? Has anyone tried with some of the past problems?
I hope AoC is niche enough that the community won’t use AI before the leaderboards are filled.
It would be interesting to compare this year’s times with those from previous years and see if there is a trend.
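A minimal sketch of that comparison, assuming you’ve already scraped the top leaderboard solve times (in seconds) for each year; the numbers below are made-up placeholders, not real leaderboard data:

```python
# Compare per-year solve-time distributions for the same puzzle day.
# A sharp drop in one year's median would hint at AI-assisted solving.
from statistics import median

times_by_year = {
    2021: [812, 940, 1105, 1320],  # placeholder top-N solve times, day 1
    2022: [790, 901, 1043, 1288],
    2023: [305, 412, 560, 730],
}

for year, times in sorted(times_by_year.items()):
    print(f"{year}: median solve time {median(times)}s over {len(times)} entries")
```

Medians are probably a better yardstick than the single fastest time, since one outlier bot (or speed-coder) would skew a minimum badly.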