Discussion about this post

Geoff Gallinger

Good insight. I’ve arrived at the same conclusion (that LLMs are better at solving known problems than new ones) just by messing with AI for my personal projects.

Sometimes I actually lean into it: I prompt Claude to “prioritize selecting a solution that would be preferred by 99% of world class developers.”

Prompt Engineering is not as important as it was in the GPT 3.5 days, but it still helps to narrow the window that the LLM calculates probability from. And narrowing it to the middle of the bell curve, where the most “battle tested” solutions live, seems to be especially effective.
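The steering trick described above can be wired in as a system prompt. Here is a minimal sketch that just assembles the request payload; the model name, the helper name `build_request`, and the exact wording of the steering line are illustrative assumptions, not a tested recipe:

```python
# Sketch: steer the model toward well-established, "middle of the bell
# curve" solutions by attaching a fixed instruction as the system prompt.
# The model name and steering sentence below are assumptions.

STEERING = (
    "Prioritize selecting a solution that would be preferred by "
    "99% of world class developers."
)

def build_request(task: str, model: str = "claude-sonnet-4") -> dict:
    """Assemble a chat-style request dict with the steering line as the
    system prompt and the user's task as the only message."""
    return {
        "model": model,
        "system": STEERING,
        "messages": [{"role": "user", "content": task}],
    }
```

The same dict shape works with most chat-completion APIs; the point is simply that a short, fixed system instruction narrows the distribution the model samples from.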

Scott

Almost all of those LLMs will write you a perfectly working Zig implementation if you give them the context from the Zig reference manual, meaning you download it from the official source and attach it to your prompt. I do this with Pinescript 6 and it works perfectly fine with any long-context LLM. I'm not trying to defend AI or anything like that, but sometimes you just need to add a little extra knowledge when it comes to newer stuff, and the model makes fewer mistakes because it's referencing documents in its context window as opposed to the huge ball of information stored in its training data.
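The workflow in this comment (download the official docs, then attach them ahead of the question) can be sketched as a small helper. The function name, file path, and delimiter convention here are assumptions for illustration:

```python
from pathlib import Path

def prompt_with_docs(question: str, docs_path: str) -> str:
    """Prepend locally downloaded reference documentation to a question,
    so the model answers from the attached text rather than relying only
    on its training data. The tag-style delimiters are an illustrative
    convention, not a requirement of any particular API."""
    docs = Path(docs_path).read_text(encoding="utf-8")
    return (
        "<reference_docs>\n"
        f"{docs}\n"
        "</reference_docs>\n\n"
        f"Using the reference above where it applies, answer: {question}"
    )
```

The resulting string goes into the user message of whatever long-context model you're using; the only requirement is a context window large enough to hold the manual.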
