Discussion about this post

rakkuroba

This is a great read! Thank you for the practical advice.

To be clear, though: LLMs as currently constituted will never completely overcome the hallucination problem. They lack a coherent internal model of the world, have no intuitive understanding of logic, and frequently draw the wrong conclusions even when presented with factual information.

It’s an inherent limitation of even the most advanced LLMs, and there is no obvious solution (other than entirely new architectures, such as world models).

I recommend checking out Gary Marcus for more on this.

Angelique C. Kamsteeg

This is such great advice!!

I’ve always considered ChatGPT a great tool to brainstorm with, but now I’m realising I’ve been depriving myself of what makes my work and thinking unique.

Thank you for providing your framework! I’ll be integrating this into my workflow :)

9 more comments...
