Discussion about this post

rakkuroba

This is a great read! Thank you for the practical advice.

But to be clear, LLMs as currently constituted will never fully overcome the hallucination problem. They lack a coherent internal model of the world, have no intuitive grasp of logic, and frequently draw the wrong conclusions even when presented with factual information.

It’s an inherent limitation of even the most advanced LLMs, and there is no obvious solution (other than entirely new architectures such as world models).

I recommend checking out Gary Marcus for more on this.

Ciara Maclean

What a refreshing read. Thank you!

10 more comments...
