X (Twitter)
God of Prompt (@godofprompt)
MIT researchers discovered a phenomenon called "context pollution": LLMs get WORSE by reading their own prior responses. Errors, hallucinations, and stylistic artifacts from earlier turns propagate forward because the model treats its own output as ground truth, and removing that history fixes it 🤯
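The tweet doesn't name an API or implementation, so the following is a minimal, hypothetical sketch of the mitigation it describes: before each new model call, drop earlier assistant turns from the context so the model can't treat its own prior output as ground truth. The `build_context` helper and the message format are assumptions for illustration, not anything from the original claim.

```python
# Hypothetical sketch of "removing that history": filter out prior
# assistant turns before sending the conversation to a chat model.
def build_context(history, keep_model_turns=False):
    """Return the message list to send to a (hypothetical) chat model.

    history: list of {"role": ..., "content": ...} dicts.
    When keep_model_turns is False, earlier assistant replies are
    dropped, so only user turns carry forward between calls.
    """
    if keep_model_turns:
        return list(history)
    return [m for m in history if m["role"] != "assistant"]


history = [
    {"role": "user", "content": "Summarize the report."},
    {"role": "assistant", "content": "The report says X."},  # may be wrong
    {"role": "user", "content": "Now list its key figures."},
]

polluted = build_context(history, keep_model_turns=True)  # 3 messages
clean = build_context(history)                            # 2 user messages
```

In the `clean` variant, the (possibly hallucinated) assistant summary never re-enters the prompt, which is the effect the tweet attributes to removing history.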