• CanadaPlus@lemmy.sdf.org · 3 months ago

    > But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    There’s no mechanism in LLMs that allows for anything. It’s a black box. Everything we know about them is empirical.

    > LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

    It’s a lot like a brain. A small, unidirectional brain, but a brain.

    > LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    I’ll bet you a month’s salary that this guy couldn’t explain said math to me. Somebody just told him this, and he’s extrapolated way more than he should from “math”.

    I could possibly implement one of these things from memory, given the weights. Definitely if I’m allowed a few reference checks.
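    For what it’s worth, here’s roughly what I mean. This is a toy sketch in NumPy, not any particular model’s actual code: one causal self-attention layer turning a token sequence into a probability distribution over the next token. Every dimension and weight below is a made-up placeholder, and a real LLM stacks many such layers with multiple heads, feed-forward blocks, and normalization, but the core math is about this shape.

    ```python
    # Toy sketch of "a mathematical model of language tokens":
    # one causal self-attention layer -> next-token distribution.
    # All weights are random placeholders, not a trained model.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, d_model = 50, 16   # hypothetical, tiny dimensions

    embed = rng.normal(size=(vocab_size, d_model))  # token embeddings
    W_q = rng.normal(size=(d_model, d_model))       # query projection
    W_k = rng.normal(size=(d_model, d_model))       # key projection
    W_v = rng.normal(size=(d_model, d_model))       # value projection
    W_out = rng.normal(size=(d_model, vocab_size))  # output projection

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def next_token_distribution(tokens):
        """Given token ids, return a probability distribution over the next token."""
        x = embed[tokens]                       # (T, d_model)
        q, k, v = x @ W_q, x @ W_k, x @ W_v
        scores = q @ k.T / np.sqrt(d_model)     # (T, T) attention scores
        # "Unidirectional": each position can only attend to itself and the past.
        mask = np.triu(np.ones_like(scores), k=1).astype(bool)
        scores[mask] = -np.inf
        attended = softmax(scores) @ v          # (T, d_model)
        logits = attended[-1] @ W_out           # logits for the next token
        return softmax(logits)

    probs = next_token_distribution(np.array([3, 14, 1, 5]))
    print(probs.argmax(), probs.max())          # most "plausible" next token id
    ```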


    Okay, this article is pretty long, so I’m not going to read it all. But it’s not just in front of naive audiences that LLMs seem capable of complex tasks; measured scientifically, there’s still a lot there. I get the sense the author’s conclusion was a motivated one.