cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes now provide evidence that those claims are overblown and unlikely ever to come to fruition. Their findings are published in Computational Brain & Behavior today.

  • @ContrarianTrail

    > A chess engine is intelligent in one thing: playing chess

    No. That’s not how the adjective “intelligent” works, outside of marketing drivel, of course (“intelligent washing machine”, etc.).

    > Artificial General Intelligence (AGI) is the artificial version of human cognitive capabilities

    Can you give a definition of “intelligence” or “human cognitive capabilities” that would allow us to somehow unequivocally establish that “X is intelligent” or “X has human cognitive capabilities”?

    • lightstream@lemmy.ml

      You’re right that we need a clear definition of intelligence if we are to make any predictions about achieving AGI. The researchers behind this article appear to mean “human-level cognition”, which doesn’t seem to be a particularly objective or useful yardstick. To begin with, which human are we talking about? If they mean an idealised, maximally intelligent human, then I don’t think we should be surprised that we aren’t about to achieve that. The goal is not to recreate human cognition as if that’s some kind of holy grail. The goal is to make intelligent systems that give results at least as good as what a skilled and well-trained human working on the same problem would produce.

      Can I ask you how you would define intelligence? And in particular, how would you - if you would at all - differentiate intelligence from being clever, or from being well educated?

      • @lightstream I wouldn’t, because I am not the one making claims about “AGI” being just around the corner.

        That’s the thing, OpenAI and others benefiting from the hype make extraordinary claims – along the lines of “human-level AGI is just around the corner” – so they are the ones that need to define their terms.

        You are asking all the right questions here (“which human are we talking about”), the point is that these questions should be answered by those who make such extraordinary claims.

        • lightstream@lemmy.ml

          I certainly am not surprised that OpenAI, Google and so on are overstating the capabilities of the products they are developing and currently selling. Obviously it’s important for the public at large to be aware that you can’t trust a company to accurately describe products it’s trying to sell you, regardless of what the product is.

          I am more interested in what academics have to say, though. I expect them to be more objective and to have more altruistic motivations than your typical marketeer. The reason I asked how you would define intelligence is really just that I find it an interesting area of thought, one that has fascinated me since long before this new wave of LLMs hit the scene. It’s also one that does not have clear answers, and different people will have different insights and perspectives.

          There are several concepts which are often blurred together: intelligence, being clever, being well educated, and consciousness. I personally consider all of these to be separate concepts, and while they may have some overlap, they are nevertheless all very different things. I have met many people with very little formal education who are nonetheless very intelligent. As for AI and LLMs, I believe an LLM does encapsulate some degree of genuine intelligence: they appear to somehow encode a model of the universe in their billions of parameters, and they are able to meaningfully respond to natural language questions on almost any subject. However, an LLM is unquestionably not a conscious being.

    • _NoName_@lemmy.ml

      IIRC, within computer science, which is the field most heavily driving AI design and research forward, an ‘intelligent agent’ is essentially defined as any ‘agent’ that takes external stimuli from a collection of sensors in some form of environment, processes those stimuli in a dynamic fashion (one of the criteria, IIRC, is a branching decision tree based on the stimuli), and then applies the result of that processing to a collection of effectors in the environment.

      Yes, this definition is an extremely low bar and includes a massive amount of code, software and scripts. It also includes basic natural intelligences such as worms, ants, amoebae, and even viruses. Some of Theo Jansen’s Strandbeests are one example of purely mechanical AI.
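
      To make that concrete, here’s a toy sketch of that sense-process-act loop (all the names and the light-level example are my own, invented for illustration; this isn’t from any particular textbook or library):

      ```python
      # Minimal "intelligent agent" loop per the CS definition above:
      # read stimuli from sensors, process them through branching
      # decision logic, then drive effectors in the environment.
      # All names here are illustrative.

      def sense(environment):
          """Sensor: read a stimulus from the environment."""
          return environment["light_level"]

      def decide(stimulus):
          """Dynamic processing: a branching decision tree on the stimulus."""
          if stimulus > 0.7:
              return "retreat"
          elif stimulus > 0.3:
              return "wander"
          else:
              return "seek_light"

      def act(environment, action):
          """Effector: apply the chosen action back to the environment."""
          environment["last_action"] = action

      environment = {"light_level": 0.5, "last_action": None}
      act(environment, decide(sense(environment)))
      print(environment["last_action"])  # -> wander
      ```

      Even something this trivial clears the bar, which is exactly the point.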

      • coffeetest@beehaw.org

        “Artificial intelligence” is for the marketing department’s benefit, or at least mainly so. What people envision when they hear the term AI comes from preconceived notions based on science fiction, not from what it actually is.

      • @JayDee so two things.

        First: sure, we can redefine words in any way we want, but then:

        1. talking about “AI” becomes much less interesting if it merely means “walking a decision tree based on data coming from external sensors”

        2. the whole talk about “intelligence” becomes a bait-and-switch, as the conversation started with the term “intelligence” being used in the general sense we tend to apply to people and some higher-order animals.

        • _NoName_@lemmy.ml

          I am not bait-and-switching here. The switchers were the business-minded grifters who made the term synonymous with LLMs and eventually destroyed its meaning completely.

          The definition I gave is from the most popular and widely used CS textbook on AI and has been the meaning used in the field since the early 90s. It’s why videogame NPCs are always called AI: they fit the conventional CS definition, and games were one of the major things the field was about.
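
          To illustrate (a toy sketch of my own, not how any particular game actually implements it): a classic NPC ‘AI’ is often just a small state machine branching on what the NPC senses:

          ```python
          # Toy finite-state-machine NPC: branching decision logic of the
          # kind that counts as AI under the conventional CS definition.
          # States and distance thresholds are invented for illustration.

          def npc_update(state, player_distance):
              """Return the NPC's next state given the sensed player distance."""
              if state == "patrol":
                  return "chase" if player_distance < 10 else "patrol"
              if state == "chase":
                  if player_distance < 2:
                      return "attack"
                  return "patrol" if player_distance > 15 else "chase"
              if state == "attack":
                  return "attack" if player_distance < 2 else "chase"
              return "patrol"

          state = "patrol"
          for distance in [12, 8, 1, 4, 20]:
              state = npc_update(state, distance)
              print(distance, "->", state)  # patrol, chase, attack, chase, patrol
          ```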

          As for your ‘1’: AI is a wide but very specialized field that spans everything from robotics to text autocomplete. If you want the most out of it, you need to get down into the nitty-gritty and really research the field.

          On a separate note: while AI safety, AGI, and the risk of an intelligence explosion are somewhat related to computer science’s pursuit of AI systems, they are currently much more philosophical, and adhere to much vaguer definitions of AI, such as Alan Turing’s.

          • @JayDee I didn’t say you were; I clarified that in my later post. Sorry, I should have been clearer.

            I am vehemently agreeing with you here, in fact.

            The context is the conversation above in the thread, where it was claimed that “AGI” is “pretty inevitable”.

            And the point I’ve been making is:

            1. we don’t have a good definition of what “intelligence” is, in the sense presumably used above;

            2. if we decide to use a somewhat simplistic definition, the whole “AI” issue stops being all that exciting.

            • @JayDee AI as the wide, specialized field you mention makes no claims about building anything with *actual* human-like intelligence, I feel. People who understand how the math and code work in these systems know better than to do that.

              And yes, the “AGI” debate is a philosophical one. The problem is that it is not recognized as such, because of the AI hype. People seem to think that AGI is “inevitable” and “just around the corner” because salespeople from companies that benefit from that hype say so.

              • _NoName_@lemmy.ml

                Alright, I see what you’re saying now. We’re on the same page.

                As an additional point regarding AGI, it’s worth noting that ‘human-level’ and ‘human-like’ are importantly distinct when talking about this topic.

                In reality, if an AGI is ever created, it will most likely not be human-like at all. Humans think the way we do because of evolutionary conditioning for survival, a history an AGI will not share. One example given by Robert Miles is a staple-making machine becoming an ASI: it would essentially exist solely to make as many staples as it could with its hyperintelligence.

                We mean to say that such an AGI is a ‘human-level’ intelligence in that it can learn to use abstractions and tools, can function in a wide variety of environments without intervention or training, and can learn in real time.

                Obviously, these criteria show just how far away we are from achieving anything of the sort right now. These concepts are very vague, and the arguments for each one’s impossibility or inevitability are equally vague and philosophical. It’s still mostly just stuffy academics arguing with each other.

                One statement I agree with, though, comes from the AI safety collective: We don’t know what we’re doing, and we should really sort that out. If any of this is actually possible and we accidentally make an AGI/ASI before having any failsafes or contingencies, it could be very bad.