• traches@sh.itjust.works
    2 months ago

    They don’t have to, algorithms do whatever they are designed to do. Long division is an algorithm.
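To make the point concrete: long division really is just an algorithm, a fixed procedure that does exactly what it was designed to do, nothing more. A minimal sketch (function name and digit-by-digit approach are my own illustration, not anything from the thread):

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division: returns (quotient, remainder).

    A plain algorithm; it does exactly what it was designed to do.
    """
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        # Bring down the next digit, then divide, just as on paper.
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder
```

The procedure has no intentions; whoever picks the inputs and goals does.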

    Profit motives are the issue here.

  • wizardbeard@lemmy.dbzer0.com
    2 months ago

    Wasn’t this literally the shady research that Facebook got caught doing with Cambridge Analytica? Specifically tweaking a user’s feed to be more negative resulted in that user posting more negative things themselves and more engagement overall.

    • TheReturnOfPEB@reddthat.com
      2 months ago

      I wonder exactly how much of Hawaii Zuckerberg has to own before people start to question what they are getting from facebook.

    • sp3ctr4l@lemmy.zip
      2 months ago

      Yep!

      Facebook figured out how to monetize trolling.

      Over 10 years later, it’s destroyed society, but made them a lot of money.

  • the post of tom joad@sh.itjust.works
    2 months ago

In this context, “algorithm” is just a convenient term hiding the intentional right-wing radicalization of users to push them towards pro-business policies, so can we please call this out more often?

I’m quite tired of “algorithm” standing in for the intentions of the owners who write and maintain it.

    It was also an “algorithm” that inflated rent around the country, right?

    An algorithm, yes. Written with the intention of inflating rent.

    It’s not an accident. Algorithm my hair-hole

  • nehal3m@sh.itjust.works
    2 months ago

    The old thread I posted this in was deleted, but I wrote this:

    Okay so hear me out. I have this pet theory that might explain some of the divide between genders, but also political parties, causing paralysis which ultimately might lead to humanity’s extinction. Forgive me if I’m stating the obvious.

    I’m going to set up two axioms to arrive at an extrapolated conclusion.

    One: Human psychology tends to ascribe more weight to negative things than positive things in the short term. In the long term this generally balances out, but in the short term it’s more prudent in a biological sense to pay attention to the rustling in the bushes than the berries you might pick from them. This is known as the negativity bias.

    Two: The modern gatekeepers of social interaction, Big Tech, employ blind algorithms that attempt to steer your attention towards spending more time on their platforms. These companies are the arbiters of the content we experience daily and what you do and don’t see is mostly at their discretion. The techniques they employ, in simple terms, are designed to provoke what they call ‘engagement’. They do this because at the end of the day FAANG have not only a financial interest, but a fiduciary duty to sell advertisements at the behest of their shareholders. The more they can engage you, the more ads they can sell. They employ live A-B testing, divide people into cohorts and poke and prod them with psychological techniques to try and glue your eyeballs to their ads.
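The A/B testing and cohort-splitting mentioned above can be sketched in a few lines. This is a generic illustration of how deterministic cohort assignment is commonly done, not Facebook's actual code; the function and experiment names are made up:

```python
import hashlib

def assign_cohort(user_id, experiment, n_cohorts=2):
    """Hash a user into a stable bucket for an experiment.

    The same user always lands in the same cohort, so the platform can
    compare engagement between feed variants over time.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_cohorts
```

Each cohort then gets a different feed variant, and whichever variant keeps eyeballs longer wins, regardless of what it does to the people in the bucket.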

    Extrapolated conclusion: These companies have a financial and legally binding interest to divide the population against itself, obstructing politics and social interaction to the point where we might not be able to achieve any of the goals that we need to reach to prevent oblivion.

    Thank you for coming to my TED Talk.

    • sp3ctr4l@lemmy.zip
      2 months ago

      I don’t even think this is controversial in any way, in fact I used to assume this was just common knowledge after Cambridge Analytica…

I permanently, totally deleted my FB presence when that came out… but everyone I explained this to (basically what you’ve just explained) thought I was insane, or overreacting and paranoid.

It’s simple.

      Engagement, usage, time on platform is being optimized for.

      What drives these things most effectively?

      Hatred, outrage, extremely offensive and divisive things.

      … And they know that they can, through exposing people to such things, make said people more extreme and hateful and anxious and depressed.

      So… from an ‘optimize for platform usage’ standpoint… perfect! It’s a reinforcing loop!

      Zuckerberg stated at one point that his goal with Facebook was to be able to profile (and manipulate, but he didn’t say that part) users so well that he’d be able to predict what they’d post next.

      He really did/does just view all social interaction as a very complex problem that can be ‘solved’, like a physics question can be solved, to make a predictive model.

They literally know that their business model is to ruin social discourse, ruin people’s mental health and their lives, and polarize society.

      It should not be surprising in any way that, well now society is extremely polarized and mentally ill.

  • MouseKeyboard@ttrpg.network
    2 months ago

    For a long time Facebook counted an angry react as equal to five likes for measuring engagement. It’s very much intentional.
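That reported 5:1 weighting is easy to express as a scoring rule. A toy sketch, assuming the weights described above (the dictionary and function are my own illustration, not Facebook's implementation):

```python
# Hypothetical reaction weights, per the reporting that an angry react
# counted as five likes when ranking posts by engagement.
REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "wow": 1, "sad": 1, "angry": 5}

def engagement_score(reactions):
    """Sum weighted reaction counts for a post.

    Under these weights, a post that angers people outranks one that
    pleases them, react for react, by a factor of five.
    """
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())
```

With a rule like this, the ranking system doesn't need to "want" outrage; outrage simply scores higher.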

  • iorale@lemmy.dbzer0.com
    2 months ago

    Old man rant:

Back in the day, forum mods punished those who engaged along with the troll. The golden rule was don’t feed the troll, and it was enforced.
When moderation was taken away from users and small communities, trolls found new protection: their bait was rewarded, and they could easily get what they craved, a reaction, while spreading their hate.

    I’d say we should apply something along those lines, ignore the trolls and the propaganda, just report them on sight.

  • DiogenesOfMiami@lemmy.ml
    2 months ago

    I disagree with OP’s editorialized title.

    As an avid video gamer, I find myself constantly encountering subtle and overt bigotry in most online games I play. I will always call them out for it, no matter how much whooping it incites from kids just eating their popcorn and enjoying the fight.

    Ignoring them is how you let the Andrew Tates of the world win, because they’re certainly not taking the high road by remaining silent about their beliefs.

  • ristoril_zip@lemmy.zip
    2 months ago

What I don’t get about this is why, in this day and age with all the analytics tools we have, companies continue to just happily pay for simple eyeball exposure.

    The only time they seem to have any pause at all on this model is if people post screenshots of ads for their products next to posts literally praising Nazis.

    These so called AIs (LLMs) can learn to tell the difference between positive/happy/uplifting posts, neutral posts, and angry/sad/disturbing posts. The advertisers should be asking for their products to be featured next to the first and second groups of posts.
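The brand-safety filter being suggested here can be sketched simply. This is a toy illustration, not any real ad platform's API: `classify_sentiment` is a stand-in keyword heuristic where a real system would use a trained model.

```python
def classify_sentiment(text):
    """Stand-in classifier: flag obviously hostile posts.

    A real system would use a trained sentiment model; this keyword
    check only illustrates the gating logic.
    """
    negative = {"hate", "outrage", "disgusting", "nazi"}
    return "negative" if any(w in text.lower() for w in negative) else "acceptable"

def eligible_ad_slots(posts):
    """Keep only posts an advertiser would want to appear next to."""
    return [p for p in posts if classify_sentiment(p) != "negative"]
```

If advertisers demanded this, "engaging but toxic" posts would lose ad inventory, which is exactly why platforms have little incentive to offer it unprompted.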

    People engage based on anger, sure. They click posts and reply and whatnot. But do they click the ad next to a post that pisses them off and then buy the product?

    Or is this purely a subconscious intrusion effort? Do the advertisers just want their products in front of eyeballs regardless of what’s around the ad? It seems like the answer is “no” when they’re called out. But maybe it’s “yes” if they can get away with it?

  • AVincentInSpace@pawb.social
    2 months ago

    …as opposed to platforms like Lemmy, where the only political ideologies you’ll find are “leftists” who, when asked what they even believe, respond with “what are you, a cop?”

  • pelespirit@sh.itjust.works
    2 months ago

They did a study around the 2020 elections and found the following works with trolls:

Respond once with the facts (if you must), and then walk away. I’ve found Lemmy mostly doesn’t need that; just downvoting seems to work. But if you’re on the place that shall not be named, this works.

  • RememberTheApollo_@lemmy.world
    2 months ago

    I’ve been participating in Threads (yeah, I know, should be ashamed) and I’m unfortunately a sucker for some of the ragebait, especially political.

    Guess what Threads pushes at me. A lot of the dumbest ragebait. Not people that actually want to have a conversation. My fault for being a sucker, but the algorithms work.

    Doesn’t really matter, I’m shadowbanned. Pissed off too many republican propagandists by refuting them, so as usual, the “report” button is their remedy.

  • RGB@lemmy.today
    2 months ago

Algorithms simply determine which posts will get the most interaction and feed them to people. Does it benefit corps? Of course! But it’s driven by people who choose to engage with this content.