The TikTok ban and Donald Trump's rise to power show how fragile our social media accounts are. We must normalize and invest in decentralized social media.
Agreed. But we need a solution against bots just as much. There’s no way the majority of comments in the near future won’t just be LLMs.
Closed instances with vetted members, there’s no other way.
Too high of a barrier to entry is doomed to fail.
It’s how most large forums ran back in the day and it worked great. Quality over quantity.
@a1studmuffin @ceenote the only reason these massive Web 2.0 platforms achieved such dominance is because they got huge before governments understood what was happening and then claimed they were too big to follow basic publishing law or properly vet content/posters. So those laws were changed to give them their own special carve-outs. We’re not mentally equipped for social networks this huge.
Programming.dev does this and is the tenth largest instance.
Techy people are a lot more likely to jump through a couple of hoops for something better, compared to your average Joe who isn’t even aware of the problem
Techy people are a lot more likely to jump through hoops because that knowledge/experience makes it easier for them, because they understand it's worthwhile, or because it's fun. If software can be made easier for non-techy people with no downsides, then of course that ought to be done.
Ok, now tell the Linux people this.
It's not always obvious, or easy, to build what non-techies will find easy. Changes could unintentionally make the experience worse for long-time users.
I know people don't want to hear it, but can we expect non-techies to meet techies halfway by leveling up their tech skill tree a bit?
In order to charge her iPhone, my mom first turns on airplane mode, and THEN she powers it down. Turns it off completely. I asked her why she does any of that. She said, "Because they won't flip the charge switch for me until they do! I wish I could take the battery out first, and THEN turn off the phone. But I suppose if they can't see my battery with airplane mode on first, this is just as good."
And you want this woman to learn terminal?
The 10th largest instance has like 10k users… we're talking about the need for a solution that can pull the literal billions of users away from mainstream social media.
There isn't a solution. People don't want to pay for something that costs huge resources, so their attention becoming the product that's sold is inevitable. They also want to doomscroll slop; it's mindless and mildly entertaining, the same way tabloid newspapers were massively popular before the internet and gossip mags still exist despite being utter horseshite. It's what people want.

Truly fighting it would require huge benevolent resources: a group willing to finance a manipulative and compelling experience and then not exploit it for ad dollars, pushing educational things instead or something. Facebook, Twitter etc. are enshittified, but they still cost huge amounts to run. And for all their faults, at least they're a single point where illegal material can be tackled. There isn't a proper equivalent for this in decentralised solutions once things scale up.

It's better that free, decentralised services stay small so they can stay under the radar of bots and bad actors. When things do get bigger, gated communities probably are the way to go. Perhaps until there's a social media not-for-profit that's trusted to manage identity, one that people don't mind contributing costs to. But that's a huge undertaking. One day, hopefully…
We have a human-vetted application process too, and that's why there are rarely any bots or spam accounts originating from our instance. I imagine it's a similar situation for programming.dev. It's just not worth the tradeoff to have completely open signups, imo. The last thing Lemmy needs is a massive influx of Meta users from Threads, Facebook or Instagram, or from shitter. Slow, organic growth is completely fine when you don't have shareholders and investors to answer to.
If you could vet members in any meaningful way, they’d be doing it already.
Most instances are wide open to the public.
A few have registration requirements, but it's usually something banal like "say 'I agree' in Spanish to prove your Spanish is good enough for this instance", etc.
This is a choice any instance can make if it wants. None are doing it right now, but that doesn't mean they can't, or that it doesn't work.
Reputation systems. There is tech that solves this but Lemmy won’t like it (blockchain)
You don't need blockchain for reputation systems, lol. Stuff like Gnutella and the PGP web of trust has been around forever. Admittedly, a blockchain can add barriers for some attacks, mainly Sybil attacks, but a friend-of-a-friend/WoT network structure can mitigate that somewhat too.
Slashdot had this 20 years ago. So you're right, this is not new or in need of some new technology.
The space is much more developed now. You'd need ever-improving, dynamic proof-of-personhood tests.
I think a web-of-trust-like network could still work pretty well, where everyone keeps their own view of the network and their own view of reputation scores. I.e. don't friend people you don't know; unfriend people who you think are bots, or people who friend bots, or just people you don't like. Just looked it up, and Wikipedia calls these kinds of mitigation techniques "social trust graphs": https://en.wikipedia.org/wiki/Sybil_attack#Social_trust_graphs . Retroshare kinda uses this model (but I think reputation there is a hard binary, not a score).
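To make the "everyone keeps their own view of reputation" idea concrete, here's a toy sketch (not anything Lemmy or Retroshare actually implements; the graph, names, and decay factor are all made up): trust yourself fully, discount trust by a fixed factor per friendship hop, and ignore anyone beyond a few hops.

```python
# Toy sketch of a personal web of trust: each user derives their own
# trust scores purely from who *they* friend, with trust decaying per hop.
from collections import deque

def personal_trust(graph, me, decay=0.5, max_hops=3):
    """Breadth-first walk outward from `me`, multiplying trust by
    `decay` each hop; strangers beyond `max_hops` get no score at all
    (the 'don't trust people you don't know' rule)."""
    trust = {me: 1.0}
    frontier = deque([(me, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for friend in graph.get(user, ()):
            score = trust[user] * decay
            if score > trust.get(friend, 0.0):  # keep the best path
                trust[friend] = score
                frontier.append((friend, hops + 1))
    return trust

friends = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["spambot9000"],  # bob has questionable taste in friends
    "carol": ["dave"],
}
scores = personal_trust(friends, "me")
# Unfriending bob removes spambot9000 from my view of the network entirely.
```

The key property is the one described above: the bot only shows up in my view at all because bob vouched for it, and cutting bob off (or lowering `decay`) prunes that whole branch.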
I don't see how that stops bots, really. We're post-Turing test. In fact, they could even scan previous reputation-point allocations there and devise a winning strategy pretty easily.
I mean, "don't friend or put high trust on people you don't know" is pretty strong. Due to the "six degrees of separation" phenomenon, it scales pretty easily as well. If you have stupid friends who friend bots, you can cut them all off, or just lower your trust in them.
"Post-Turing" is pretty strong. People who've spent much time interacting with LLMs can easily spot them. For whatever reason, they all seem to have similar styles of writing.
Know IRL? Seems like it would inherently limit discoverability and openness. New users, or those outside the immediate social graph, would face significant barriers to entry, and it's still vulnerable to manipulation, such as bots infiltrating through unsuspecting friends or malicious actors leveraging connections to gain credibility.
Not the good ones; many conversations online are fleeting. Those tell-tale signs can be removed with the right prompt and context. We're post-Turing in the sense that in most interactions online, people wouldn't be able to tell they were speaking to a bot, especially if they weren't looking, which most aren't.
Do you have a proof of concept that works?
https://docs.ergoplatform.com/eco/reputation-system/
We also need a solution to fucking despot mods and admins deleting comments and posts left and right because they don't align with their personal views.
I've seen it happen to me personally across multiple Lemmy domains (I'm a moron and don't care much to have empathy in my writing, and it sets off these limp-wrist, morbidly obese mods/admins, who delete my shit and ban me), and it happens to many other people as well.
lemm.ee and lemmy.dbzer0.com both seem like very level-headed instances. You can say stuff even if the admins disagree with it, and it’s not a crisis.
Some of the other big ones seem some other way, yes.
Lemm.ee hasn't booted me yet? Much like OP, I'm not the most empathetic person, and if I'm annoyed, then what little filter I have disappears.
Shockingly, I might offend folks sometimes!
Yeah you can go fuck yourself for pinning your flavor of bullshit on ADHD. Take some accountability for your actions.
So much irony in this one
Good job chief 🤡
Freedom of expression does not mean freedom from consequences. As someone who loves to engage in trolling for a laugh online, I can tell you that if you get banned for being an asshole, you deserve it. I know I have.
Who is the asshole here?
That tells me all I need to know
Yes
I mentioned this in another comment, but we need to somehow move away from free form text. So here’s a super flawed makes-you-think idea to start the conversation:
Suppose you had an alternative kind of Lemmy instance where every post has to include both the post itself, like normal, and a "Simple English" summary of your own post (like, using only the "ten hundred most common words" Simple English). If your summary doesn't match your text, that's bannable. (It's a hypothetical; just go with me on this.)
Now you have simple text you can search against, use automated moderation tools on, and run scripts against. If there’s a debate, code can follow the conversation and intervene if someone is being dishonest. If lots of users are saying the same thing, their statements can be merged to avoid duplicate effort. If someone is breaking the rules, rule enforcement can be automated.
Ok, so obviously this idea as written can never work. (Though I love the idea of brand-new users only being allowed to post in Simple English until they're allow-listed, to avoid spam, but that's a different thing.) But the essence and meaning of a post can be represented in some way. Analyze things automatically with an LLM, make people diagram their sentences like in English class, I don't know.
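The mechanical half of the hypothetical above (checking that a summary stays inside the allowed vocabulary) is at least easy to automate. A rough sketch, where the tiny `ALLOWED` set is a stand-in for a real "ten hundred most common words" list that this sketch doesn't ship:

```python
# Sketch of the "Simple English summary" gate: reject any summary
# that uses a word outside the allowed vocabulary.
import re

# Placeholder vocabulary; a real gate would load the full word list.
ALLOWED = {
    "the", "a", "is", "not", "good", "bad", "people", "want",
    "money", "this", "site", "make", "free", "for", "all",
}

def summary_ok(summary):
    """True if every word in `summary` is on the allowed list."""
    words = re.findall(r"[a-z']+", summary.lower())
    return all(word in ALLOWED for word in words)

summary_ok("this site is not free for all people")   # passes the gate
summary_ok("the enshittification is accelerating")   # rejected
```

Of course, this only enforces the vocabulary; checking that the summary actually *matches* the full post is the hard part the comment above hand-waves toward an LLM.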
A bot can do that and do it at scale.
I think we are going to need to reconceptualize the Internet and why we are on here at all.
It's already practically impossible to stop bots, and in a very short time it'll be completely impossible.
Instances that don’t vet users sufficiently get defederated for spam. Users then leave for instances that don’t get blocked. If instances are too heavy handed in their moderation then users leave those instances for more open ones and the market of the fediverse will balance itself out to what the users want.
I wish this was the case, but the average user is uninformed and can't be bothered to leave. Otherwise the bigger service would be Lemmy, not Reddit.
Just like classical macroeconomics, you make the deadly (false) assumption that users are rational and will make the choice that’s best for them.
The sad truth is that when Reddit blocked 3rd party apps, and the mods revolted, Reddit was able to drive away the most nerdy users and the disloyal moderators. And this made Reddit a more mainstream place that even my sister and her friends know about now.