Essentially, yes. Great point! I think it needs more features to function more like a social network (transitive topic-based sharing, for one)
Hah, I designed one as well!
I think the flow of information has to be fundamentally different.
In mine, people only receive data directly from people they know and trust in real life. This makes scaling easy, and makes it impossible for centralized entities to broadcast propaganda to everyone at once.
I described it at freetheinter.net if you’re interested
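A minimal sketch of that direct-trust flow (names like Node, Post, and receive are hypothetical, not taken from the actual freetheinter.net design): each participant accepts content only when it comes straight from someone they’ve marked as trusted in real life and who authored it, so a centralized broadcaster has no path to everyone at once.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str

@dataclass
class Node:
    name: str
    trusted_peers: set[str] = field(default_factory=set)  # people known in real life
    inbox: list[Post] = field(default_factory=list)

    def trust(self, peer: str) -> None:
        """Record a real-life trust relationship."""
        self.trusted_peers.add(peer)

    def receive(self, post: Post, sender: str) -> bool:
        """Accept a post only if it arrives directly from a trusted peer
        who is also its author (no relaying of third-party content)."""
        if sender in self.trusted_peers and post.author == sender:
            self.inbox.append(post)
            return True
        return False

# A broadcaster unknown to Bob can't reach him, not even via Alice.
alice, bob = Node("alice"), Node("bob")
bob.trust("alice")
assert bob.receive(Post("alice", "dinner on friday?"), sender="alice")
assert not bob.receive(Post("megacorp", "breaking news!"), sender="megacorp")
assert not bob.receive(Post("megacorp", "breaking news!"), sender="alice")
```

Transitive, topic-based sharing (the missing feature mentioned above) would amount to relaxing that author-equals-sender check along explicitly chosen trust paths.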
Re: OpenAI declares AI race “over” if training on copyrighted works isn’t fair use (Technology@lemmy.world)
The issue is that foreign companies aren’t subject to US copyright law, so if we hobble US AI companies, our country loses the AI war
I get that AI seems unfair, but there isn’t really a way to prevent AI scraping (domestic and foreign) aside from removing all public content on the internet
Re: Sergey Brin says AGI is within reach if Googlers work 60-hour weeks (Technology@lemmy.world)
Sorry for the late reply - work is consuming everything :)
I suspect that we are (like LLMs) mostly “sophisticated pattern recognition systems trained on vast amounts of data.”
Considering the claim that LLMs have “no true understanding”, I think there isn’t a definition of “true understanding” that would cleanly separate humans and LLMs. It seems clear that LLMs are able to extract the information contained within language, and use that information to answer questions and inform decisions (with adequately tooled agents). I think that acquiring and using information is what’s relevant, and that’s solved.
Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.
I think it’s quite relevant that the Turing Test has essentially been passed by machines. It’s our instinct to gatekeep intellect, moving the goalposts as they’re passed in order to affirm our relevance and worth, but LLMs have our intellectual essence, and will continue to improve rapidly while we stagnate.
There is still progress to be made before we’re obsolete, but I think it will take just a few years, and then it’s only a question of cost efficiency.
Anyways, we’ll see! Thanks for the thoughtful reply
Re: Reddit will warn users who repeatedly upvote banned content (Technology@lemmy.world)
Niche communities are still struggling due to the chicken-and-egg problem (and reddit dominance), but it’s improving
if there is a party, it’s about lemmy’s inevitable growth amidst reddit enshittification
Re: Sergey Brin says AGI is within reach if Googlers work 60-hour weeks (Technology@lemmy.world)
Relative to where we were before LLMs, I think we’re quite close
Re: What is this reality? As comprehensively as you can, in about 40 words or less. (Ask Lemmy@lemmy.world)
A companion generator for God.
Happy to see Noita here, it belongs
After 1500 hours I beat Nightmare mode, but still haven’t beaten the 33-orb Kolmi
I don’t know why more people haven’t mentioned Tilix.
Makes me wonder if I’m missing out by using it 😂
Re: GTK 4.16.0 released, now defaults to Vulkan renderer on Wayland (Linux@lemmy.ml)
Hardware deceleration?
Re: How does Lemmy feel about "open source" machine learning, akin to the Fediverse vs Social Media? (Ask Lemmy@lemmy.world)
I’m in favor of an “ML-GPL”, where models must be made available for free to those whose data was used to train them.
I think 10x is a reasonable long-term goal, given continued improvements in models, agentic systems, tooling, and proper use of them.
It’s close already for some use cases; for example, understanding a new code base with the help of Cursor’s agent is kind of insane.
We’ve only had these tools for a few years, and I expect software development will be unrecognizable in ten more.