Hello everyone,
I recently came across an article on TorrentFreak about the BitTorrent protocol and found myself wondering whether it has remained relevant in today’s digital landscape. Given the rapid advancements in technology, I’m curious whether BitTorrent has been surpassed by a more efficient protocol (I2P, perhaps?), or whether it continues to hold its ground.
Thank you for your insights!
A better question is: what would you improve about the way torrents currently work?
I wish there were some way for availability to persist even after a torrent’s peak of popularity has passed - some kind of decentralized, self-healing archive where a torrent’s minimal presence on the network is maintained. Old torrents could then become slow, but the archival system would prevent them from being lost completely while distributing the storage burden efficiently. Maybe this isn’t practical in terms of storage, but BitTorrent’s tendency to lose older content can be frustrating.
I don’t see what you can do at the protocol level to improve availability; you still need people storing the file and acting as peers. Some trackers try to improve that by incentivizing long-term seeding.
It’s called private trackers, and they are great.
Meh… I get itchy when I hear “private”. We could also improve the experience of seeding publicly, and for longer - not only through education, but maybe even with some kind of incentive to keep seeding.
The issue is that public trackers are too easy for people to monitor and use to pursue copyright infringement claims. Private trackers, by design, are much harder to do that with, which makes them leaps and bounds safer to use.
Don’t think about it as keeping the common man out, it is about keeping The Man out.
A better question is: what would you change in the current Internet/WWW to make it as decentralized as torrents are?
I wish there was a decentralised way of hosting websites. Kind of like torrents.
Sounds like maybe what you’re looking for is ipfs? https://ipfs.tech/
The problem with IPFS is that it’s not as decentralized as I wish it were. By default the data is not replicated across the network, so if nobody else is downloading and hosting it on their own node, you are still the only one with a copy of the data. That means if your connection goes down or you get censored, there is no other node where the IPFS data lives. It only works if somebody else is actively fetching the data.
Oh, and then you also need to pin the content, or the data will be garbage-collected again -,-
Furthermore, lookup via the DHT is very slow, and resolving the data takes far too long to be practical. People today expect a lookup time of 1 or 2 seconds at most, so lookup plus page load should be 4 or 5 seconds… max… With IPFS it can be 20 or 30 seconds, or even minutes…
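For what it’s worth, keeping content alive on IPFS today means somebody other than the publisher has to explicitly fetch and pin it on their own node. A minimal sketch of what that looks like against a locally running Kubo (go-ipfs) daemon’s HTTP RPC API - the default port 5001 is the standard setup, and the CID here is just a placeholder:

```python
import requests

# RPC API of a locally running Kubo (go-ipfs) daemon (default port 5001).
IPFS_API = "http://127.0.0.1:5001/api/v0"

# Placeholder CID -- replace with the content you actually want to keep alive.
cid = "QmYourContentCidHere"

# `pin add` fetches the blocks if this node doesn't have them yet and pins them,
# so the daemon's garbage collector won't drop the data again.
resp = requests.post(f"{IPFS_API}/pin/add", params={"arg": cid}, timeout=300)
resp.raise_for_status()
print(resp.json())  # e.g. {"Pins": ["Qm..."]}
```

Which is exactly the problem: the content only survives if enough other people voluntarily do this on their own nodes.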
That’s just for files though. Imagine a specific decentralised protocol for hosting websites.
You can technically host a website on IPFS, but it’s a nightmare and makes updating the website basically impossible (see the 2021 Wikipedia IPFS mirror). A dedicated protocol would make it far more accessible.
Websites are just files. For something like running a site on IPFS, you’d want to pack everything into a few files, or just one, and serve that. Then you just open that file in the browser, and boom, site.
I’m not really sure it qualifies as a website anymore at that point, but an IPFS site for sure. IPFS has links, right?
With LibreWeb I tried to go this route, using the IPFS protocol. But as I mentioned above, IPFS is not as decentralized by design as people might think. Others still need to download the content first and host a node… and then ALSO pin the content… It’s not great. And lookups via their DHT take way too long as well.
Well… it’s not really designed for that use case, so yeah you’ll have to deal with issues like that. For interplanetary file transfers, that’s acceptable.
I’m searching for better alternatives, ideas are welcome.
I’m personally trying to fix it… https://libreweb.org. Still a proof of concept though.
Looks really cool. Thanks for the share
Why MIT license and not something like GPLv3?
MIT license is more permissive.
Yeah but then companies can use your work and not provide compensation. But to each their own.
Yes that is true.
That would be very cool. I know we have onion sites on the Tor network that use keypairs for their domains, but the sites themselves are still hosted centrally by a person - anonymously hosted, but centrally hosted all the same.
There is actually a JS library called Planktos that can serve static websites over BitTorrent. I don’t know how good it is, but it sounds like a starting point.
https://github.com/xuset/planktos
There are some cryptobro projects about sticking distributed file sharing on top of ~ THE BLOCKCHAIN ~.
I’m skeptical, but it might actually be a valid use of such a thing.
Blockchain is a nice technology, but not every solution needs it. Just as BitTorrent doesn’t require a blockchain, a decentralized internet alternative doesn’t need one either.
The profit motive
Make mutable torrents possible.
What’s the advantage to that? I don’t want the torrent I’m downloading to change.
I want that. For example, you downloaded the Debian ISO version 13, and after some time it can be updated to 13.1. Obviously it shouldn’t happen automatically unless you opted in before starting the download.
I wouldn’t call that mutable, more like version tracking in which each torrent is aware of future versions.
I kind of like that, but you might be able to accomplish it with a plugin or something.
Put a file in the torrent called “versions” or something like that, and in there would be a URL that the client can use to tell you if there is a new version.
It wouldn’t change the protocol though, since the new version and old version would still need to be separate entities with different data and different seeding.
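As a rough illustration of that client-side idea - everything here is hypothetical: the “versions” file name, its JSON layout, and the update URL are made up for the example - the client could do something like:

```python
import json
import urllib.request

# Hypothetical layout: the torrent ships a small "versions" file such as
#   {"version": "13.0", "check_url": "https://example.org/debian/latest.json"}
# and the URL returns e.g. {"version": "13.1", "magnet": "magnet:?xt=urn:btih:..."}.

def check_for_update(versions_path: str) -> str | None:
    """Return a magnet link for a newer version, or None if we're current."""
    with open(versions_path, encoding="utf-8") as f:
        local = json.load(f)

    with urllib.request.urlopen(local["check_url"], timeout=10) as resp:
        latest = json.load(resp)

    if latest["version"] != local["version"]:
        return latest["magnet"]
    return None

magnet = check_for_update("downloads/debian-13/versions")
if magnet:
    print("New version available:", magnet)  # hand this off to the torrent client
```

As noted above, nothing about the protocol changes - the client just reads a file from the finished download and polls a URL.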
Like the 13.1 torrent being only a patch on top of the 13 one, listing it as a dependency? Downloading the 13.1 torrent would transparently download the 13 torrent if it wasn’t there already, then download the 13.1 patch and apply it. But I don’t think any of this needs to happen at the protocol level; that’s client functionality.
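Sketching that client-side flow - the file names, the “dependency” check, and the choice of bsdiff-style patches are all assumptions for the example (bsdiff4 is just one binary-patch library you could use):

```python
import os

import bsdiff4  # third-party binary diff/patch library; any patch format would do

BASE_ISO = "debian-13.0.iso"      # content of the "13" torrent (the dependency)
PATCH_FILE = "debian-13.1.patch"  # content of the hypothetical "13.1" patch torrent
TARGET_ISO = "debian-13.1.iso"

def ensure_base_downloaded(path: str) -> None:
    # In a real client this would kick off the dependency torrent and wait for it.
    if not os.path.exists(path):
        raise RuntimeError(f"{path} missing: fetch the base torrent first")

ensure_base_downloaded(BASE_ISO)

# Apply the binary patch locally to reconstruct the new version.
with open(BASE_ISO, "rb") as f:
    base = f.read()
with open(PATCH_FILE, "rb") as f:
    patch = f.read()

with open(TARGET_ISO, "wb") as f:
    f.write(bsdiff4.patch(base, patch))
```

The upside is that seeders of 13.1 only need to store the (small) patch plus the original 13 data they already have.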
Resilio Sync can do this, I’m pretty sure.
Although if implemented as an extension to BitTorrent, I’d want it to be append-only, because I don’t want to lose 1.0 just because 1.1 becomes available.
The last 0.01 percent comes in at the same speed as the rest of it