I am learning Flatpak. Can someone explain why it is like that???
Well, one part of it is that Flatpak pulls data over the network, and data sent over a network sometimes doesn't arrive in the same shape it left the original system in, so the same data gets sent again - possibly several times - until a copy arrives correctly.
deleted by creator
I think this is actually very unlikely. Flatpak is most likely using some TCP-based protocol, and TCP would take care of this transparently; Flatpak wouldn't even know if any packets had to be retransmitted.
deleted by creator
But it's unlikely those would look like this. Flatpak only ever sees the bytes in order, so the only effect a failure could have is forcing the download to be resumed - at worst you'd receive a couple of bytes again due to alignment, but not this much.
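To put it concretely: the application only reads the in-order byte stream, and the only recovery it ever has to do itself is resuming an interrupted download, e.g. with an HTTP Range request. A minimal sketch of that (hypothetical URL, not Flatpak's actual code):

```python
# Minimal sketch of a resumable HTTP download (hypothetical URL, not Flatpak's
# actual code). TCP retransmission happens in the kernel; the application only
# ever sees the ordered byte stream, so a network hiccup can't silently make it
# download the same payload several times over.
import os
import urllib.request

URL = "https://example.org/some-object"  # placeholder
OUT = "object.part"

def resume_download(url: str, path: str) -> None:
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    req = urllib.request.Request(url)
    if offset:
        # Pick up where the previous attempt stopped; at worst a few bytes
        # get fetched again, never the whole file.
        req.add_header("Range", f"bytes={offset}-")
    with urllib.request.urlopen(req) as resp, open(path, "ab") as out:
        while chunk := resp.read(64 * 1024):
            out.write(chunk)

resume_download(URL, OUT)
```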
deleted by creator
Could also be that the HTTP server lied about the content length.
It's a protocol violation to do that, not least because it precludes connection reuse.
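If you wanted to check whether that's what's happening, a rough sketch (hypothetical URL) is to compare the advertised Content-Length against the bytes that actually arrive:

```python
# Quick way to check whether a server's advertised Content-Length matches what
# actually arrives (hypothetical URL). A mismatch violates the HTTP spec and
# also breaks keep-alive, because the client no longer knows where the body ends.
import urllib.request

URL = "https://example.org/some-file"  # placeholder

with urllib.request.urlopen(URL) as resp:
    advertised = int(resp.headers.get("Content-Length", -1))
    received = len(resp.read())

print(f"advertised={advertised} bytes, received={received} bytes")
```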
Hence why Fedora Linux actually recently removed delta updates from DNF. It turned out they used more data in retries than just downloading the whole package again.
Interesting, didn't know that! That sounds like a fixable issue though…
I think they have moved from trying to fix it in DNF to using the copy-on-write capabilities in Btrfs. Can't quite remember exactly.
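For anyone wondering what the copy-on-write bit means in practice, here's a minimal sketch, assuming a Btrfs (or XFS) filesystem; it only illustrates the reflink mechanism, not whatever Fedora/DNF actually does with it:

```python
# Rough illustration of copy-on-write cloning on Btrfs (or XFS): the FICLONE
# ioctl makes the new file share the original's data blocks until one of them
# is modified. This shows only the mechanism, not how Fedora/DNF uses it.
import fcntl

FICLONE = 0x40049409  # ioctl number from linux/fs.h

def reflink(src: str, dst: str) -> None:
    with open(src, "rb") as s, open(dst, "wb") as d:
        fcntl.ioctl(d.fileno(), FICLONE, s.fileno())

reflink("original.bin", "cow-copy.bin")  # hypothetical file names
```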
??? Retransmitted packets don’t get counted towards downloaded file size
something something ostree and how complicated the stuff it does actually is
I mean, ostree is just git for binaries, isn't it?
But it will likely be the issue here.
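For the storage side, the "git for binaries" idea boils down to a content-addressed object store. A toy sketch of that idea (definitely not ostree's real repository format):

```python
# Toy content-addressed object store, just to show the "git for binaries" idea:
# file contents are stored under their own checksum, so identical content is
# only kept (and only needs to be fetched) once. NOT ostree's real repo format.
import hashlib
import os
import shutil

REPO = "objects"

def store(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    dest = os.path.join(REPO, digest[:2], digest[2:])
    if not os.path.exists(dest):  # duplicate content gets deduplicated
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copyfile(path, dest)
    return digest
```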
Shoddy implementation they can't be arsed to fix. It does all kinds of shenanigans: it shows the size of all locales but only downloads one (or the other way around), it doesn't count dependencies and then realizes it has to download something extra, etc. It's all over the place and I've given up on it making any sense. I've just made sure it's on a drive with plenty of space and hope for the best.
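As a toy illustration of how far apart those two quirks can push the numbers (all values made up):

```python
# Made-up numbers showing how the quirks above can push the displayed figure
# and the real download in opposite directions; none of these values are real.
locale_sizes = {"en": 12, "de": 14, "fr": 13, "ja": 20}  # MB per locale extension
app_size = 150        # MB for the app itself
surprise_dep = 600    # MB runtime dependency it "realizes" it needs later

shown = app_size + sum(locale_sizes.values())            # every locale counted
actual = app_size + locale_sizes["en"] + surprise_dep    # one locale + extra dep
print(f"shown: {shown} MB, actually downloaded: {actual} MB")  # 209 vs 762
```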
deleted by creator