> Or maybe even better - pkgnet could provide both a BSUM *and* a CRC, so users can verify whatever they prefer.

+1
> but at the cost of adding a complication to the package structure (necessity of computing a listing of BSUMs for each file stored in the package)...

The repo server could check for new packages, unpack each, compute the BSUM for the individual files, and place them into a .zip comment.
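For illustration, such a server-side pass could look roughly like this (a Python sketch; the BSUM algorithm is assumed here to be the classic BSD 16-bit rotating checksum, and the comment layout is made up):

```python
import zipfile

def bsd_sum(data: bytes) -> int:
    """BSD-style 16-bit rotating checksum (assumed BSUM algorithm)."""
    ck = 0
    for b in data:
        ck = (ck >> 1) | ((ck & 1) << 15)  # rotate right by one bit
        ck = (ck + b) & 0xFFFF
    return ck

def add_bsum_listing(pkg_path: str) -> None:
    """Compute a BSUM for every file in the package and store the
    listing in the zip archive's comment."""
    with zipfile.ZipFile(pkg_path, "a") as z:
        lines = []
        for info in z.infolist():
            if info.is_dir():
                continue
            lines.append(f"{bsd_sum(z.read(info)):04X} {info.filename}")
        z.comment = "\n".join(lines).encode("ascii")
```

Since the zip comment lives outside the compressed entries, PKG could read the listing without inflating anything, and older clients would simply ignore it.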
> whether it takes 5s or 20s to install a package probably does not make a difference to anybody

For 32 packages during SvarDOS installation it's 10 minutes just for validation.
> and it is nice to have something standard (zip/crc32) to rely on.

Indeed.
> Is this "worth the effort and extra complication"?

That's a good question! Making such a change just for the sake of a different approach is definitely a waste of time, but if it could *significantly* speed up the process of installing packages on old PCs, then it might be quite a win.

Today, the installation of SvarDOS on an 8088 @ 4 MHz PC takes some 40 minutes:
- about 5 minutes to boot the floppy and partition+format the disk
- 7 minutes to copy the packages to disk
- and then about 25 minutes to inflate and CRC them

That's A LOT of time. Not sure how much could be saved, but it's at least worth doing some preliminary benchmarks. A simple first test will be to disable CRC32 in PKG and see what the speed gain is.

As for the compression algorithms, I have not found any strong candidate for a deflate replacement so far. There are options that are much faster, but apparently all of them have a poorer compression ratio, and I really don't want to make packages bigger than they already are.

https://github.com/facebook/zstd
https://github.com/atomicobject/heatshrink
https://github.com/lz4/lz4

Mateusz
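A very rough way to see how the CRC pass compares to inflation is to time the two steps separately; a Python sketch over made-up sample data (on a modern CPU, zlib's CRC-32 is heavily optimized, so the ratio on an 8088 running 16-bit code will look very different — which is exactly why benchmarks on the target hardware are needed):

```python
import time
import zlib

# hypothetical payload standing in for a package's contents
data = bytes(range(256)) * 4096          # 1 MiB of sample data
packed = zlib.compress(data, 9)

t0 = time.perf_counter()
zlib.decompress(packed)                  # the inflate step
t_inflate = time.perf_counter() - t0

t0 = time.perf_counter()
zlib.crc32(data)                         # the checksum step alone
t_crc = time.perf_counter() - t0

print(f"inflate: {t_inflate * 1000:.2f} ms, crc32: {t_crc * 1000:.2f} ms")
```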
> I would suggest evaluating the lzsa2 format

I did not know about this algorithm, thank you for pointing at it. The graph shown on the lzsa2 GitHub page suggests that while it is significantly faster than deflate, it is still less size-efficient. Is that consistent with your experience?

https://github.com/emmanuel-marty/lzsa/raw/master/pareto_graph.png
> you can benefit from your own pack format if you compress solidly

Totally, yes. This is something also on my TODO list. But I do not think the gain will be tremendous, because solid archives tend to be very good when associated with a huge compression window (which easily spans multiple files). In our case we have to work within a constrained environment (8088/256K RAM) where even a 32K window is challenging, hence the advantage of "solid" vs "zip-like" might not be so great, and I expect it will come mostly from the fact that a common dictionary is used for multiple files. In any case, I will definitely have to do some real-situation benchmarks to have hard facts/numbers at hand.

Mateusz
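The solid-vs-separate effect is easy to demonstrate with zlib itself; a deliberately extreme toy case (identical files, so the gain is exaggerated compared to a real package set):

```python
import zlib

# 20 identical small "files" -- an extreme toy case, not real packages
files = [b"Lorem ipsum dolor sit amet, consectetur adipiscing elit. " * 4] * 20

# zip-like: each file deflated independently, no shared history
separate = sum(len(zlib.compress(f, 9)) for f in files)

# solid: one deflate stream, so later files can back-reference earlier
# ones as long as they still fit in the 32K window
solid = len(zlib.compress(b"".join(files), 9))

print(f"separate: {separate} bytes, solid: {solid} bytes")
```

With files this similar the solid stream wins by a wide margin; whether real SvarDOS packages share enough data within a 32K (or 16K) window is what the benchmarks would have to show.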
> I did not know about this algorithm, thank you for pointing at it. The graph that is shown on the lzsa2 github page suggests that while it is significantly faster than deflate, it is still less size-efficient. Is that consistent with your experience?

I have never implemented a deflate depacker, so I do not know. You will have to run tests yourself.
> even a 32K window is challenging,

As I mentioned, heatshrink's window size caps out at 16 KiB currently ;P
> and I expect it will come mostly from the fact that a common dictionary is used for multiple files.

Heatshrink, LZ4, and LZSA2 all, I believe, compress only via backreferences into the window, so there is no separate dictionary for them.
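In windowed LZ codecs the "dictionary" and the window really are the same thing: every match is just a backreference into recently seen bytes. A hypothetical toy sketch of that idea (not the encoding of any of the actual formats mentioned above):

```python
def lz_matches(data: bytes, window: int = 16 * 1024):
    """Toy LZ77-style encoder: matches may only point back into the
    sliding window of already-processed bytes -- no separate dictionary."""
    tokens, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= 3:
            tokens.append(("match", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lz_decode(tokens) -> bytes:
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:                       # copy `length` bytes from `off` back
            _, off, length = t
            for _ in range(length):
                out.append(out[-off])
    return bytes(out)
```

A solid archive helps such a codec only insofar as the tail of one file is still inside the window when the next file starts — there is no persistent dictionary that survives past the window.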