I have a question for you smart people.
I've written a clever little script that takes an ordered inclusion/exclusion list of paths for backup, then figures out how many CDs it would take to back the whole thing up, partitions the data, and burns that many CDs. It puts an index file on every disk, so you can always tell which disk to go to for any given file.
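To give an idea of the shape of the thing, the partitioning step is conceptually along these lines (a simplified Python sketch, not my actual code; the 650 MB capacity is just a placeholder):

    import os

    CD_CAPACITY = 650 * 1024 * 1024  # placeholder capacity in bytes, not the real number I use

    def partition(paths, capacity=CD_CAPACITY):
        """Greedily pack files into disc-sized groups, in the order given."""
        discs, current, used = [], [], 0
        for p in paths:
            size = os.path.getsize(p)
            # start a new disc when the next file won't fit on the current one
            if used + size > capacity and current:
                discs.append(current)
                current, used = [], 0
            current.append(p)
            used += size
        if current:
            discs.append(current)
        return discs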
Later I'll write a script that can read the index file and restore a subtree, prompting for each relevant disk as needed.
It's very straightforward, but there's one problem. I'm currently accumulating file sizes in bytes to figure out how to partition files onto CDs, but I know some sort of disk block rounding will occur. (Every file, no matter how small, will take 1k or 4k or something.) Plus I'm sure there's some amount of ISO 9660 filesystem overhead that I should account for too. If I don't account for these things, I'll end up trying to fit more on each CD than can actually fit.
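To make the question concrete, here's the kind of per-file adjustment I'm imagining (a rough Python sketch; the 2048-byte sector size is what I understand ISO 9660 uses, but the per-file and per-volume overhead numbers are pure guesses, which is exactly the problem):

    import os

    SECTOR = 2048               # ISO 9660 logical block size, as far as I know
    DIR_RECORD_GUESS = 64       # guessed per-file directory record overhead, in bytes
    FS_OVERHEAD_GUESS = 300 * SECTOR  # guessed fixed cost for volume descriptors, path tables, etc.

    def estimated_size_on_disc(paths):
        """Estimate how much of a disc a set of files would actually consume."""
        total = FS_OVERHEAD_GUESS
        dir_bytes = 0
        for p in paths:
            size = os.path.getsize(p)
            # round every file up to a whole number of sectors
            sectors = (size + SECTOR - 1) // SECTOR
            total += sectors * SECTOR
            # directory records vary with filename length; this is a crude guess
            dir_bytes += DIR_RECORD_GUESS + len(os.path.basename(p))
        # the directory records themselves occupy whole sectors too
        total += ((dir_bytes + SECTOR - 1) // SECTOR) * SECTOR
        return total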
So my question is this: Is there any principled way for me to account for block size rounding and filesystem overhead when I'm working out how many files I can cram on a CD? Or should I just give up and leave a 10% buffer for "overhead"?