Backup Locations - Poor Implementation
A few days ago, I started a thread in which, by trial and error - and unfortunately with zero response from Acronis Support - I found that my ATI 11 Home (build 8101) validates ALL files in my backup location after a backup job has run, and not just the file ATI has just created:
http://forum.acronis.com/forum/3478
My backup location is configured to allow up to 120GB of data and my daily archive is set to create a full archive followed by 5 differential archives.
As of this morning, the backup location contained 115GB of data and my daily backup had created a full archive plus 2 differential files, the last diff file being 800MB in size. In theory, I still had 5GB to play with in the backup location, and I'd done very little to the machine the previous day, so I expected ATI to create another differential file of around 800-900MB, still well within the 120GB quota.
Instead, for reasons I don't understand, it created a full archive that is 20GB in size. This took the total size of files in the backup location to 135GB, i.e. 15GB over its maximum allowed quota, and triggered some very weird behavior.
In order to get the size of the backup location down below its allowed maximum quota of 120GB, ATI clearly needed to delete some older files. The obvious candidates were a full archive from 3 August that was 25GB in size, together with its 6 associated differential archives, which varied from 150MB to 870MB in size and totaled 3.6GB.
ATI needed to shave off at least 15GB from the backup location. It shouldn't have been too difficult - or too slow - and the situation was straightforward. The full archive was 25GB in size and the total size of the 6 differential files came to 3.6GB. The only way to free up (at least) 15GB was to delete the 25GB file. And since the differential files can't exist without the associated full archive, they would need to be deleted too.
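To illustrate just how simple that decision is, here's a rough sketch in Python. This is not Acronis's code, just the arithmetic I'd expect it to do before moving a single byte; the individual differential sizes below are invented (only their range and 3.6GB total are known) and the function name is purely illustrative.

    # Up-front check: given the sizes of the oldest full archive and its
    # differentials, decide whether consolidation can possibly free enough
    # space, or whether the whole chain has to go. Sizes are in GB.
    def plan_cleanup(full_size_gb, diff_sizes_gb, space_needed_gb):
        """Return a description of what needs deleting to free the space."""
        diffs_total = sum(diff_sizes_gb)
        if diffs_total >= space_needed_gb:
            # Removing differentials alone could be enough.
            return "consolidate/remove differentials only"
        # Differentials can't exist without their full archive, so once the
        # full has to go, the whole chain goes with it.
        if full_size_gb + diffs_total >= space_needed_gb:
            return "delete the full archive plus all its differentials"
        return "even the whole chain isn't big enough; move to the next one"

    # My situation that morning: a 25GB full, six differentials totalling
    # 3.6GB (individual sizes invented), and at least 15GB to free up.
    print(plan_cleanup(25.0, [0.15, 0.55, 0.60, 0.65, 0.80, 0.85], 15.0))
    # -> "delete the full archive plus all its differentials"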
Instead, here's the behavior I actually observed (a rough sketch in code follows the list):
1. Correctly identify the oldest logical group of files.
2. Copy the 25GB full archive to a temporary file in the Backup Location. (After much disk thrashing, the Backup Location now contains 160GB of data, so the situation has, for now, actually been made worse!)
3. Identify the oldest differential archive file in the logical group.
4. Copy all files except the oldest diff file to temporary files in the Backup Location. (It now contains over 163GB of data!)
5. Delete all the original data files and rename the new copies, such that the full file keeps its old name and the differential files are "cascade" renamed to replace the deleted file, so that diff2 is renamed to diff1, diff3 is renamed to diff2, diff4 becomes diff3, etc.
6. See if the operation has freed up enough space. If not, go back to step 3.
So, in order to free up at least 15GB of space in my Backup Location, ATI repeated the above operation until all 6 of the differential files were removed and the full backup was also removed. In all, it had to read and rewrite around 200GB of data (i.e. 400GB of I/O operations) and took 3 hours to accomplish, when a very simple calculation of file sizes would have told it that the whole operation was a complete waste of time.
What I also observed was that the file sizes of the copied files were not identical to those of the original files, so ATI is actually changing the files' data on the fly. There might be a good reason for this but it is a worrying observation. Acronis Support?
I appreciate that not all situations are going to be as straightforward as was the case here. Clearly, for example, you can't go around deleting the oldest file in a chain of incremental archives, and there might also be the possibility of having both differential and incremental archives associated with the same full archive file. But surely Acronis can do better than this?
Incidentally, does anyone know if this method has found its way into ATI Home 2009? I'd be very interested to hear from Acronis Support.
Martin

I.e., rather than fix the Backup Locations implementation, they've changed to a new method which introduces new bugs and other issues -- for example, it relies on an internal database, which means you can't treat tib files as ordinary files (you could do this with prior versions). Many of us are hoping that they give up on the "new" process and bring back a bulletproof feature in the version that will soon be released.
Doesn't validation still run on all the diffs related to a full, even in version ATI12/2009?

@Alexander: Thank you for taking the trouble to respond. Perhaps you are under the impression that I have reported a fault that needs to be investigated. This is not the case. I am not suggesting that your software doesn't work, only that it works poorly. If you look at the title of my post you will see that I am suggesting a poor implementation of your Backup Locations.
Some examples:
1. The method to free up space in the Backup Location is unintelligent. Sure, it works, but it could work a lot better. To take the example above, a simple piece of arithmetic would have shown that it was pointless to try to consolidate 6 differential archives totaling just 3.6GB when you need to free up at least 15GB of disk space. It was clear right from the outset that you needed to delete the (25GB) full archive and all its (6) associated differential files in order to free up sufficient space, yet ATI 11 wasted 3 hours consolidating the files before it finally (hallelujah!) decided to delete the full archive.
2. If my scheduled archive includes the option to verify the archive file, ATI 11 proceeds to verify every single file in the Backup Location, not just the one it has created. What's the point of re-verifying all those other files each time I create a new archive file? Beats me. As with the freeing up of space, the implementation is unintelligent and could work a lot better.
3. If the scheduled verification comes across a bad file in the Backup Location, it reports a problem but doesn't (seem to) tell you which file is causing the problem and, to compound matters, stops verification of any further files. That's also bad implementation, IMO (see the sketch after this list for the kind of reporting I'd expect).
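For what it's worth, the behaviour I'm asking for in points 2 and 3 is nothing exotic. The sketch below is purely illustrative and not based on anything in ATI: verify_archive() is a hypothetical stand-in for ATI's own integrity check, and the only_new argument is my suggestion for verifying just the newly created file rather than the whole Backup Location.

    def verify_backup_location(archives, verify_archive, only_new=None):
        """Sketch of the verification behaviour I'd prefer (illustrative only).
        verify_archive() is a hypothetical stand-in for ATI's integrity check;
        only_new lets a scheduled job verify just the archive it has created."""
        to_check = [only_new] if only_new else list(archives)
        failures = []
        for archive in to_check:
            if verify_archive(archive):
                print("Verified OK:", archive)
            else:
                # Name the bad file and carry on, rather than stopping dead.
                print("Verification FAILED:", archive)
                failures.append(archive)
        return failures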
I hope this helps to clarify my position. I also hope that these same strategies have not been imported into ATI 2009, though, based on Scott Hieber's comment, it seems that you might have done so.
Martin