True Image 2013 performance
I need to upgrade to a newer version of Acronis True Image, since the 2010 version behaves very poorly in the presence of EXT4 volumes, and I want to move to an SSD.
However, I read a number of complaints about performance on the forum, so I thought it would be best to play with the demo version for a bit first.
I have my own backup and file management system for personal files, since I run multiple operating systems on most of my machines, and several machines with applications that might require exclusive access to files.
So, I use Acronis True Image bootable media exclusively for bare-metal backup and recovery of the OS and supporting software -- periodically for safety, and always before upgrades. I only need to have Acronis handle my OS volumes, since my data are stored on another volume and backed up using my own methods. I install Acronis True Image, install the Plus Pack, make the bootable media images that I will need, then revert my system to what it was before I installed them.
All that to explain that my testing only applies to the bootable media. 'Maximum verbosity'...
My test system is a Lenovo W700: Core 2 Extreme quad-core 2.53GHz, 8GiB RAM, NVIDIA Quadro FX 3700M (1GiB dedicated), 240GB Samsung 830 SSD, 320GB Hitachi TravelStar 7K320 hard disc (times two, with one on USB), Sony BD5750H Blu-ray burner, a 42GB JMicron-based SSD in an ExpressCard slot, and a 50GB mini-PCIe (NOT mSATA!) SSD where Lenovo originally put the TurboMemory card. Not bad for its time... The network server is a dedicated server, with no other traffic, sharing the file via CIFS/SMB, connected via IPv4 over gigabit Ethernet.
I am using the same image (a 38GiB TIB of a Win7 Ultimate system volume, maximum compression, 256-bit encryption) located in different places to do the testing. Because of those choices, verifying is the quickest and least processor-intensive thing I can do (creating the image usually takes about twice as long and saturates the CPU the whole time, while verifying usually stays below 50% CPU load and is more I/O-bound), and in any case the demo copy will not let me create a new image.
The Acronis bootable media I have used all run under Linux. One thing I have noticed is that some recent Linux versions are particularly problematic on systems with aggressive power management (the W700 is a good example of this interaction, despite its paltry two-hour battery life). Generally, the newer kernels, if configured per the common defaults, will simply stall unless certain interrupts arrive. There are ways to fix this when building the kernel yourself, but Acronis does not offer that option, so kernel arguments are the next alternative.
With the background out of the way, let's see the timing results for verifying that image... Times are 'wall clock' times, measured from when I click the button that finally starts the verify until the dialogue indicating completion comes up.
Acronis True Image 2010 Plus Pack:
Image on SSD (across SATA): 13 minutes 30 seconds
Image on HDD (across USB): 29 minutes 45 seconds
Image on network server: 34 minutes
Acronis True Image 2013 demo (defaults):
(did not complete any of the runs in less than 6 hours)
Well, that's a problem. Probably Acronis has moved to using a more recent kernel and left all the power management aggressive. Holding Shift down or constantly shaking the mouse prevents the narcolepsy the newer kernels induce upon the W700.
Acronis True Image 2013 demo (deliberately keeping it awake):
Image on SSD (across SATA): 9 minutes
Image on HDD (across USB): 20 minutes 30 seconds
Image on network server: 38 minutes 15 seconds
Right. Acronis *has* moved to a more recent kernel and left power management aggressive -- a bad interaction with my machine. I have to work around this during the install of some Linux versions, so I know what to do: add 'apm=off powersaved=off nohz=off processor.max_cstate=1' to the kernel arguments. The bootable media creator lets me provide some arguments; perhaps they are fed to the kernel?
Acronis True Image 2013 demo (with above kernel arguments):
Image on SSD (across SATA): 8 minutes
Image on HDD (across USB): 19 minutes 5 seconds
Image on network server: 34 minutes
Okay, good. Those kernel arguments turned off most of the power management, but perhaps that's not so bad -- the machine probably should always be plugged in when doing something like recovering the OS.
And look at that -- the newer version's times are actually a little better than the old version's (much better over USB)! I think this is the first time in years that I have seen a software vendor claim performance improvements that actually show up when benchmarking on the same hardware! I can only hope image creation sees a similar improvement (I may add those results after I upgrade).
Oh, and about the time estimates the program provides while it is doing the work: don't believe a bit of them. The 2010 version's initial estimate is over 47,000 days, and it bounces about worse than the Win95 file-copy dialogue over a dialup connection. The 2013 version's initial estimate is 1.5 minutes, and it never goes above two minutes (even running across a network with the narcoleptic power management interaction -- after 12 hours it had done less than 25% and still claimed less than two minutes remaining); it seems to be based upon the remaining file size rather than upon how long the work is actually taking. I want the hardware where this was calibrated!
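My guess, with no visibility into Acronis's code, is that this is the difference between an estimate computed from remaining size over some hard-coded assumed rate and one computed from the rate actually measured so far. A toy sketch in Python of the two approaches (the rate constant is invented, and the sample numbers are taken from my network run above):

```python
def naive_eta(bytes_remaining, assumed_rate=400 * 2**20):
    """Remaining size divided by a hard-coded rate: never moves,
    never learns. (The 400 MiB/s figure is invented -- whatever
    hardware the real dialogue was calibrated on, I want it.)"""
    return bytes_remaining / assumed_rate

def measured_eta(bytes_remaining, bytes_done, seconds_elapsed):
    """Remaining size divided by the rate actually observed so far."""
    if bytes_done == 0 or seconds_elapsed == 0:
        return None                 # nothing measured yet: show no estimate
    return bytes_remaining / (bytes_done / seconds_elapsed)

# the network run: less than 25% of a 38GiB image done after 12 hours
done, total, elapsed = 9 * 2**30, 38 * 2**30, 12 * 3600
print(naive_eta(total - done))                    # ~74 seconds, forever optimistic
print(measured_eta(total - done, done, elapsed))  # ~139000 seconds, about 1.6 days
```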
Colin B wrote: You mention using maximum compression,... make the image slightly more susceptible to corruption.
Is there any evidence of that? Yes, it will take a little longer to back up, but is there any documented evidence that higher levels of compression are any more likely to corrupt than lower levels?
Tuttle,
I have no measured proof, either my own or from elsewhere; however, it is known that the more a file or other binary data is manipulated, the more likely corruption becomes at some point. This is one reason why items that are encoded (including RF propagation) carry correspondingly more thorough data integrity checks. It follows that the more an item is compressed, the more susceptible it becomes to the vagaries of the system: hiccups in data transfer from disk to RAM, RAM to I/O port, then the physical transport, and of course the same process in reverse when recovering any data.
I don't know what the actual likelihood is in ppm, and I'm sure it is not something that will happen to everyone who compresses a file to a high degree. (This goes for any compression standard -- video, music, mobile phone -- it is just that those compression types allow loss that humans barely notice, whereas file restoration fails if even one bit isn't quite right.) All I'm saying is that the risk increases -- obviously not by much, or the forum would be full of people having problems on this point -- but if there are problems in making an image, this is one item that should be considered.
It is a little like saying that stepping outside your house onto the pavement carries a greater chance of stepping in some animal's presents than if you just tramped around your garden, which in turn is likelier than if you stayed in your house.
It might be an idea to ask the Acronis developers about that; it would be interesting to hear their perspective. As all Acronis .tib archives are compressed, I'm unconvinced that using the higher compression levels represents any greater risk.
I think my point was missed (I don't see the problem in Windows -- only on the Linux-based bootable media), but I'll answer your comments as well...
Colin B wrote: You mention using maximum compression, do you really need maximum?
No, but I don't like wasting space. These backups are kept long-term, and since I also like to have them encrypted (which itself slows things down), I have found the finished images are nearly incompressible after the fact even if I use no compression at all, so any compression I want has to happen inside the program.
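The incompressibility part is easy to demonstrate. A quick Python sketch, with AES in CTR mode and zlib standing in purely for illustration -- I have no idea which cipher construction or compressor Acronis actually uses:

```python
import os, zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

plain = b"highly repetitive OS files " * 40000       # ~1 MiB of compressible data
enc = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()
encrypted = enc.update(plain) + enc.finalize()

print(len(zlib.compress(plain, 9)) / len(plain))          # tiny fraction
print(len(zlib.compress(encrypted, 9)) / len(encrypted))  # ~1.0: no gain at all
```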
Besides, the test was run on the demo recovery media, which only seems to permit restore and verify. I will be happy to post a timing comparison of backup once I upgrade. It is possible that systems like mine would see backup run normally while verify/restore take forever, since backup is often a 100% CPU-load operation, while verify and restore seldom use more than 60% on my system for local media and 25% for network media.
Colin B wrote: This will slow your imaging down and make the image slightly more susceptible to corruption.
The first part is easily demonstrated. The second part makes some sense, but only if I compare any compression to zero compression: if the image is compressed at all, decompressing a bad sector could corrupt the dictionary or other compression structures and thus render other nearby data corrupt, while if the image is not compressed, only perhaps a sector is lost (unless that sector contained metadata, in which case how much is lost depends upon how well Acronis recovers from such corruption -- prior versions have handled this quite poorly). How much it is compressed only affects how badly the corruption spreads; in practice, any undetected corruption in a compressed stream (even a lightly compressed one) spreads shockingly, and prior versions of Acronis have immediately balked whenever they could detect corruption. I have not tested this new version against corrupted files yet.
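The compressed-versus-uncompressed difference is easy to sketch. Here zlib stands in for whatever Acronis actually uses (the .tib format is not public, so this is purely illustrative): one flipped byte in the compressed stream costs the whole stream, while the same flip in an uncompressed copy costs exactly one byte.

```python
import zlib

data = bytes(range(256)) * 4096               # ~1 MiB of sample data
packed = bytearray(zlib.compress(data, 9))    # level 9 as a stand-in for 'maximum'

packed[len(packed) // 2] ^= 0xFF              # corrupt a single byte mid-stream

try:
    zlib.decompress(bytes(packed))
except zlib.error as err:
    # the decompressor balks at the whole stream, much as the older
    # True Image versions abandon the operation outright
    print("decompression refused:", err)

flipped = bytearray(data)
flipped[len(flipped) // 2] ^= 0xFF            # the same flip, uncompressed
print(sum(a != b for a, b in zip(data, flipped)), "byte damaged")   # exactly 1
```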
Considering my 28GiB image covers a volume with over 70GiB of software (Win7 Ultimate, several application suites, an alarmingly big game, and a bunch of other relatively tiny stuff), keeping uncompressed images lying about would be far more difficult. Since BD-DL (and deeper) media are expensive, I'm even considering having the images split at 22GiB so the individual parts can be moved to single-layer BD for archival storage. Multi-terabyte, high-quality (not that consumer-grade junk) drives are also expensive, and I refuse to run without a mirror/parity set plus a second copy these days, since even the high-end drives fail too often for my taste. My luck with storage devices is such that when I have a mirror/parity set plus backup, I seldom see a failure, but when I have a single drive, it will catastrophically fail within two days of my putting something important on it.
Colin B wrote: You might find that in exchange for a slightly larger image size you might not need to enter the power settings and have much faster image making.
No, the underlying kernel in the 2013 version performs equally hideously if I turn off compression and encryption. It is an interaction between my PC and the Linux kernel, and appears to be most affected by the 'nohz' feature (Linux disables the programmable timer and uses on-die counters instead to handle delays). For some reason, if the processor is allowed to slip beyond C1, it will not wake up again except for certain interrupts (keyboard, mouse, external timer, &c). Running without compression or encryption drops the CPU load so far that it enters this state almost immediately after the last keyboard or mouse interrupt, rather than a few seconds afterward. I'm not sure whether the hard disc or network card interrupts will wake it, but they do not seem to keep it awake the way keyboard and mouse interrupts do, and without the NOHZ feature the programmable timer interrupt occurs often enough to keep it scheduling work properly. With the standard kernel settings, it works almost properly if I keep it awake (put the mouse on a paint stirrer, weight a shift key down, or something similar).
Colin B wrote: I have a Lenovo E420 with a 500GB (230GB used) drive (secondary OS's are in VHDs) and standard compression imaging over my network to a drive on my PC takes around 18 minutes.
Is the E420 an i7 or Core2? Note that my actual timings above (with power management turned way down) are quite reasonable (and, as a major surprise to me, actually better than the prior version's).
Just 'nohz=off' seems to be adequate to make it work on my W700 (which tells me the problem is probably the on-die timers being affected by slipping into C2 or beyond, since providing the scheduling interrupt externally keeps things working). I'm not sure whether that applies in general, but people having similar issues with the bootable media might at least want to try providing the 'nohz=off' argument (and possibly the other items I suggested earlier) when building their bootable media.
Again, this only applies to the Linux-based recovery media, and I'm not going to leave the machine in that mode for any longer than necessary, especially since the recovery media can switch it off when it finishes. I'm not sure the difference between running backup/restore with minimal power savings (only SpeedStep and basic power management) versus supporting C2+ states and no timer ticks when idle will make a big difference in my long-term power use. Frankly, even if I am restoring on battery, it would be better to have a chance of finishing in the one to two hours I get before the battery dies than to not finish in the more than six hours it was taking with the default power management. In one case I left it running in the default state for over 24 hours: it looked almost 10% done and claimed just under two minutes to go, yet the same operation completed in under 35 minutes with the power management turned down.
Yes, I'm probably going to upgrade to the new version (I need the EXT4 support). I can't test how well the backup feature runs from the recovery media with the demo version, but if it works at least as well as the verify feature seems to, I expect to be content with that. As for the other features, I will evaluate those separately; the bare-metal backup/verify/restore is the important part to me right now.
tuttle wrote: It might be an idea to ask the Acronis developers about that; it would be interesting to hear their perspective. As all Acronis .tib archives are compressed, I'm unconvinced that using the higher compression levels represents any greater risk.
Specifically, I don't think compression makes the image any more susceptible to corruption, but higher compression does tend to spread any corruption that occurs over more of the stream. This should make sense intuitively: since more data are packed into a smaller space, corrupting the same amount of that space should contaminate more of the uncompressed data. The relationship is seldom that simple, though; if the corruption hits part of the compression dictionary (or whatever the compression technique in use calls it), it could spread over a far larger stretch of the uncompressed stream than one might expect, possibly even contaminating the entire remainder of the stream.
Tack on encryption (which should always be done after compression, since good crypto renders pretty much anything incompressible) and it gets worse: a few corrupt bits in the encrypted, compressed file will spread like wildfire when decrypted, and those corruptly decrypted bits will contaminate far more than a few bits of the compressed stream, which naturally propagates to even more of the uncompressed stream.
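To put a rough number on that, here is a Python sketch with AES-CBC standing in for the real thing (again, I have no idea which mode Acronis uses -- CBC is just a common choice that shows the effect): one flipped ciphertext bit garbles a full 16-byte block of the compressed stream on decryption, plus one bit in the next block, and the decompressor then rejects everything.

```python
import os, zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
comp = zlib.compress(b"some repetitive backup data " * 40000, 9)
padded = comp + b"\x00" * (-len(comp) % 16)   # crude block padding for the demo

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct = bytearray(enc.update(padded) + enc.finalize())

ct[200] ^= 0x01                               # flip ONE bit of ciphertext

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
pt = dec.update(bytes(ct)) + dec.finalize()

print(sum(a != b for a, b in zip(padded, pt)),
      "compressed bytes damaged by a one-bit flip")   # ~17: a block and a bit
try:
    zlib.decompress(pt[:len(comp)])
except zlib.error as err:
    print("and decompression then fails outright:", err)
```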
I accused Acronis of handling corruption quite poorly because I have only ever seen (earlier versions of) the product handle a corrupt file by simply declaring it corrupt and abandoning the operation on the spot (the dialogue offers only an 'ok' button). Let me clarify that there is an even worse way to handle it: completing the operation without saying anything about the corruption to the user. I think the program should emit a warning about the corruption, ideally listing the files (or the range of sectors, if it is a sector image) affected, and ask whether the user wants to try to continue anyway (unless the corruption is so bad that it cannot resync the compressed and uncompressed data streams, in which case it should say so rather than giving up with little explanation).
I don't know if Acronis uses a compression technique that can resync after corruption, though, so it is possible that any corruption is unrecoverable with their product. This would be a bit disappointing, though, since one would hope to be able to recover from a disaster with the product, and some of us have experienced the 'but the backup verified cleanly last month' case where a few sectors are now unreadable (cheap magnetic hard discs that suffer bit rot are annoyingly common and getting more so, and magnetic tape and some optical media can be as bad or worse)...
Some archivers (such as RAR) not only detect and work around corruption -- remaining able to extract the corrupted files (with a warning) and the files that come after them -- but can also add data to an archive that allows recovery from minor corruption. Yes, such information makes the images a bit bigger, but would you take a 1% bigger image in exchange for the ability to recover from up to 1% of the image being damaged somehow? I would, even though I tend to prefer maximum compression! With encryption, the recovery data would have to target the final image (compressed and then encrypted) rather than the uncompressed (or compressed-but-unencrypted) data.
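The principle is simple enough to sketch. RAR's actual recovery records use Reed-Solomon codes, which can repair multiple damaged regions; this toy Python version uses a single XOR parity block, which can rebuild exactly one lost block, but the idea is the same:

```python
import os

BLOCK = 4096

def xor_parity(blocks):
    """XOR the blocks together; store the result alongside the archive,
    and any ONE lost block can later be rebuilt from the survivors."""
    out = bytearray(BLOCK)
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data_blocks = [os.urandom(BLOCK) for _ in range(8)]   # stand-in archive pieces
parity = xor_parity(data_blocks)                      # 1/8 = 12.5% overhead here

lost = 3                                              # a known-unreadable sector
survivors = [b for i, b in enumerate(data_blocks) if i != lost]
rebuilt = xor_parity(survivors + [parity])

assert rebuilt == data_blocks[lost]
print("block", lost, "rebuilt from the surviving blocks plus parity")
```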
I think it would be fascinating to know more about how Acronis deals with these matters, but I suspect they are 'trade secrets' and we are unlikely to ever know the really interesting details.