Backup Takes 42 Hours - I Must Be Doing Something Wrong?
I have a 1 TB (931 GB actual) secondary SSD drive on my laptop. The drive has 4 partitions (38 GB, 139 GB, 728 GB, 25 GB). It currently has 724 GB of data total across those 4 partitions.
I am doing a Disk backup for the first time. It's a non-scheduled (manual) Full backup, automatic cleanup to store no more than 1 version. I have validation on at creation. Compression level max (I really want to limit the resulting size). Priority high (the laptop is pretty much not being used while backing up). No other real settings of note (just ask and I'll clarify any others)
This is intended as my disaster recovery backup (in case I totally lose the drive and need to replace it) that I will run manually every 1-3 months. I also run a separate scheduled incremental file backup much more frequently for ongoing file recovery needs.
Anyway, I fired off the backup yesterday. As of today, it's only 30% done, and if this pace continues, it will take a total of about 42 hours.
I also did a similar Disk backup of the 256 GB primary SSD drive, with 184 GB of data in one primary partition (plus the System Reserved partition and I hope any other stuff needed for a true full backup, etc.). That backup took a total of 4 hours 37 minutes. That's a little odd, because that pace indicates the big drive should take more like 18 hours (I'm right about at 12 hours and 31% complete right now).
What gives? Why so slow? Is it because of the high compression setting? The validation?
I'm running True Image 2014 Premium build 6673. I've seen a couple of comments that build 6144 is faster, but I don't want to back down versions if not needed.
How can I do a thorough disaster backup (that can restore to a replacement drive and hopefully one that takes minimal space) in a reasonable amount of time? Should I be doing something else?
Is the incremental file backup the way to go for day-to-day file recovery protection? I'm assuming that will be a lot faster, or am I kidding myself on that?
Many thanks in advance for any tips!
Paul


Thanks for the reply! Sorry, I forgot to describe the target.
The target is a share on a Synology DS1513+ NAS via a Windows mapped drive over a 100 Mbps network to a 10.9 TB volume that has 5.6 TB free.
Target space is definitely not an issue, the NAS box is pretty hefty in terms of performance, and the interface to the storage should not be limiting (up to 100 Mbps).
Looking at the traffic, it's averaging only about 50 Kbps with peaks up to 600Kbps. It really seems the backup isn't driving very much data per second out to the NAS, which confuses me. That seems awfully low performing.
The laptop is a Dell Precision M4600 with a 2.3 GHz i7-2820QM, so plenty of horsepower. The True Image processes barely consume any CPU (maybe 6% peak), and sometimes they seem to stall completely at 0% CPU usage. I have the Priority set to High, so I'm confused why it isn't thumping the processor if compression is the limiting factor.
Next time, I will try it with less compression. What is the best setting for reasonable space savings while minimizing time - Normal?
If I'm understanding, I should validate in all cases, whether as part of the backup or separately, so I won't change that.
I will wipe this task and create a new one with different settings next time. I guess I kinda expect that changing settings should work correctly, but if that's the way it is, so be it.
I have no need for Incremental for this purpose. I simply want a disaster recovery backup (suitable for restoration to a replacement drive) that's reasonably current. I'm expecting I can restore the drive and then catch up with a more current File backup, or is that a misunderstanding on my part?

Don't rule out the use of Incrementals to assist the main backup. These could be monthly in between the full. The file backup assist is ok for data files but not for system files.
Question regarding the 256 system disk. Is this MBR or GPT?
When creating the backup, do make a disk image rather than a partition image.
Refer my signature link 2-A below for example (first picture).

BTW, Grover, your guides rock! I'm learning a lot more tips from them than I can figure out from reading the standard manual.
Thanks for taking the time to put all that together!
It seems to me I should just do a Drive backup for each of my drives, do a weekly Incremental with perhaps 8 incremental versions (or is that too many - is there a penalty for too many?), with 2 chains stored.
Is it really necessary to do a Drive backup for disaster drive-replacement recovery, and then a separate File backup for more "normal" quick file recovery use?
I guess I'm not clear on the usage of each type of backup and the scenarios they cover, and what combination (or single backup type) provides decent small business-level coverage?

Sorry - replies crossed each other in the ether. :)
Both SSD drives on the laptop are MBR.

Grover has given some top notch advice, as expected. Your backup speed is limited by many factors; compression, validation, and backup size are just a few. Most if not all NAS boxes run a Linux-based OS, so data transfer is made using the CIFS/SMB protocol. Windows and SMB do not always get along in this transfer because both want to be in charge of it. I am providing a link that describes some tweaks you may wish to try to increase transfer speed. Do not expect miracles, however.
http://betanews.com/2011/01/20/use-hidden-windows-tweaks-to-speed-up-yo…
The TI app contains a built-in data transfer throttle of 100MB/s, so you have a large enough pipe for the app to work with. I would go with normal compression because, as Grover says, the gain in space saved is not worth the time spent. That should give you a fair increase in overall speed. If you can determine that all of your network equipment (NIC, router, switch) supports Jumbo Frames, it would be worthwhile enabling that feature. All your equipment needs to support it, though, or it may have the reverse effect if enabled on only one or two items in the chain.
Realistically data transfer is usually limited by the hardware in the chain. Many home networks are comprised of equipment from a number of different manufacturers and when it comes to networking mixing manufacturers is truly a mixed bag as mileage can vary!

I would encourage that the 256 GB backup be a disk image (or drive image, if you mean everything).
As for the secondary, that appears to be just another data disk and the partitions could even be backed up individually for redundancy.
A weekly incremental accumulating to 8 before the next full (storing 2 chains) is workable. Do understand the limitation of the incremental type: any newer incremental has no value without the presence and readability of all prior incrementals since the last full.
2014 can handle the MBR type backups and restores.
Edit:
GH57. Assorted task features.
For your purposes, you may prefer the differential type backup instead of the incremental type.
http://www.acronis.com/en-us/support/documentation/ATIH2014/index.html#…
A restore of a diff backup only needs the full plus any single diff.
A restore of an incremental backup requires the full plus every incremental up to and including the selected one.
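The dependency difference can be sketched with a toy model (illustrative only, not Acronis internals; the file names here are made up):

```python
# Toy model of restore dependencies (illustrative, not Acronis internals).
def files_needed(kind, full, versions, pick):
    """Return the backup files required to restore the version at index `pick`."""
    if kind == "differential":
        return [full, versions[pick]]          # full + the one chosen diff
    if kind == "incremental":
        return [full] + versions[:pick + 1]    # full + every inc up to pick
    raise ValueError(f"unknown backup kind: {kind}")

full = "sunday_full.tib"
chain = ["mon.tib", "tue.tib", "wed.tib"]
print(files_needed("differential", full, chain, 2))  # full + just 'wed.tib'
print(files_needed("incremental", full, chain, 2))   # full + all three
```

Lose or corrupt "tue.tib" and the incremental restore of Wednesday fails; the differential restore does not care.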

Thanks for everyone's comments and help! I have some reading to do. :)
I appreciate the tips on backup types, compression, etc. I should be able to end up with a better scheme for my needs.
Compared to other file transfer operations to/from the NAS, it still seems disproportionately slow to me, even considering TI is analyzing the files, compressing data, copying the (smaller) compressed data to the NAS, and reading it all back from the NAS to verify it. 16.5 hours and still working on it (it says 44% complete). Hopefully that can be noticeably sped up with the compression settings.

NAS is tricky. Many users have issues. Sometimes it's not your PC or NAS that's the issue, but your router. Some types of routers get "flooded" or overloaded by the enormous data throughput that ATI creates, causing the transfer to fail or generate errors. I recall a couple of past threads where users' network issues cleared up after switching to a different router.

Well, I was running some LAN tests today, just to check for the type of issues you are mentioning.
File transfers of various sizes from 10 MB to 10 GB achieve a network throughput of 50 Mbps to 60 Mbps. When running a bunch of other apps, like torrents and other network transfers, I've seen my laptop report up to 87% of the 100 Mbps being used. There doesn't seem to be anything wrong with the router, and even with the disk IO and processing of the laptop and NAS, it's still getting 50-60% of the potential 100 Mbps network on a straight file transfer.
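For reference, a throughput check like that can be scripted. This is a rough sketch of the idea, not the exact tool I used; the path to the mapped NAS share is a placeholder:

```python
import os
import time

def write_throughput_mb_s(path, size_mb=100):
    """Time a sequential write of `size_mb` MB and return the rate in MB/s."""
    chunk = os.urandom(1024 * 1024)   # 1 MB of incompressible data per write
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())          # make sure it actually hit the target
    elapsed = time.perf_counter() - start
    os.remove(path)                   # clean up the test file
    return size_mb / elapsed

# e.g. on the mapped NAS drive (hypothetical drive letter):
# rate = write_throughput_mb_s(r"Z:\speedtest.bin")
# print(f"{rate:.1f} MB/s = {rate * 8:.0f} Mbps")
```

On a 100 Mbps link a result around 6-11 MB/s (50-90 Mbps) is in the healthy range.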
I'm not seeing ATI generating any sort of "enormous data throughput." During the backup phase, it was generating network throughput on the order of 50-60 KBps (I mistyped on an earlier reply and said Kbps, so actually 400-480 Kbps). I'm not seeing evidence of network errors, although I may not know how to properly check on that (tips welcome).
The validate phase that's running right now seems to be using about 1.2 Mbps to 1.6 Mbps of network bandwidth. Even considering the file transfer, decompression and compare overhead, I guess I would expect a validate of 724 GB of data to take less than 17-18 hours.
I'm sure there's a lot I don't know about all this (I'm technical but certainly not an expert in networking), but I'm assuming since I'm configuring ATI to backup to a mapped shared drive on the NAS (the same one I used for the speed test) that it's essentially doing the same type of file read/write to the NAS share, no?
I will have to watch the backup next time it runs. I don't think I was seeing ATI use much CPU during the backup phase, which confused me - I would have thought the compression would be thumping the CPU. It didn't seem ATI is multi-threaded - only one core seemed to be utilized and even then at about 15%-25%.
I know I'm doing a big frickin' backup of 724 GB, so I'm not expecting miracles. But it really seems like the backup + validate (yes, I know I set it to max compression this time) is going to end up taking about 40 hours. That just seems really long to me. Is that the level of performance everyone is seeing (18 GB per hour for backup + validate at max compression)? Is there a thread where people publish performance metrics to compare? I'll live with what I need to, but I want to fix and tune what I can.
I'll try again in a week or two when I have some free time to monitor things, with the compression level set to Normal next time, and see what happens.
Thanks again to everyone for tips and pointers!

An interesting angle to my backup slowness that I discovered...
From my prior posts, it seemed odd that the network bandwidth was not being maxed out, and the CPU was not being maxed out (even with the Max compression setting), but yet the backup seemed really sluggish. A straight file write/read test to the NAS showed an acceptably fast transfer. So it seemed that the slowness was ATI.
The 1 TB drive that took the 42 hours to backup is a Samsung 840 EVO SSD. It benchmarks pretty zippy, but...
Apparently, there may be some issues with performance degradation with this drive. Recently written files read very quickly as expected. Files that were written a while ago (the aging seems to be something larger than a month) show markedly slower read times. The drop off reported is dramatic, for example from 450 MBps to 150 MBps.
A thread on this issue can be found at: http://www.overclock.net/t/1507897/samsung-840-evo-read-speed-drops-on-…
I don't want to jump to conclusions, and honestly I don't grok everything going on here, but if my backup is reading a lot of older files (which it is for a full backup) and the 840 EVO really does have this problem, it would explain why things seem so slow when ATI is advertised (and reported by others and reviews) to be reasonably fast, and my laptop CPU, network and NAS are measurably fast enough in other circumstances. I've noticed incremental and differential backups seem to be pretty quick, which would of course be very recently written files.
Thoughts?

That is truly weird. I had not heard of such an issue with a HD.
In my long experience with ATI, usually when backup is unexpectedly slow it's due to some sort of hardware issue, even if that issue is difficult to discover.

Yep, there is definitely an issue with the Samsung 840 EVO SSD drives. Sectors on the drive with data more than 3 months old (some reports say as little as 1 month), have dramatically slower read speeds.
Both the thread I referenced above, and many others in other forums are talking about it. Samsung is reportedly working on a firmware update to fix it, but no release date for that yet.
I confirmed I was hit with the bug. I ran a benchmark of the read speed of my drive:
Note the crazy slow read speeds for about 500 GB (half) of the drive! Around 50 MB/s, yikes! The average across the drive was only 144 MB/s, for a drive that should benchmark around 300-400 MB/s in normal use.
I ran a utility called DiskFresh on the drive. DiskFresh will read and rewrite each sector on the drive. Theoretically, this should make all data on my drive new and restore the read speed. Another benchmark after the refresh:
Yep, there we go, back to a 315 MB/s average read speed, and note how even the speed is across the disk. That's about 2.2 times the degraded average, an increase of roughly 119%!
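Conceptually, what DiskFresh does is read each sector and write the same bytes back, so every cell holds freshly written charge. Here's a minimal sketch of the same idea applied to an ordinary file (the real tool works on raw sectors with admin rights and far more safeguards; don't point anything like this at a raw device):

```python
import os

def refresh_file(path, block_size=1024 * 1024):
    """Read each block of a file and rewrite the identical bytes in place."""
    with open(path, "r+b") as f:
        offset = 0
        while True:
            f.seek(offset)
            block = f.read(block_size)
            if not block:
                break
            f.seek(offset)
            f.write(block)            # same data, freshly written
            offset += len(block)
        f.flush()
        os.fsync(f.fileno())          # push the rewrites out to physical media
```

The data is byte-for-byte unchanged afterward; only its "age" on the flash is reset.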
So, now I'm running another full disk image backup. It's targeted to finish up about 9:00 PM this evening, and I'll post a summary when it's done. I can tell you it shows Acronis is now using the full capability of my network, but that the overall backup time didn't really get much faster. Stay tuned...

The app backup time estimate has always been inaccurate, will be interesting to see how it really does. Nice find by the way!

I need to clear up some confusion in this thread. The OP is backing up 724GB of data over a 100Mbps network connection ... that's insane. Allow me to explain.
Network speeds are designated using megabits per second, in this case 100Mbps. The easiest way to do the math is to show the network speed as megabytes per second. There are 8 bits in a byte so a 100Mbps connection would be 12.5MBps (100/8). Note that small "b" is bits and capital "B" is bytes.
There are 1024 megabytes in a gigabyte, so 724GB of data is 741,376MB. Even if you could achieve the theoretical max of 12.5MBps it would take 16.5 hours to move that much data (741,376MB / 12.5MBps = 59,310s / 60 = 988.5m / 60 = 16.5h). Typically you lose around 10% of the bandwidth to overhead, so a realistic transfer rate is closer to 10MBps; however, if it's taking you 42 hours, that means you are averaging only 4.9MBps. If the NAS is also on a 100Mbps connection, then any other traffic accessing the NAS will severely impact the transfer rate of any given session.
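The same math as a quick sketch (binary GB-to-MB conversion as above; `efficiency` is the fraction of the link you actually get):

```python
# Transfer-time math from the paragraph above (binary GB -> MB, as used there).
def transfer_hours(data_gb, link_mbps, efficiency=1.0):
    data_mb = data_gb * 1024                    # GB -> MB
    rate_mb_s = link_mbps / 8 * efficiency      # Mbps -> MB/s
    return data_mb / rate_mb_s / 3600           # seconds -> hours

print(round(transfer_hours(724, 100), 1))       # 16.5 h at the theoretical max
print(round(transfer_hours(724, 100, 0.8), 1))  # 20.6 h at ~80% efficiency
print(round(transfer_hours(724, 1000, 0.8), 1)) # 2.1 h on gigabit, same efficiency
```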
My advice, replace your switch with a 10/100/1000 version (preferable one that can do bonding) and watch your backup time drop to 5 hours.
I have a client with a Synology 1512 (2012 version of your unit) and he hits 110MBps consistently from his laptop out of a theoretical limit of 125MBps. His NAS is connected to the switch with both network jacks bonded for a 2Gbps link from the NAS to the switch, so even if there are multiple users accessing it, performance doesn't take a hit.
I understand what you are saying about the 184GB backup taking 4 hours and 37 minutes (which is pretty impressive on a 100Mbps connection, given that the average transfer rate was 11.3MBps), but please realize that with a better switch it would only take about 30 minutes.

Well said Daniel, link aggregation definitely improves performance, that is for certain. I think the OP is using a laptop here, however, so that probably isn't possible. I would agree that a decent switch between the laptop and the NAS would work better than having a router in the mix.


Aw, man, you're stealing my thunder! :) I was going to wait until the backup finished in another 7 hours or so, but yeah, my conclusion is that I'm saturating the network bandwidth now, and I'll never get a full backup of the 725+ GB of data in less than 35 or so hours. I think Acronis is getting pretty much everything it can out of my environment. Fixing the SSD weirdness bought me a little improvement, but not much.
I think the primary drive backup was proportionately faster because its data compressed far more (184 GB original to an 84 GB backup, about a 54% reduction) than the secondary drive, which holds lots of already-compressed FLAC audio (724 GB original to a 679 GB backup, only about a 6% reduction). Using the network rate I am observing for the backup (about 37 GB/hr), the 84 GB would have moved in about 2 hours 15 minutes, which is half the 4 1/2 hours I reported; the other half was the validate. All the calculations work out with the real data in front of me.
As far as being "insane," well, I'm crazy for a variety of reasons, but this crept up on me over the past couple of years. I'm not a networking expert, so until I really sat down yesterday and looked at the metrics and did the calculations, I really didn't grok all that was going on. I was just really surprised by the 42 hours.
I do a lot of audio recording and editing on my laptop. Previously with a smaller data drive, I had not kept so much audio on the laptop, and with the 1 TB drive I started being less diligent about cleaning it off regularly, plus I had just completed a project this month with a lot of new audio.
The 100 Mbps network has been quite sufficient for everything in the office to date. But now I have a lot of data on the drive, and my backups have gotten way long on the slower network. It's really the only data transfer happening right now that exposes the limit of the network bandwidth.
Yes, I'm looking into a gigabit router/switch, since both the laptop and NAS have gigabit network ports on them (the NAS has 4 parallel gigabit ports, so it can have a really big pipe in-out, the laptop only one but perhaps it could get an additional adapter added somehow?), and I expect I will be doing more projects that load up the laptop with audio.

Few things ...
1st - I wasn't suggesting that you are insane, only surprised that none of the responses seemed to address the 100Mbps bottleneck :)
2nd - the 4 ports on the NAS allow you to utilize a pair of bonded connections, not a single connection over quad cables. The purpose is to provide failover protection in the event one of the bonds drops for whatever reason. You would still get 2Gbps maximum even using all 4 ports. That being said, there is no point in bonding if you don't access the NAS from more than one device. Still, I would get a switch that supports bonding, even if only for future-proofing. I used a Cisco SG-200 series for my client, affordable and powerful. If you can spare the extra coin, go for the 300 series (layer 3 switching is the bomb; in hindsight I should have done that).
3rd - adding an adapter wouldn't give you the option of bonding ... bonding is a function of specific network controllers which have more than 1 physical point of connection. So you can't just ask Windows to combine a couple of random network adapters that are plugged into a system.
4th - it's still too new to really rock and roll, but wireless AC is supposed to have a throughput of up to 1.3Gbps in the near future and 3.47Gbps in the long term. You may want to have a look at what AC adapters and access points are available and see if they can give you a wireless gigabit connection.

Guys, this is all good stuff but I need to point out here that the ATI 2014 app limits network bandwidth to 100Mbps.

Enchantech wrote: Guys, this is all good stuff but I need to point out here that the ATI 2014 app limits network bandwidth to 100Mbps.
Wow, that's a head-scratcher. Why would ATI not make use of gigabit networks?
To be honest, I am not liking ATI 2015 at all. I want a full-featured backup program that lets me have detailed control over everything. 2015 seems to have stripped out too many features, and the Metro interface seems to be designed for the lowest common denominator, not power users.
So, I was considering not updating and just sticking with ATI 2014. But if it's indeed been crippled for making use of gigabit networks, then I guess I'm going to need to find something else that meets my needs. Kinda pisses me off after having invested non-trivial time into ATI - not this adventure, but I had to learn the hard way that changing the settings of backups often doesn't work and I had to tinker around to get an image that actually backed up and restored the system drive properly. Plus I have a lot of ATI backup sets around, but I guess I can make an ATI boot CD before I uninstall ATI.

OK, as promised, to wrap this thread up (I'll try to keep my B's and b's straight for the metrics :)...
The short summary is that there is nothing slow with ATI 2014. It ended up using all the capacity it had available. The limiting constraint in my environment is the 100 Mbps network.
After fixing my SSD issue (see about 9 messages above), I re-ran a drive image backup of the 1 TB SSD drive.
The drive has a total of 728.3 GB of data on it. The resulting backup size is 685.9 GB (only 6% compression at the max setting; lots of FLAC audio files).
Backup settings are for an unscheduled, full drive mode backup, validation on, max compression, high priority, no transfer rate limit. I left the compression at max to compare it more directly to my prior experience above (it turns out the compression didn't seem to matter).
Enchantech asked about the reported estimated completion. Here's a summary (all times are hh:mm):
- 9/21 09:02 - Started the drive image backup
- 9/21 10:02 - 01:00 elapsed, reports 25:00 remaining (26:00 estimated total), reports 2% done (calculates to 50:00 total)
- 9/21 11:02 - 02:00 elapsed, reports 28:00 remaining (30:00 estimated total), reports 5% done (calculates to 40:00 total)
- 9/21 14:02 - 05:00 elapsed, reports 28:00 remaining (33:00 estimated total), reports 14% done (calculates to 35:42 total)
- 9/21 16:02 - 07:00 elapsed, reports 26:46 remaining (33:46 estimated total), reports 19% done (calculates to 36:48 total)
- 9/21 18:02 - 09:00 elapsed, reports 25:18 remaining (34:18 estimated total), reports 25% done (calculates to 36:00 total)
- 9/21 21:02 - 12:00 elapsed, reports 22:48 remaining (34:48 estimated total), reports 33% done (calculates to 36:22 total)
- 9/22 04:05 - 19:03 elapsed, finished the backup, started the validate (I was asleep, didn't catch the reported time and %)
- 9/22 10:02 - 25:00 elapsed, reports 11:05 remaining (36:05 estimated total), reports 68% done (calculates to 36:46 total)
- 9/22 13:02 - 28:00 elapsed, reports 08:18 remaining (36:18 estimated total), reports 76% done (calculates to 36:51 total)
- 9/22 17:02 - 32:00 elapsed, reports 04:44 remaining (36:44 estimated total), reports 86% done (calculates to 37:13 total)
- 9/22 20:02 - 35:00 elapsed, reports 01:50 remaining (36:50 estimated total), reports 94% done (calculates to 37:14 total)
- 9/22 22:13 - 37:11 elapsed, finished validate, validate elapsed time 18:03
For the first 5 hours, the reported remaining time, the percentage complete, and the resulting calculated total were off; for the first 2 hours, they were off by a lot. But then the reported remaining time and percentage stabilized to within 10% of the actual, and for the last 5 hours they were really close.
I observed some performance metrics during the backup:
- The network usage was pretty stable at 82.9 Mbps average, which calculates to 36.45 GB/hr, which calculates to an elapsed time for the backup of 18:49. The actual time was 19:03, within 14 minutes!
- The disk IO for reading averaged 10 MB/s, but was very bursty.
- The CPU showed one core was running an average of 30%, ranging from 20% to 50%. There were 4 cores showing activity above 10% during the backup, so maybe ATI is multi-threaded?
- The working set for ATI was 115 MB, and there was 4.8 GB of 16 GB physical memory free.
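Those numbers cross-check. Here's a small helper repeating my arithmetic (binary MB/GB assumed, same as the bandwidth math earlier in the thread):

```python
# Cross-check of the backup-phase math above (binary MB/GB assumed).
def gb_per_hour(mbps):
    """Convert a network rate in Mbps to GB moved per hour."""
    return mbps / 8 * 3600 / 1024        # Mbps -> MB/s -> MB/hr -> GB/hr

def hours_needed(data_gb, mbps):
    """Hours to move `data_gb` at a sustained rate of `mbps`."""
    return data_gb / gb_per_hour(mbps)

print(round(gb_per_hour(82.9), 1))           # ~36.4 GB/hr
print(round(hours_needed(685.9, 82.9), 1))   # ~18.8 h; the actual was 19:03
```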
My conclusions for the backup:
- The network was pretty maxed out at 82% of capacity. Perhaps I can tune this to get a little more out, but I saw readings of up to 88 Mbps so I am pretty satisfied this is the constraint on the backup speed and it's not going to get any faster.
- The SSD is capable of averaging 315.7 MB/s per my benchmarking, so ATI was using only about 3% of the disk read capacity.
- The CPU was nowhere near maxed out, even at max compression. If the one core being used most heavily is the constraint, it still had another 50% available.
Metrics for the validate:
- Network usage a little higher at 87.3 Mbps average, which calculates to a validate time of 17:43. The actual was 18:03.
- Disk IO was really low, less than 1 MBps on the average. This doesn't seem right?
- CPU usage was about the same, one core averaging about 30%. But there was no activity on other cores, so validate seems single threaded.
- Working set was 124 MB, still over 4 GB free.
My conclusions for the validate:
- Same deal, constrained by the network transfer rate. Opposite direction this time, but pretty much maxed out.
The total elapsed time was 37 hours 11 minutes. Compared to 42 hours for my prior backup that led to my original posting. It seems fixing the SSD issue gained about 5 hours of time (about a 20% increase, nothing to sneeze at). But that just revealed the next lowest constraint which is the 100 Mbps network.
If I upgraded to a gigabit network, well apparently ATI wouldn't use it anyway. But if it would, it seems the next lowest constraint would be the remaining 50% of the CPU core, so I might cut the time by half to about 18 1/2 hours. Then, maybe reducing the compression rate from max to normal might gain me some more. I doubt it would ever reach the limit of the SSD read rate, since it could go 33 times faster before doing that, and I don't think I have enough CPU to drive that - well, maybe with no compression at all.
Anyway, like I said, nothing wrong at all with ATI in all this. I am disappointed to hear it won't go above 100 Mbps even if I upgrade my network. Combined with the dismal first look at ATI 2015, I guess I'm going to be looking for a power-user backup program that can make full use of a gigabit network. Suggestions welcome (so far, I like Macrium).

I must disagree with Enchantech's assertion that ATI throttles bandwidth to 100Mbps. I just backed up 9GB of data over my network in 4m 25s which works out to an average of 278Mbps, not exactly stellar but still ... I did a 2nd file backup of 4GB and watched the Performance graphs, ATI network usage peaked at just over 500Mbps.
Agile - ATI does indeed utilize multiple cores. Not sure why you're not getting equal treatment across the cores, but a client of mine using an old Athlon II X2 255 (dual core) gets around 80% usage on both cores while performing a backup, and my i5 3330 hits about 35% on all 4 during backup.
Discuss amongst yourselves :)

Well, I guess the only way I can know for sure is to upgrade my network and run some backups.
Yes, 4 cores were being used by the backup (well I assume they were, since it was the only thing running and using any CPU at the time). One of those cores was being used an average of 30%, ranging between 20% and 50%. The other three cores were being used an average of 15%, with a range from 5% to 20%, so not as much as the one core was. My laptop CPU is an i7 2820QM, with 4 cores (8 threads) @ 2.3/3.4 GHz, so it has some muscle.
I'm guessing one thread is managing everything, hence the greater CPU usage, while the others are worker threads for compression or something. I probably just have a powerful enough CPU vs. the network constraint that none of the cores was being slammed in any way. It probably didn't need to compress much data to keep ahead of the network transfer. I bet if I could open up the network bandwidth (and ATI used it), the processor usage would be the next constraint, and they'd show lots more usage.
But during the validate, only one core was being used. Or at least if the others were being used, it was negligible. That core was being used about the same as the backup - 30% average, between 20% and 50%.
I'm guessing the validate needs less CPU, probably because decompressing and comparing data is less CPU intensive than compressing.
As a comparison, I did an image backup of my primary system disk with Macrium. The results (the ATI numbers were rounded):
- Acronis: 4 hours 37 minutes for 184 GB of data (39.86 GB/hr), resulting backup size 84 GB (54.3% compression)
- Macrium: 5 hours 20 minutes for 193.7 GB data (36.31 GB/hr), resulting backup size 98.47 GB (49.1% compression)
So, they're mostly comparable. Acronis was 9.7% faster, and compressed 10.6% more, on this test.
I'm, probably just gonna sit tight for now, and keep using ATI 2014 until I upgrade the network sometime in the next 1-2 months.
All my testing showed ATI is getting all it can from my environment. Macrium or some other backup program would have to show some dramatic benefit to switch right now. All the comparisons I've read and done show that ATI 2014 is among the best anyway, unless it really does throttle the network usage at 100 Mbps and others don't. ATI does have some weirdness when it can't access old backup chains sometimes (says it can't locate some of the backups even when they're there), which makes me a little nervous to rely on it for disaster recovery.
From what I've seen, and all the comments I've read in the forums, no way I'm going to update to ATI 2015 right now because of the feature removal and the interface for dummies. We'll see how that plays out over the next month or two.
Thanks for everyone's comments and help! It was an adventure, and I learned some new things along the way.

I must apologize, gentlemen, I mistyped the max bandwidth myself! The limit is 100MBps; it happens when you're in a hurry. See this thread:

Interesting. Being the geek that I am, I of course had to follow that link and trail. :)
The thread you reference is referring to disk transfer speed, not network transfer speed. It would still be an overall backup speed limitation, of course, if disk speed was the constraint (like it seems to be in the linked posting which was a disk-to-disk backup on the same machine).
The mysterious file mentioned that contains the backup setting is located (on my machine) at C:\ProgramData\Acronis\TrueImageHome\Scripts\ . The file names are pretty cryptic, but I only have two backups and one was created new on Sunday, so by looking at the time stamp, it was obvious which one it was (in my case 20B384F2-946A-4EFF-AF51-D6B531702DB4.tib.tis ).
It's an XML file, so it's readable and editable by a standard text editor like Notepad or PilotEdit or whatever. All the tags are named pretty well, so it's pretty easy to see what all the settings are.
There are two settings that seem to control speed limits:
<net_speed_limit speed_limit_mode="absolute" value="0" />
<disk_speed_limit speed_limit_mode="absolute" value="99999" />
The net_speed_limit setting is available from the UI on the Performance tab of the backup settings; the disk_speed_limit setting is not.
It appears the disk_speed_limit setting is what is being referenced in the linked post. That poster thinks the 99999 is in KBps, which would be 97.66 MBps (781.2 Mbps).
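For what it's worth, here's a quick sanity check of that KBps-to-MBps math in Python. The sample XML below is just the two tags quoted above; a real .tib.tis script file contains many more settings around them, so treat this as a sketch, not the actual file format:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for the script file's contents; a real .tib.tis
# script has many more tags wrapped around these two.
sample = """<settings>
  <net_speed_limit speed_limit_mode="absolute" value="0" />
  <disk_speed_limit speed_limit_mode="absolute" value="99999" />
</settings>"""

root = ET.fromstring(sample)
for tag in ("net_speed_limit", "disk_speed_limit"):
    kbps = int(root.find(tag).get("value"))  # value appears to be KB/s
    print(f"{tag}: {kbps} KB/s = {kbps / 1024:.2f} MB/s = {kbps * 8 / 1024:.1f} Mbps")
# last line -> disk_speed_limit: 99999 KB/s = 97.66 MB/s = 781.2 Mbps
```

So if the KBps interpretation is right, 99999 works out to the 97.66 MBps (781.2 Mbps) figure above.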
My Samsung disk benchmarked up to 364.4 MBps (it can theoretically hit a 540 MB/s sequential read rate), so I might hit this limit if I upgraded to a gigabit network. I've made a note to try raising this value once I upgrade my network. It will be interesting to see whether the backup rate maxes out at ~780 Mbps, and whether it can be increased with this setting. But I think I'd max out the CPU before that...

You are correct, network speed and disk speed are closely related: if you push data onto the network at a faster rate than the rest of your hardware (the disk drive subsystem) can handle, you could overwhelm the network, which would result in failure. The link I provided illustrates that it is indeed possible to increase disk-to-disk transfer speed. The network is, of course, another possible bottleneck layer in the process, so an investment in some decent network equipment will allow for faster data transfer over the network, and disk systems that can handle higher bandwidth can take full advantage of that equipment. In the end it is all about matching the equipment to the task and the desired end result. You know where you would like to go and what you would like to do, and by now you should have a very good idea of how to get there.

Just to provide a bit of clarity, it is technically incorrect to suggest that you could "panic" the network and possibly cause failure. The various pieces of hardware couldn't care less what they are interconnected with. Consider the following scenario:
- the disk drive subsystem of the backup source is capable of reading data at 540 MB/s
- the backup destination can write data at 196 MB/s
- the network is 100 Mbps
Acronis will process the data on the source machine and Windows will buffer it to the network adapter. If there are transmission errors (typically caused by congestion when the network link is saturated), there will be retransmissions and the transmission speed will automatically be lowered until the errors cease. At that point the speed will slowly be increased until errors start appearing again, and so on and so forth. The transmission speed is controlled entirely (and automatically) by the network stack; the application has nothing to do with it ... it just keeps the buffer full. The net result will be a transfer rate somewhere in the realm of 10 MB/s. The source hard drive is bored, the source CPU is bored, and ditto for the destination. The network hardware is going full tilt.
Now look at the same thing except with a 1 Gbps network. The theoretical maximum transfer rate is 125 MB/s, and the source and destination are still both capable of exceeding that. The network is still the bottleneck, except now you should see a net result of over 100 MB/s depending on the network hardware.
Finally, consider building a 2nd Synology to back the 1st one up to. Using a switch that can bond the connections, you could create a 2 Gbps link between the DiskStations. That would give a theoretical transfer rate of 250 MB/s ... the source DiskStation can read at 387 MB/s, but the destination would become the bottleneck, writing at 196 MB/s.
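The three scenarios boil down to "the slowest link wins," which is simple enough to sketch in Python (these are theoretical ceilings from the figures above; protocol overhead puts real-world rates somewhat below them):

```python
def bottleneck_mb_s(source_read_mb_s, net_mbps, dest_write_mb_s):
    """End-to-end throughput cap: the slowest of source read,
    network (Mbps / 8 converts to MB/s), and destination write."""
    return min(source_read_mb_s, net_mbps / 8, dest_write_mb_s)

print(bottleneck_mb_s(540, 100, 196))   # 100 Mbps LAN: network-bound at 12.5 MB/s
print(bottleneck_mb_s(540, 1000, 196))  # gigabit: network-bound at 125 MB/s
print(bottleneck_mb_s(387, 2000, 196))  # bonded 2 Gbps: destination-bound at 196 MB/s
```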
Any questions?

Yeah, chasing after the next constraint and trying to optimize every link in the chain would be a never-ending quest. Fun, in a geeky sort of way, but not really the point.
I was hoping I could run a full image backup of the 1 TB drive overnight. That's ultimately my goal, to fire it off at 11:00 PM and have it be done by 7:00 AM. That's why the original 42 hours was such a shock, and even 37 hours is not what I have in mind.
I don't know if I can ever get it 4.5 times faster with the gigabit network. Like I said, I think the CPU might be the next bottleneck, and I also have less compression headroom to play with, since the majority of the audio data won't compress anyway.
If I could at least fire off a full image backup Friday night at 11:00 PM and have it be done by Saturday afternoon, that would be a good goal. That's only asking for 2.5 times the performance.
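Putting rough numbers on those backup windows (using the ~724 GB of data from my original post; compression would lower the actual bytes written, so these are worst-case sustained rates):

```python
def required_mb_s(data_gb, window_hours):
    """Sustained throughput needed to move data_gb within window_hours."""
    return data_gb * 1024 / (window_hours * 3600)

print(f"{required_mb_s(724, 8):.1f} MB/s for the 11 PM - 7 AM window")   # ~25.7 MB/s
print(f"{required_mb_s(724, 37):.1f} MB/s at the current 37-hour pace")  # ~5.6 MB/s
```

So the overnight goal needs roughly 26 MB/s sustained, well within a gigabit network's ~125 MB/s ceiling if the CPU can keep up.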
I also should mention that after reading through the user guide for ATI 2015, I was probably a little too harsh on the interface. I don't really like the user experience of it, but it does seem to allow access to all the detailed backup settings that I use in ATI 2014.