[SOLVED] Backup / restore speed via Linux recovery environment still lacking
ATIH b6023
I have done a full backup of a system (Samsung EVO 850) over the network (1 Gbit) to another computer (SMB share on a Samsung EVO 850) and expected that ATIH would be able to saturate the 1 Gbit link and deliver a correspondingly fast backup.
This was not the case.
Write speed was only about 50 MB/s, and network usage was only about half the link speed.
To be honest, when I simply copy the tib file from one Windows 10 computer to the other, I get about 112 MB/s. I don't know why Acronis cannot match that and take advantage of the available infrastructure and fast devices.
From my observations, backup and restore performance is only really acceptable for local backups/restores - this also applies to the ATIH 2016 Windows environment. As soon as SMB (LAN) is involved it slows down considerably, for no obvious reason. This has been reported several times but has not been tracked down yet.
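Just to put numbers on what I would expect: a quick back-of-the-envelope sketch (standard Ethernet/IP/TCP header sizes only, no SMB overhead included) of what a 1 Gbit link can deliver as payload at MTU 1500, compared to what I actually see:

```python
# Rough sketch: theoretical TCP payload throughput of a 1 Gbit/s link at MTU 1500.
# Standard Ethernet/IP/TCP header sizes; real numbers vary with TCP options and SMB overhead.
LINK_BPS = 1_000_000_000          # 1 Gbit/s line rate
MTU = 1500                        # bytes of IP payload per Ethernet frame
ETH_OVERHEAD = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20          # IPv4 + TCP headers without options

frames_per_second = LINK_BPS / 8 / (MTU + ETH_OVERHEAD)
payload_mb_per_s = frames_per_second * (MTU - IP_TCP_HEADERS) / 1_000_000

print(f"Theoretical TCP payload: {payload_mb_per_s:.0f} MB/s")            # ~119 MB/s
print(f"Observed ATIH speed:     50 MB/s = {50 / payload_mb_per_s:.0%}")  # ~42% of that
```

A plain Windows file copy gets close to that theoretical figure, which is why 50 MB/s feels so low.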


I wonder what drivers are being used in the Linux recovery environment? Dmitry, is it possible that they are older drivers and/or set to half duplex by default instead of full duplex?
EDIT: Karl, how does the transfer speed compare when using the offline WinPE bootable recovery media? If possible, can you create WinPE with the Windows 10 ADK, as it should theoretically use the newest drivers available.

I do not think it is a driver problem. I can confirm what Dmitry posts on my network using the two methods being discussed. I do reach higher speeds than Dmitry reports; however, that is due to my ability to use jumbo frames on my network, something that Karl is unable to achieve. That fact leads me to think that he may have a hardware problem. It would be necessary to set up a network identical to Karl's to bear that out, I think.

Indeed, I still cannot use jumbo frames because my Cisco L2 switch refuses to forward them and instead fragments them down to the usual 1.5K MTU size, even though the switch is configured to allow 9K jumbo frames. I think I need to open a support case with Cisco, because my L3 switch does accept jumbo frames.
I use mturoute to test this (a rough sketch of that kind of check is at the end of this post).
Unfortunately I have no network cable available to connect my computer directly to the L3 switch.
@Bobby @Dmitry I have a free day tomorrow, so I might test the restore speed in Linux and WinPE (Win10).
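By the way, here is roughly the kind of check mturoute performs, as a small Python sketch I could use as an alternative (it assumes the target answers pings; the host name is a placeholder and the don't-fragment flags are Windows ping syntax):

```python
# Binary-search the largest unfragmented ICMP payload (similar idea to mturoute).
# Windows ping syntax: -f sets "don't fragment", -l sets the payload size.
# "nas-host" is a placeholder; path MTU = payload + 28 bytes of IP/ICMP headers.
import subprocess

def ping_df(host: str, payload: int) -> bool:
    """True if a single unfragmented ping with this payload size gets a reply."""
    result = subprocess.run(["ping", "-n", "1", "-f", "-l", str(payload), host],
                            capture_output=True, text=True)
    return result.returncode == 0 and "fragmented" not in result.stdout.lower()

def probe_path_mtu(host: str, low: int = 1200, high: int = 9000) -> int:
    while low < high:
        mid = (low + high + 1) // 2
        if ping_df(host, mid):
            low = mid           # payload fits, try bigger
        else:
            high = mid - 1      # too big, shrink
    return low + 28             # add 20 (IP) + 8 (ICMP) header bytes

print("Path MTU towards nas-host:", probe_path_mtu("nas-host"))
```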
Attachment: 330152-125785.zip (42.64 KB)

If you have jumbo frames enabled on any hardware in your network but the rest of the network does not support them, that can cause problems, including reduced transfer speed. My suggestion is to set all hardware to MTU 1500 and see if things improve.

Well Enchantech, as a CCNA I have a good understanding of these things. As I said, I have completely disabled jumbo frames and set everything to MTU 1500, because one of my L2 switches for some reason erroneously refuses to forward larger frames even though it is configured for them. However, jumbo frames were not the original topic of this thread.
In fact, while I had jumbo frames active on the NICs and both switches, the speed degraded to 60-80 MB/s because the L2 switch fragmented the 9K frames into 1.5K frames (rough math on that at the end of this post). But again, that is not the issue we are discussing here at all :)
I am currently testing the original issue more extensively to see whether I can reproduce the Linux environment being slower than Windows, but at the moment it actually looks like the opposite. I will post full results later on.
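For the record, here is the rough math on why the switch-side fragmentation hurt (a sketch with standard header sizes; the real cost is mostly the per-packet processing on the switch and reassembly on the receiving host, not the extra header bytes):

```python
# A 9000-byte jumbo IP packet chopped down to MTU 1500 by the switch.
import math

JUMBO_MTU, STD_MTU, IP_HDR = 9000, 1500, 20
ETH_FRAMING = 14 + 4 + 8 + 12                 # header + FCS + preamble + inter-frame gap

data = JUMBO_MTU - IP_HDR                     # 8980 payload bytes in the jumbo packet
per_fragment = (STD_MTU - IP_HDR) // 8 * 8    # fragment offsets must be multiples of 8
fragments = math.ceil(data / per_fragment)

extra_bytes = (fragments - 1) * (IP_HDR + ETH_FRAMING)
print(f"{fragments} fragments instead of 1 frame, "
      f"{extra_bytes} extra header bytes (~{extra_bytes / JUMBO_MTU:.1%} more on the wire)")
```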

That was a hard day of testing, to be honest, and at the end of the day I have proven my own thesis wrong. In fact, the Windows-based backup is much slower than the Linux-based one. I haven't tried restores, as these passes already took a lot of time.
Here are the hard facts:
source: Windows 10 build 10586.104, GPT, uncompressed size 58.2 GB, Samsung EVO 850
Backup target | Backup file size | Time (m:ss) | Efficiency compression / time | ATIH 2016 environment
---|---|---|---|---
compression off | | | |
Local USB 3.0 HDD | 52 GB | 9:32 | | Linux
Local USB 3.0 HDD | 51.2 GB | 14:01 | Windows takes a lot longer than Linux! | Windows
compression on / normal | | | |
Local USB 3.0 HDD | 37.2 GB | 6:35 | 71% compression, 69% time. Here compression saves a considerable amount of time relative to the amount of data saved. | Linux
Local USB 3.0 HDD | 36.2 GB | 8:07 | 70% compression, 57% time. Here compression saves a considerable amount of time relative to the amount of data saved. | Windows
source: Windows 10 OS, GPT, uncompressed size 58.2 GB, Samsung EVO 850
Backup target | Backup file size | Time (m:ss) | Efficiency compression / time | ATIH 2016 environment
---|---|---|---|---
compression off | | | |
Local SSD Samsung EVO 850 | 52 GB | 2:06 | | Linux
Local SSD Samsung EVO 850 | 51.2 GB | 2:36 | | Windows
compression on / normal | | | |
Local SSD Samsung EVO 850 | 37.2 GB | 1:57 | 71% compression, 92.8% time. Compression is inefficient here, as it hardly saves any time even though the amount of data to transfer is reduced by 29%! No CPU bottleneck! | Linux
Local SSD Samsung EVO 850 | 36.2 GB | 2:46 | The compressed backup in Windows even takes longer than the uncompressed one! | Windows
source: Windows 10 OS, GPT, uncompressed size 58.2 GB, Samsung EVO 850, network parameters: 1 Gigabit Ethernet, cable, MTU size 1500
Backup target | Backup file size | Time (m:ss) | Effective network speed | Efficiency compression / time | ATIH 2016 environment
---|---|---|---|---|---
compression off | | | | |
Remote OCZ Vertex 4 | 52 GB | 8:30 | 920 Mbit | | Linux
Remote OCZ Vertex 4 | 51.2 GB | 10:36 | 640-710 Mbit | Backup via Windows delivers lower network speed in general! | Windows
compression on / normal | | | | |
Remote OCZ Vertex 4 | 37.2 GB | 6:16 | 870-920 Mbit | 71% compression / 73% time; compression is inefficient as it degrades network speed by 2%. No CPU bottleneck! | Linux
Remote OCZ Vertex 4 | 36.2 GB | 8:52 | 640-710 Mbit | 70% compression / 83.6% time; compression is inefficient as it degrades network speed by 13.6%. No CPU bottleneck! | Windows
Summary: if you need a fast backup, use Linux and disable compression, even if you have a very powerful machine (CPU / SSD)!
It is very interesting that the Linux NIC drivers together with ATIH seem to perform much better than ATIH under Windows does, even though Windows on its own handles large transfers well. I have no explanation for that. Have a look at the results here: http://forum.acronis.com/de/node/99462#comment-329272
My initial claim that the Linux environment is slower than Windows when backing up has been thoroughly disproven; in fact the opposite has been shown.
edit: updated performance relations, added missing USB 3.0 HDD Linux w. compression data
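For anyone who wants to re-run the numbers, this is essentially how the efficiency columns above were calculated (a small sketch; the example values are the Linux rows of the network table):

```python
# Compressed size and elapsed time as a fraction of the uncompressed run.
def efficiency(uncompressed_gb, compressed_gb, uncompressed_s, compressed_s):
    return compressed_gb / uncompressed_gb, compressed_s / uncompressed_s

size_ratio, time_ratio = efficiency(52, 37.2, 8 * 60 + 30, 6 * 60 + 16)
print(f"size: {size_ratio:.0%} of the uncompressed backup, "
      f"time: {time_ratio:.0%} of the uncompressed run")
# -> about 72% size / 74% time, matching the ~71% / 73% figures in the table
```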

Very nice findings! I would say compression is never inefficient, though - it's rather a matter of whether storage space or speed is more important. 29% compression on a 1 TB drive would free up roughly 290 GB for additional images, allowing more backups to be stored, or a longer backup history, depending on how scheduling and grooming have been set up.
I'm surprised that in almost all tests, regardless of network speed, compression actually resulted in faster completion times than no compression. Even if it does slow down the network speed somewhat, the drive write speeds will still be the bottleneck (even on the fast 850 EVO, which tops out at about 540 MB/s under ideal conditions), so I'm inclined to say compression is actually the preferred approach (at least for me). I may have to test further at home. I have the same drives as you - an 850 EVO 250 GB and an OCZ Vertex 4 240 GB.
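To put rough numbers on the bottleneck question (a quick sketch using the figures from the tables above; the drive and link caps are just nominal values):

```python
# Effective write rate = backup file size / elapsed time, from the tables above.
runs = {
    "Linux, local SSD, no compression":   (52_000, 2 * 60 + 6),    # MB, seconds
    "Linux, network, no compression":     (52_000, 8 * 60 + 30),
    "Linux, network, normal compression": (37_200, 6 * 60 + 16),
}
for name, (size_mb, seconds) in runs.items():
    print(f"{name}: ~{size_mb / seconds:.0f} MB/s written")

# The local run (~410 MB/s) is within reach of the 850 EVO's ~540 MB/s ceiling, while
# the network runs (~100 MB/s) sit close to the ~118 MB/s a 1 Gbit link can carry.
```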

Karl,
I think our language differences led me to believe you still had jumbo frames enabled on your L2 switch - my apologies. Glad to see your test results; they correlate with and confirm what I have found on my network.
I think the difference between the Linux-based media and the Windows-based one is due to how well Linux utilizes the SMB protocol compared to Windows. Windows also has more resource overhead, which is a big factor as well.
Bobbo,
I concur that drive speeds have the greatest impact on performance. I have been an advocate of that philosophy for many, many years. I have been using SSDs in my rigs since the first generation of them hit the market back in 2005. Recently I have moved to the latest NVMe and PCIe variants of these devices. All I can say is WOW, these things are truly BLAZING FAST!!
In my tests with these devices, on a Z97 board with a Samsung 256 GB M.2 PCIe drive as the OS drive in an on-board M.2 socket (a few small programs installed, for a total footprint of 50.66 GB of data), a True Image 2016 build 6027 backup from WinPE-booted media with normal compression completed in just 1 minute 10 seconds. Validation took another minute. The backup ran across my gigabit network (jumbo frames enabled) to a FreeNAS box with a ZFS pool of a little over 9 TB, consisting of four 3 TB WD Red NAS drives (7200 RPM).
This same system also sports an internally connected Samsung 850 EVO 500 GB, so just for kicks and giggles I ran a backup to the EVO as well. The results were outstanding, if not outright astonishing! Same backup configuration: normal compression, system booted from the WinPE media. The backup took 33 seconds, validation another 20 seconds - 53 seconds total!
I know this is an over-the-top system and not the norm here, but it gives you some idea of what is achievable with fast SSDs. Have fun with your own testing and please post your results.
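Just to translate those times into rough rates (a small sketch using the figures above; the bytes actually written are much lower thanks to compression and skipped/empty blocks):

```python
# Source data processed per second for the two runs described above.
source_mb = 50_660   # 50.66 GB OS footprint
for target, seconds in (("FreeNAS over gigabit LAN", 70), ("local 850 EVO", 33)):
    print(f"{target}: ~{source_mb / seconds:.0f} MB/s of source data")
# ~720 MB/s and ~1500 MB/s respectively - far above the raw line rate of a 1 Gbit link
# or the sequential write speed of a single SATA SSD, which shows how little of the
# source footprint actually has to be written out in full.
```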

Thanks for your extensive and encouraging feedback, guys!
@Bobby, I think the backup speeds should be even faster than they already are - technically the drives are not yet at their limit, and neither is the network or the other hardware I use (when compression is enabled).
Some questions remain for me here:
1. Why does compression reduce network speed?
2. Why is the backup speed with compression nearly the same as without it in an SSD-to-SSD backup?
3. Why is one specific backup (uncompressed to USB 3.0 under Windows) so much slower than under Linux, while the times with compression seem about fair? So it cannot be a USB 3.0 driver issue.
@Enchantech: even if we consider that Windows runs more tasks in the background, I can transfer big files at higher speeds than ATIH achieves, even though both use the same protocol. So if there is overhead or a bottleneck, in any case it is caused by ATIH.
Please refer to this topic again to get an idea of how well SMB performs for me under Windows 10 with large files (which tib files naturally are, too):
http://forum.acronis.com/de/node/99462#comment-329272
Given that, at the moment I cannot find any reason in Windows itself for the slowness of ATIH when backing up over SMB.
@all: For much better results and comparisons, all these tests should be repeated with a recent WinPE ATIH environment. Well, as these already took me half a day, it may take a while to get myself motivated to do so :)
@Enchantech: I am looking forward to my next rig - perhaps in 2017. I have also been using SSDs since the Vertex 2, and apart from one failure I have only positive things to tell. Prices are becoming affordable, and I convinced my boss to order all new computers (Intel NUCs) with SSDs instead of those old-school HP desktops we have with traditional and "horribly slow" HDDs. It is also a pain for me to work on any laptop that is not equipped with an SSD. (advertisement disclaimer) True Image can be a good help for migration, too (tm) ;)

Karl,
I did review your linked post above, and I have tried to understand the topology of your network as well. I take it you have two L2 (Layer 2) Cisco managed switches - are these switches in the path in your testing scenario?
I did some (very brief) reading about L2 switch configuration and found that for NAS optimization enabling jumbo frames is recommended. I know you said the switches were configured for jumbo frames at one time but one of them would not work that way. I am confident you have very good knowledge of these switches and their setup, but it is easy to miss something along the way. Please do not take offense, but I am attaching a link which discusses setting up Layer 2 switches and covers jumbo frames and flow control as well. You might wish to check it against your configuration.
http://www.informit.com/library/content.aspx?b=CCNP_Studies_Switching&s…
Have you considered taking these switches out of the mix to see if things improve, assuming they are in the path?
I can say that on a much simpler network where jumbo frames cannot be used, it is usually beneficial to disable Large Send Offload (LSO) on the installed NICs under Device Manager > Properties > Advanced. Depending on the NICs installed, you may have other options, some of which can affect transmission rates.
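If you prefer to script that rather than click through Device Manager, something like this sketch should work (it assumes Windows 8 or later with the built-in NetAdapter PowerShell cmdlets and an elevated prompt; the adapter name is a placeholder, list yours with Get-NetAdapter first):

```python
# Check and then disable Large Send Offload on one NIC via the NetAdapter cmdlets.
import subprocess

ADAPTER = "Ethernet"   # placeholder NIC name - replace with your adapter's name

def run_ps(command: str) -> None:
    """Run a single PowerShell command and print its output."""
    result = subprocess.run(["powershell", "-NoProfile", "-Command", command],
                            capture_output=True, text=True, check=True)
    print(result.stdout)

run_ps(f'Get-NetAdapterLso -Name "{ADAPTER}"')                  # show current LSO state
run_ps(f'Disable-NetAdapterLso -Name "{ADAPTER}" -IPv4 -IPv6')  # switch LSO off
```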
Wish I could be of more help. Hope you can improve things.

Thanks for helping, Enchantech. I will investigate this, but just for other readers: jumbo frames are a different topic.
I think it would be better to open a separate thread.
Let's head over here: https://forum.acronis.com/de/node/112788

I agree.