[TECH TALK] How to enable jumbo frames / does this improve performance for ATIH?
ATIH 2016 b6027
This thread is a split-off of an ongoing discussion about using frames larger than the standard L2 frame size, which is typically 1518 bytes per frame (a 1500-byte / 1.5 K MTU plus headers), plus 4 bytes for an 802.1Q VLAN / QoS tag.
It is more of a hardware- and driver-dependent discussion, but at the end of the day it would be interesting to see whether and how this could improve ATIH backup / restore speed, as so-called jumbo frames (typically 8 or 9 K instead) should lower response times and CPU load.
However, from the current situation I already know that this might only help Windows users, as the ATIH Linux recovery environment lacks any configuration options for the NIC drivers, and jumbo frames have to be configured accordingly on all participants in the path (NICs, switches, routers etc.).
While this sounds easy and comfortable, it is not as soon as you run into any incompatibility. If a device refuses the bigger frame size, the traffic has to be fragmented back down to the standard 1.5 K, with checksums recalculated and the pieces put back in order (many switches will simply drop oversized frames instead). This costs time, so improperly configured jumbo frames will noticeably degrade your network's performance (transmission speed); for normal users it is not a simple thing to deal with without a little more knowledge about networks.
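For the Windows side, this is roughly how I check and enable jumbo frames on a NIC from an elevated PowerShell prompt. Treat it as a sketch: the property name and value string ("Jumbo Packet", "9014 Bytes") depend on the NIC driver (those are what Intel adapters typically offer), and the adapter name "Ethernet" is just a placeholder.
# show the current jumbo frame setting of all adapters (property name varies by vendor)
Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet"
# enable it on one adapter; adjust the name and value to what your driver actually lists
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
# verify the effective MTU afterwards
netsh interface ipv4 show subinterfaces
Remember that the same frame size also has to be allowed on every switch port in the path, otherwise you end up with the fragmentation or dropped frames described above.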
---
Enchantech wrote: I do not think it is a driver problem. I can confirm what Dmitry posts on my network using the 2 methods being discussed. I do reach higher speeds than Dmitry reports; however, that is due to my ability to use Jumbo Frames on my network, something that Karl is unable to achieve. That fact leads me to think that he possibly has hardware problems. It would be necessary to set up an identical network to Karl's to bear that out, I think.
Karl Heinz wrote: Indeed I still fail to use jumbo packets because my Cisco L2 switch refuses to forward them and instead breaks them down to the usual 1.5 K MTU size, even though the switch is configured to allow 9 K jumbo frames. I think I need to open a support case with Cisco, because my L3 switch does accept the jumbo frames.
I use mturoute to test functionality.
Unfortunately I do not have a network cable long enough to connect my computer directly to the L3 switch.
Enchantech wrote:Karl,
I did review your linked post above. I have tried to understand the topology of your network structure as well. I take it you have two L2 (Layer 2) Cisco managed switches; are these switches in the mix in your testing scenario?
I did some (very brief) reading about the L2 switch configuration and found that enabling Jumbo Frames is recommended for NAS optimization. I know you said the switches were configured for Jumbo Frames at one time, but one of them would not work that way. I am confident you have very good knowledge of these switches and their setup, but it is easy to miss something along the way. Please do not take offense here, but I am attaching a link which discusses setting up Layer 2 switches and covers Jumbo Frames and Flow Control as well. You might wish to check this against your configuration.
http://www.informit.com/library/content.aspx?b=CCNP_Studies_Switching&s…
Have you considered taking these switches out of the mix to see if things improve, assuming, as I do, that they are in the mix?
I can say that on a much simpler network where Jumbo Frames cannot be used, it is usually beneficial to disable Large Send Offload (LSO) on the computers' installed NICs under Device Manager > Properties > Advanced. Depending on the NICs installed you may have other options, some of which can affect transmission rates.
Wish I could be of more help. Hope you can improve things.
Thanks Enchantech, for helping. I will investigate this.
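As for the LSO hint: instead of clicking through Device Manager, I would check and, if needed, disable it per adapter from an elevated PowerShell prompt, roughly like this ("Ethernet" is again just a placeholder adapter name):
# show the current Large Send Offload state of all adapters
Get-NetAdapterLso
# disable LSO for IPv4 and IPv6 on one adapter
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6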
I think the SG-200 could be my issue; I will have to investigate the effective frame size with Wireshark. It seems the SG-200 blocks ICMP packets bigger than the standard 1518-byte frame size, which is why the mturoute tool will not work, as it relies on ICMP.
https://supportforums.cisco.com/discussion/11262811/sg-200-08-jumbo-fra…
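To actually see the effective frame size on the wire, independent of ICMP, my plan is to capture a large SMB transfer and filter on frames above the standard size. A rough sketch with tshark (the Wireshark command-line tool); the interface name is only an example, list yours with tshark -D first:
# show only frames larger than a standard untagged Ethernet frame during an SMB transfer
tshark -i "Ethernet" -f "tcp port 445" -Y "frame.len > 1514"
Capturing on the receiving machine should show whether frames in the 8-9 K range actually arrive, or whether everything comes in at 1514 bytes or below.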



Yes, cabling is an underestimated problem. In our CCNA course we learned to troubleshoot systematically and were advised to investigate either from layer 1 (incl. cabling) up to layer 8 (the user) or the other way around, whenever there is no visible configuration issue or the configuration does not work as expected. :)
In our company I recently replaced the old switches with Gigabit ones, but the benefit of that change has been diminished by the fact that the infrastructure cabling is just CAT 5, so dang... on some computers and NICs (more forgiving ones like Intel) users will get Gigabit links against all the usual rules, but most of course don't.
As for my network, I am using CAT 6/7 cables, so layer 1 is definitely OK on my side.
Here is my network setup
(WAN)---R---S(L3)---S(L2)-----------BACKUP SOURCE
|
|
BACKUP TARGET
R = Router
S(L3) = Switch (VLAN Routing, Trunking), Access Layer
S(L2) = Switch (Trunking), Access Layer
To keep it simple I only listed the devices necessary for this topic. Backup source and target are members of the same VLAN.
ATIH performance with jumbo frames disabled (standard 1.5 K MTU)
Source: Windows 10 OS, GPT, uncompressed size 58.2 GB, Samsung EVO 850. Network parameters: 1 Gigabit Ethernet, cable, MTU size 1500.
Backup target | Backup file size | Time (m:ss) | Effective network speed | Efficiency compression / time | ATIH 2016 environment
---|---|---|---|---|---
compression off | | | | |
Remote OCZ Vertex 4 | 52 GB | 8:30 | 920 Mbit | | Linux
Remote OCZ Vertex 4 | 51.2 GB | 10:36 | 640-710 Mbit | backup via Windows offers less network speed in general! | Windows
compression on / normal | | | | |
Remote OCZ Vertex 4 | 37.2 GB | 6:16 | 870-920 Mbit | 71% compression / 73% time; compression is inefficient as it degrades network speed by 2%, no CPU bottleneck! | Linux
Remote OCZ Vertex 4 | 36.2 GB | 8:52 | 640-710 Mbit | 70% compression / 83.6% time; compression is inefficient as it degrades network speed by 13.6%, no CPU bottleneck! | Windows
Further results without Jumbo Frames:
ATIH 6027, no compression / to exclude CPU bottlenecks, full disk backup, average speed 660-760 Mbit, SMB, CPU load was about 11-12%
ATIH 6027, no compression, Windows 10 ISO, average speed 760 Mbit, SMB, CPU load was about 13-15%
Windows, Windows 10 ISO file copy, average speed 980 Mbit, CPU load was about 10%
original post: http://forum.acronis.com/de/node/99462#comment-329272
I think I will try to get a very long network cable, at least CAT 5e, from our company stock and test whether I can establish jumbo frames when I connect my computer directly to the L3 switch.

I am posting to get this thread bumped back into the front page in hopes of generating some more discussion here on this topic.
Karl, Thanks for posting your network topology diagram which helps all to understand the basic layout of your network. That helps a great deal at least from my perspective.
Were you able to connect directly to your L3 switch and achieve usage of Jumbo Frames? I have been wondering how that turned out for you.
I found a problem in my network with regard to my NAS connection which is just way too involved to get into here. I have also had limited time to do any serious testing, but hope to get to that soon.
Looking for comments from other users here as well! All are invited to chime in!

Thanks Enchantech, I was on holiday for a few days. Just now I received my first Locky virus email (2nd generation already) and I am experimenting with it in a VM to find better countermeasures for our company.
Regarding your question: I have also gotten hold of a long network cable, so I will be able to do some more extensive tests with just the L3 switch involved as soon as I can find some spare time.

Hello again Karl,
I had an opportunity to do a small round of tests today transferring data across my small network and thought you would be interested in the results. They differ from your posted results, but not to a large degree. In performing these tests I attempted to set up a scenario that would provide a level playing field, so to speak, in hopes of achieving an apples-to-apples comparison.
To set this up I thought it best to perform the tests between two almost identical PCs, both with Gigabit Ethernet connections and Jumbo Frames enabled. I also wanted to remove what bottlenecks I could, which aren't many with these two machines, so I ran the tests with transfers from SSD to SSD, machine to machine.
I ran a series of 3 tests: 2 were backups using True Image 2016 build 6027 (Windows installed version) and 1 was a simple Windows copy test.
Below are system specs and test details:
System 1:
ASUS Z97 Deluxe Mobo
Intel i5 4690k Devils Canyon CPU
16GB G.Skill ARES 2400 DDR3 running at 2400MHz
Mushkin 120GB ECO2 SATA III SSD as Source disk for OS Backup tests
Mushkin 120GB ECO2 SATA III SSD as Source disk for data copy test (2nd physical disk)
Intel i211 Gigabit NIC
System 2:
ASUS Z97 Deluxe Mobo
Intel i5 4690k Devils Canyon CPU
32GB G.Skill ARES 2400 DDR3 running at 2400MHz
Samsung XP941 128GB M.2 NGFF PCIe SSD x 2 in a spanned drive array as destination disk
Intel i218 Gigabit NIC
TEST 1:
Set up a True Image backup test of the full OS disk with all exclusions removed except for pagefile, hiberfile, swapfile, and recycle bin. Compression set to None and verification at completion of backup selected.
Total data size = 29.7GB
Send/Receive performance range 789Mbps to 856Mbps during backup
Send/Receive performance range 810Mbps to 916Mbps during validation
Source Disk read averaged 118MBps
Destination disk write averaged 101MBps
TEST 2:
Set up a test identical to Test 1 except compression set to Normal
Total data size=29.7GB
Total size of compressed backup file=20.51GB
Send/Receive performance range 635Mbps to 735Mbps during backup
Send/Receive performance range 735Mbps to 910Mbps during validation
Source disk read averaged 118MBps
Destination disk write averaged 101MBps
TEST 3:
Set up a simple Windows file copy of a normally compressed full disk image backup .tib file from an identical Mushkin source disk to the same destination.
Total size of test file=23.9GB
Send/Receive performance 986Mbps to 1.0Gbps during the entire copy process (mostly 1.0Gbps noted)
Source disk read performance 125MBps to 139MBps
Destination disk write performance 153MBps to 169MBps
My take from this is that data compression degrades network transfer speed slightly, although not significantly, during the backup transfer process.
Validation also degrades network transfer speed, although less so with non-compressed data.
The copy test of compressed data might (and I think does) benefit from optimizations in the file copy process, so the transfer ran at an almost steady 1.0Gbps during the entire copy.
The simple copy test shows that with Jumbo Frames enabled on all components in the network structure, the Gigabit connection can be completely saturated. I attribute that to the disks, CPU, and RAM used in the test.
The Mushkin ECO2 SSDs have read ratings of 540MBps and write ratings of 535MBps.
The Samsung drives were set up on a dual PCIe expansion card in a 16 lane X3 slot using Microsoft Storage Spaces as a spanned volume of 234.88GB. The volume appears in Disk Management as a Basic GPT disk; however, when created it was shown as a Virtual Space in the Storage Spaces setup routine.
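For anyone curious, this is roughly the PowerShell equivalent of what the Storage Spaces wizard did for me; the pool and disk names are made up, and "Simple" here just means no resiliency, with the data spread across the member disks:
# see which disks Storage Spaces considers poolable
Get-PhysicalDisk -CanPool $true
# create a pool from those disks and a simple (non-resilient) virtual disk using all capacity
New-StoragePool -FriendlyName "SpanPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "SpanPool" -FriendlyName "SpanDisk" -ResiliencySettingName Simple -UseMaximumSize
The new disk is then initialized and formatted in Disk Management (or with Initialize-Disk / New-Partition / Format-Volume) as usual.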
I have no idea what the performance of the spanned array is, because the available benchmarking tools cannot deal with the disk. The numbers produced are all over the place, from typical single-SSD performance to over-the-top figures like nothing you have ever seen, with one utility measuring sequential reads of over 17,780MBps.
Bottom line: disk latency was, I think, completely removed from the equation in these tests, so I believe the end results accurately reflect the performance of the network when memory, CPU, and disk drives do not get in the way.
Although the numbers posted here are comparable to your results with respect to backup in Windows, the simple copy results do reflect the advantage of Jumbo Frames, though even then not a huge difference, as the limiting factor becomes the Gigabit pipe itself. Now if we had 10GbE I believe we would have a very different story.
So, in answer to the question of whether Jumbo Frames provide an advantage in the backup process on a gigabit network: it looks like NO, not really. Do they provide any benefit in a simple copy? YES, but limited by the gigabit bandwidth itself.
Notes: Both PCs in these tests run Windows 10 x64 Pro 1511. The motherboards are equipped with two NICs each, Intel i211 and i218, and both use the same driver version. Ethernet cabling used is CAT 5e. The network consists of an ASUS AC3200 TriBand Wireless Gigabit router and an 8-port Trendnet Gigabit unmanaged switch with Jumbo Frames enabled. The PC NICs were verified running a 9000 MTU using netsh interface ipv4 show subinterfaces from an admin command prompt. CPU load averaged 8 to 12 percent during all testing. Memory usage was negligible.
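For reference, these are the two checks I use for that verification, both from an elevated PowerShell prompt. The IP address is a placeholder, and 8972 is the largest ICMP payload that fits in a 9000 byte MTU once the 28 bytes of IP/ICMP headers are added:
# MTU per interface as Windows sees it
netsh interface ipv4 show subinterfaces
# don't-fragment ping; this only succeeds if the whole path passes the jumbo frame unfragmented
ping 192.168.1.10 -f -l 8972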

I am back once again to add another post to this thread, with new findings on this subject as well as on Windows 10 network security policy.
After the release of Windows cumulative update KB3140743 (new release version 10586.122) on March 1, the Network Discovery issue was fixed, so that SAMBA shares, or any device using NetBIOS for discovery, can be found again. That is a very good thing, as this was causing many headaches for users.
This update also brought with it an enhancement to authentication, which gave me another huge headache with connectivity to my FreeNAS storage server. I had established a Guest account on this server for storage of media for a Plex Media Server app on the device. The authentication enhancement set policy restrictions that prohibit Guest account use on this machine, and I spent several days working through a reconfiguration to once again gain access to its shares. While doing that I ran into an issue with one of the machine's mirrored boot drives, which caused another set of headaches that took an additional day to sort out because it left the network interface corrupted. Boy, when it rains it pours!
So enough of that and on with test results.
After getting all the issues above sorted I decided to run a few file copy tests from my PC that has the spanned array of M.2 PCIe SSDs to my FreeNAS. The results are a bit surprising, so I thought it would be a good idea to post back here.
To begin testing I decided to disable Jumbo Frames on the PC and the NAS and run a file copy to the NAS. The file used is an uncompressed TI 2016 .tib file of 29.7GB. The transfer ran at a network speed of between 785Mbps and 840Mbps, with disk reads of 89MBps to 95MBps.
I then ran a second test using the same source file with Jumbo Frames enabled, verified by using a ping IPaddress -f -l 8792 command which showed success with no packet loss and 0ms response time. This transfer, a bit to my surprise, ran at a network speed of 740Mbps to 746Mbps, with disk reads of 88.2MBps to 88.9MBps.
I then ran another test using a 20.5GB normally compressed TI 2016 .tib backup file with Jumbo Frames enabled, and the result was performance identical to the previous test with the non-compressed file.
On the surface these numbers seem to indicate that enabling Jumbo Frames had a detrimental effect on performance. However, if you note the narrowing of the range between the lowest and highest speeds recorded, both in network transmission speed and in disk read speed, the end result was a faster copy completion time with Jumbo Frames than without them.
I believe the reason for the differences in performance is the packet fragmentation that occurs during large file transfers when Jumbo Frames are not enabled and the MTU is left at the default 1500-byte value. Once Jumbo Frames are enabled this fragmentation no longer occurs, so overall performance increases as a result.
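One way to check that theory, which I have not done exhaustively, is to compare the IPv4 fragmentation counters before and after a large transfer, from a command prompt:
netstat -s
and look at the fragmentation lines in the IPv4 statistics section (Fragments Created and Datagrams Successfully Fragmented, if I remember the labels correctly). If those counters do not move during a transfer, then something other than fragmentation is behind the difference.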
My results indicate that file compression does not have the impact I originally thought it did, as reported in my previous post, at least not when transferring data to my NAS. I think disk latency plays a part here, as my NAS uses an HDD pool for storage whereas my previous testing was performed SSD to SSD. There is also the fact that my previous tests were run Windows PC to Windows PC, which may have given the transfer some advantage that I have not yet discovered.
If I can find the time I will try to assemble a folder of uncompressed small files and perform a copy test of that folder for comparison. Time allowing, I will also perform a TI backup test from PC to NAS to see what results I get there.
What I can say is that I am pleased with the performance of my network setup at this point. I think that the use of Jumbo Frames, where possible, does bring an advantage, albeit a minimal one in certain cases, but an increase in performance nonetheless for data transfer over a Gigabit network connection.

Thanks for all your work, Enchantech. I still haven't found time to do my own testing, beyond reading through your extensive research and interesting results.
"There is also the fact that my previous tests were run Windows PC to Windows PC, which may have given the transfer some advantage that I have not yet discovered."
Have you checked the SMB version in use for the Windows-to-Windows and the Windows-to-NAS transfers? I think I have already posted how to do this using PowerShell (launched as admin): get-smbconnection.
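For reference, run it while a transfer is going on or while a share is mapped; the Dialect column shows the SMB version negotiated with each server:
# negotiated SMB dialect per server connection
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect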

Hi Karl,
I checked the SMB versions. On the Windows-to-Windows connection the SMB version is 3.1.1. On the Windows-to-NAS connection the NAS is set to SMB 3.0. I can change the NAS to version 3_00, which is documented to be used by Windows 8.1. Do you think making that change would make a difference, and why?
Hope you can find some time to do some more testing yourself; it would be interesting to compare results.

There is a list of the advantages and differences on Technet, but I doubt that the difference between 3.0 and 3.1.1 has too big an impact on performance.

OK, I will take a look at Technet, but like you I think the differences are slight. I will say that my NAS in its default configuration uses SMB 2.0 for CIFS, and I changed it to 3.0 for better compatibility. At any rate, thanks for your replies; I will watch for further responses here. It would be nice to see some others chime in as well!

Hi Enchantech, I am a bit afraid that this topic seems to be turning into a private conversation :), moreover even a one-way one. I am really sorry I haven't found any time to investigate the matter as much as I wanted, because I find it particularly interesting. I had a week off work, but that was too short even to compensate for the time I needed to recover and for other tasks. But I haven't forgotten about it, and the longer CAT cable I needed is still lying here, "ready to serve".

I hear that, no problem on my side.

Alright!