
Performance Hyper-V backup ...

Thread needs solution

Hello,

we use Acronis Backup Advanced 12.5 (current build) on physical servers and on Hyper-V. On the physical servers the new version is blazing fast, but on our Hyper-V host we are not able to push Gigabit Ethernet to its limit.

Environment:

Hyper-V Server 2012 R2 on an HP ProLiant DL380 Gen9 (3.1 GHz, 40 cores) with a Fibre Channel-attached MSA 2040; configured with the Hyper-V agent and a Storage Node (for weekly tape use). The backup target is a QNAP TS-1263U-RP with 6x 5 TB WD Red HDDs. All devices/servers have current drivers, updates and firmware.

When we copy data from a server to the NAS we get >100 MB/s, but if I calculate the throughput from the last backup job, the result is 75 MB/s.

The job is configured with 10 VMs, priority normal, compression normal. Amount of data: 3.6 TB, daily full backup, max. 2 VMs at the same time. The biggest VM is an Exchange Server 2016 with 2.1 TB.
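For scale, a rough sketch (Python; it uses only the figures quoted above and treats TB/MB as binary units) of what a 3.6 TB daily full means in wall-clock time at different sustained rates:

```python
# Back-of-the-envelope only: the figures are the ones quoted in this post,
# and TB/MB are treated as binary units (TiB/MiB).
def hours_for_full_backup(data_tb, throughput_mb_s):
    """Wall-clock hours to move data_tb terabytes at a sustained throughput_mb_s MB/s."""
    total_mb = data_tb * 1024 * 1024          # TB -> MB
    return total_mb / throughput_mb_s / 3600  # seconds -> hours

for rate in (75, 100, 140, 200):              # a few sustained rates in MB/s
    print(f"3.6 TB daily full at {rate} MB/s ~ {hours_for_full_backup(3.6, rate):.1f} h")
```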

What's the reason for the throughput difference? Any idea how we can configure the environment to increase the throughput?

Many thanks,

Erik


Note: Acronis Backup Advanced 12.5 was migrated from 11.7 ...

Vasily
Posts: 22
Comments: 3800

Hi Erik,

Can you please clarify how you calculated the throughput? Have you checked the network bandwidth consumption on the Hyper-V host while the VMs were being backed up? Is it different if you increase the number of VMs processed simultaneously? Some screenshots illustrating the behavior would be most helpful for understanding where the issue could be, e.g. for detecting the bottleneck.
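For example, a minimal way to watch the aggregate outbound NIC throughput on the host while the job runs (assuming Python 3 with the third-party psutil package is available; this reads generic OS counters and is not an Acronis tool) could look like this:

```python
# Minimal sketch for watching aggregate outbound NIC throughput on the host
# while a backup job runs. Assumes Python 3 with the third-party psutil
# package installed; this reads generic OS counters and is not an Acronis tool.
import time
import psutil

def sample_outbound(interval_s=5.0, samples=12):
    """Print the average outbound rate in MB/s for each sampling interval."""
    prev = psutil.net_io_counters().bytes_sent
    for _ in range(samples):
        time.sleep(interval_s)
        cur = psutil.net_io_counters().bytes_sent
        print(f"outbound: {(cur - prev) / interval_s / (1024 * 1024):.1f} MB/s")
        prev = cur

if __name__ == "__main__":
    sample_outbound()
```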

Thank you.

Hi Vasily,

thank you for your answer. I calculated it manually from the time the job takes for the amount of data shown in the job details. We have since migrated to 10 GbE to rule out a bandwidth problem, and that was indeed part of it, because now we get (calculated) >140 MB/s. This is much better than before, but still not the limit of the target NAS, which averages >200 MB/s.
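In other words, something like this (Python; the duration below is a placeholder, not the actual value from our job log):

```python
# How the "calculated" figure is derived: protected data divided by elapsed
# job time. The numbers below are placeholders, not values from the job log.
def job_throughput_mb_s(data_gb, duration_h):
    """Average MB/s for a job that moved data_gb gigabytes in duration_h hours."""
    return (data_gb * 1024) / (duration_h * 3600)

# e.g. a 3,600 GB job that finishes in about 13.7 hours averages ~75 MB/s
print(f"{job_throughput_mb_s(3600, 13.7):.0f} MB/s")
```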

We'll play around with the job priority and the number of VMs processed simultaneously, as you mentioned, and I'll post the results here.

Many thanks,

Erik

Now we get >150 MB/s. It's clear that our huge Exchange VM is the one that takes the time: if we increase the number of VMs processed simultaneously, the end time stays the same, because all the VMs other than Exchange finish long before it (see the sketch below).
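To illustrate the point, here is a naive model (Python; the VM sizes are only loosely derived from the figures in this thread, each stream is assumed to sustain the same rate independently, and the greedy scheduler is not how Acronis actually schedules VMs):

```python
# Naive model of the job's wall clock: one 2.1 TB Exchange VM plus nine smaller
# VMs totalling ~1.5 TB (illustrative sizes), each stream assumed to sustain the
# same MB/s independently; the longest-first greedy scheduler below is not how
# Acronis actually schedules VMs.
import heapq

def wall_clock_hours(vm_sizes_gb, concurrency, mb_s_per_stream):
    """Hours until the last VM finishes with at most `concurrency` VMs in flight."""
    slots = [0.0] * concurrency                      # per-slot "busy until" times
    heapq.heapify(slots)
    for size_gb in sorted(vm_sizes_gb, reverse=True):
        start = heapq.heappop(slots)                 # next free slot
        heapq.heappush(slots, start + (size_gb * 1024) / mb_s_per_stream / 3600)
    return max(slots)

vms_gb = [2100] + [170] * 9                          # ~3.6 TB in total
for n in (2, 4, 10):
    print(f"{n} VMs at a time: ~{wall_clock_hours(vms_gb, n, 100):.1f} h end-to-end")
```

Under these assumptions the end-to-end time is identical for 2, 4 or 10 simultaneous VMs, because the 2.1 TB Exchange VM alone sets the finish line.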

The question is: how can we increase the backup performance of this one VM? It's a 2012 R2 guest with current Integration Services, stored on an MSA 2040 that delivers >200 MB/s without any problems. Is it possible to tune things, for example give the agent more cores to work with?

After changing the priority to "High" we get 189 MB/s ;-)

10 GbE + job configuration was the solution ...

Vasily
Posts: 22
Comments: 3800

Hi Erik,

Thank you for the notes and the update. Indeed, the priority setting affects performance. In addition (for future reference), keep in mind the following options, which each have their own effect on performance: 1) compression, 2) encryption (AES-256 is the most CPU-intensive, for example), 3) the simultaneous VM backup setting.

Thank you.