2nd Full Backup Speed Problem Still Unresolved!
My company is a long-time user of Acronis backup software, but we are about to give up and find a different product.
I have seen this discussed elsewhere on the forum, and have gone around and around with Acronis Tech Support, providing all the logs etc. they requested, but it took weeks before they seemed to understand the issue, and then I was just told that their development staff were "working on it".
My backup settings are to do a full backup, followed by several (usually 5) incrementals, and then a new full. Since the switch to the .TIBX format the behavior has been very consistent: the initial full backup will take a reasonable amount of time (say 4 hours), and the intervening incrementals are much faster. But the next full backup will often take close to 24 hours. I usually can't leave the laptop being backed up connected to the external destination drive for that long, so I have had to delete (or hide in a folder) the existing backups and start over with a new full backup, which runs at a reasonable rate. This is just not acceptable.
I am backing up my drives separately, and not using validation. I have seen unofficial workarounds discussed here on how to keep using the .tib format instead of .tibx, but am reluctant to experiment with something as important as backups.
I know from forum posts I am not the only one having this problem, so I cannot understand why Acronis has gone so long without solving it.
Is there another way I can set up my backups to avoid this, or do I just give up?
Thanks
- Barry



Hi.
Yes, I am using build 39216.
The total amount of data is about 1.8TB, split between two SSD drives. The destination drive is a Western Digital "Elements" 14GB external drive connected via USB3.
As I said, an initial full backup (I did one last night by moving all the existing .TIBX files to a sub-folder of my usual destination folder) took just under 5 hours. The day before yesterday, when there was already an initial full backup and 5 incrementals all in one .TIBX file, it started to do a second full backup that was going to take over 24 hours.
This behaviour is quite repeatable and has been true since the changeover to .TIBX. One post somewhere on these forums said the problem is that subsequent full backups are making large numbers of accesses to the first full backup, causing the poor performance.
I have run all the diagnostics requested by Acronis and sent them many log files. As I said, they finally seemed to admit this was a software bug in the .TIBX format, but made no promise as to when it would be fixed.
I am just wondering if I can set up my backups in some other way than a full followed by 5 daily incrementals and then the next full, so that this problem won't show up.
- Barry

Barry,
More questions than answers at this point.
Can you provide specifics on the laptop hardware? CPU, amount of ram, how much free space on hard disk, are the disks involved SSD or HDD?
Do you have Restore Points enabled on this machine?
Do you use File History?

Barry, thanks for the further information and confirmation of the version / build details.
The total amount of data is about 1.8TB, split between two SSD drives. The destination drive is a Western Digital "Elements" 14GB external drive connected via USB3.
1.8TB is a very large amount of source data, so my first suggestion would be to make separate backups, one for each SSD, reducing the source size, which should also reduce the backup time for both full and incremental backup slices.
I have seen comments in the forums that these very large external backup drives can also contribute to slower performance but have never used any drives larger than 3TB personally.
Ideally, one method of avoiding any performance impact of any subsequent full backup would be for each new backup chain to go to a new destination folder but Acronis do not provide any means of achieving this other than doing this manually!
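That "manual" step can be scripted. Below is a minimal sketch (the function name and folder layout are my own for illustration, not anything Acronis provides), assuming the backup task writes its .tibx files into one fixed destination folder:

```python
import shutil
from datetime import date
from pathlib import Path

def archive_old_chain(dest: str) -> int:
    """Move any existing .tibx files into a dated subfolder so the
    next scheduled run starts a fresh full backup chain.
    Returns the number of files moved."""
    dest_dir = Path(dest)
    archive = dest_dir / f"chain-{date.today().isoformat()}"
    moved = 0
    for tibx in dest_dir.glob("*.tibx"):
        archive.mkdir(exist_ok=True)  # create the subfolder only if needed
        shutil.move(str(tibx), str(archive / tibx.name))
        moved += 1
    return moved
```

Run it just before the next full is due. Note that archived chains are no longer visible to the backup task, so restoring from them would mean adding the files back manually.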
It is possible to force ATI 2021 to use .tib files instead of .tibx for disk backup tasks, but this has become more difficult in the later builds of 2021 because Acronis continues to 'protect' the script '.tib.tis' files even when Acronis Active Protection has been turned off! I have done this to confirm that the process still works, but there is no guarantee that Acronis won't render it even more difficult in further builds or new versions!

I have the 1.8TB total split between two separate backups, one for each drive.
The laptop is a Lenovo ThinkPad P51 with a Xeon E3-1535M 3.1 GHz processor and 64 GB of RAM. There is over 1 TB of free space on each drive being backed up. Each of the two drives is a fast SSD internal to the laptop.
The speed of the external drive explains why the initial full backups take 4-5 hours (which is fine), but not why the subsequent fulls take over 24 hours to finish.
Restore points are enabled. File History not enabled, but Code 42 Crashplan is doing cloud backups in the background. I can try to turn that off when the 2nd full backup is trying to occur, but I am skeptical that will matter.

Barry,
Okay, so you have roughly 900 GB of data backed up by one task and another 900 GB backed up by a second task, both using the WD Elements target disk, which I believe is 14 TB in capacity even though your previous post states the disk size as 14 GB.
So now I ask how much space is free on the target disk?
So I presume these SSD source disks are 2 TB in size, based on your statements. The amount of free space available on these disks is sufficient for creating shadow copies of the source from which the backup is created. However, your statement that Code 42 Crashplan runs in the background while these full disk backups by TI are running concerns me. I would presume that Crashplan uses VSS shadow copies in its backup process, as does TI, so in theory the available free space on the source disk may be too small for the TI shadow copy to function as intended, simply because TI by design reserves all available free space on the disk for shadow copy purposes during backup. Having half or more of your free space occupied by another shadow copy may have a detrimental impact on TI backup performance as a result.

Yes, I of course meant TB. I will try turning Crashplan off during the next attempt at the second full backup and see.
But does your theory explain why the first full backup always takes a reasonable amount of time but the subsequent ones are slow? The target drive is less than half full in both cases.

Barry,
The first backup of any task is a full version of all selected data. This is true for full disk images and folder/file backups.
The second full version of all selected data is a process where all backed up data from all backup files created subsequent to the first full are analyzed and then any changed data is marked for update in the new full version. This includes updating all metadata used in backup tracking. This is a process of the .tibx file format and applies only to disk image backups.
Folder/file backups are different in that they do not use the .tibx format and thus lack the metadata component. I am of the opinion that this difference is at the root of the "slow" performance of the disk image .tibx file format. I also believe that the next new version release will address this issue to some degree. I say some degree because I do not believe that the slowness experienced by users like yourself will be totally overcome at that point.
Again, in my opinion, the hardware used is where the biggest bottleneck exists and is the largest contributor to poor performance. Case in point: the .tibx file format was introduced to the True Image Home products after a successful debut in the Acronis business product line, and most businesses at that level use superior hardware to that of the consumer market. For example, a few months ago I built a Windows Storage Spaces device that in use mimics, to a degree, a clustered storage system commonly found at the enterprise level. In recent testing I performed a folder backup of all folders on a single disk, totaling 457.3 GB, on a WD 2 TB Gold Enterprise-class HDD. The initial full backup took 1 hour and 20 minutes, averaging 1533.2 Mbps.
The Storage Spaces device itself is a Tiered Storage array and consists of five 8 TB Seagate EXOS Enterprise HDDs configured as a Capacity storage tier, virtually coupled to two 500 GB Samsung 860 EVO SSDs (mirrored) configured as a Performance storage tier, which is served by a high-capacity data cache consisting of a single 250 GB Samsung 970 EVO NVMe disk virtually coupled to a 16.4 GB RAM cache.
The Capacity tier of the above drive pool is housed in a 5 bay enclosure connected to the PC via USB 3.0 SuperSpeed connection. The SSD's and NVME disk are all internal to the PC.
This is a small scale example of what data storage looks like at the enterprise level these days. My next full backup is scheduled to run 6 days from now. The backup is a differential method which runs daily and will only retain 2 version chains.
How does the performance here compare to what you see?

Barry,
This performance issue began in ATI 2020, most likely with the introduction of the tibx format. ATI 2019 and prior versions with the tib format were fine with no increase in time. It seems to increase in severity with the amount of data backed up to the point where I find it unusable beyond about 200GB backups. Having worked with ATI tech support on this issue in the past I would suggest not bothering with that approach.
This issue is repeatable on all 3 of my PCs, even the ones with new hardware and fast NVMe storage. For very large backups, I recreate the backup job each week so I never run into the issue. For my 200GB system drive, I just let it run longer: the original full backup took 20 minutes, the subsequent ones now take an hour, and I just deal with it.
However, it is just unusable with larger backup sets once the second and subsequent full backups kick in.

Posters,
Another method to address this issue is to reduce the total size of your backup chain. Using automatic cleanup you can limit the backup chain to, say, 3 recent versions, which can help a great deal. I have done this myself for all my scheduled backups and have acceptable performance as a result.
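To make the "version chain" idea concrete, here is a toy sketch (my own illustration, not the Acronis implementation) of what keeping only the most recent chains means, where a chain is a full backup plus the incrementals that depend on it:

```python
def prune_chains(versions, keep_chains=3):
    """versions: ordered markers, each "full" or "inc".
    Group them into chains (a full plus its dependent incrementals)
    and keep only the newest keep_chains chains; older chains can be
    deleted as whole units because incrementals need their full."""
    chains = []
    for v in versions:
        if v == "full" or not chains:
            chains.append([v])   # a full starts a new chain
        else:
            chains[-1].append(v)  # an incremental extends the current chain
    return [v for chain in chains[-keep_chains:] for v in chain]
```

The point of the limit is that once old chains are removed, a new full has less accumulated backup data to reconcile against.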

The second backup (first after the incrementals) started last night, with Crashplan turned off, and after 12 hours was only half done. So that is not the problem.
As I think I mentioned earlier, somewhere on the forums I recall reading that it has to do with subsequent full backups having to make huge numbers of reads from the previous full backup. If so I can't imagine why they have not fixed it.
But perhaps others have a different way of doing periodic full backups with incrementals (or differentials?) in between that would solve my problem (and the problem of the others who have seen this).
The best solution is probably for my company to switch to different software that will not have this problem.
Thanks

Aside from reducing the size of the backups I know of nothing that will address the issue.

I appreciate the detailed replies.
To answer Enchantech's question: with the simple system I have now (two SSDs with a total of 1.8TB going to a USB3 external 14TB Western Digital drive), the backups take about 4 hours. That is not too much slower than your 457GB in 1 hour 20 minutes.
You also mentioned limiting the size of the backup chain to 3 recent versions -- I am not sure what that means, because I cannot have it doing more than one full backup, no matter the number of incrementals in between.
- Barry

Barry, 900GB per each of your SSD's is still a very large volume of data - do you have any options for dividing this data into separate backup tasks by type of data, i.e. Photos / Videos?
Typically, I keep one dedicated partition (C:) for only the Windows OS & installed applications, I have other separate partitions for different categories of user data, and have individual backup tasks per these, allowing for different schedules according to frequency of change etc.
The reference regarding limiting the backup chain to 3 recent versions is for automatic cleanup settings but I suspect that this will still be an issue with the current size of data for the task when a second chain is being created with a subsequent new Full backup image.
Beyond this, then you really will need to persuade Acronis to investigate why there is such a difference in backup completion time between your first and second full backup execution when the source data size remains the same!

Just to add some things to the discussion...
I just watched a review of the WD Elements 14 TB drive and noted its CrystalDiskMark benchmark results.
This is a 5400 rpm drive, and the random reads and writes are far below the sequential figures. Assuming a lot of non-sequential reading/writing going on with the second full backup, that could explain a lot. Also, during the review I was watching, an initial large file copy from Windows to the drive ran at about 160 MB/s. That comports well with the 4-hour initial backup.
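As a rough sanity check on those numbers (the ~160 MB/s sequential rate comes from the review above; the ~20 MB/s random-I/O rate is my own assumption for a 5400 rpm drive, purely for illustration):

```python
def backup_hours(data_gb: float, rate_mb_s: float) -> float:
    """Hours needed to move data_gb at a sustained rate_mb_s
    (using 1 GB = 1000 MB)."""
    return data_gb * 1000 / rate_mb_s / 3600

# 1.8 TB written sequentially at ~160 MB/s: about 3 hours,
# in line with the 4-5 hour initial full backup.
initial_full = backup_hours(1800, 160)

# The same 1.8 TB dominated by random I/O at ~20 MB/s: about a day,
# in line with the ~24 hour second full backup.
second_full = backup_hours(1800, 20)
```

So if the second full really is rereading the first full with lots of small random accesses, the drive's random-I/O performance alone could account for the jump from hours to a day.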
Disk/Partition backups run faster than File/Folder backups. While I have no idea how you have allocated data to your drives, one way to make multiple smaller backups without repartitioning is to use multiple Disk/Partition backups of the same drive. For example, one backup of C: could copy just the basic system folders (Windows, Users, ProgramData, Program Files, Program Files (x86)) while excluding everything else. A second disk backup could back up data and exclude those system folders. Also be sure to exclude the usual items in both (Recycle Bin, swapfile.sys, etc.).
Just my two cents worth.

I agree with Steve and BrunoC here. I suggest that rethinking how and what you back up could provide the answer you are searching for. Since the introduction of the .tibx file format I have changed my backup methods completely, and those changes have resulted in no real performance loss in my backups.
What has changed for me:
- Switch from incremental to differential methods
- Backup only user data on a daily basis
- Backup full OS disks only after Windows updates or application changes/updates
- Use Windows File History for select user folders to an external device.
- Use Windows DISM tools to make full OS disk image clone files then convert those files to VHDX files (only applicable to UEFI/GPT disks)

The new .tibx format has always been problematic for large amounts of data. For the old .tib format, 2.8TB of data is no problem. I think the old .tib format should be used, or an older Acronis True Image 2019.
https://forum.acronis.com/forum/acronis-true-image-2020-forum/how-creat…
Attachment: 578829-273777.pdf (1.81 MB)

It seems that I can choose "Files and Folders" as the way to set up my backups, and it will use .TIB files instead of .TIBX. I tried the first full backup that way, and it seemed quite a bit slower. I will see what the second full runs like.
Is this not a viable solution because of slow speed all the time?

Barry, you should not use Files & Folders if you are including the Windows OS and/or installed applications in your backup tasks, this is because this backup type will not capture vital locked system data and registry files etc.

Thank you for that important information. I may just go back to Acronis 2019.
- Barry

Uninstalled Acronis 2021 and installed 2019 version.
It seems to be working, except it thinks my 14TB drive is almost full (it is almost empty), and I have to tell it to ignore that message. Is this a known bug with 2019 and very large drives? It could be a real problem for automatic backups with no human intervention.


So how many Restore Points do you have on this disk? Windows can use 10 to 15% of the disk for restore points, which are stored in the System Volume Information folder. If Windows is using 15% of the disk for Restore Points, you could potentially have 2.1TB less space on the disk than you think you do.
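The arithmetic behind that figure, as a quick sketch (the 15% value is the ceiling mentioned above; the actual reservation is configurable per drive under System Protection):

```python
def restore_point_reserve_tb(disk_tb: float, pct: float = 15.0) -> float:
    """Space in TB consumed if restore points are allowed pct% of the disk."""
    return disk_tb * pct / 100

# 15% of the 14 TB Elements drive
reserve = restore_point_reserve_tb(14)  # 2.1 TB
```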

Did Acronis 2020 still use .tib? Perhaps I should use that.

2020 was the first version to use .tibx for new disk backups. You could create a disk backup with 2019 and then continue that backup in 2020 or 2021, and it will stay as .tib.

Any thoughts on why it thinks the external drive does not have enough space when it is almost empty? It seems like a bug in interpreting the size of large drives correctly.

Barry, unfortunately even if this size issue is a bug it is in an unsupported older version and Acronis would only consider it if it can be shown to be present in ATI 2021.
I have never used a very large backup drive so have no method of testing this issue on the various versions I have access to on my own systems / VM's.
If Windows sees the correct size for the external drive then my expectation would be for Acronis to do the same!

Thanks. I will try your suggestion to do backups in 2019 and then upgrade to 2020 and hopefully that bug is gone.
- Barry

Try defragmenting your destination disk. I did this recently and it greatly improved speed.
To be safe, validate your backups after doing this, and then create new ones.

That will only work with mechanical drives; for SSDs, TRIM may help by releasing storage. Also, if you are doing a Disks & Partitions backup that works sector-by-sector, the first backup after the defrag is run will be much larger due to the changed position of data.
Ian