disk_data.ds.0 gets bigger and bigger!!
Hello there
I've got the file "disk_data.ds.0" in the root of my deduplicated storage. This file grows continuously while the indexing task is running. Its size is now about 850GB!!!
How can I make it smaller? Can I delete it from time to time?
Thanks for the help... my storage is running out of space!!!
Greetings from Switzerland
Thomas Säuberli


Hello Thomas,
Thank you for posting. I will do my best to help you.
Unfortunately this issue will require additional investigation. I am sorry for the inconvenience.
I found your support case in our system and I am confident that this issue will be resolved.
Please let me know if you have additional questions.
Thank you.

Hi Thomas
Good to see you have logged a case with Acronis support; they should be able to get this sorted out for you!
In the meantime, one thing to check: when you deployed your deduplication vault, did you run a single backup job first and let it finish indexing/compressing?
If that step was not done and you ran all your backups at once (or one straight after another) to a fresh deduplication vault, you may find there is simply too much data that needs to be indexed/compressed in one go, and Acronis is trying to find similar data across all those backups (it's a big and lengthy task).
Best practice is to set up your dedupe vault, run a single backup and let that finish indexing/compressing. Once that is done, run another backup and let it finish. Any future backups will then not send as much data to the vault, as the agents will only send 'new' data the vault has not seen before, and only that new 'unique' data needs to be processed.
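Just to illustrate why the first backup is the heavy one, here is a rough conceptual sketch of hash-based deduplication. This is not Acronis's code or behaviour in detail, just the general idea, and all names in it are made up:

# Rough conceptual sketch only -- not Acronis's actual implementation.
# Each block of data is hashed; a block only travels to the vault once.
import hashlib

vault_index = set()  # hashes of blocks the vault already holds

def run_backup(blocks):
    """Return how many blocks actually had to be sent to the vault."""
    sent = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in vault_index:  # only 'new' unique data is transferred
            vault_index.add(digest)
            sent += 1
    return sent

first_machine  = [b"os files", b"app files", b"user data"]
second_machine = [b"os files", b"app files", b"other user data"]

print(run_backup(first_machine))   # 3 -> the first backup sends everything
print(run_backup(second_machine))  # 1 -> later backups only send unseen blocks

The point being that once the vault has been "seeded" by that first finished backup, later agents mostly just report data the vault already knows about, which is why staggering the initial backups makes such a difference.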
Also, ABR11's deduplication has been improved significantly in terms of the resources it can use, the back-end databases (64-bit support) and the amount of RAM it can access/use. To get a better understanding of how it all works, it might be good to have a read through the ABR11 manual at the following location:
http://www.acronis.com/support/documentation/ABR11/#3349.html
Again, support would be the best route, so keep going with them, but I just wanted to offer you some good background information and check on the point above, i.e. whether initial backups were done one at a time to start off the dedupe vault (if not, it might be quicker to remove the backups and start again, giving it time to process the first few backups before running many backups together/one after another).
All the best and look forward to an update.

I'm a bit angry with Acronis, because nobody there wants to help me! I just got two standard answers and one answer with a wrong solution.
Nobody can tell me what the file "disk_data.ds.0" is!!
OK, not starting all backups at once may be a good idea. But what about the existing "disk_data.ds.0" file? Now my vault is full and my backups fail!! What should I do? Can I delete "disk_data.ds.0"? Can I start a re-index process? Can I shrink the "disk_data.ds.0" file?
Thanks for the help!
Greetings Thomas

Hi Thomas,
I could be wrong, but having a look through the structure of a dedupe vault I've just created, I believe the '...data.ds.0' file is the main deduplicated data file that contains all the backed-up data and is internally split into chunks.
When you run a backup from an agent, it checks whether that data already exists in the archive; if it does, it will not back it up again, and if it does not, it transfers the data to some of the subfolders in your vault. Indexing then, over time, builds up that '...data.ds.0' file with any similar data it finds from other archives/backups/systems. So I would say do NOT remove that file; it appears to be the main backup data file in your vault.
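If it helps to picture why that single file keeps growing (and why deleting it would break every archive in the vault at once), here is a very rough sketch of how that kind of layout generally works. This is purely illustrative, not the real Acronis on-disk format; the file name and structure below are just stand-ins:

# Purely illustrative sketch -- not Acronis's real file format.
# One big append-only data file holds every unique chunk; an index maps each
# chunk's hash to its offset in that file. Every backup in the vault then
# just references chunks inside the same file.
import hashlib

DATA_FILE = "disk_data.ds.0"   # hypothetical stand-in for the vault's data store
chunk_index = {}               # chunk hash -> (offset, length) in DATA_FILE

def store_chunk(chunk):
    digest = hashlib.sha256(chunk).hexdigest()
    if digest in chunk_index:         # already stored once, just reuse it
        return digest
    with open(DATA_FILE, "ab") as f:  # new unique data is appended, so the file only grows
        offset = f.tell()
        f.write(chunk)
    chunk_index[digest] = (offset, len(chunk))
    return digest

# A "backup" is then little more than a list of chunk hashes, not a full copy of the data:
backup_server1 = [store_chunk(c) for c in (b"system image", b"documents")]
backup_server2 = [store_chunk(c) for c in (b"system image", b"photos")]  # "system image" is not written twice

As far as I understand it, space only gets reclaimed when old backups are deleted and a cleanup/compacting task rewrites the data, not by removing the file by hand.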
What I would suggest if you have run out of space is to remove all backups from the vault or even remove the vault completely and start again.
Run the backup on your first system and let it finish indexing/cleanup etc., then run the next system, let it finish, and so on for as many systems as you have. You should only need to do this for the first few, but if you are limited on resources/space, doing it this way for the initial backups will help maximise your space and reduce the amount of temporary space required, as well as reducing the time/CPU/disk power needed to index huge groups of backups.
Would be interested to see how that goes for you.
P.S. As mentioned before, ABR11 has also improved/tweaked dedupe a lot, so you might find it runs better for you.

Hi Thomas
I was going through the manual for another question and came across the following section that I thought would be of help to you... It's from the ABR11 manual (I have not checked ABR10), but I'm sure it will help explain the back end a bit better, as well as some of the "Best Practices" when using dedupe. I highly recommend the read:
http://www.acronis.com/support/documentation/ABR11/index.html#3349.html
All the best!

Hello Thomas and Datastor,
Thank you for your posts, and I really appreciate your help, Datastor.
Thomas, I apologize for the inconvenience. I can see that your case has been escalated to our Expert team and I am certain we will get this issue resolved.
Please let me know if you have additional questions.
Thank you.