Deduplication database on an extra disk
So far, so good - I could agree to do this for performance reasons...
The questions I have in mind are:
What kind of disk should this be (SSD, SATA, SAS)?
Can it be a single slow drive, or should it be RAID 5, 6, 10...?
Do I have to create backups of this DB?
Can it be rebuilt from the original data?
Thanks


Hi Endurance
I'm not sure if you have seen the following details yet, so I thought I would share them with you, as they contain some good information:
http://www.acronis.com/support/documentation/ABR11/index.html#3349.html
In terms of what disks to use, it's recommended to have the DB on a separate dedicated HDD due to the higher I/O the database requires, although ABR11's database is a lot better than ABR10's.
In regards to what RAID to use, I guess this depends on a few factors. For example, if you want reliable yet fast storage, RAID 10 would be a good option (but you will lose capacity). I would not recommend RAID 0 or RAID 1, as the first has no redundancy and the second is slow. I would also recommend hardware RAID solutions rather than software RAID, to ensure good performance.
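As a rough capacity comparison between the levels, here is some simple whole-disk arithmetic in Python (illustrative only; it ignores hot spares and controller overhead):

```python
def usable_tb(level: int, disks: int, disk_tb: float) -> float:
    # Usable capacity for common RAID levels, whole-disk arithmetic only.
    if level == 0:
        return disks * disk_tb           # striping only, no redundancy
    if level == 1:
        return disk_tb                   # all disks mirror one another
    if level == 5:
        return (disks - 1) * disk_tb     # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_tb     # two disks' worth of parity
    if level == 10:
        return disks // 2 * disk_tb      # mirrored pairs, then striped
    raise ValueError(f"unsupported RAID level: {level}")

# e.g. four 2 TB disks: RAID 10 gives 4 TB usable, RAID 5 gives 6 TB
```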
You are correct to say that the DB can be lost without any 'data' (backup data) loss, as Acronis can rebuild the DB (although it might take some time, it's possible). Here is a snippet from the manual (link above): "If the database is corrupted or the storage node is lost, while the vault retains its contents, the new storage node rescans the vault and re-creates the vault database and then the deduplication database."
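To illustrate why the DB is derived data rather than primary data, here is a minimal Python sketch (purely illustrative: it assumes a vault laid out as one file per deduplicated block, which is not Acronis's actual proprietary format):

```python
import hashlib
import os

def rebuild_dedup_index(vault_dir):
    # Re-create the hash -> block mapping by rescanning the vault.
    # Everything in the index is recomputable from the stored blocks,
    # which is why losing the DB does not mean losing backup data.
    index = {}
    for name in os.listdir(vault_dir):
        path = os.path.join(vault_dir, name)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        index[digest] = path
    return index
```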
I would highly recommend reading all the topics listed at the link above, as there are some very good tips and important details included that will help save a lot of time down the track (e.g., back up a typical machine first, before running other backups, so that its data is indexed first; check that the amount of RAM in the server is sufficient; check that the CPUs you are deploying will be sufficient; look into a 64-bit OS if possible; only run one deduplication vault per storage node; etc.). These are all things that will save you a lot of time and help achieve a smoother roll-out/experience.
All the best with your roll-out, and I hope that helps with most of your questions.

Hi,
thanks for your comments.
I have approx. 105 agents accessing the storage node (Windows 7 clients) and around 15 backup plans (disk- and file-based).
The storage node is:
Xeon 4-core 2.5 GHz - OK
16 GB RAM - OK
RAID 50 with 12 SATA disks, lots of free space - OK
2x 1 Gbit LAN - OK
Currently I see no I/O issues when checking the performance KPIs with the Windows tools. But still, sometimes a backup takes hours and sometimes just 15 minutes - the behaviour is not reproducible, but it happens regularly.
Temporarily I had the dedup DB on a separate RAID 1, but no change (which might be related to the RAID level). The next step would be to change all backup tasks to dedup at target and use an SSD for the DB. Since the DB can be recreated, I will use a simple disk, no RAID.
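To catch a pattern in the run times, I might log each run's duration with a small wrapper like this (the backup command here is just a placeholder, not the real Acronis command line):

```python
import csv
import datetime
import subprocess
import time

BACKUP_CMD = ["echo", "run-backup-plan"]  # placeholder, not Acronis's CLI

start = time.time()
subprocess.run(BACKUP_CMD, check=True)
duration = time.time() - start

# Append timestamp and duration so slow runs can be correlated
# with schedules, clean-up tasks, or other load on the node.
with open("backup_times.csv", "a", newline="") as f:
    csv.writer(f).writerow([datetime.datetime.now().isoformat(), round(duration, 1)])
```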
What do you think?

Hi Endurance
Thanks for the details; your setup sounds good. Was that 105 agents you are currently backing up?
In regards to a single disk for your DB, it's always best to have RAID in place, as re-creating/regenerating the DB would, I'm guessing, be an involved and time-consuming process.
Also, in regards to backups taking different lengths of time, do you know what backup schemes are being used (e.g., full, differential or incremental backups)? Are you running any clean-up processes such as consolidation or expiration, etc.? All these different processes will also impact the amount of time required.
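As a rough illustration of why the scheme matters for run time (illustrative arithmetic only, nothing ABR11-specific):

```python
def bytes_moved(scheme, full_gb, daily_change_gb, days):
    # Rough estimate of data read/transferred over a run of daily backups.
    if scheme == "full":
        return full_gb * days                            # every run copies everything
    if scheme == "incremental":
        return full_gb + daily_change_gb * (days - 1)    # one full, then daily deltas
    if scheme == "differential":
        # each differential contains all changes since the last full
        return full_gb + sum(daily_change_gb * d for d in range(1, days))
    raise ValueError(scheme)

# e.g. a 50 GB system with 1 GB of daily change over 7 days:
# full ~350 GB, incremental ~56 GB, differential ~71 GB
```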

Datastor Australia wrote: Hi Endurance
Thanks for the details; your setup sounds good. Was that 105 agents you are currently backing up?
Hmm, remove the 100 ;) I meant to write 5 agents - it is just a SOHO network.
The plans are a mixture of:
GFS (file-based backups) - I always ran fulls with ABR10, switched to GFS now - every day (1-10 GB per backup)
full backups of the systems every month (10-100 GB per backup)

Hi Endurance
What you might find is that file-level backups can be substantially slower at times compared to image-based backups, due to file system overheads and HDD performance when reading many small files, for example. It might be more efficient to perform just one type of backup (e.g., image or file). What I would suggest trying is perhaps using GFS but with image-based backups. This way you have the option to restore at both the image level (entire system) and the file level out of the one single backup.
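If you want to see the per-file overhead for yourself, a plain Python sketch like this (nothing Acronis-specific) compares reading a tree file by file against one large sequential stream:

```python
import os
import time

def read_file_by_file(root):
    # File-level pass: open and read every file individually,
    # paying metadata and seek overhead per file.
    start, total = time.time(), 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    total += len(f.read())
            except OSError:
                pass  # skip locked or inaccessible files
    return total, time.time() - start

def read_sequential(path, chunk_size=4 * 1024 * 1024):
    # Image-level pass: one large, mostly sequential read stream.
    start, total = time.time(), 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total, time.time() - start
```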