
Huge files server backup strategy


Greetings!

I was wondering what the best and most effective backup strategy would be for a huge file server (72 TB) with about 0.2% (200 GB) of data changing daily, replicated over a private WAN link to a public DR datacenter.

We are concerned about how much free space is needed on the file server and how much bandwidth is required.

Currently we use a file-based rsync approach, but the poor RPO and the bandwidth it consumes have led me to look for an alternative. I am hoping that a block-level approach with source-side deduplication will address my requirements. A rough bandwidth estimate is sketched below.
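For reference, a minimal back-of-envelope sketch of the sustained bandwidth needed for the daily change, assuming a hypothetical 8-hour replication window (the window length is an illustrative assumption, not a fixed requirement):

```python
# Rough WAN sizing for the daily change rate (200 GB/day from the figures above).
GB = 10**9                 # decimal gigabyte in bytes
daily_change_bytes = 200 * GB
window_hours = 8           # assumed replication window; adjust to your schedule
window_seconds = window_hours * 3600

required_mbps = daily_change_bytes * 8 / window_seconds / 10**6
print(f"~{required_mbps:.0f} Mbit/s sustained to move 200 GB in {window_hours} h")
# -> ~56 Mbit/s, before any compression or deduplication savings
```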

Thanks in advance for any helpful suggestions.


Hello Kanit,

I would suggest checking out our deduplication white paper for an explanation of how deduplication is used in a WAN optimization scenario like the one you describe.

The only real concern here is storage server performance. In the worst case, if all 72 TB were unique data at the block level, you would need something like 200 GB of RAM for the storage server (roughly 3 GB per TB of unique data). It is very unlikely that all 72 TB are unique, but the required resources could still be considerable. It all depends on the actual data, so it is hard to predict in advance without analyzing what sort of data is on the file server.
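To make that rule of thumb concrete, here is a minimal sketch; the unique-data fractions below are purely illustrative assumptions, not measurements of your data:

```python
# Worst-case RAM estimate for the dedup storage server,
# using the ~3 GB of RAM per TB of unique data rule of thumb.
total_tb = 72          # size of the file server from the original post
ram_per_tb_gb = 3      # rule-of-thumb RAM requirement per TB of unique data

for unique_fraction in (1.0, 0.5, 0.25):   # 100% unique is the worst case
    ram_gb = total_tb * unique_fraction * ram_per_tb_gb
    print(f"{unique_fraction:.0%} unique -> ~{ram_gb:.0f} GB RAM")
# 100% unique -> ~216 GB RAM, i.e. the worst case mentioned above
```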

Thank you