
StorageServer.exe high memory usage


Hi,

I'm using Acronis Backup & Recovery 11 Virtual Edition on vSphere 5.0 and back up data to a NAS device with deduplication.

After the backup task starts, StorageServer.exe grows to its maximum memory size. My ABR 11 server has 4 GB of memory, and StorageServer.exe is using 3.3 GB of physical memory. Eventually the backup process becomes very slow and never finishes.

Screenshots are attached.


This is normal.

The Storage Node should have 8 GB of RAM. It is not clear whether the RAM requirements grow with the raw (deduplicated) data or with the gross (undeduplicated) backup data. My system now has about 540 GB of used space in the data vault, and it's hogging the whole ESX cluster with its I/O (it's going to be removed from the cluster soon).

So, through trial and error and frequent calls to support, I figured out a few things:

* Inhale the best practices: http://www.acronis.com/support/documentation/ABR11/index.html#7143.html
* Have at least 8 GB of RAM; more is better. A 64-bit OS is pretty much a minimum.
* Support told me "StorageServer.exe will take any memory it can grab". And it sure does. It certainly benefits from more memory.
* Have lightning-fast storage, with separate volumes for:
  * The OS (where the Storage Catalogue will live) -> throughput is unimportant, fast seeks are important. SSD is king.
  * The dedup database -> throughput is unimportant, fast seeks are important. SSD is king.
  * The actual backups -> lots of space is pretty much the only thing you need here.
* If your plans start deleting old backups, be prepared for your Storage Node to spend most of its time updating the catalog. This might be a bug in build 17318, but it could be a design flaw. Keep in mind that the catalogue grows tremendously during catalog updates (a 7 GB catalogue grows to 40 GB and more, because the node commits the DB logs only when idle). Unfortunately, cataloguing cannot be disabled easily.
* Don't even think about putting anything but the backup vault on shared storage. Try to dedicate fast drives (I ordered 2x 15K SAS drives).

Hi,
4 GB of RAM is REALLY undersized, I think.
I operate 4 Storage Nodes with 1 Management Server, and each of them has 48 GB of RAM, which is well used during deduplication.
Please read the already-linked "Best Practices for Deduplication"; it contains a formula to calculate RAM in relation to the amount of backed-up data.

Regards, Ralf

Hello Ralf,

I'd be really interested in data about your setup. How many clients? How much data (raw and cooked)? What build?

From other conversations on the forum, I got the impression that storage nodes (and especially dedup) are rarely used.

I too am running into this. We're limited in the disk options available to us at our co-location facility.

I've got a server with dual quad-core Xeons, 16 GB of RAM, 2x 1 TB SATA in a mirror, and 6x 2 TB 7.2K RPM SATA in RAID 6 (4+2). Since we need a lot of space for our vault (we're backing up 20 servers, etc.), the dedup database had to live in the same location as the vault.

All backups are working fine; it's storageserver.exe that's the pain. Acronis told me to put the storage node on another server. That's not going to happen. I wanted this server to be primarily the backup server; I shouldn't have to split it up among a number of servers.

I think this high memory usage is just a bug of sorts. I can't find any reason it would need to crunch for that long. CPU usage is nothing, so I don't see why so much memory is needed for the process.

MP,

According to my observations, your server has way too much CPU and way too slow storage.

There's no use having many cores, because in the current beta all CPU-intensive processes use only a single core. With your 8 cores, you will see a maximum CPU usage of about 13% (one core out of eight) for storageserver.exe.

At least in the current beta (17318), your dedup database needs to be on the fastest drives you can get. I bought two 15K SAS drives for that, and they're barely fast enough.

The problem with (probably) all deduplication engines is that you need to store lots of hashes (one for every 4 KB block, in this case) and you need to look them up fairly randomly. That's pure seeking, so you need drives with very fast seeks. The more you can cache in RAM, the better the chance that your dedup engine doesn't have to ask the drive for a hash and can instead look it up in the RAM cache, which is much faster.
That's why StorageServer.exe using lots of RAM is generally a good thing. In the best case, all your hashes are in RAM.
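To make the seek-versus-cache point concrete, here is a toy sketch in Python (my own illustration, not Acronis's actual engine): a dict stands in for the dedup database, and every hash lookup is the effectively random access that turns into a disk seek when the index doesn't fit in RAM.

```python
import hashlib
import io

BLOCK_SIZE = 4096  # the engine discussed here hashes every 4 KB block

def dedup_blocks(stream, index, vault):
    """Toy dedup: store one copy of each unique 4 KB block.

    `index` (hash -> position in `vault`) plays the role of the dedup
    database. Each lookup hits an effectively random key, which means
    a disk seek in a real engine unless the index is cached in RAM.
    """
    refs = []
    while True:
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        digest = hashlib.sha256(block).digest()
        if digest not in index:        # unseen block: store it once
            index[digest] = len(vault)
            vault.append(block)
        refs.append(index[digest])     # duplicates become cheap references
    return refs

index, vault = {}, []
data = io.BytesIO(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE)
refs = dedup_blocks(data, index, vault)
print(f"{len(vault)} unique blocks for {len(refs)} logical blocks")  # 2 for 4
```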

The bad thing is that storageserver.exe does not seem to handle the database (which is an SQLite DB) well. During writes it seems to sync every hash, so whenever the dedup database receives new hashes it gets extremely slow, regardless of how much RAM you have, because storageserver forces your OS to sync every single hash. The workaround would be either a BBWC (battery-backed write cache) on your RAID controller or aggressive write caching turned on at your storage controller. But be warned: if the power goes out (without a BBWC), it's very likely your dedup database will be damaged, and rebuilding it can take a week or two; during that time your storage node (including its data) will be offline.
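As a rough illustration of the sync-per-hash behaviour described above (I don't know ABR's actual schema or settings), here is a small experiment with Python's sqlite3 module: committing after every inserted hash forces an fsync per row, while batching all inserts into one transaction syncs only once. On a spinning disk the difference is typically orders of magnitude.

```python
import os
import sqlite3
import tempfile
import time

def insert_hashes(per_row_commit, n=500):
    """Time n hash inserts, committing per row or once at the end."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    con = sqlite3.connect(path)
    con.execute("PRAGMA synchronous=FULL")  # fsync on every commit
    con.execute("CREATE TABLE hashes (h BLOB PRIMARY KEY)")
    start = time.time()
    for _ in range(n):
        con.execute("INSERT INTO hashes VALUES (?)", (os.urandom(32),))
        if per_row_commit:
            con.commit()      # one fsync per hash -- the slow pattern
    con.commit()              # batched: a single fsync for all rows
    elapsed = time.time() - start
    con.close()
    os.remove(path)
    return elapsed

print("commit per hash :", insert_hashes(True), "s")
print("one transaction :", insert_hashes(False), "s")
```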

Your best bet would be to get a small SSD (128 GB should be okay) and put your dedup database on it. Your storage node will get significantly faster.

Also consider backing up your depot from time to time. My Storage Node(s) have already corrupted two depots, and with support always taking 2-3 weeks for any reaction (NOT a solution, just a reaction), you'd better back up your backups.

Either that or turn off dedup until it's ready.

Heiko,

Thanks for taking the time to respond to my issue. Your information was very helpful!

Since we're backing up only about 17 machines that are almost exactly alike, I didn't think the deduplication feature would be all that taxed. I agree with you: it seems to be a poor execution of the feature, although the dedup algorithms themselves are pretty good and offer quite a bit of space savings.

That said, we're quite limited in what we can do and configure on these systems running at a hosted location. We're paying through the nose for these servers, and while they're solid, they're not optimally configured. If I were to request 15K drives or *gasp* SSDs, I can't imagine what the bill would look like.

I took your advice and created a separate vault without dedup enabled, and reconfigured my policies to point to it. I wasn't able to purge any existing backups from the dedup vault; the delete jobs just ran and ran, so I stopped them and deleted the entire dedup vault instead. I'll just take fresh backups going forward. Luckily we're at the beginning stages of using our hosted facility, so it's not like I lost anything critical. *knock on wood*

StorageServer.exe now hovers at about 180 MB during backups, which is a ton better. Two backups ran nicely yesterday, and everything seems smoother now.

Thanks again for your input!

Hi Ercan, as others have pointed out, this is expected behaviour when deduplication is in use. Basically, ABR11 tries to load the entire deduplication database into RAM to speed up the deduplication process; the more RAM you have, the better and smoother the process will be.

It's best to refer to the documentation linked above, as it has some very good details and best practices that need to be followed; otherwise you can run into performance issues and see very high disk I/O, because there isn't enough memory to hold the dedup database and perform the required lookups, transactions, etc.

All the best. I'm sure that with the details above from many users, you should be able to get your backups running a lot better.

Hi everyone, I found the following explanation and configuration steps on the web:
Configuring a storage node with Windows registry
Cataloging

The following parameter enables or disables cataloging on a storage node. The parameter is useful when updating or loading the data catalog takes a long time.

The parameter is a string value that should be manually added to the corresponding Catalog key in the registry. If this parameter is missing in the registry, cataloging is enabled on the storage node.

Enabled

Possible values: 0 (disables cataloging) or 1 (enables cataloging)

Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Acronis\ASN\Configuration\Catalog\Enabled

If cataloging is disabled, the storage node will not catalog backups in the managed vaults. Therefore, the Data view and Data catalog will not display this data.
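If you prefer to script this rather than edit the registry by hand, a minimal sketch using Python's standard winreg module could look like the following (run it elevated, and verify the path in regedit first: on 64-bit Windows a 32-bit service may keep its keys under Wow6432Node instead).

```python
import winreg

# Path from the documentation quoted above; verify it in regedit first,
# as a 32-bit service on 64-bit Windows may live under Wow6432Node.
KEY_PATH = r"SOFTWARE\Acronis\ASN\Configuration\Catalog"

# Per the docs above, Enabled is a string value:
# "0" disables cataloging, "1" enables it.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_SZ, "0")
```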

The preferred indexing algorithm

By default, a storage node is configured to use the newest indexing algorithm whenever possible. You can change this behavior by using the PreferedDedupIndex parameter.

Possible values: 0 (use the most recent algorithm), 1 (use the pre-Update 6 algorithm), or 2 (use the Update 6 algorithm)

Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Acronis\ASN\Configuration\StorageNode\PreferedDedupIndex

Default value: 0

The parameter applies to the deduplication databases that are created after the parameter has been changed. For existing databases, the corresponding algorithm is selected automatically.
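The same scripted approach should work for this parameter too. One assumption to flag: the documentation quoted here doesn't state the value's type, so writing it as a string like Catalog\Enabled is my guess; double-check before relying on it.

```python
import winreg

# Assumption: written as a string value, like Catalog\Enabled above.
KEY_PATH = r"SOFTWARE\Acronis\ASN\Configuration\StorageNode"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # "1" = use the pre-Update 6 algorithm, per the values listed above
    winreg.SetValueEx(key, "PreferedDedupIndex", 0, winreg.REG_SZ, "1")
```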

Memory allocation settings

When the Acronis Storage Node Service is started, it allocates a certain amount of memory for itself to keep the index and other data. By default, the storage node is configured to consume 80 percent of RAM, but leave at least 2 GB of RAM for the operating system and other applications. You can change this behavior by using the DatastoreIndexCacheMemoryPercent and DatastoreIndexReservedMemory parameters.

The amount of allocated memory is calculated based on the following rule:

Allocated memory = DatastoreIndexCacheMemoryPercent percent of total RAM, but not more than total available RAM minus DatastoreIndexReservedMemory

This rule ensures a balance between storage node performance and the operating system's memory requirements for systems with RAM ranging from 8 GB to 64 GB and more. If the server has plenty of RAM, the storage node takes most of the memory for better performance. If the server lacks RAM (less than 10 GB with the default parameter values), the storage node reserves a fixed amount of memory for the operating system.
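In code, the quoted rule is simply the minimum of two terms. A small sketch with the default parameter values shows why a low-memory box is pinned at RAM minus the 2 GB reserve while a well-equipped box gets the full 80 percent:

```python
def storage_node_allocation(total_ram_mb,
                            cache_percent=80,   # DatastoreIndexCacheMemoryPercent
                            reserved_mb=2048):  # DatastoreIndexReservedMemory
    """Memory the storage node allocates, per the rule quoted above."""
    return min(total_ram_mb * cache_percent // 100,
               total_ram_mb - reserved_mb)

for ram_gb in (4, 8, 16, 48):
    mb = storage_node_allocation(ram_gb * 1024)
    print(f"{ram_gb:>2} GB RAM -> {mb / 1024:.1f} GB allocated")
# 4 GB -> 2.0, 8 GB -> 6.0, 16 GB -> 12.8, 48 GB -> 38.4
# The two terms cross at 10 GB, matching the "less than 10 GB" note above.
```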

DatastoreIndexCacheMemoryPercent

Possible values: any integer between 0 and 100, as a percentage

Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Acronis\ASN\Configuration\StorageNode\DatastoreIndexCacheMemoryPercent

Default value: 80%

To apply the change, restart the Acronis Storage Node Service.

DatastoreIndexReservedMemory

Possible values: 0 up to RAM size, in megabytes

Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Acronis\ASN\Configuration\StorageNode\DatastoreIndexReservedMemory

Default value: 2048 MB

To apply the change, restart the Acronis Storage Node Service.

http://www.acronis.com/en-sg/support/documentation/AcronisBackup_11.5/i…

Hi Kevin,

Thank you for sharing this guide here!

I just want to add that these instructions apply to Acronis Backup 11.5, and that the linked page also has more information on the new deduplication algorithm, which was introduced in Acronis Backup 11.5 Update 6.

Thank you.