Indexing errors on storage node
We are experiencing frequent errors in the Acronis Backup & Recovery (ABR) logs from one of our storage node servers (an example error message is pasted below).
-- What "file system limitation" is involved in this?
-- How dangerous is this error message?
Thanks in advance for your thoughts on this...
--------------------
Log Entry Details
--------------------
Type: Error
Date and time: 27/06/2012 11:50:14 PM
Backup plan: [None]
Task: [None]
Managed entity type: [None]
Managed entity: [None]
Machine: wpg138app3.SHARED.MBGOV.CA
Code: 20,250,685(0x135003D)
Module: 309
Owner: ASN User
Message:
Command 'Indexing vault' has failed.
Additional info:
--------------------
Error code: 61
Module: 309
LineInfo: 4a8728dc8a1c94f4
Fields: $module : storage_server_vsa64
Message: Command 'Indexing vault' has failed.
--------------------
Error code: 123
Module: 39
LineInfo: 8ed4fba2a46c4757
Fields: $module : storage_server_vsa64
Message: Failed to index backup data '6A2EFDF8-3360-44A1-A447-820DDB33595D' from vault 'VEMABackups_OffSite'.
--------------------
Error code: 116
Module: 39
LineInfo: aeb34f528199bc7c
Fields: $module : storage_server_vsa64
Message: Failed to save the items.
--------------------
Error code: 33
Module: 39
LineInfo: f7d1612d6b579caf
Fields: $module : storage_server_vsa64
Message: Storage node error.
--------------------
Error code: 98
Module: 39
LineInfo: aeb34f528199bc57
Fields: $module : storage_server_vsa64
Message: Data store transaction has failed.
--------------------
Error code: 3
Module: 4
LineInfo: f330789b59b3f2f2
Fields: $module : storage_server_vsa64
Message: Error occurred while writing the file.
--------------------
Error code: 3
Module: 4
LineInfo: 795414836ed3acda
Fields: $module : storage_server_vsa64
Message: Error occurred while writing the file.
--------------------
Error code: 3
Module: 4
LineInfo: 7ceb2cdc9fb12068
Fields: function : WriteFile, $module : storage_server_vsa64
Message: Error occurred while writing the file.
--------------------
Error code: 65520
Module: 0
LineInfo: bd28fdbd64edb8e0
Fields: code : 2147943065, $module : storage_server_vsa64
Message: The requested operation could not be completed due to a file system limitation
--------------------
Acronis Knowledge Base: http://kb.acronis.com/errorcode/
Event code: 0x0135003D+0x0027007B+0x00270074+0x00270021+0x00270062+0x00040003+0x00040003+0x00040003+0x0000FFF0+0x80070299
--------------------

Thanks for that information, Fedor. I will work on your suggestions, and let you know how it works out.
Frankly, I'm not eager to install the 17438 build, because 17437 took us down for more than a week until your team found a solution.
Thanks again for your quick response to my question.
Regards,
Karl

OK, Fedor, I have run the contig utility, and you were correct: there were some severely fragmented files in our backup directory.
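For reference, the analysis pass was roughly this (the vault path below is just a placeholder for wherever the backups actually live):

    contig.exe -a -s -v "D:\Vaults\VEMABackups_OffSite"

The -a switch only analyzes fragmentation without changing anything, -s recurses into subdirectories, and -v reports the fragment count for each file, which is what flagged the badly fragmented ones.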
I will check our logs again tomorrow morning and let you know how they look...
Thanks again for your help with this...
Cheers,
Karl

For what it's worth, the indexing error happened again after last night's backup jobs ran.
I re-ran the contig utility, and it reported that the file unified_data.ds.1 is in 1,819,667 fragments but that it was unable to defragment it.
I am copying that file to another disk; I will then delete the original, defragment the disk it was on, and copy the file back.
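For anyone following along, the workaround amounts to roughly this (drive letters and paths are placeholders for our actual layout, and the defrag switches below are the Windows 7 / Server 2008 R2 style; older versions use different ones):

    copy /B "D:\Vaults\VEMABackups_OffSite\unified_data.ds.1" "E:\Staging\unified_data.ds.1"
    del "D:\Vaults\VEMABackups_OffSite\unified_data.ds.1"
    defrag D: /X /U /V
    copy /B "E:\Staging\unified_data.ds.1" "D:\Vaults\VEMABackups_OffSite\unified_data.ds.1"

The /X switch consolidates free space rather than just defragmenting the remaining files, so when the .ds file is copied back it should land in far fewer fragments.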
Are you sure that the latest build fixes the problem with indexing?

It's fixed in build 17438. The corresponding entry in the release notes (http://www.acronis.com/support/updates/changes.html?p=10381):
* Acronis Storage Node Service does not progressively pre-allocate free space for data store and temporary files on NTFS vaults.

OK, Fedor, thanks for that info. I have been able to set up a daily disk defrag task on the drive where the backups are stored, so that should help prevent these errors from recurring.
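In case it's useful to anyone else, the scheduled task is along these lines (drive letter, task name and start time are just our values, and again these are the Windows 7 / Server 2008 R2 defrag switches):

    schtasks /create /tn "Nightly vault defrag" /tr "defrag.exe D: /U /V" /sc daily /st 03:00 /ru SYSTEM

It runs the built-in defragmenter against the vault drive every night under the SYSTEM account; the start time should obviously sit outside the backup window.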
I still want to wait a week or two before installing 17438 to make sure we don't repeat the major downtime we experienced from 17437. (See support case 01544011 for all the gory details...)