Lots of extra vmdk files in the virtual machine's folder
I'm wondering if Acronis vmProtect is the culprit here. I have 2 virtual machines taking up a lot of space on our datastores. I went to browse these datastores and noticed a huge number of duplicated vmdk files. Is this Acronis vmProtect's doing, and if so, how can I reclaim all of this space? All the snapshots with the % symbols in them and the consolidate helper-0 snapshot, etc. have been deleted, yet these extra disk files still remain. This is a mission-critical box and I want to ensure it's healthy.
Attachment | Size
---|---
extravmdkfiles.jpg | 267.76 KB


Hello,
I did have to open a case [01576909] and I sent more info to Rocco Schiavone.
VMware support did a WebEx and determined that the AcronisAppliance's disks 3 through 7 were attached to various snapshot vmdk files of the 2 problem servers in question. One server has over 56 snapshots, and VMware only supports 32 snapshots reliably, so they suggested using VMware Converter to do a V2V. Thing is, we will have to schedule time to do this on a weekend and hope that our almost 900GB VM is converted to another working VM within the weekend to avoid downtime Monday morning.
The other server only had 8 snapshots. Like the other server, they were invisible in the GUI, and consolidating them resulted in file lock errors. This was because the AcronisAppliance had them mapped as a drive for some reason. We hard-powered off the Acronis appliance, and the snapshot consolidation is now running on the server with 8 snapshots. The VMware support rep in this case was able to create a snapshot and then use Delete All to consolidate. It is taking a long time, but there is no downtime.
But back to the other server: since it has more than 32 snapshots, even if they were comfortable that it would merge correctly, it would take a very LONG time.
Here is the problem per VMware support (my ticket number with the VMware storage group is #12174065305). They are seeing Acronis vmProtect make a snapshot, and it looks like it mounts the disk to itself and runs the backup off it. But for some unknown reason Acronis does not detach this disk, therefore the delta and Servername-00000x.vmdk files are locked, so the vStorage API cannot consolidate them. Sometimes we are left with a consolidate helper-0 snapshot in the VMware UI. Sometimes we are left with nothing in the UI and we have to create a snapshot first, then delete it to merge (or use the command line, e.g. vim-cmd vmsvc). They recommend removing disks 3 through 7 from the AcronisAppliance, then powering it back on and deleting the backup jobs for those 2 servers. Then, when we are finally back to 0 snapshots (hopefully sometime next week, as the V2V will take A LONG TIME thanks to 56+ snapshots), create new jobs and see what happens. They indicated we need to open a case with Acronis, hence my ticket 01576909 sent via email.

Rocco had to escalate it further.
I also noticed 2 other VMs have snapshot files even though there are no snapshots. Here is an example of one of them.
The contents of the vmsd file (cat servername.vmsd):
.encoding = "UTF-8"
snapshot.lastUID = "187"
Now for the heck of it I created a snapshot:
vim-cmd vmsvc/snapshot.create 1088 test testing 0 0
After a few minutes, and after confirming it was there in Snapshot Manager and also via snapshot.get, I removed all snapshots:
vim-cmd vmsvc/snapshot.removeall 1088
Just to double-check that there are no snapshots left:
vim-cmd vmsvc/snapshot.get 1088
Now, finally, looking at the vmsd again:
.encoding = "UTF-8"
snapshot.lastUID = "188"
This particular machine has these files, which haven't been touched since February, and it's May now. They aren't referenced in the vmx file either.
-rw------- 1 root root 1.9M Feb 11 19:02 servername-000003-ctk.vmdk
-rw------- 1 root root 17M Feb 11 19:02 servername-000003-delta.vmdk
-rw------- 1 root root 387 Feb 11 19:02 servername-000003.vmdk
-rw------- 1 root root 1.9M Feb 14 19:05 servername-000004-ctk.vmdk
-rw------- 1 root root 17M Feb 14 19:05 servername-000004-delta.vmdk
-rw------- 1 root root 387 Feb 14 19:05 servername-000004.vmdk
-rw------- 1 root root 17M Feb 11 19:02 servername_1-000003-delta.vmdk
-rw------- 1 root root 393 Feb 11 19:02 servername_1-000003.vmdk
-rw------- 1 root root 17M Feb 14 19:05 servername_1-000004-delta.vmdk
-rw------- 1 root root 393 Feb 14 19:02 servername_1-000004.vmdk
-rw------- 1 root root 17M Feb 21 19:04 servername_1-000005-delta.vmdk
-rw------- 1 root root 326 Feb 21 19:04 servername_1-000005.vmdk
I know Acronis vmProtect is creating these snapshots because I never did. But they are orphaned, because VMware doesn't even know they exist. They are old and not attached to anything. In fact, on another VM there were 2 orphan 000000x and 00000x-delta files. I moved them to a TMP folder and the VM didn't care one bit.
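For anyone else doing this cleanup, here is roughly how I check that a snapshot disk really is orphaned before I park it. This is just a sketch run from the VM's folder in the ESXi shell; "servername", the -000003 chain, "datastore1" and the orphan-tmp folder are placeholder names, not my real ones.
# 1) Which disks does the VM actually reference right now?
grep -i vmdk servername.vmx
# 2) Does any other descriptor name the suspect disk as its parent?
#    (descriptor .vmdk files are small text files; skip the binary -delta/-ctk/-flat extents)
for f in *.vmdk; do
  case "$f" in *-delta.vmdk|*-ctk.vmdk|*-flat.vmdk) continue ;; esac
  grep -l "servername-000003.vmdk" "$f"
done
# 3) Only if neither check shows a reference, park the chain instead of deleting it outright:
mkdir -p /vmfs/volumes/datastore1/orphan-tmp
mv servername-000003*.vmdk /vmfs/volumes/datastore1/orphan-tmp/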
I wish Acronis would clean up after itself. I hope they add that feature in vmProtect 8. I already spent a good chunk of money on this, and if I try to get upper management to approve a product like Veeam, they will look at me like I have two heads.

Let me put it this way: Acronis vmProtect is doing more harm than good. A mission-critical VM has 54 snapshot disks right now. It's eating up 1.6 TB of space. This weekend we need to do a V2V conversion on it, and thank goodness it's a 3-day weekend with Memorial Day or else we might not be able to get it done. VMware said that even if we tried to consolidate it via the command line, it would likely fail and corrupt the whole machine. They only support 32 snapshots.
Now we need to make a new VM for it and cross our fingers that it goes smoothly. Add to that the worry of something happening in the meantime, and also periodically logging in over the weekend to monitor the V2V progress.

I did a find for all the delta files accessible by one of my virtual machine hosts.
Now, IF I ever create a snapshot it's very RARE, and I'm quick to merge it. So these snapshots were all created by vmProtect and never cleaned up.
All I ask is that vmProtect clean up after itself. I changed the server names, but as you can see this is a lot of stuff! (A rough orphan-check sketch follows the listing below.)
# find /vmfs/volumes/ -name "*delta*" -type f -print0 | xargs -0 du --human-readable --total
64K /vmfs/volumes/a037688c-4b8f4be8/AcronisAppliance/_vm1_sata__Server4_Server4_000002-delta.vmdk
176K /vmfs/volumes/a037688c-4b8f4be8/AcronisAppliance/_vm1_sata__Server4_Server4_1_000002-delta.vmdk
3.0M /vmfs/volumes/4a941feb-c523ee7a/Server5/Server5-000002-delta.vmdk
536K /vmfs/volumes/7ba984ee-de659c1c/Server1/Server1-000003-delta.vmdk
368K /vmfs/volumes/7ba984ee-de659c1c/Server1/Server1_1-000003-delta.vmdk
416K /vmfs/volumes/7ba984ee-de659c1c/Server1/Server1-000004-delta.vmdk
368K /vmfs/volumes/7ba984ee-de659c1c/Server1/Server1_1-000004-delta.vmdk
352K /vmfs/volumes/7ba984ee-de659c1c/Server1/Server1_1-000005-delta.vmdk
844M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000003-delta.vmdk
1.6G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000024-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000001-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000001-delta.vmdk
3.4M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000002-delta.vmdk
2.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000001-delta.vmdk
312K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000003-delta.vmdk
633M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000005-delta.vmdk
252M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000005-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000005-delta.vmdk
1.7G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000008-delta.vmdk
1.8G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000004-delta.vmdk
1.2M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000002-delta.vmdk
480K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000002-delta.vmdk
266M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000003-delta.vmdk
667M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000004-delta.vmdk
312K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000004-delta.vmdk
1.5G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000006-delta.vmdk
659M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000006-delta.vmdk
13M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000006-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000007-delta.vmdk
689M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000007-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000007-delta.vmdk
647M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000008-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000014-delta.vmdk
592M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000014-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000014-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000008-delta.vmdk
2.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000009-delta.vmdk
575M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000009-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000009-delta.vmdk
706M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000011-delta.vmdk
1.4G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000010-delta.vmdk
539M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000010-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000010-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000011-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000011-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000013-delta.vmdk
2.8G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000017-delta.vmdk
1.5G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000017-delta.vmdk
542M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000017-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000018-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000012-delta.vmdk
576M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000012-delta.vmdk
487M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000012-delta.vmdk
1.4G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000013-delta.vmdk
709M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000013-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000015-delta.vmdk
959M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000015-delta.vmdk
344K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000015-delta.vmdk
1.1G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000018-delta.vmdk
2.1G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000016-delta.vmdk
995M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000016-delta.vmdk
336K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000016-delta.vmdk
732M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000018-delta.vmdk
810M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000023-delta.vmdk
632K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000023-delta.vmdk
684M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000024-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000024-delta.vmdk
1.5G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000020-delta.vmdk
579M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000020-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000020-delta.vmdk
1.9G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000019-delta.vmdk
634M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000019-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000019-delta.vmdk
3.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000021-delta.vmdk
1.1G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000021-delta.vmdk
544K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000021-delta.vmdk
1.8G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000022-delta.vmdk
1.1G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000022-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000022-delta.vmdk
3.0G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000023-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000027-delta.vmdk
657M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000027-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000027-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000025-delta.vmdk
717M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000025-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000025-delta.vmdk
1.4G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000026-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000030-delta.vmdk
1.1G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000026-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000026-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000031-delta.vmdk
580M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000031-delta.vmdk
631M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000030-delta.vmdk
312K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000030-delta.vmdk
571M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000032-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000032-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000042-delta.vmdk
2.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000028-delta.vmdk
14G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000028-delta.vmdk
6.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000028-delta.vmdk
642M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000029-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000029-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000031-delta.vmdk
1.4G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000032-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000029-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000035-delta.vmdk
713M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000035-delta.vmdk
336K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000035-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000036-delta.vmdk
639M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000036-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000038-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000039-delta.vmdk
706M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000039-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000039-delta.vmdk
1.9G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000040-delta.vmdk
312K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000036-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000038-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000033-delta.vmdk
685M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000033-delta.vmdk
3.0M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000033-delta.vmdk
1.8G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000034-delta.vmdk
15G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000034-delta.vmdk
22G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000034-delta.vmdk
652M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000042-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000042-delta.vmdk
1.7G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000043-delta.vmdk
802M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000043-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000043-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000037-delta.vmdk
626M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000037-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000037-delta.vmdk
915M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000038-delta.vmdk
14G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000040-delta.vmdk
22G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000040-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000041-delta.vmdk
655M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000041-delta.vmdk
336K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000041-delta.vmdk
1.5G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000044-delta.vmdk
1.4G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000044-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000044-delta.vmdk
2.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000045-delta.vmdk
15G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000045-delta.vmdk
22G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000045-delta.vmdk
3.5M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000046-delta.vmdk
1.2M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000046-delta.vmdk
496K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000046-delta.vmdk
1.2G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000047-delta.vmdk
779M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000047-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000047-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000050-delta.vmdk
867M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000050-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000050-delta.vmdk
3.5G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000051-delta.vmdk
15G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000051-delta.vmdk
1.2M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000055-delta.vmdk
496K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000055-delta.vmdk
2.5G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000056-delta.vmdk
19G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000056-delta.vmdk
45G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000056-delta.vmdk
3.5M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000048-delta.vmdk
1.2M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000048-delta.vmdk
504K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000048-delta.vmdk
1.7G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000049-delta.vmdk
1.1G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000049-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000049-delta.vmdk
1022M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000052-delta.vmdk
563M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000052-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000052-delta.vmdk
1.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000053-delta.vmdk
1.1G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000053-delta.vmdk
328K /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000053-delta.vmdk
2.3G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000054-delta.vmdk
839M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_1-000054-delta.vmdk
672M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000054-delta.vmdk
3.7M /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2-000055-delta.vmdk
23G /vmfs/volumes/7ba984ee-de659c1c/Server2/Server2_2-000051-delta.vmdk
344K /vmfs/volumes/7ba984ee-de659c1c/temp/Server3-000002-delta.vmdk
408K /vmfs/volumes/7ba984ee-de659c1c/temp/Server3_1-000002-delta.vmdk
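If it helps anyone, below is the rough logic I use to turn a listing like the one above into a list of candidates. It's only a sketch: it checks whether each delta's base name appears in any .vmx, it does not walk parent chains, and the glob assumes VMs sit one folder deep on each datastore, so treat hits as things to investigate rather than a delete list.
find /vmfs/volumes/ -name '*-delta.vmdk' -type f | while read -r delta; do
  # base snapshot name, e.g. Server2_1-000034
  snap=$(basename "$delta" -delta.vmdk)
  # flag it if no .vmx on any datastore references that name
  if ! grep -q "$snap" /vmfs/volumes/*/*/*.vmx 2>/dev/null; then
    echo "possibly orphaned: $delta"
  fi
done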


They asked for more information and gave me a special FTP link to send them compressed files. I gathered up everything they asked for, so the ball is in support's court right now.
As for our machine with over 55 snapshots... well, we ran VMware Converter on it and did a V2V. It took the entire Memorial Day 3-day weekend to run, but it worked. I'm in the habit of checking for snapshots DAILY now so we don't get ourselves into that situation again.

I too have just come across this. It actually got to the point where the servers/VMs crashed completely.
I am now consolidating the many GB of data, but am resigned to restoring from backup (which I hope worked!).
We believe that it was the replications that were causing this. Was it backup or replication on your side?
Thanks, Acronis!

chi-ltd, what version are you on?
I haven't seen this happen much in the newer versions, or even the 9 beta.
I only had it happen to 2 machines, and I caught it in time. One was running on a -00005.vmdk for its virtual C: and D: drives. I had to detach the disks from the Acronis virtual appliance and then create another snapshot on the affected VM. After that was done I could consolidate all snapshots.
The only time it really happens is if the virtual appliance running the backup doesn't let go of its additional virtual hard disks. If it doesn't let go of these disks, then the snapshot consolidation process fails because the -0000#.vmdk file is locked by AVP 6, 7, 8 or the 9 beta.
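For reference, this is roughly how I confirm who is holding the file; a hedged sketch with made-up paths. vmkfstools -D dumps the VMFS lock info, and the owner field ends in the MAC address of the ESXi host holding the lock (all zeroes means nothing has it locked).
# Dump lock/owner info for the stuck snapshot disk (path is an example):
vmkfstools -D /vmfs/volumes/datastore1/Server2/Server2-000005-delta.vmdk
# Then check whether the backup appliance still has that snapshot disk attached as one of its own:
grep -i "Server2-000005" /vmfs/volumes/*/AcronisAppliance/*.vmx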

Acronis vmProtect 8.0 Agent (build 8.0.8184)
Yes, 2 machines here also.
I had to down the host, down the Acronis appliance, remove the attached virtual disks from the appliance, and then run a consolidation of the 2 VMs, which took 18 hours.
It looks like the replication failed and then subsequent backups and replications created vmdks up to 000062.
I want to move to Veeam, but I assume both use the same API, so that's not going to help!?
We also use Backup Exec outside of the backup/replication windows, but it's clearly causing VSS service issues...

I'm not sure if Veeam would have the same problem. I haven't found these types of posts in the Veeam forums, though. Also consider checking out Unitrends. We are evaluating 3 options for our 2014 budget: stick with Acronis and renew maintenance, switch to Unitrends Enterprise, or switch to Veeam v7.
I'm getting numbers on switching just in case we go that route.
I wasn't considering switching, but then last week it happened on 2 VMs; one was on a -00005.vmdk and one was on a 00002.vmdk, and neither would consolidate until I removed the attached disks from the virtual appliance.
I think what Acronis should do is periodically poll itself to see if it has snapshot disks attached while no backups are running; if so, it should detach them and then query those particular machines. Maybe even email a nightly report saying which machines have snapshots (see the rough check sketched at the end of this post).
I don't think Veeam actually "attaches" vmdks to itself.
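In the meantime, the crude nightly check below (run from the ESXi shell) gets me most of the way there. It's only a sketch and assumes the vim-cmd output format on ESXi 5.x; it simply flags any VM whose attached disks include a -0000## snapshot vmdk.
# take each Vmid from getallvms (skipping the header row)
vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}' | while read -r vmid; do
  # flag the VM if any of its attached disks is a snapshot (-0000##) vmdk
  if vim-cmd vmsvc/device.getdevices "$vmid" 2>/dev/null | grep -q -e '-0000[0-9][0-9]*\.vmdk'; then
    echo "VM $vmid is running with snapshot disks attached"
  fi
done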


Veeam does not have the same issues. We have been using it happily for over a year now.

Hi all,
The most likely root cause of the snapshots being left on the backed-up VM is that the vmProtect Virtual Appliance crashed at the end of the backup process, so there were no disk detach events and therefore snapshot deletion was impossible. We've fixed several related problems in the newer Linux kernel included in vmProtect 9, and there is also a protection mechanism which automatically detects orphaned snapshots that were left behind and tries to remove them. The snapshot deletion is performed on the VMware side, while the disk detach event is performed by the appliance, and that's where it most likely fails. In order to investigate such issues thoroughly we'll need to check the vmProtect log file (View->Show Logs->Save All To File), which gives a history of what has happened (including crash events), plus the /tmp/core files, i.e. dumps of the crash event.
If any of you are running into these problems please let me know the support case IDs in a private message, so that I can check them out and advise accordingly.
Thank you.
--
Best regards,
Vasily
Acronis vmProtect Program Manager

Miles: Are you using the Windows installation or the appliance?

At the time I was using it, I believe it was the appliance. I don't recall having the option to use a Windows install with Acronis. Now I am using the Windows install of the other product.



Sometimes I notice vmProtect failing and leaving these snapshots behind; over time, if left unchecked, I can see it causing many issues.
Wouldn't it be easier if, each time vmProtect runs a backup task, it removed any prior snapshots made by vmProtect only?
I.e., if the naming convention for the snapshot was "vmProtect - 6%2f09%2f2013 2:51:00 PM",
then each time vmProtect runs a job it would remove any snapshots starting with "vmProtect - ".
This would also leave manually created snapshots alone.
That way, if the backup failed on attempts 1, 2, 3, 4 and 5 but attempt 6 was successful, you are not left with 5 vmProtect snapshots.
Or is there a better way (excluding backups never failing :) )? A rough command-line sketch of the idea follows.
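For what it's worth, the same idea can be done by hand from the ESXi shell today. This is only a sketch of the cleanup step, not anything vmProtect actually does: the VM id 1088 and snapshot id 42 are placeholders, and on recent ESXi builds snapshot.remove takes a snapshot id.
# List the VM's snapshots and note the ids of the ones whose name starts with "vmProtect - "
vim-cmd vmsvc/snapshot.get 1088
# Remove one leftover vmProtect snapshot by its id, leaving manually created snapshots alone
vim-cmd vmsvc/snapshot.remove 1088 42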

Hi BL460c,
Actually there is such a mechanism already, and it does work. Orphaned snapshots which vmProtect detects as left over from previous failed jobs are removed automatically 1 hour after the task fails unexpectedly (for example after a backup daemon crash); the 1 hour is required to ensure the snapshot is not in use by another vmProtect agent. This algorithm had some issues in the 8th version, where it didn't work every time, but we have fixed it in the 9th version of vmProtect. You will see events similar to these in the logs:
evt msg="Disk '[datastore4] Server-02/Server-02_1-000002.vmdk' has not been detached from machine '69'. Its detachment will start now." j="00000000" tm="05/09/13 01:43:59.190" tp="I"
evt msg="Detaching hard disk 'SCSI1:2'." j="00000000" tm="05/09/13 01:43:59.973" tp="I"
evt msg="Disk '[datastore1] Server-02/Server-02-000002.vmdk' has not been detached from machine '69'. Its detachment will start now." j="00000000" tm="05/09/13 01:44:06.108" tp="I"
evt msg="Detaching hard disk 'SCSI1:1'." j="00000000" tm="05/09/13 01:44:06.476" tp="I"/
evt msg="Snapshot '68-snapshot-403' has not been deleted from machine '68'. Its deletion will start now." j="00000000" tm="05/09/13 01:44:12.607" tp="I"
evt msg="Remove snapshot (68-snapshot-403)." j="00000000" tm="05/09/13 01:44:12.628" tp="I"
Note, however, that there could be another case: for example, the backup completed successfully and the disks were detached from the appliance, but the snapshot deletion event failed in vSphere itself. Such incidents are not tracked (though this should be a rare case anyway).
If you can send me (in a private message) the log files from vmProtect (View->Show Logs->Save All To File) which show the failure after which the snapshot was left, it would be really helpful for giving more details on the reasons for the error in your particular case.
Thank you.
--
Best regards,
Vasily
Acronis vmProtect Program Manager

BL460c: yes, I agree; from what I have seen with my setup, the exact problem occurred for me and the VM eventually crashed....
Have you used the Veeam appliance, and is it more reliable?
PS - one other thing I have noticed is that the Exchange VM replication job doesn't use CBT. This could also be why the VM crashed, i.e. long replication jobs and failures...

@Vasily,
That's good news. We're looking at upgrading to v9 in our production environment next week, so it's good to hear there is a workaround.
We also had issues with Storage vMotion in version 8 while VMs were being backed up; is this fixed now in version 9?
@Chi-Ltd
Yes, I have used Veeam in the past, and it was reliable for backups. But I did have issues with GPT partitions, so it's not perfect either (though this was back in a previous version).
Veeam has several features vmProtect doesn't (Hyper-V support, to name one), and vmProtect has features Veeam doesn't have, so which to use will depend specifically on your environment.
For example:
Acronis also includes P2V and ESXi backup, and a simple web interface and configuration.
Veeam has really good integration with HP StoreVirtual SANs (LeftHand), where the snapshots are performed at the SAN level and the data is copied directly from those snapshots (quicker).
There really is no "X is better than Y"; even though they are both VMware API-based backup products, I believe they are targeted at different markets. A fairer comparison might be ABR vs. Veeam, as they are more similarly priced with a similar feature set.
As for Exchange and CBT, I didn't realise this; is there a specific limitation with Exchange and lack of CBT support?

Well AVP 9 is much improved over prior versions. I tried the beta and I really didn't have any issues to report on it. It worked rather well. So I did upgrade both of our virtual appliances to the final build of AVP 9. A pretty simple, painless process really.
I'm testing Veeam v7 and Unitrends Enterprise Backup. Veeam is an application that you install on a Windows Server VM, so if you go that route be prepared to deploy yet another full-blown Windows server to install Veeam on. Unitrends is an OVF template that you import into vCenter. It's a Linux-based virtual appliance where all you really do in the text-based console is set the IP address. Then you go to it via a web browser and do everything there.
However, Unitrends is a lot more complex. First off, I can't even get it to authenticate to any of my Windows machines or Openfiler. I even tried the special vmProtect user we created for Openfiler, and while that works for vmProtect, it doesn't for Unitrends. I was able to get some CIFS storage from an EMC NX4 attached to it, and I'm backing up Exchange with it for the trial. It does the same thing as vmProtect: you'll see it snapshot a VM, that VM will then be running on a -00001.vmdk, its original disk gets attached to the Unitrends appliance, and I guess it processes that. When done, that disk detaches and it initiates snapshot consolidation of the guest that you are backing up.
Veeam does it the same way as well, even though it's running on a Windows Server. Veeam has a nice UI and I like it better than Unitrends (so far). I have it backing up a SQL server for the test. So you will see the exact same process that Unitrends and vmProtect both use.
Now with vmProtect, who knows why it would leave sparse snapshots out there. I guess it's when the backup engine 'crashed'. Who knows why it would have crashed. Buggy? Memory issue? IO issue? Linux kernel issue? It could be anything. The key is, if it doesn't crash then this shouldn't happen. Now it sounds like there's a protection mechanism in VMP9, and that's great! You don't hear about this issue happening with Unitrends or Veeam. Why is that? Is it because their backup engine doesn't crash? If their backup engine does crash, does it immediately invoke a failover where it can recover and continue the backup, or do they automatically remove the sparse snapshots? I'm not sure what they do...
Leads me to this...
After trying other products, I would like to see more development from Acronis in version 10 and beyond. I would like to see an option where you can back up a physical machine from the web GUI: you know, push out an agent but let the virtual appliance handle everything else. Unitrends does this. We may be 95% virtual, but what about that external domain controller? What about a phone system box that you can't virtualize because of specialty cards or dongles? What about specialty software or fax servers that need multi-line POTS cards? How about security systems that need special interface cards? There's a lot of stuff people can't virtualize, and having the option to back all of that up in one product would be nice. That's why I wanted to try Unitrends (but I don't like Unitrends... it's complicated and won't connect to my Openfiler).
Then how about the ability to deploy a proxy server: a way to back up VMs at a remote office and proxy them to headquarters (Veeam has this role available). That proxy server does deduplication and compression so WAN bandwidth is optimized.
I'd love to see much nicer e-mail reports from Acronis. For example, Veeam has a nice table that shows the backup job duration, size in GB, compressed size, dedupe %, compressed %, etc. My SQL server says total size 260GB, data read 248GB, transferred 21.6 GB, dedupe 6.7%, compression 1.9%, duration 1:15:28. Unitrends also follows up with an overall schedule email, which is a graphic calendar with green, yellow or red blocks showing backup status at a glance. Acronis really needs to evolve from the simple text-only e-mail reports. They should include more readable information, some graphics, and add some color to the reports.
I don't think there is a one-size-fits-all product out there either. Even though Veeam seems to be pretty good, for my needs it would be roughly $1250 per socket for licensing. vmProtect is more than half that.
Still evaluating all options, but my two cents is that the other two products I'm trying seem to back things up the same way. If that way is prone to failure, then they may eventually see the same issues. I don't know...

Hi KJSTech,
Very solid comment and I really appreciate your feedback. Just a few cents:
I've seen reports on other products (including the ones you mentioned) that snapshots/disks are left behind/attached after backup completion, and really it may happen with any product, simply because a crash of the backup engine may happen anywhere. Assuming that all solutions use the same workflow (create snapshot, attach disk, back up, remove snapshot), snapshots will obviously be left if any vendor's backup agent crashes. All that matters is how often the crashes occur.
As far as vmProtect is concerned, the auto-cleanup mechanism I mentioned above was done not as the "final solution", but rather as an insurance feature which improves reliability. In each case where snapshots are left on the backed-up VM we investigate the actual reasons for the failure and fix the crashes (as you saw, there have been improvements since the 7th version; we do work on that). Just recently I got a case where the backup daemon crashed because it simply ran out of memory (1GB of RAM appeared to be insufficient to process a 2TB+ VM backing up into a 4TB archive). That was resolved by simply adjusting the RAM on the appliance (increasing it to 2GB). There are also many small things which may have an effect, and we are trying to mitigate them in the code, like deploying the appliance with 2GB of RAM by default, increasing its internal disks from 2GB to 4GB, etc.
From my experience the number of similar cases reported is really going down compared to what I saw with the 6th or 7th version of vmProtect, so I believe we're moving in the right direction here, especially with the 9th version.
As for the comparison (as you correctly outlined): I would not say that vmProtect competes with Veeam, since we target different markets and purposes. Yes, we have similar functionality and features, but depending on the environment one solution may shine where the other will not.
P.S. As for the additional reporting (like task statistics, etc.), it's pending for Acronis vmProtect 10 right now, though the final scope is not defined yet, so I'm not sure whether it will be there (but likely yes).
Thank you.
--
Best regards,
Vasily
Acronis vmProtect Program Manager

I have been requesting a schedule which shows storage calculations over a long period of time. Does v9 do this?

Thanks for the reply, Vasily.
Just so you know, we are not committed to switching vendors or anything just yet. I'm simply doing my due diligence to see what the alternatives have to offer. I think I have eliminated Unitrends entirely because it's just too convoluted and complex, and it doesn't authenticate to our Openfiler (maybe because we're using NTLM level 5 and other high-security SAM domain policies).
I did end up finding a post on the Veeam forums where someone makes a small mention of snapshots not being removed correctly. I was originally searching for how many VMs to put in each backup job.
Kslawrence wrote:
"We have a vsphere host with 6 vms running on it that have been added over time. I have created separate backup (to NAS) and replicas jobs for each server. So I have 12 jobs in total, all running at various times. This seems to work fine, albeit with the odd snapshot removal error, probably due to clashing jobs which seems to rectify itself fairly quickly."
Source: http://forums.veeam.com/viewtopic.php?f=2&t=8899&sid=54f8993f3f0932aa76…
So you're right, it could happen to anyone.

chi-ltd wrote: I have been requesting a schedule which shows storage calculations over a long period of time. Does v9 do this?
It's not yet implemented in Acronis vmProtect 9, but this feature is part of the reporting functionality (reports on vaults, i.e. backup locations) in the Acronis Backup and Recovery 11.5 product. We may add it to vmProtect 10, though it depends on whether the overall reporting capabilities will be included or not (it will be a marketing decision, actually, and it's still under discussion).

If you want to have continued success with vmProtect, then I think it's necessary to add such capabilities.
Not only do customers want bugs fixed, they also want features implemented to stay current with industry trends and capabilities.
Bringing over features already in ABR is a good start.
However, you could offer it the way other products do, with multiple editions:
vmProtect Standard - the current version including the centralised dashboards; version 10 would include additional features from ABR
vmProtect Enterprise - extended reporting capabilities, deployed as a single separate appliance
vmProtect Cloud - all of the above + the cloud features for MSPs
The reporting could even just be an "Add-On" single-purchase appliance.

I think BL460c has a good idea.
Anyway, Veeam is no better. Same issues backing up my Exchange server. I need to find something that is able to brute-force a backup.
Veeam says this:
Error: VSSControl: -2147467259 Backup job failed. Cannot create a shadow copy of the volumes containing writer's data. VSS asynchronous operation is not completed.
Acronis also complains about VSS all of the time for our mail server. Unitrends backed it up, but not application-aware, i.e. restore all or nothing, not a deep dive into the mailbox stores.

KJSTech wrote: I think BL460c has a good idea.
Anyway, Veeam is no better. Same issues backing up my Exchange server. Need to find something that is able to brute-force a backup.
Veeam says this:
Error: VSSControl: -2147467259 Backup job failed. Cannot create a shadow copy of the volumes containing writer's data. VSS asynchronous operation is not completed.
Acronis also complains about VSS all of the time for our mail server. Unitrends backed it up, but not application-aware, i.e. restore all or nothing, not a deep dive into the mailbox stores.
Regarding Exchange awareness, we gave up on this in the end...

KJSTech wrote: I think BL460c has a good idea.
Anyway, Veeam is no better. Same issues backing up my Exchange server. Need to find something that is able to brute-force a backup.
Veeam says this:
Error: VSSControl: -2147467259 Backup job failed. Cannot create a shadow copy of the volumes containing writer's data. VSS asynchronous operation is not completed.
Acronis also complains about VSS all of the time for our mail server. Unitrends backed it up, but not application-aware, i.e. restore all or nothing, not a deep dive into the mailbox stores.
From the error I can conclude that it is related to VSS failing inside the guest OS. I'd recommend contacting Microsoft regarding this issue, since the native Windows backup engine will likely fail in a similar way (there must be some events in the Application log).

When we reboot our Exchange server, backups work fine for a few days to a few weeks, be it Acronis, Veeam, Windows backup, etc. Then eventually no backup product will work and the server needs a reboot to correct it.
Seems like VSS has a memory leak or something getting stuck that only a reboot clears. It's a 2003 R2 server, so I doubt Microsoft has any intention of fixing it.
Next year we will add more VMware hosts and have the resources available to move off of Exchange 2007 / Server 2003 and get the latest products. I'm hoping that fixes it. For now I think a Task Scheduler event to reboot Exchange weekly during off hours will get us by until we can upgrade to a new server.


Sorry to bump this old post, but has anyone found a workaround for this yet?
