
ESXi Acronis VA


Hi all,

We've recently run into an issue with the Acronis Virtual Appliance for ESXi. We deployed the virtual appliance and, from the beginning, it worked fine with the "Run as VM" option. I noticed that after installing the appliance, it automatically attached an NFS datastore to the ESXi host, pointing at the IP address of the appliance with :/share and named "AVA-........................".

Today, the server started deleting all the VMs that were created with the "Run as VM" option; the tasks were all started by "Backup service". The NFS datastore also got disconnected suddenly, and all the VMs related to the "Run as VM" option were removed from the inventory of the ESXi host.

All these VMs were created on another datastore (LOCAL); in fact, I can see their folders on that datastore and I can import the machines back into the inventory, but I cannot power them on because I get an error (attached).

  • Why is this NFS datastore created when the Acronis virtual appliance is deployed? Is it mandatory?
    • I removed the VA from the AMS and deleted it from the ESXi host. Then I deployed the VA again, but this time the NFS datastore was not created. Why?
  • Why did the deletion happen without any user action?
  • How can we recover the VMs that were deleted but whose folders still exist on the correct datastore?

Thanks in advance.

Attachment: ava.png (27.79 KB)
Vasily

Hi,

The NFS datastore is created by the appliance only when the "Run VM from backup" functionality is used, e.g. when you mount a VM from a backup. This NFS datastore is basically a channel to the backup repository: the VA emulates the VM disks on this datastore by reading data from the backup storage. The local datastore you specify while configuring "Run VM" is used to store only the changes made to the mounted VM while it is running. It contains the delta .vmdk disks, while the base .vmdk disks are emulated on the NFS datastore backed by the NFS server running on the VA.
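For reference, here is a minimal pyVmomi sketch (the standard vSphere Python SDK, not an Acronis tool) that lists the datastores visible on an ESXi host and flags the NFS-backed ones, which makes the appliance-created "AVA-..." datastore easy to tell apart from the local VMFS datastore holding the delta disks. The host name and credentials are placeholders; this is only an illustration of where each type of disk lives, assuming direct access to the host.

# Minimal sketch: list datastores on an ESXi host and flag NFS-backed ones,
# so the appliance-created "AVA-..." NFS datastore is easy to distinguish
# from the local VMFS datastore that holds the delta disks.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab use only; validate the cert in production
si = SmartConnect(host="esxi.example.local",     # hypothetical host name and credentials
                  user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        backing = ds.summary.type                # "NFS", "NFS41" or "VMFS"
        tag = "NFS (likely appliance-backed)" if backing.startswith("NFS") else backing
        print(f"{ds.summary.name:30} {tag:28} free={ds.summary.freeSpace // 2**30} GiB")
    view.DestroyView()
finally:
    Disconnect(si)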

If you reboot the VA while there are mounted VMs, they will be lost: upon reboot an auto-cleanup is performed which tries to remove the traces left by the mounted VMs and dismounts them. That is also one of the reasons why mounted VMs are not supposed to be kept indefinitely; you need to run "Finalization" or perform a full VM recovery from backup in order to persist the changes (this is specifically outlined in the documentation).

If the delta .vmdk disks are still left on the local datastore, it is possible to keep these changes by doing the following:

  • Mount (Run VM) the same VM from the same recovery point as originally (before it was unmounted) and do NOT power it on.
  • Note the new location of the delta .vmdk virtual disks on the local datastore via the vSphere datastore browser.
  • Replace these newly created delta .vmdks with the ones that were left in the old folder, while keeping the current names (a sketch of this step follows after the list).
  • Remember to run Finalization after mounting in order to persist the changes completely on the local datastore.
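For what it's worth, the replace step can also be scripted with the same SDK. The sketch below uses the standard vSphere FileManager to copy the old delta files over the freshly created ones, keeping the new names; the datastore name, folder names and disk file names are hypothetical placeholders, so take the real ones from the datastore browser and repeat the pair (descriptor .vmdk plus -delta.vmdk) for every disk of the mounted VM, while it stays powered off.

# Hedged sketch of the replace step: overwrite the freshly created delta .vmdk
# files with the ones left in the old folder, keeping the new names.
# All datastore/folder/file names below are hypothetical placeholders.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="esxi.example.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    fm = content.fileManager
    dc = content.rootFolder.childEntity[0]      # the single datacenter of a standalone ESXi host

    # old delta disk (with the changes to keep)  ->  delta disk created by the fresh mount
    replacements = {
        "[LOCAL] OldMountFolder/vm-000001.vmdk":       "[LOCAL] NewMountFolder/vm-000001.vmdk",
        "[LOCAL] OldMountFolder/vm-000001-delta.vmdk": "[LOCAL] NewMountFolder/vm-000001-delta.vmdk",
    }
    for src, dst in replacements.items():
        # force=True overwrites the destination file created by the new mount
        task = fm.CopyDatastoreFile_Task(sourceName=src, sourceDatacenter=dc,
                                         destinationName=dst, destinationDatacenter=dc,
                                         force=True)
        while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
            time.sleep(1)                       # simple poll until the copy task finishes
        if task.info.state == vim.TaskInfo.State.error:
            raise task.info.error
finally:
    Disconnect(si)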

There is no logic in the product that automatically unmounts VMs running from backup after a timeout; it can only happen due to a VA reboot. If this is not the case (and the VMs were somehow unmounted without a VA reboot), then the issue should be investigated with help from our support team.

Thank you.

Hi Vasily.

Everything makes sense now.

Thank you SO MUCH for your explanation.

Best regards.