
Unable to Recover and Unable to Clone to a PCIe NVMe SSD disk using Recovery Media


I was running out of space on my system disk and I got this new disk to replace my old one with. Since I have a lot of software installed and configured, I really don't like the idea of having to reinstall the operating system and starting all over again. So I thought it would be a straightforward process to replace the old disk with the new one, since I have my entire system backed up by Acronis True Image 2019. I could not have been more wrong.

  • Source disk: Samsung 850 EVO SATA SSD
  • Target disk: Samsung 970 EVO Plus PCIe NVMe SSD

The physical installation was very easy, and my UEFI BIOS detected the new disk with ease. The software part is what's really giving me headaches.

I see from reading other people's posts on this forum that this problem is nothing new to computer users. Also, True Image has reportedly supported PCIe NVMe SSDs since at least the 2016 edition.

  1. I made a bootable USB disk using Acronis Media Builder, method Simple. This means it's based on WinRE. I selected USB as destination.
  2. I made a bootable USB disk using Acronis Media Builder, method Advanced. I took the Linux option. I selected USB as destination.
  3. I made a bootable USB disk using Acronis Media Builder, method Advanced. I took the WinPE for Windows 10 option. I selected USB as destination.

Case 1

Booting from the USB works. But after selecting "what to recover" I am left with a winding clock animation. After about 12 minutes I got a message saying "generating report, please wait". I am then left with a window with an overview of what log files have been generated, but I cannot view them, although I can save them to disk.

Case 2

Booting from the USB does not work; the resulting USB stick is not made UEFI compatible. I had to repeat the process and write the media to an ISO file instead, then use Rufus 3.1 to write it to USB for proper UEFI compatibility, after which I could boot from it. But after selecting "what to recover" I am left with a winding clock animation. After about 12 minutes I got a message saying "generating report, please wait". I am then left with a window with an overview of what log files have been generated, but I cannot view them, although I can save them to disk.

Case 3

For WinPE to become available, I had to install the ADK and the PE add-on. Booting from the USB works. But after selecting "what to recover" I am left with a winding clock animation. After about 12 minutes I got a message saying "generating report, please wait". I am then left with a window with an overview of what log files have been generated, but I cannot view them, although I can save them to disk.
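For reference, building a plain WinPE USB with the ADK's own tools (outside of Acronis) looks roughly like this; the working directory and the drive letter E: are assumptions:

```shell
:: Run from the ADK "Deployment and Imaging Tools Environment" as administrator.
:: Assumes the ADK plus the WinPE add-on are installed; E: is the USB stick.

:: Stage the 64-bit WinPE files into a working directory.
copype amd64 C:\WinPE_amd64

:: Format the USB stick and copy the bootable WinPE image onto it (erases E:).
MakeWinPEMedia /UFD C:\WinPE_amd64 E:
```

Acronis Media Builder does the equivalent of this internally when you pick the WinPE option.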

I am still in the search for a workaround or an alternative method of arriving at my goal: migrating old disk data to the new disk.

I know Samsung has its Data Migration tool that can clone a disk, but it does so live, inside the running Windows system, and according to some sources it cannot properly handle files locked by running software. I have not tested it yet; it's definitely an option, but it is my last resort.

But having to fall back on Samsung's (free) software doesn't speak well for Acronis's (paid) software. I put my trust in Acronis True Image, and I pay for it, so I expect it to work properly.

I am currently looking at ways to repack the Samsung drivers for its NVMe disks. They are available for download, but only in the form of an EXE installer file.
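Many vendor installers are just self-extracting archives, so a common way to get at the .inf/.sys files without installing anything is 7-Zip. This is a sketch; the installer file name is a placeholder, and not every EXE unpacks this way:

```shell
:: 7-Zip can often unpack self-extracting installer EXEs directly.
:: "Samsung_NVMe_Driver.exe" is a placeholder name; results vary by installer.
7z x Samsung_NVMe_Driver.exe -oC:\drivers\samsung-nvme

:: Look for .inf files; these are what media builders and DISM can consume.
dir /s /b C:\drivers\samsung-nvme\*.inf
```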

Looking at the Device Manager, I can see that the new disk is using the following drivers.

Acronis:
C:\WINDOWS\system32\DRIVERS\fltsrv.sys

Microsoft:
C:\WINDOWS\system32\DRIVERS\disk.sys

Microsoft:
C:\WINDOWS\System32\drivers\partmgr.sys

Microsoft:
C:\WINDOWS\System32\drivers\EhStorClass.sys

So from the looks of this, Microsoft does have generic drivers that work with this disk class. Are these included when I create WinPE media? And what's the Acronis driver for?

References

61632: Acronis True Image 2019: how to create bootable media

61621: Acronis True Image 2019: How to restore your computer with WinPE-based or WinRE-based media

 


Is cloning a viable option between these two types of disks? You know, in terms of things like optimal block size, allocation unit size and what not...

I would really prefer to restore the contents of a backup image rather than clone a disk, just to be safe and avoid any kind of discrepancies between dissimilar disks.

 

Case #2 won't work. Linux doesn't have the drivers to support PCIe NVMe drives.

Case #1 and Case #3 should both work pretty much the same. WinRE is based on WinPE, but includes your local system drivers by default, and also natively includes support for wireless and BitLocker. WinRE is also based on the version of the Windows OS installed on the system. Windows ADK doesn't natively support wireless, and does not include the additional packages for things like BitLocker (although they can be added manually, or with a tool like the custom MVP WinPE builder linked to the right of this forum thread).

Is cloning a viable option between these two types of disks? You know, in terms of things like optimal block size, allocation unit size and what not...

Yes, I've cloned from SSD to PCIe NVMe a few times now. A couple of thoughts and suggestions...

01) Take a backup of the original drive to be on the safe side.  Just in case - you want this safety net

02) If you go into disk management and delete the volumes of the newly cloned drive, you can use Acronis to do a live clone (like the Samsung Migration tool) as well. It actually works quite well in Acronis 2019. The caveat is that in order to do this live, the destination disk must not already contain any OS, or Acronis will want to reboot into the Linux environment, which isn't going to work for this type of drive.

03) If you don't want to do a live clone, the WinRE or WinPE should be fine. Just make sure that when you boot them, it sees the source and destination drive. It's also important to start the rescue media in the same manner the OS was installed. For instance, if your OS is a GPT install, then boot the rescue media in UEFI mode. If it is a legacy/MBR install, boot the rescue media in legacy mode. These limitations are the same as for a Windows installer disc: how you boot the media determines the partition scheme of the resulting install. Please check out this KB article: https://kb.acronis.com/content/59877

Also, make sure to shut down the OS with a full shutdown (command prompt and "shutdown /p"). This will ensure the drive is not "locked" by a Windows fast-startup hibernation file that may be causing issues reading the disk.

04) Alternatively, since you are going to be taking a backup for safety (yes, please do this!), just restore it to the new disk.  The end result will be exactly the same as a clone.  And it has fewer limitations than cloning, which may save you time in the long run.

I'm not sure what you're seeing when the log comes up. I've never seen this behavior. So I'm assuming there is an issue, such as the locked disk from fast startup... or maybe an issue reading a sector on the source drive (BitLocker enabled? need to run chkdsk?).

If you're able to run a system report afterwards, then it should capture the log too.

So from the looks of this, Microsoft does have generic drivers that work with this disk class. Are these included when I create WinPE media? And what's the Acronis driver for?

Yes, the WinPE and WinRE rescue media have the drivers for this. The Acronis drivers are for the integration of the Acronis proprietary snapshot method (snapapi). By default, Acronis 2017 and newer use Windows VSS, but they can still be configured to use the snapapi method. There's probably a newer KB article on this, but this is the one that came up first just now:

https://kb.acronis.com/content/1512

 

Also, just curious, but what OS are you using (assuming Windows 10), and although you have a UEFI BIOS, is the original OS installed in UEFI (GPT) mode or legacy mode? Asking because if it was installed as legacy, you'll want to backup and restore instead of cloning, to convert from legacy to GPT. A PCIe NVMe drive must be GPT in order to be bootable. People who have Windows 7 have more back-end work to do, since Windows 7 doesn't support PCIe NVMe drives natively without applying some hotfixes first. And in some cases, where the existing OS was installed as legacy, although you can clone, it won't boot, because you can't boot PCIe NVMe drives in legacy mode and must convert to GPT. This is done by booting the rescue media in UEFI mode (use your one-time boot menu to ensure it's UEFI and not legacy), and the restore will convert the partition scheme from MBR to GPT automatically.

A clone is like for like so to make this change, backup and restore is needed.

I will start by answering some of your questions.

  1. It's a UEFI system with a modern GUI and mouse support.
  2. The OS is Windows 10 indeed.
  3. The OS is installed in GPT mode indeed.
  4. Fast startup is enabled.
  5. BitLocker is disabled.

Case # 2 won't work. Linux doesn't have the drivers to support PciE NVME drives.

The Linux kernel has supported NVMe disks natively since version 3.3 (the nvme.c driver was committed on Feb 7, 2012).

https://github.com/torvalds/linux/blob/c16fa4f2ad19908a47c63d8fa436a1178438c7e7/drivers/block/nvme.c

https://github.com/torvalds/linux/commit/797a796a13df6b84a4791e57306737059b5b2384
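A quick way to verify this from any Linux environment is to check whether the kernel actually exposes the NVMe block devices; device names below are the kernel's defaults:

```shell
# From any Linux shell (for example a live/rescue environment), check whether
# the running kernel sees the NVMe disk. No vendor driver is needed; the nvme
# driver has been in the mainline kernel since 3.3.

# NVMe block devices appear as /dev/nvme0n1 (disk), /dev/nvme0n1p1 (partition).
ls /dev/nvme* 2>/dev/null || echo "no NVMe devices visible to this kernel"

# lsblk, if present, lists every block device the kernel can see.
command -v lsblk >/dev/null && lsblk -o NAME,SIZE,TYPE,MODEL || true
```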

Regardless! This is not the reason why the computer would not boot at all from the Linux-based USB media. It has more to do with the way Acronis prepared it (MBR/GPT/BIOS/CSM/UEFI/EFI/etc...).

If you go into disk management and delete the volumes of the newly cloned drive, you can use Acronis to do a live clone (like the Samsung Migration tool) as well. It actually works quite well in Acronis 2019. The caveat is that in order to do this live, the destination disk must not already contain any OS, or Acronis will want to reboot into the Linux environment, which isn't going to work for this type of drive.

Let me get this straight.

Source disk: A
Target disk: B

Acronis True Image can be used to clone disk A to B in normal Windows mode (live) if disk B is empty (not initialized). If disk B is not empty, any volumes or partitions on disk B must be deleted first (e.g. using Disk Management in Windows). Otherwise, the cloning must be done outside of normal Windows mode, such as in the Linux live environment, which will not work if we assume that Linux has no drivers for NVMe disks (which it has; the kernel itself supports this disk class and requires no extra drivers).

Well, my target disk is indeed empty. So this may be an option for me, and a replacement for Samsung Data Migration. Although I would prefer to use a WinPE/WinRE/Linux environment, just to prove that the concept actually works in reality and not just on paper, should the day ever come when the normal OS is unbootable; I can't rely on a live clone then. That's why we have so-called rescue media and emergency kits, or as Acronis puts it: a "survival kit". I need to prove to myself that I can survive without a working OS, otherwise it's just marketing.

If you don't want to do a live clone, the WinRE or WinPE should be fine. Just make sure that when you boot them, it sees the source and destination drive.

How do I ensure that it sees the right disks? Diskpart?
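A minimal way to check this from the WinPE/WinRE command prompt is indeed diskpart; here it is driven by a small script file so the whole check is repeatable (file name is an example):

```shell
:: From the WinPE/WinRE command prompt: write a tiny diskpart script and run
:: it non-interactively, to list every disk the recovery environment can see.
echo list disk > disks.txt
echo exit >> disks.txt
diskpart /s disks.txt
```

Both the source (850 EVO) and the target (970 EVO Plus) should appear in the output; if the NVMe disk is missing, the environment lacks a suitable storage driver.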

It's also important to start the rescue media in the same manner the OS was installed. For instance, if your OS is a GPT install, then boot the rescue media in UEFI mode.

Yes, I always get a boot option like "UEFI: My Cute USB". In fact, this is why the Linux-based USB failed to boot earlier: it said "My Cute USB" instead of "UEFI: My Cute USB". So Acronis prepared it in the wrong manner.

I'm not sure what you're seeing when the log comes up.

I get a dialog box.

The system report is successfully generated. To save the report to file click Save As.
The report contains the following information:

Disks report
UEFI Boot Variables Report
OS Selector report
Version Report
License information
TrueImage.exe.dmp
Groups that contain the user account
Common Application Data
User's Application Data
acroreg.reg
PnP controllers
PnP controllers with vendor's information
PnP storage controllers and PnP network controllers

I can Cancel or Save As.

An example of this and the non-working links I mentioned can be seen in the following forum thread:

Non-working links after System Report Creation

If you're able to run a system report afterwards, then it should capture the log too.

In Acronis True Image in normal Windows mode, you mean? I could do that. But I already have two such reports saved. They are Zip archives. I only looked at one of them briefly. There is a lot of stuff in there, a lot of metadata, some of which I would not want to share with Acronis. So I could release parts of it, if that's of any help. Perhaps the EXE dump could be sent off for analysis.

People who have Windows 7 have more back-end work to do, since Windows 7 doesn't support PCIe NVMe drives natively without applying some hotfixes first.

For sure, I came across a lot of Windows 7 discussions regarding NVMe disks yesterday. I went through the trouble of extracting the Samsung drivers from their EXE installer and then embedding them in the WinPE media. Sadly it did not help solve my problem with Acronis, but it was a good exercise nevertheless.
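For anyone wanting to repeat the driver-embedding exercise by hand, DISM can inject extracted .inf drivers into a WinPE boot image. This is a sketch; the paths and driver folder are assumptions, and the drivers must already be extracted .inf/.sys files, not an EXE installer:

```shell
:: Run as administrator. Paths and the driver folder name are assumptions.

:: Mount the WinPE boot image for offline servicing.
dism /Mount-Image /ImageFile:C:\WinPE_amd64\media\sources\boot.wim /Index:1 /MountDir:C:\mount

:: Inject every driver package found under the folder (searched recursively).
dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\nvme /Recurse

:: Commit the changes and unmount the image.
dism /Unmount-Image /MountDir:C:\mount /Commit
```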

 

Samir, it won't work in Acronis Linux, generally speaking. Linux doesn't support RAID mode, which is what about 99.99% of machines are coming with as the SATA mode in the BIOS when Windows 10 is installed by the manufacturer, even when there is only one PCIe NVMe drive. For the average home user this is an additional complication, especially if they have a true RAID setup.

There are possible workarounds like this one (https://www.dell.com/support/article/us/en/19/sln299303/loading-ubuntu-on-systems-using-pcie-m2-drives?lang=en) from Dell, but they require switching to AHCI (and other under-the-hood modifications in the temp preboot OS that aren't easily accessible). Basically, it's a lack of IRST support that makes the Linux boot environment unusable in most cases because of RAID SATA mode in the bios.

Yes, I meant the logs in the recovery environment where it's failing. It must be detecting an error and automatically creating a system report, as one is not normally created on its own.

For the active cloning in Windows: the disk would need to be initialized and given a drive letter. In WinPE, you should use "add new disk" to do the same. You just don't want it to have any remnants of an older OS install (partial or otherwise) in Windows, so that it doesn't need to reboot and start up the temp Linux environment. Basically, it's best if the drive is initialized as GPT, given a drive letter, and has nothing else on it.

 

I note that fast startup is enabled; this can play havoc with attempting to boot from USB device. Try temporarily disabling fast startup; the recovery media should then load, although as it is Linux based it probably will not get you anywhere. You should create Windows RE or PE recovery media.

Hope this helps

Ian

Good point on fast startup too, Ian! Completely missed that.

I've seen it come up as an issue in the forums multiple times. This write-up explains it quite well:

https://answers.microsoft.com/en-us/windows/forum/windows_10-performance-winpc/win10-fast-start-up-quick-boot-warning-suggestion/5fec376c-d876-4033-85f6-32c2d2cb5e03
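For anyone wanting to act on this, both a one-off full shutdown and a global disable are standard Windows commands (run from an elevated prompt):

```shell
:: One-off: a true full shutdown that does not write the fast-startup
:: hibernation file (the same command mentioned earlier in the thread).
shutdown /p

:: Global: disabling hibernation also disables fast startup entirely.
:: Re-enable later with "powercfg /h on" if you want hibernation back.
powercfg /h off
```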

Problem solved!

I don't know exactly what did the trick, but I suspect it's one, or all, of the following three things.

• While extracting the Samsung drivers from the EXE file, I inadvertently installed them on the system, which may have resulted in the right drivers being included in the WinPE- or WinRE-based media.

• I updated Acronis True Image 2019 to build 17750 and then made my recovery media. This, I suspect, updated the Recovery Media Builder, which now properly writes UEFI-formatted Linux-based media.

• I unplugged all disks not needed for the operation, leaving only the NVMe disk, the external USB disk with the backup image, and the USB flash drive with the bootable media.

I did the recovery operation with the WinPE media. Once it was booted up, I selected the source disk and partitions and proceeded to the next step. The winding clock animation appeared again. Of course, just like before, I let it do its thing; I did not want to interrupt it even though it was taking an awfully long time. Luckily it did not crash, nor did it generate a report (with broken links that lead nowhere). Instead, I was eventually presented with a view where I could select the destination disk. It only took 48 minutes and 16 seconds to get there. Yes, I timed everything.

Recovery Time from Start to Finish

Event                   Time stamp (hh:mm)   Diff (hh:mm)   Sum (hh:mm)
Acronis True Image      12:12                0:00           0:00
Locking partition       12:45                0:33           0:33
Analyze partition       12:57                0:12           0:45
Destination partition   13:00                0:03           0:48
Summary                 13:35                0:35           1:23
Success                 14:11                0:36           1:59

 

Samir, it won't work in Acronis Linux, generally speaking. Linux doesn't support RAID mode, which is what about 99.99% of machines are coming with as the SATA mode in the BIOS when Windows 10 is installed by the manufacturer, even when there is only one PCIe NVMe drive. For the average home user this is an additional complication, especially if they have a true RAID setup.

I will try to break this down and rephrase it a little bit if you don't mind.

It won't work in Acronis Linux, generally speaking.

Generally speaking? Well, let's not speak in general terms then. Can you be more specific, what is it that won't work in Linux based Acronis recovery media?

Linux doesn't support RAID mode.

Are we speaking in general terms again? This is just absurd. Of course it supports RAID mode! If you mean that Acronis on top of Linux does not support it, then that's a different topic. Linux supports more kinds of RAID than Windows ever will in its wildest dreams, including but not limited to Non-RAID.

About 99.99% of machines that ship with at least one PCIe NVMe SSD and Windows 10 as the pre-installed OS are pre-configured for RAID support by the manufacturer.

What kind of machines? Are you talking about laptops? Consumer desktop PCs? Professional workstations? What are you basing your number on? Just because they ship with Windows 10 pre-configured does not mean that the same machine, with the same hardware configuration, cannot run Linux as its OS.

I think you need to widen your views a little. Have a look at HP Z series workstations for example. You can have these delivered from HP with Linux pre-installed on them, with the same kind of RAID advantage you would expect from ones that ship with RAID pre-configured and Windows 10 installed on them.

HP Workstations continue a legacy of UNIX® system performance, security, and reliability derived from decades of experience. Since 1999, HP has featured quality Linux support, becoming the first workstation vendor to deliver desktop Linux platforms to customers requiring accelerated 3D graphics capabilities.

https://www8.hp.com/us/en/workstations/linux.html

You can even pick from a number of different Linux distributions.

For the average home user this is an additional complication, especially if they have a true RAID setup.

By true RAID you mean a RAID card? Regardless of whether it's soft or hard RAID, Linux can handle it. I don't want to fall into the trap of discussing other, unrelated topics. But any problems encountered by Acronis users who have some kind of RAID setup should be dealt with accordingly; they shouldn't be blamed on Linux blindly. There are a lot of points where errors can occur.

There are possible workarounds like this one (https://www.dell.com/support/article/us/en/19/sln299303/loading-ubuntu-…) from Dell, but they require switching to AHCI (and other under-the-hood modifications in the temp preboot OS that aren't easily accessible).

Well, there's your problem. It says...

Add the following kernel argument at boot time:
nvme_load=YES

https://www.dell.com/support/article/us/en/19/sln299303/loading-ubuntu-…

This is what enables loading of the NVMe block driver I referenced earlier. You won't get far without it if you're using an NVMe disk as your only disk and you want to install a Linux based system on it.
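For completeness, appending such a kernel argument is done from the bootloader. With GRUB, the usual one-off method is editing the boot entry; the kernel line below is an illustration, not an exact copy of any real entry:

```shell
# At the GRUB menu, highlight the entry and press "e" to edit it, then append
# the argument to the end of the line starting with "linux", e.g.:
#
#   linux /boot/vmlinuz-... root=... quiet splash nvme_load=YES
#
# Ctrl+X or F10 then boots with the modified command line for this boot only.
# To make it permanent on an installed Ubuntu system, add the argument to
# GRUB_CMDLINE_LINux_DEFAULT in /etc/default/grub and regenerate the config:
sudo update-grub
```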

The "kernel" is Linux. This is where you want the driver enabled. In the examples they were using Ubuntu 14.04 and 15.04, and Ubuntu is not Linux; Ubuntu is a system based on the Linux kernel. So if anyone, Canonical (the company behind Ubuntu) should be blamed for not foreseeing the need for this driver.

Basically, it's a lack of IRST support that makes the Linux boot environment unusable in most cases because of RAID SATA mode in the bios.

Is this what you're basing your claim that "Linux doesn't support RAID mode" on? Well, I have news for you: you can totally use Linux on top of such a configuration today; it's not a problem anymore and it hasn't been for many years. Intel and Red Hat are two major contributors to all things Linux. Intel is the developer behind the NVMe block driver for Linux that I linked to earlier (you can see that in the code file comments).

IRST (Intel Rapid Storage Technology) is a software RAID format implemented in the firmware of many high-end motherboards with Intel chipsets. I say "many" because it has become very common. IRST was previously known as Matrix RAID and it was managed by an option ROM program called IMSM (Intel Matrix Storage Manager). In the early days, which was more than a decade ago, only a handful of high-end motherboards and server motherboards had support for this new up-and-coming technology.

1. The initial problem with RST was that Intel did not release the specification for its RST RAID format. It was still a fresh new technology, and not many people even knew about it.

2. The second problem was that when they did release the specification, not many people knew what to do with it, and those who did know how to follow a specification and implement support for this format did not feel the urge to take up the task. Someone eventually did, but that took time. That someone was Intel itself.

3. The third problem is that, once the support was there, many people still lived in the past, and the "fact" that RST doesn't work under Linux stuck around.

4. The fourth problem is that, even today, people still think that RST doesn't work under Linux. As with all things Linux, some adjustments and considerations have to be made. When things don't work out of the box, people just assume that it's Linux's fault. They do some web searching, find age-old discussions, read them briefly, and it reinforces their belief that RST never did and never will work under Linux. Then they move on... to Windows, usually, where everything somehow just magically works.

These are the main ingredients you will need for using RST under Linux.

• md – block driver

• mdadm – RAID management utility

Both of these have been largely developed by Intel. Here is an old Intel whitepaper that describes all these things in great detail.

https://www.intel.com/content/dam/www/public/us/en/documents/white-pape…

"The recommended software RAID implementation in Linux* is the open source MD RAID package. Intel has enhanced MD RAID to support RST metadata and OROM and it is validated and supported by Intel for server platforms."

Under "protocols supported" they list AHCI and SAS. So does that mean NVMe disks will not work? Well, you have to remember that this is an old paper from 2011, and I believe Intel first released Matrix RAID in 2008. If you want NVMe support, then you also need the nvme block driver, also a contribution from Intel. In fact, there is a newer paper detailing PCIe NVMe support under Linux...

https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/ssd-software/5-4_VROC_Linux_User_Guide.pdf

Note that this paper is describing "RSTe" which means "RST enterprise". As with all things Intel and Linux, everything starts with servers and datacenters. I haven't had time to look more into this, but I don't see why this would not work for regular Linux based systems like Ubuntu. It's probably just like before, it's a matter of taking your time and putting it all together, or programming it yourself if you have to.

So now that we know all this, the main ingredients you need for using RST under Linux with a PCIe NVMe disk are...

• md – block driver

• mdadm – RAID management utility

• nvme – block driver

Don't ask me if it actually works. I haven't toyed with RST since it was called (the) Matrix. But the message should be clear: it can work if you want it to. There is no reason why a big company like Acronis would not have the know-how on how to pull this off.
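The ingredients above can be exercised from any Linux shell; mdadm has first-class support for the IMSM (RST) metadata format. This is a sketch based on mdadm's documented IMSM workflow; the device names are examples, and the create commands are destructive:

```shell
# Show what the platform's RST option ROM / firmware supports, as mdadm sees it.
sudo mdadm --detail-platform

# Scan for existing RST (IMSM) metadata and assemble any arrays it describes,
# e.g. a volume created in the BIOS RST setup screen.
sudo mdadm --assemble --scan

# Or create a new RST-managed RAID1 from scratch: first an IMSM "container"
# holding the member disks, then the actual array inside that container.
# WARNING: destroys data on the member devices (names here are examples).
sudo mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
```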

I note that fast startup is enabled; this can play havoc with attempting to boot from USB device. Try temporarily disabling fast startup; the recovery media should then load, although as it is Linux based it probably will not get you anywhere. You should create Windows RE or PE recovery media.

Oh yes, fast startup can be a PITA. I know this from previous experience when toying with Linux and Windows on the same hardware.

Note that I did not disable fast boot at any point, but I did issue the shutdown /p command right before I started unplugging all disks from the system. That may or may not have helped; I have no idea. Not that I haven't had the PC turned off completely before; I often use Shutdown from the UI in Windows 10, because Windows 10 has this awful habit of waking up my PC in the middle of the night, so I have to put it down. In any case, this command is a good one to make note of.

Also, you are wrong about Linux. I have explained it in detail above. But basically, Linux is just as good as Windows, though it does usually require some more work/tinkering. I mean generally speaking, as in Linux vs. Windows. When it comes to creating Linux-based recovery media in Acronis True Image vs. creating WinPE/RE-based media, it's just as easy. The Linux-based creation process also has the advantage of not needing extra packages, whereas WinPE-based media requires downloading and installing the ADK and the PE add-on.

Seeing that Acronis True Image created Linux-based media that boots just fine, I'm not exactly sure why it did not want to boot the first time around. I have since reformatted the USB drive several times, and I did update True Image to a newer build (17750). But I know this for sure: writing to USB directly made the USB non-bootable, but writing to an ISO file first and then writing that to USB with Rufus 3.1 made it boot just fine. And writing directly to USB in Acronis Media Builder as of now (build 17750) makes it bootable straight away.

Trial and error... trial and error... trial... success.

 

Samir,

Not sure if your response got moderated, or something else.  I saw an email reply, but no update here. And when I click on the email link, it goes to an access denied page.

Lots of details and info in your response. All good stuff; I appreciate the feedback. Just going to paraphrase my responses.

- 1st, what was the previous version of Acronis 2019 Linux recovery media?  Glad that the 17750 media seems to be working correctly now.  You mentioned that you did the recovery with WinRE though, so did the Linux media help here?   I'm not sure how it applies to the fix since you said it was completed with WinRE.

- 2nd, as you found, if you have the storage controller drivers installed on your local machine already, they should be included in your WinRE build, since WinRE has your system drivers built into it already.  You can also manually add drivers into the rescue media build when using the advanced option for building WinRE or WinPE in Acronis 2019. 

We actually built the MVP custom media builder tool with this thought in mind originally, because it was not previously available directly in Acronis.  This was especially important for PCIE NVME drives where the bios SATA mode is set to RAID (more about this below in just a bit).

Also, the Samsung NVMe drivers actually aren't used for POST. If I can find the forum post from Samsung on it, I'll link it here. They say the drivers are for integration with Samsung Magician / Rapid Mode and not intended to be used as boot drivers. There are lots of forums where people still use these drivers for boot repair, but Samsung says this is not what they are intended for and it can cause issues. Instead, use the IRST drivers for Intel chipsets.

- 3rd, I don't have factual percentages behind my statement that 99.9% of new systems coming with a PCIe NVMe boot drive are configured to use RAID as the SATA mode in the BIOS. However, from experience, I can tell you every system that comes through our shop has been like this. I can say with certainty that any Dell with a PCIe NVMe drive and any Microsoft Surface is configured like this, and with near certainty that other vendors are doing the same thing. Here's a reference for Dell:

https://www.dell.com/community/Laptops-General-Read-Only/Dell-M-2-FAQ-regarding-AHCI-vs-RAID-ON-Storage-Drivers-M-2-Lanes/td-p/5072571

The reason this is happening is that AHCI is a subset of RAID, but has a limiting factor for queue depth which directly impacts a PCIe NVMe drive's full performance potential. This is why vendors are using RAID as the SATA mode in the BIOS vs. AHCI as the default when these are the OS boot drive.

https://www.anandtech.com/show/7843/testing-sata-express-with-asus/4

                      NVMe                                       AHCI
Latency               2.8 µs                                     6.0 µs
Maximum Queue Depth   Up to 64K queues with 64K commands each    Up to 1 queue with 32 commands each
Multicore Support     Yes                                        Limited
4KB Efficiency        One 64B fetch                              Two serialized host DRAM fetches required

- 4th, no, this does not have to be an actual RAID setup with multiple disks or a separate controller. Vendors are sending systems straight from the factory with the SATA mode configured as RAID and the installed operating system prepped for this setup. This is the same even when there is only one hard drive in the system.

This is why the Acronis Linux media is usually no good. It's a lightweight BusyBox build, and driver support isn't great to begin with. Yes, there are newer and better versions of Linux/Unix out there, some of which support PCIe NVMe drives in RAID mode now, but until recently this was more of an enthusiast feature. Also, yes, you can switch the BIOS to AHCI and it will pick up a single PCIe NVMe drive with the Linux rescue media. However, most consumers find this too difficult or scary to deal with. With Windows 10, you can usually still boot if you forget to change it back, but anyone still using Windows 7 will find their OS unbootable. There are hotfixes that allow a permanent change in Windows 7 to permit this switch, but again, that's more in-depth than most home consumers want to deal with. Also, although switching the BIOS SATA mode from RAID to AHCI temporarily works for a single-drive setup, it certainly won't work in a real RAID setup.

So again, the comments I made earlier are general-purpose; they won't apply to everyone, but they pretty much cover the majority of systems consumers are getting from vendors with any type of PCIe NVMe boot drive (desktop or laptop, and a Windows OS).

I'm not a big Linux/Unix user, so my experience with them is limited. However, I'm a long-time Acronis user and very familiar with the needs and limitations of the rescue media in both the Linux and WinPE/WinRE environments, and with how users in the forums generally interact with them. I can't cover all scenarios and use cases, and don't really plan to, as I'm just here in my spare time passing along my knowledge and experience with True Image.

 

Not sure if your response got moderated, or something else.  I saw an email reply, but no update here. And when I click on the email link, it goes to an access denied page.

Yes, this seems to happen to me often. Or more precisely, every time I post something it goes to the moderation queue. My post was approved and published, but then I made a small edit, and it was sent for moderation again. So who knows... maybe it will be published again in the next 24 hours. Sorry! I don't know what the problem is.

What was the previous version of Acronis 2019 Linux recovery media?

I don't know if Recovery Media Builder comes with a version number, but I will try to find out.

You mentioned that you did the recovery with WinRE though, so did the Linux media help here? I'm not sure how it applies to the fix since you said it was completed with WinRE.

I can't go back now to see what I wrote, but I'm pretty sure I wrote WinPE. I used the Linux based media only after finishing the recovery using WinPE media. I just wanted to prove to myself that Linux based media will boot and detect the NVMe disk, and it did. But then I aborted of course, since I had already finished the recovery with WinPE media.

You can also manually add drivers into the rescue media build when using the advanced option for building WinRE or WinPE in Acronis 2019.

Yes, I have seen that option, and I did try adding the drivers like that once. If you recall, I had gone through the trouble of extracting the Samsung drivers from their EXE installer, just so I could add them when creating the WinPE media in advanced mode. (As you probably know, we can't add EXE files to the drivers pool.) I did that, I added the drivers (after extracting them from the EXE file, obviously), but that did not help me get to the disk selection view in WinPE, so I was still not able to recover from backup.

We actually built the MVP custom media builder tool with this thought in mind originally, because it was not previously available directly in Acronis.

I haven't tried it myself, but I love it when a community comes together and does their best to resolve a problem, even if it requires them to make their own tools to do so.

Also, the Samsung NVMe drivers actually aren't used for POST.  If I can find the forum post from Samsung on it, I'll link it here. They say the drivers are for integration with Samsung Magician / Rapid Mode and are not intended to be used as a boot driver. There are lots of forums where people still use these drivers for boot repair, but Samsung says this is not intended to work and can cause issues. Instead, use the IRST drivers for Intel chipsets.

I can't comment on what they are used for. I have not had a chance yet to do a proper clean installation of my computer, which will be more telling than anything as to whether these drivers are actually applied to the Samsung NVMe disk (as seen in Device Manager). Right now, I can tell you that the drivers in use are the Microsoft generic drivers; they are not Intel drivers. I have in fact already listed them in my very first post.

But what you say sounds reasonable (Samsung drivers being used in conjunction with Samsung Magician), because I came across an interesting post on TechNet Social where someone was asking for a download link for the Microsoft NVMe drivers, and there was a link to Windows programmer documentation on MSDN. That MSDN article describes in great detail what the Microsoft NVMe drivers are used for. It mentions that starting with Windows 10, these drivers (the generic Microsoft NVMe drivers, that is) implement a pass-through mechanism that allows device manufacturers to send commands to their NVMe devices directly, passing through the Microsoft generic drivers.

So one could argue that manufacturer drivers such as those from Samsung are complementary to, and not a replacement of, the generic Microsoft NVMe drivers: the Samsung drivers can handle the instructions that are specific to the brand and model of NVMe disk, while the Microsoft drivers handle everything else.

And of course all things Samsung SSD are handled through the Samsung Magician. And I recall seeing some features unavailable to me on the Magician interface. This may or may not have to do with the fact that it's an NVMe SSD and not an AHCI SSD.

I don't have factual percentages to back my statement that 99.9% of new systems coming with a PCIe NVMe boot drive have been configured to use RAID as the SATA mode in the BIOS. However, from experience, I can tell you that every system that comes through our shop has been like this. I can say with certainty that any Dell with a PCIe NVMe and any Microsoft Surface is configured like this. I can say with near certainty that other vendors are doing the same thing.

I have personally only seen this on HP Z series workstations. I can tell you, it made Windows 10 rollout on these computers much more difficult than the rest. I'm curious though, do Microsoft Surface laptops really have a RAID setup?

The reason this is happening is because AHCI is a subset of RAID, but has a limiting factor for queue depth which directly impacts PCIe NVME drive full potential performance.

This is news to me – "AHCI is a subset of RAID". Do you have a whitepaper or some credible source to back this claim?

I reviewed some age old forum threads, such as these two:

https://www.bleepingcomputer.com/forums/t/505395/will-ahci-work-when-se…

https://forums.tomshardware.com/threads/raid-vs-ahci.647793/

Forum 2 guy says: "AHCI is a subset of raid, so you could specify raid, and you will get trim for the ssd, so long as it is not part of a raid array."

Forum 1 guy says: "So AHCI is a subset of raid, so i could specify raid, and you will get trim for the ssd, so long as it is not part of a raid array, right?"

From what I can gather this is a misconception that stems from the way BIOS settings are laid out. Consider this! You have the following options.

  • SATA Mode: IDE
  • SATA Mode: AHCI
  • SATA Mode: RAID

If you set it to IDE, AHCI and IRST will be disabled.

If you set it to AHCI, IDE and IRST will be disabled.

If you set it to RAID, IDE will be disabled. Note that neither AHCI nor IRST is disabled. Does this make AHCI a "subset" of RAID? I don't think so. It's a side effect: when you enable RAID (which really translates to IRST), AHCI is also enabled. This is why you can "specify raid [IRST], and you will get trim for the ssd [AHCI], so long as it is not part of a raid array [AHCI]".

In other words AHCI and RAID (IRST) are not mutually exclusive. But they are completely different things, even if they are implemented in the same ROM, where AHCI is a protocol and IRST ("RAID") is a data format for disk arrays.

They (motherboard manufacturers) could have laid out the settings in a different, more obvious way. Such as...

  • IDE: On/Off
  • AHCI: On/Off
  • RAID: On/Off

But then they would have to handle the logic behind the settings in a different way. For example, if IDE is On, then AHCI and RAID must be Off, and so on. There are many more combinations, so you end up with a lot of If and Else constructs. BIOS chips have very limited memory and are difficult to program (x86 assembly), and it's a lot easier to make a mistake. The newer EFI and UEFI are better in that respect, but that's no reason to be irresponsible and not conserve memory and write elegant code.
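The difference between the two menu layouts described above can be sketched in a few lines. This is a hypothetical illustration, not actual firmware code; all names and the validation rules are made up for the sake of the argument:

```python
from enum import Enum

class SataMode(Enum):
    """The single-selector layout found in most BIOS/UEFI setup menus."""
    IDE = "IDE"
    AHCI = "AHCI"
    RAID = "RAID"

def effective_features(mode: SataMode) -> dict:
    """One selector implies the state of every feature, so no invalid
    combination can ever be chosen. Note that RAID turns AHCI on too."""
    return {
        "ide":  mode is SataMode.IDE,
        "ahci": mode in (SataMode.AHCI, SataMode.RAID),
        "irst": mode is SataMode.RAID,
    }

def validate_independent_flags(ide: bool, ahci: bool, irst: bool) -> bool:
    """The alternative layout (three On/Off switches) forces the firmware
    to reject contradictory combinations explicitly, one rule at a time."""
    if ide and (ahci or irst):
        return False  # IDE excludes the other two
    if irst and not ahci:
        return False  # IRST relies on the AHCI protocol underneath
    return True
```

The single-selector model is exactly why "RAID" mode still gives you a working AHCI path for a lone disk: selecting RAID flips both the IRST and AHCI features to On with one switch.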

This is why vendors are using RAID as the SATA mode in the bios vs AHCI as the default with these as the OS boot drive.

This is not the reason they are pre-configuring these computers with RAID as the default option. As I explained above, RAID and SATA are two separate entities with little in common. It's just the way the settings are laid out in the BIOS/UEFI.

The real reason why they have it set to RAID by default is so that in case the user decides to install a second disk, they can create a disk array more readily. This is why I have only been seeing this in high-end computers like the HP Z series workstations.

No, this does not have to be an actual RAID setup with multiple disks or a separate controller. Vendors are sending the systems straight from the factory with the SATA mode configured in RAID mode and the installed Operating System prepped for this setup. This is the same, even when there is only one hard drive in the system.

I will try not to repeat myself. So yes, they may be sending these computers with SATA mode set to RAID from the factory and with only one disk installed. Both you and I know that's not RAID. But the reason they do it is so that a RAID array can be created more readily should the user decide to install a second disk. In the meantime, the single-disk system operates as AHCI. Not because AHCI is a subset of RAID (IRST), but because one single switch/option in the BIOS/UEFI utility internally sets them both to the On position. There is normally no separate parameter for AHCI and RAID (IRST), although it's totally possible to program the BIOS/UEFI in a way that would make that clear.

I have already touched briefly on why they (motherboard makers) haven't done so already: one reason is purely the way they are programmed, another is the BIOS vendor lineage. Even though there are more than 10 motherboard makers in the world, there are no more than 3 major BIOS vendors. This is especially true for the classical BIOS (no EFI/UEFI). What that means is that BIOS code has been passed around and extended (beaten to death) between many different motherboard makers since the 1980s.

This is why the Acronis Linux media is usually no good. It's a lightweight build of BusyBox and driver support isn't great to begin with. Yes, there are newer and better versions of Linux/Unix out there and some that support PCIe NVMe drives in RAID now, but up until recently, this was more of an enthusiast feature.

True, it is more of an enthusiast feature because it requires some tinkering, and not a lot of people got it right the first time they tried. I'm referring to IRST under Linux now. But we need to put the blame where it belongs. If Rufus can make my USB bootable in UEFI mode, based on an ISO file that the Acronis Recovery Media Builder tool itself had made... and the Recovery Media Builder tool cannot do the same by directly writing to the USB, then I know pretty well what tool to blame.

Making something bootable in either BIOS or UEFI doesn't even need to be a discussion about Linux vs. Windows. I will spare you the details. You mentioned GPT and MBR early on, so I'm sure you have been around long enough to know the difference and what is required to make a USB disk bootable on a UEFI system (Windows or Linux, regardless).

Also, yes, you can switch the BIOS to AHCI and it will pick up a single PCIe NVMe drive with the Linux rescue media.

Yes. Because AHCI has nothing to do with NVMe. Remember, I already made a Linux based Acronis rescue media and I was able to both boot it up and to see my NVMe disk, and it also allowed me to select it so I can recover my backup image to it. But I did not do that, because I had already done so earlier.

I can also add here that the Ubuntu 18.10 installer sees my NVMe disk just fine. I tested this earlier. So I could install Ubuntu 18.10 if I wanted to. And yes, my SATA mode is still AHCI. But on the other hand, RAID (IRST) is not enabled. I don't have time to tinker with it right now, but I plan on installing Ubuntu on this computer in RAID (IRST) mode. I will need to buy a second NVMe disk for that. But I might be able to use two regular SATA SSD disks just for practice.
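For anyone who wants to run the same sanity check from a Linux live session, one quick way is to look at what the kernel's NVMe driver has registered under sysfs. The /sys/class/nvme path is the standard location used by the Linux nvme driver, but the helper below is my own illustrative sketch, not part of any installer:

```python
import os

def list_nvme_controllers(sysfs_root: str = "/sys/class/nvme") -> list:
    """Return the NVMe controller names (e.g. 'nvme0') the running kernel
    has detected, or an empty list if none are present (or if we are not
    on Linux at all and the sysfs path does not exist)."""
    try:
        return sorted(os.listdir(sysfs_root))
    except FileNotFoundError:
        return []

if __name__ == "__main__":
    controllers = list_nvme_controllers()
    if controllers:
        print("NVMe controllers detected:", ", ".join(controllers))
    else:
        print("No NVMe controllers detected.")
```

If this prints a controller while the Acronis Linux media shows no disk, that points the finger at the rescue media build rather than at Linux NVMe support in general.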

However, most consumers find this too difficult or scary to deal with. With Windows 10, you can usually still boot if you forget to change it back, but anyone still using Windows 7 will find that their OS is unbootable then.

Change what? Install Windows 7 in RAID (IRST) mode and then switch over to AHCI?

There are hotfixes that can allow for a permanent change in Windows 7 to allow this switch, but again, that's more in-depth than most home consumers want to deal with. Also, although switching the BIOS SATA mode from RAID to AHCI temporarily works for a single-drive setup, it certainly won't work in a real RAID setup.

Yes, registry hacks are of great pleasure for non-consumers. I recall doing this sort of switching between IDE and AHCI when AHCI was a new thing. I think I did that two times in total.

It will work for a single-disk setup because one disk doesn't make a RAID array. So even though you see "RAID" in your BIOS, that doesn't mean it's RAID (IRST). It only becomes RAID once you put in at least one more disk and create an array. I covered this above.

So again, the comments I made earlier are for general purposes; they won't apply to everyone, but they pretty much cover the majority of systems you're going to see consumers getting from vendors with any type of PCIe NVMe boot drive (desktop or laptop and a Windows OS).

Well there's nothing to fear here, because systems that ship with PCIe NVMe disks pre-installed can be counted on ten fingers, and those who get them will certainly not be easily scared consumers.

I'm not a big Linux/Unix user, so my experience with them is limited. However, I'm a long-time Acronis user and very familiar with the needs and limitations of the rescue media in both the Linux and WinPE/WinRE environments, and with how users in the forums generally interact with them. I can't cover all scenarios and use cases and don't really plan to, as I'm just here in my spare time passing along my knowledge and experience with True Image.

And I thank you for the discussion! Linux fan or Windows fan, regardless, I think we can all get along just fine. You certainly made me brush up my memory on some old materials and made me want to experiment and learn more. I am also an old Acronis True Image user; True Image and I go back to 2011. I am good at tinkering and understanding difficult problems, I am methodical, and a good teacher I hear. But I am by no means an expert on True Image. I'm just happy if it works, and I don't ever look at it unless I have to, but it sure is one of the best backup software available.

 

We will have to agree to disagree on some of this, but appreciate the feedback!

I'm curious though, do Microsoft Surface laptops really have a RAID setup?

- Depends.  The 1TB model is actually 2x 512GB in RAID 0.  Everything else is just a single drive.  Regardless, the SATA mode is always set to RAID by default on the Surface Pros, and they all use PCIe NVMe drives.  Every Dell we've ordered with PCIe NVMe has been in RAID mode from the factory too; it doesn't matter if it's a Latitude, Precision, laptop, desktop or tablet, or if it only has 1 drive.  Have you seen systems with vendor-installed PCIe NVMe's (not SATA NVMe's) shipped with the SATA mode as AHCI?  I haven't seen any.  However, when they come with a standard SATA or M.2 SATA drive, then yes, I've only seen them shipped in AHCI mode and not RAID (unless they have a true RAID configuration with multiple disks).

Well there's nothing to fear here, because systems that ship with PCIe NVMe disks pre-installed can be counted on ten fingers, and those who get them will certainly not be easily scared consumers.

 

I don't think this is the case.  PCIe NVMe drives are fast becoming mainstream.  You'll find them in every Surface Pro, many MacBooks, and high-end tablets and ultrabooks.  The Dell XPS line is really common and only offers PCIe NVMe drives.  Just about every system in that same ultrabook class is using PCIe NVMe drives too.  Heck, my 2-year-old Dell 5285 tablet came with a PCIe NVMe drive.

Prices for PCIe NVMe also keep dropping.  We're seeing 500GB versions for $75-$100 on Amazon/Newegg, which is well on par with where SSDs were several years ago.  I'm sure the vendors are getting way better prices direct, and in bulk they can mark it up a little bit too.  Why wouldn't vendors migrate to them, with a smaller form factor, lower price point, less power consumption and better performance?

A lot of consumers have no idea about the technology or the differences.  They just want the newest, fastest technology so they don't have to upgrade for several years.  

This is not the reason they are pre-configuring these computers with RAID as the default option. As I explained above, RAID and SATA are two separate entities with little in common. It's just the way the settings are laid out in the BIOS/UEFI.

The real reason why they have it set to RAID by default is so that in case the user decides to install a second disk, they can create a disk array more readily. This is why I have only been seeing this in high-end computers like the HP Z series workstations.

- I'm fully aware that RAID and SATA are not the same, but EVERY UEFI BIOS I've come across lists the SATA Mode or SATA control type (something along those lines) with options for RAID, AHCI or IDE.  This is what I'm referring to.  How this is set by default will impact certain Operating Systems (Windows 10 supports this out of the box; Windows 7 is a pain in the butt).

I fully believe that PCIe NVMe drives are being set in RAID mode by the vendors to take advantage of the deeper queue depth that is limited in AHCI - that queue depth limitation is factual and not up for debate.  These are high-end drives that can take on more tasks, and AHCI limits their ability.

I don't believe the vendors are planning on the majority of consumers adding extra drives just to use RAID later on.  To create a new RAID, you have to initialize the disks again, which means you'd have to use something like Acronis to back up and restore, and that's something I don't think most home users have any plans to do when they go into Walmart (or wherever - but come on, we've all seen the Walmart memes; those consumers aren't going to RAID anything) to buy their next laptop.

And, there are plenty of tablet/ultrabook systems that only have a single m.2 connector that have PCIe NVME drives and SATA mode set to RAID, but they could never physically add a second drive for an actual RAID.  

This is news to me – "AHCI is a subset of RAID". Do you have a whitepaper or some credible source to back this claim?

There is no white paper on AHCI being a subset of RAID.  So maybe it's not a "subset" by definition; call it what you want, and even if the term is wrong, I'd argue it still fits because RAID has all the benefits of AHCI and then some (the deeper queue depth, the ability to stripe and mirror, etc.).  On the other hand, AHCI offers no additional benefit over RAID. To those who don't agree that it's a "subset", I won't try to change your mind, but if you can show me where AHCI outperforms RAID, or offers benefits over it, you can try to change mine.

True, it is more of an enthusiast feature because it requires some tinkering, and not a lot of people got it right the first time they tried. I'm referring to IRST under Linux now. But we need to put the blame where it belongs. If Rufus can make my USB bootable in UEFI mode, based on an ISO file that the Acronis Recovery Media Builder tool itself had made... and the Recovery Media Builder tool cannot do the same by directly writing to the USB, then I know pretty well what tool to blame.

We're talking about different things.  I wasn't aware that you couldn't boot the rescue media?  Acronis is fully capable of making a bootable CD/DVD or USB drive for rescue media, and they are always capable of both legacy and UEFI booting.  IRST drivers have nothing to do with bootability here - they only impact the ability to detect a PCIe NVMe drive (or not) when the BIOS SATA mode is set to RAID.  If the Linux rescue media never booted into the Acronis environment, then the build process was faulty.  And since you mention Rufus, it is actually the likely cause - this has been the source of boot issues many times for me (and others in the forum), not just with Acronis, but also with Windows recovery disks and other bootable media.  When I was in my multiboot phase using Rufus, Yumi and Easy2Boot, I used to swear by Rufus.  Now I mostly just swear at it, because of the issues it can cause with bootability when you use other rescue media builders after Rufus has already worked its magic.

If you want to test that same USB drive again, do a diskpart "clean" of your thumb drive, then initialize it as FAT32 and perform a full format (not a quick one).  I know it sucks, but humor me and then rebuild the Acronis Linux rescue media - I bet it boots just fine in UEFI mode.  And if it does, and your BIOS SATA mode is AHCI, the Linux version will work just fine with your PCIe NVMe drive.  But if your BIOS SATA mode is RAID (which in your case it isn't), then you'll find that your PCIe NVMe drive is not detected, which is what I was trying to convey before.

You can test this though, just by changing the BIOS SATA mode from AHCI to RAID and vice-versa and booting the Linux media and checking if the PCIe NVME drive is detected or not.

And I thank you for the discussion! Linux fan or Windows fan, regardless, I think we can all get along just fine. You certainly made me brush up my memory on some old materials and made me want to experiment and learn more. I am also an old Acronis True Image user; True Image and I go back to 2011. I am good at tinkering and understanding difficult problems, I am methodical, and a good teacher I hear. But I am by no means an expert on True Image. I'm just happy if it works, and I don't ever look at it unless I have to, but it sure is one of the best backup software available.

I come here to help people back up and recover.  It's fun to debate a little bit and share ideas, but I'm mostly concerned with helping folks get through backup or recovery issues. You definitely have a good IT background / mindset.  I came from the world of Norton Ghost and migrated to True Image in 2010. I use it and just about every other commercial backup software out there (OK, not all of them, but a lot - and pretty much all of the direct competitors).  Acronis has been my favorite for a long time now though.  Is it perfect? No.  Are the competitors? No.  I just like having the ability to test, compare and bring some diversity into my backup scheme (just in case).  But my backups and restores with Acronis have been pretty rock solid, so it's still my go-to.

Interesting discussion.  AHCI is the standard on which SATA relies to do things like Hot Swap, NCQ, etc.  SATA replaced PATA way back when as the storage industry standard.

NVMe will replace SATA in the near future as design architecture changes.  Having data transfer across CPU PCIe lanes is available now and in the future will be the standard.

RAID fits into the present picture for the reasons Bobbo stated, higher queue depths.  That ability allows the NVMe drive to achieve an expected level of performance.  This is why vendors set SATA modes to RAID.  It is done so that the storage controller drivers will use the RAID capabilities of the controller driver to achieve higher queue depths.  Intel drivers now are all-in-one SATA-AHCI-RAID known as a Premium driver.

 

Having data transfer across CPU PCIe lanes is available now and in the future will be the standard.

That's what they said about SATA once. No one knows for sure what the future will bring.

RAID fits into the present picture for the reasons Bobbo stated, higher queue depths. That ability allows the NVMe drive to achieve an expected level of performance. This is why vendors set SATA modes to RAID. It is done so that the storage controller drivers will use the RAID capabilities of the controller driver to achieve higher queue depths.

You are not exactly wrong, but where do you read "RAID"? That Anandtech article clearly compares "queue depth" of NVMe vs. AHCI.

RAID is mentioned only three times in the whole article, and in all the wrong places, completely out of context of "queue depth". Please add a whitepaper or some other credible source that supports your claim.

https://www.anandtech.com/show/7843/testing-sata-express-with-asus/3

"Yes, you could use RAID to at least partially overcome the SATA bottleneck but that add costs (a single PCIe controller is cheaper than two SATA controllers) and especially with RAID 0 the risk of array failure is higher (one disk fails and the whole array is busted)."

https://www.anandtech.com/show/7843/testing-sata-express-with-asus/5

"As for SATA 6Gbps, the performance is the same as well, which isn't surprising since only the connector is slightly different while electrically everything is the same. With the ASMedia chipset there is ~25-27% reduction in performance but that is inline with the previous ASMedia SATA 6Gbps chipsets I've seen. As I mentioned earlier, I doubt that the ASM106SE brings anything new to the SATA side of the controller and that's why I wasn't expecting more than 400MB/s. Generally you'll only get full SATA bandwidth from an Intel chipset or a higher-end SATA/RAID card."

Intel drivers now are all-in-one SATA-AHCI-RAID known as a Premium driver.

Meaning what, exactly? That AHCI is a subset of RAID? I have covered this already, so I won't repeat myself. And what the hell is an "Intel premium driver"? This doesn't exactly help support your claim. I did a quick web search and found only 6 matches. It appears to be a made-up term that's floating around in some obscure places of the web. Please find me a link, a document or a file name on the Intel website that mentions "Intel premium driver".

If by "premium" you mean several drivers in one package, then that's nothing new. (Even though it's not called "premium" by Intel.) Bundling several drivers in one package has been done for ages; ATI has been doing it for over a decade with their Catalyst drivers, for example.

This still doesn't say anything about AHCI being a subset of RAID... I can bundle up all the drivers on my current Windows system into one single package, including the USB mouse driver. That doesn't make USB a subset of PCH.

 

I just posted a reply to Enchantech and it got published, but as soon as I went in to make a small edit (made the URLs clickable) it got sent to the moderation queue. Is it so wrong to make URLs clickable rather than have them as plain text that you have to copy and paste into the address bar? That's likely what triggered moderation. But previously I was unable to even make a post without moderation, it would go straight to the moderation queue (perhaps because I had external URLs in my post). So what? Only text is allowed in posts?

I want to thank you for the discussion, and for reaching out to help me with True Image. I won't be posting here anymore. I can't afford to put so much work into composing these lengthy posts with quotes, links, tables, lists and everything, only to see it go off to the trash bin. Besides, I'm up and running again. My problem with True Image is solved. I can't tell you for sure what the solution was, since several variables have changed since I first encountered the problem. But I posted about three things (a three-point list) that may have helped to solve the problem; however, that post is being held back for moderation, so it may or may not appear, and I don't really feel like writing it again.

Thank you!

Samir,

Sorry to hear that your posts are being delayed due to moderation.  I hope to see them here soon and will answer if I can.  Hopefully you will continue to read the posts left here for a while.

It is good that you have a working system now with your new NVMe disk.  In reading most of this thread I think a few things are at play that caused or contributed to your issues.

  1. The selection of the source disk under "What to recover" resulting in a spinning wheel is expected and may last for some time.  The application is gathering a lot of information during this process, and patience is mandatory.  The more installed software you have on the disk, the longer this process will be.
  2. Having the TI app crash and generate a System Report is not expected, but it would indicate possible corruption on the source disk or, more commonly, a disk that was not completely shut down prior to attempting the clone process.  This is a consequence of Windows Fast Startup being enabled.

The SATA mode selection, in your case RAID, should not inhibit the clone process from a SATA SSD to an NVMe PCIe SSD, as driver advancement has reached a point where this is no longer an issue except when actually working with a true RAID array (multiple disks).

For a complete shutdown of a Windows 10 disk, simply hold down the Shift key, then click on Shutdown in the Power menu, and you will get a full shutdown of the system.  That should avoid any problems with Fast Startup/hiberfil.sys.  If issues still persist, then disk corruption is the next likely suspect.


Samir wrote:

Not sure if your response got moderated, or something else.  I saw an email reply, but no update here. And when I click on the email link, it goes to an access denied page.

Yes, this seems to happen to me often. Or more precisely, every time I post something it goes to the moderation queue. My post was approved and published, but then I made a small edit, and it was sent for moderation again. So who knows... maybe it will be published again in the next 24 hours. Sorry! I don't know what the problem is.

Hi Samir
Please check your personal messages; I've PMed you regarding this.
(In short, all messages with active links undergo moderation. This is a security measure implemented after a massive spam attack we had in 2018.)

Yes, it was a massive attack. I was browsing the forum when it started and watched the spam posts appearing like crazy. It is a minor inconvenience for users when the alternative is that the forum could go offline for an extended period, or that damage could be done to the underlying data (though I suspect this is regularly backed up by Acronis).

Ian

Thanks for your positive response!

I understand why you do this though. A hyperlink is a core concept of the web and it's one of the things that makes the web so powerful. And it's sad that a handful of spammers are destroying the experience for the rest of us.

If this rule applies to external links only, I suggest that you inform forum users that they cannot link to external content.

Or, if this applies to all links, I suggest that you remove the link button from the CKEditor.

It is possible to write business logic that validates the links that are posted, pops up a modal window if a link does not validate, informs the user about it and suggests a course of action. This will require some work, of course, but it should ease the burden on moderators.
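The kind of validation logic I have in mind could look roughly like this. This is only a sketch; the allowlist of trusted hosts and the rules are made up for illustration, and a real forum would keep such a list server-side:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real forum would maintain and tune this.
TRUSTED_HOSTS = {"acronis.com", "forum.acronis.com", "kb.acronis.com"}

def link_needs_moderation(url: str) -> bool:
    """Return True if the URL should be held for moderator review."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return True  # javascript:, data:, ftp: etc. are never auto-approved
    host = (parsed.hostname or "").lower()
    # Accept the trusted domains and their subdomains; flag everything else.
    return not any(host == h or host.endswith("." + h) for h in TRUSTED_HOSTS)
```

A post containing only trusted links could then be published immediately, while posts with external or malformed links would go to the moderation queue, which is less blunt than moderating every post that contains any link at all.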

Thanks again!

 

Samir, you can write your external links as plain text and let other users select that text and find the link without triggering moderation, as far as I understand; otherwise it works as advised.

This was a recent change in the forums, but it was needed because of the sophistication of new automated attacks that were generating hundreds or thousands of new forum posts in a very short time.

I had these screenshots on my desktop so I might as well share them. I took these earlier when I was testing if Ubuntu Linux can see my PCIe NVMe disk. I used Ubuntu 18.10 bootable media, prepped for UEFI.

As you can see, the Ubuntu installer was able to detect my NVMe disk without any problems. It worked straight out of the box.

I first tested in AHCI mode, and then in so called "RAID" mode (IRST) without an actual RAID array.

I got the same result. Ubuntu booted up just fine and the installer detected my disk, just as I predicted. Just because you set it to "RAID" doesn't mean you have a RAID array, and it won't stop working in Linux because of it.

By the way, I see now where the misconception of an "Intel premium driver" is coming from. It's a feature called Intel RST Premium, presumably a beefed up version of IRST.

You can see here that I don't have a RAID array, even though RAID is enabled in UEFI.

And you can see how it's warning me that if I go from RAID back to AHCI it shall destroy all my digital possessions. I can assure you that's not what happened.

Again, just because it says "RAID" doesn't mean you actually have a RAID array. It just means that the IRST option ROM is enabled and ready whenever you are. If you're not telling it to go ahead and fire up a RAID array, then it's perfectly safe to go back to AHCI.

I might do an actual Ubuntu installation on this system later on, in IRST mode and with an IRST controlled RAID array. But I will be using two identical EVO 850 SATA disks, rather than two PCIe NVMe. At least it will demonstrate that Linux can boot off of a "Premium RST" controlled RAID array. If it works for SATA SSD disks, I would expect the same results with two identical PCIe NVMe disks. I have explained earlier that Linux supports NVMe disks since 2012. If I do encounter an issue it will likely arise from IRST controlled disk arrays and not from lack of NVMe support in Linux.

But that's another project, and it's edging on being off topic for this thread really. I just wanted to showcase that Linux does detect NVMe disks, regardless if "RAID" is enabled in BIOS/UEFI or not (having an actual array may differ). If the Acronis built Linux version of True Image doesn't detect NVMe disks then that's a problem with True Image and its tooling, not with Linux per se.

I have no doubt that the latest versions of Ubuntu can handle all variations of NVMe and RAID etc but this is not the Linux distro that Acronis are using for their older style rescue media - Acronis have used a small distro called Busybox which does not have this level of support.

See KB 1537: Acronis Bootable Media - for more (but limited) information on the Linux Kernel version numbers being used in the Busybox rescue media.

Samir,

Steve is correct in that the Linux kernel used by Acronis is Busybox.  It is a trimmed down Linux to say the least.

You are also correct that full Linux distributions support NVMe.  If you set the SATA mode to AHCI for an NVMe disk it will work, but performance will suffer due to a greatly lowered Queue Depth.  This is why RAID mode is selected for NVMe drives, to be able to take advantage of higher Queue Depths.

For Intel systems the issue that arises is that of controller drivers.  In addition to higher Queue Depth NVMe needs higher bandwidth to obtain maximum performance as well.  That is achieved via the PCIe interface.  For that to happen you need a driver that can access the remapping of PCIe lanes in the PCH to the drives.  Intel of course uses Intel drivers to accomplish that.  In addition the UEFI Option Rom must have an Intel driver for the RAID controller to enable RAID.  So what you have are 2 drivers in the equation.  An Intel OpRom driver and an Intel Storage Controller driver.  If either are missing on a single drive system where NVMe is installed then performance suffers.  If your intent is to establish a multi-drive NVMe RAID array and you do not have both drivers available you will not be able to detect the drives to create the array nor install Windows.

Acronis has moved away from the Linux based media because of driver support issues, not only with storage devices but with network adapters and graphics adapters as well.  WinPE/RE is much better suited for driver support.  So if you want to use modern hardware on an Intel based system, your best bet for getting things to work is to use WinPE/RE based recovery media.  If you still use older Legacy hardware then the Linux option should suffice to support that hardware.

Steve Smith wrote:

I have no doubt that the latest versions of Ubuntu can handle all variations of NVMe and RAID etc but this is not the Linux distro that Acronis are using for their older style rescue media - Acronis have used a small distro called Busybox which does not have this level of support.

See KB 1537: Acronis Bootable Media - for more (but limited) information on the Linux Kernel version numbers being used in the Busybox rescue media.

Well actually, Busybox is not a Linux distribution. It's a collection of alternate implementations of Unix-like core utilities that's commonly used for making a Linux based system with a small footprint (such as recovery media). There is also one called Toybox, which is used in Android smartphones. With common GNU Linux desktop operating systems like Ubuntu, Fedora or Debian, you have the GNU core utilities.
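To illustrate the difference: Busybox is a single binary that provides many utilities as "applets", each invoked by name. A quick way to see this from a shell, assuming busybox happens to be installed on the system at hand:

```shell
# Busybox bundles many utilities into one binary; --list prints the
# applets it provides in place of the full GNU coreutils.
if command -v busybox >/dev/null 2>&1; then
    busybox --list | head -n 5
else
    echo "busybox is not installed on this system"
fi
```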

I had a quick look at the Linux recovery media that the Media Builder makes and I can see that it's based on kernel version 4.9.51 and Busybox version 1.20.2. That's the one I got with Media Builder that comes with True Image 2019 (build 17750). As long as it's a kernel version equal to or greater than 3.9 and the nvme kernel module is loaded, Linux based Acronis rescue media should work equally well with NVMe disks as Ubuntu does. It doesn't have to be a large GNU Linux system like Ubuntu.

The nvme driver is part of the kernel. When Acronis developers build their rescue media system they just need to make sure that the right kernel options are enabled. I only tested Ubuntu because it's based on the Linux kernel, and I expected the latest release to be built with NVMe in mind, with no kernel argument manipulation needed on part of its user (and I was right). But the Acronis rescue media can have the same benefit, even though it uses Busybox for its userland.
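If you want to verify this yourself on any Linux system, here is a quick sketch, assuming the usual /proc interface and a standard /lib/modules layout (the driver may be built into the kernel rather than loaded as a module, so both places are checked):

```shell
# Print the running kernel version.
uname -r
# The nvme driver can be a loaded module (/proc/modules) or compiled into
# the kernel (listed in modules.builtin); -s suppresses missing-file errors.
if grep -qs nvme /proc/modules "/lib/modules/$(uname -r)/modules.builtin"; then
    echo "nvme driver available"
else
    echo "nvme driver not found on this system"
fi
```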

 

Samir, thanks for the good info about the kernel drivers!  I was able to confirm the busybox version and kernel to be the same as you noted above with uname -r and --help.

It's been a while since I've gone back and used the Linux version of the Acronis rescue media for my own use, but found that this one does detect my PCIe NVME drive when the SATA mode is set to RAID as well.  That's refreshing, as that typically has not been the case in earlier versions (can't say how far back, as I've been using WinPE/WinRE with great success for much longer now).  And of course, I'm not running an actual RAID with multiple drives, but I've left my system in RAID mode instead of AHCI to take advantage of the full potential of the PCIe NVME capabilities.

Do you have a good link with information about the kernel version driver support?  It seems like USB 3.0/3.1 controllers and some of the most current Intel NICs also end up missing drivers in the Acronis rescue media, and it would be nice to be able to find which kernel versions they find their way into.

Came across this link and it actually mentions that 3.3 was where NVME support was added - is 3.9 where it was perfected?

https://nvmexpress.org/resources/drivers/linux-driver-information/

Regardless, glad to see it is working better - at least with my motherboard anyway (Gigabyte GA-Z170X-Gaming-3 v1 and firmware F22J).  There still seem to be users in the forum that cannot get their PCIe NVME drive detected with the rescue media when the SATA mode is RAID / IRST, so I am wondering if BIOS firmware might also be an issue (in addition to settings like Secure Boot, etc. that might also need tweaking in the BIOS configs).

Here's another Acronis KB article with some info about the tools they're using...

61597: Third-party software used in Acronis True Image 2019

And perhaps this doesn't apply anymore, or at least not on all systems with the newer kernel, but for reference on the documented lack of drivers previously, even in 2019:

58006: Acronis software:M.2 and NVMe drives are not detected by Linux-based bootable media

I also was able to find that Samsung forum about the drivers only being for Magician.  It is linked in one of my previous posts:

https://forum.acronis.com/comment/482250#comment-482250

Unfortunately, the actual forum doesn't connect directly anymore, but here it is/was for reference too:

https://us.community.samsung.com/t5/Memory-Storage/Samsung-NVMe-SSD-960-EVO-Drivers-amp-Windows-7-SP1-64-bit/td-p/294337

 

I will test the Linux recovery media on my AMD Ryzen system to see if it also recognises my NVMe drives.

Ian

Do you have a good link with information about the kernel version driver support?  It seems like USB 3.0/3.1 controllers and some of the most current Intel NICs also end up missing drivers in the Acronis rescue media, and it would be nice to be able to find which kernel versions they find their way into.

You will have to be more specific than that. Brand and model of USB controller? Model of Intel NIC?

I did a quick web search and found an old forum thread on Phoronix (all things Linux).

https://www.phoronix.com/forums/forum/hardware/general-hardware/844871-…

I just bought a Vantec USB 3.1 SuperSpeed+ 10Gb/s Gen II PCIe-x4 adapter card based on an "ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller".

A few months later, the same poster writes the following.

Yeah, it's cuz the kernel had no USB 3.1 support at all. Recent articles indicated it was being worked on (in 4.7 I think).

This thread is from 2016.

Came across this link and it actually mentions that 3.3 was where NVME support was added - is 3.9 where it was perfected?

https://nvmexpress.org/resources/drivers/linux-driver-information/

That's a good find! So it may have been in kernel 3.3 that Linux got NVMe support. But on the same page you can read the following.

The driver supports the mandatory features of the NVMe 1.0c specification.

So even though NVMe support may have been added in kernel 3.3, it was certainly much more mature in kernel 3.9. I think that's when Intel also made a major contribution. Intel is part of the NVM Express group and its creator, so they have been involved all along.

So we can now with reasonable certainty establish the following.

  • USB 3.1: requires kernel 4.7 or better.
  • PCIe NVMe 1.0: requires kernel 3.3 or better.
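The two minimums above can be turned into a quick check against the running kernel. This is a minimal sketch, assuming POSIX sh and a sort that supports version ordering (GNU sort -V); the function name is mine, not from any tool:

```shell
# Succeeds if the running kernel version is at least the given minimum.
kernel_at_least() {
    min="$1"
    cur="$(uname -r | cut -d- -f1)"    # strip distro suffixes like "-generic"
    # sort -V orders version strings numerically; if the minimum sorts
    # first (or equal), the running kernel meets it.
    [ "$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]
}

kernel_at_least 3.3 && echo "PCIe NVMe 1.0 support expected" \
    || echo "kernel too old for NVMe"
kernel_at_least 4.7 && echo "USB 3.1 support expected" \
    || echo "kernel too old for USB 3.1"
```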

Regardless, glad to see it is working better - at least with my motherboard anyway (Gigabyte GA-Z170X-Gaming-3 v1 and firmware F22J).

I have an Asus ROG-STRIX-Z370-F.

There still seem to be users in the forum that cannot get their PCIe NVME drive detected with the rescue media when the SATA mode is RAID / IRST, so I am wondering if BIOS firmware might also be an issue (in addition to settings like Secure Boot, etc. that might also need tweaking in the BIOS configs).

All good questions! All these questions need to be answered for each case. You know, just declaring that something doesn't work is easy. Finding out why it doesn't work is... well... a lot of work. It's much more convenient to blame the first thing that comes to mind (Linux) and move on to try a different approach (Windows).

I use Windows as my primary OS, but I am not unfamiliar with alternative OS like GNU Linux or even Free DOS. I have had a lot of issues with Linux in the past, but I have also learned a lot in the process of solving those problems.

To be fair, I have also had my share of Windows problems as well. One of the major ones was when Windows 10 was the new kid on the block and Microsoft declared "all your files are right where you left them" in order to nudge people to upgrade (and ironically this message appeared during the upgrade process). Well... during the final stages of the upgrade, Windows 10 decided to check one of my disks for errors because it didn't seem healthy for whatever reason, even though it was just a few months old. I was away in the kitchen to get my coffee. All I saw when I came back was a ticking bomb (timer)... 3... 2... 1! It was too late to abort the operation, and it's safer to let it run than to pull the plug.

After signing in for the first time, all my files were where I left them. All except one! My password database with over 200 passwords was corrupted! Imagine that for a second. I had probably a quarter of a million files on that disk and they were all intact except for this one file! To make matters worse, I did not have a fresh backup, because I back up the passwords to a separate disk from my regular backups, which do not include them.

I spent about 3 weeks investigating and trying to put this puzzle together and restore whatever I could. I managed to restore about 98% of what I had, which is about 10% better than what I had on backup. I am recalling these numbers from memory, but I am not making them up. I may even have the numbers tucked away somewhere on disk, as I made a project out of this. A 3 week long project to clean up the mess that Windows 10 left me with.

I hate brainless automation like this. It's something we will only see more of as we move forward. What the hell was wrong with that brand new disk? Apparently nothing! I did several tests on it later on, after imaging it and salvaging the data, and I found nothing that stands out, as if the disk was going to die the next day. All I know for sure is that Linux would never do such a thing. Not without my permission and not without a password. It would take something out of the ordinary for Linux to corrupt your files like this. It's one thing to check a disk for errors. But it's a whole different ballgame to manipulate it without the user's permission. Microsoft is assuming of course that the user is a moron, to put it bluntly. Unless the user protests within 3 seconds, Microsoft knows what's best for them.

I have a lot more horror stories like this one. The point is, no system is 100% stable or secure. Given the complexity of today's technology, it's nothing short of a miracle that any of this even works! The best thing you can do is make backing up your systems your first priority. Always have a backup plan! A plan A and a plan B, and even a plan C. As seen here with my recent adventure with True Image, which is a backup solution: it too can fail!

 

Bobbo_3C0X1 wrote:

Here's another Acronis KB article with some info about the tools they're using...

61597: Third-party software used in Acronis True Image 2019

And perhaps this doesn't apply anymore, or at least not on all systems with the newer kernel, but for reference on the documented lack of drivers previously, even in 2019:

58006: Acronis software:M.2 and NVMe drives are not detected by Linux-based bootable media

It feels good to see that they include both dmraid and mdadm, for older and newer RAID array setups. The kernel number 4.4.6 is not right for True Image 2019, though. That article needs to be updated.

Neither the first nor the second article says when it was published, nor when it was last updated. That's a vital piece of information. Also, when it says "applies to" and then a list of products is given... how do they arrive at this conclusion? Have they systematically tested each and every one in every given scenario, or do they just take the word of users who contact the support team and add validity to it? Let's just say that I have worked in tech support before, and I have a clue about how this type of "knowledge" article is generated.

So take it for what it is, it's valuable information. But your mileage may vary.

I see a lot of misunderstanding in this thread regarding RAID and NVMe drive. NVMe drives have a built-in controller that needs to be supported by a driver. All Windows 10 WinPE versions have native support for NVMe drives. The TI 2019 build 17750 Linux media also has support for NVMe drives.

Setting the SATA mode to RAID vs AHCI is another issue. You can think of this as they are two separate controllers on the motherboard. Each controller has its own driver requirements. Both the WinPE/WinRE and Linux media have support for the AHCI controller. WinPE and the Linux TI media do not support the RAID controller. WinRE may support the RAID controller depending on the system involved. NVMe drives may fall under the RAID controller or not depending on the motherboard and how the BIOS is set. If they do fall under the RAID controller, there needs to be RAID driver support in the media for them to be seen. It doesn't matter if they are in an actual RAID array or not.

I have a Windows 10 64 bit system running on an ASRock Z270 SuperCarrier motherboard. The OS drive is on 2 NVMe drives attached to the motherboard in a RAID 0 array. There are 3 NVMe slots on this motherboard. The 2 in the RAID 0 array have had the BIOS set to remap them to the RAID controller. The 3rd NVMe slots has a drive that is not remapped to the RAID controller. There are 3 other drives in the system. Two are old spinner HDD's that are connected to Intel SATA ports that do fall under the RAID controller. They are not in a RAID array. The third is an NVMe drive on a PCIe add-in card. As such it does not fall under the RAID controller. I booted the TI 2019 build 17750 Linux media to see what drives were detected. The NVMe drive on the add-in PCIe card was detected. The NVMe drive on the motherboard that is not remapped to the RAID controller was detected. The OS drive in a RAID 0 array was not detected (not even as two separate drives). The 2 HDD's that are not in a RAID array but do fall under the RAID controller were not detected. This proved to me the Linux media has NVMe support but does not have RAID support.
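A sketch of how to observe this remapping from a booted Linux environment, assuming lspci from pciutils is available: a drive remapped to the Intel RAID controller drops off the PCIe bus listing, so you see a RAID controller entry but no separate "Non-Volatile memory controller" entry for that drive.

```shell
# Show RAID and NVMe controllers on the PCIe bus. A remapped NVMe drive
# will have no "Non-Volatile memory controller" line of its own; only the
# RAID controller it was folded under will appear.
if command -v lspci >/dev/null 2>&1; then
    lspci | grep -iE 'raid|non-volatile' \
        || echo "no RAID or NVMe controllers listed"
else
    echo "lspci is not available on this system"
fi
```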

Samir, I can see from your BIOS screenshots that your NVMe drive was not remapped to fall under the RAID controller, so I would expect it to be detected by the TI Linux media with the SATA mode set to either AHCI or RAID. You should test again with the NVMe drive set to fall under the RAID controller to see if it is detected by your TI Linux and WinPE media.

EDIT:

I need to make a correction to the above information. I tested the TI 2019 WinPE media created from an 1809 Windows ADK with no additional drivers added. It does have native Intel RAID support and can see all the drives on my ASRock system. I'm not sure what ADK version introduced native Intel RAID support. The WinRE media created using the Simple method on a Windows 10 1809 system also has Intel RAID support as expected.

Thanks Paul for putting into words what I could not.  You are absolutely correct, as I tried to point out earlier in this thread.  Media lacking the RAID controller driver support will not detect drives using that controller.

As promised, I tested the Linux recovery media with my AMD Ryzen system (spec in my signature) and it showed all 3 NVMe drives (one on the motherboard, the other two in PCIe adapter cards). It also correctly identified the boot disk (i.e. included it in the disks and partitions to be backed up). One 'anomaly' was that the SATA drives and USB drives were listed before the NVMe drives, which were at the end of the listing.

Ian

Ian,

Same here.  Seems to follow exactly what Enchantech and Mustang are saying.  The NVME drivers for PCIe are there, but not the actual RAID drivers, for when there is an actual RAID 0, 1 or 5 setup with multiple NVME PCIe disks mirrored/striped together.

A modern UEFI system having PCIe RAID arrays must have the RAID controller re-mapped from the SATA ports to PCIe lanes.  This is achieved via the BIOS and requires a number of settings to be modified to enable it.  The RAID driver itself is embedded in the UEFI firmware and, once enabled (via Intel IRST for Intel based boards and RAID Expert for AMD), the storage controller drivers detect it, and that is what makes the drives visible in software.

For AMD, in addition to enabling RAID Expert in the system BIOS, the BIOS revision itself must be upgraded to the latest version to achieve the expected performance levels of NVME PCIe storage devices.