
"The file is corrupted"... yes, but what file is corrupted?

Thread solved

Bob, I love facts, and I live for the truth. It's not easy, though, to find the truth when all we have are personal accounts of events and fragments of factual evidence such as log files (in XML, of all things!).

I do not understand how you get to the point of the UEFI standard not being followed, can you clarify for me?

I wrote that in reference to "[despite] the fact that the UEFI specification requires GPT partition tables to be fully supported". It's in that Wikipedia article. However! I thank you for bringing this up! Because this sent me on a trajectory to investigate my own disks further.

I have discovered that my "UEFI firmware" is set to work in CSM mode. You do know what that means?...

It means that I was right about at least one thing. That "ubuntu" entry that kept disappearing each time I wiped the disk and showing up again after attempting to run the recovery operation from the Acronis recovery media is a direct result of UEFI running in CSM mode and scanning the PC for boot partitions on disks rather than relying on its internal boot configuration in "NVRAM" or whatever. I was right in that I could wipe out the "ubuntu" boot entry by wiping the disk, because that's where it was coming from, not from some NVRAM or UEFI firmware.

I am not sure about the "Windows Boot Manager" entry. It does not carry the "UEFI:" prefix. On this PC, anything that does not have that prefix is not a boot option coming from the boot configuration (in NVRAM) of the firmware, but rather the result of a boot entry found in a boot sector on one of the disks. Since I was able to wipe that out as well by wiping the disk, it's reasonable to assume that "Windows Boot Manager" is not a true UEFI boot entry either.

I have not checked whether the Windows 10 installation that I now have up and running is in UEFI mode or BIOS mode. But the picture is becoming clear now: I have a UEFI system that is being prevented from working in a true UEFI mode by having CSM enabled.

OK... I may be wrong on what is "true" UEFI mode, I have really not read the specification details, CSM may be just another part of the specification.

Consequently, I may have also discovered why the Linux based Acronis recovery media was crashing and complaining about a "corrupted" file. What it calls a "file" is really a storage device; I am pretty sure about this now. I have two 1 TB disks that I use to store photos, one using the GPT partitioning scheme and the other MBR. I believe it's the MBR disk that Acronis was having trouble with, because I kept booting the Acronis recovery media in UEFI mode ("UEFI: Kingston" as seen in the screenshots). (I will verify shortly if it was this disk that it had issues with and was reporting having found GRUB on.)

The "bootability fix" that Acronis was trying to apply and this whole "GRUB detected" nonsense are related to that same MBR disk. Namely, before it was a storage disk, it used to be an OS disk. So it would seem as if Acronis picked up on some fragments or leftovers of GRUB on that disk, and immediately assumed it's something that needs fixing.

Thinking about Steve's decision to run Linux in a VM rather than dual boot with Windows... wise decision! These two don't mix well.

Doing it "the right way" according to Linux purists is to rather run Windows in a VM on top of Linux. This has its merits. But in my experience it tends to get complicated, and performance of Windows suffers considerably, mainly because Windows is less compliant in such a scenario than Linux.

That's like putting Windows in jail! And immediately it starts banging its tiny little metal cup against the impenetrable bars of Linux. 😅

Linux on the other hand has no problems with being put in jail! Staying true to its spirit, it will still comply and function correctly regardless. Linux is cool about its newfound home, it's not a trouble maker like that "Windows" bandit. 🧘‍♂️

It's no wonder that Windows itself is now implementing Linux as a subsystem – a feature of Windows 10. It's doing that because it can! After all, Linux is free and open source software, unlike Windows. That makes it much easier for Windows to adopt Linux than it would be the other way around.

"The file is corrupted"... I still have not found a definitive answer to this.

What file is corrupted?... what file is Acronis talking about? Can anyone offer an explanation? Anyone from Acronis visiting and reading these forums?

This should be the focus of this topic. OK, I was able to figure out how to work my way around the reported issue. But why was I able to recover from this backup? If the backup file was corrupted, would I have been able to recover the whole disk and return this PC to a good working condition? I think not.

I have offered my explanation for this. Acronis is trying to do more than it's supposed to, trying to fix things that are none of its business. Yes, I have installed and used Ubuntu on this PC some time ago, in a dual boot configuration with Windows. Yes, I do have some unneeded boot entries on this PC, but that doesn't bother me. I rarely ever reboot anyway, and when I do, I just let it boot from the default boot option and it brings me to the Windows 10 login screen every time. This is normal on this PC and I have no reason to complain. It's just Acronis complaining.

In the system report that Acronis makes, I also found "Acronis OS Selector". Yet another kind of "boot menu" that I forgot all about (and don't ever use). Also, I can tell by its report that Acronis is doing some funny business in regard to boot entries.

Meanwhile, I have now checked on that "problem" disk that Acronis was indicating in the logs, and it is indeed an MBR disk. It contains 1 extended partition with 1 logical drive spanning the entire 1 TB.

Also, my current Windows 10 installation is running in UEFI mode according to System Information (msinfo32).

I have 4 disks installed in this PC, and this disk is the only one that is MBR. The other 3 are all GPT disks. Including the Samsung NVMe system disk (target disk for the recovery).

It would be interesting to see if the Linux based Acronis recovery media would actually succeed at recovering this "corrupted file" if I disable the CSM in UEFI firmware.

Without disabling CSM, now that I have recovered the whole disk with all the broken things in the right places, I'm pretty sure that even the Linux based recovery media would now succeed. Just like it did the first time (before I decided to wipe the disk prior to recovering).

Samir,

I now think you are at a point where the change to UEFI is making some sense to you.  At some point in the future, probably not too far off, MBR/BIOS (CSM) will become unsupported and the UEFI BIOS will be what is left, making the compatibility issues non-existent.

Personally, I set up all of my systems as UEFI only.  I do this because I have no need or desire to use MBR/BIOS boot, period.  I also wish to run Secure Boot, which I have found runs afoul of CSM being enabled on some older boards.  We must remember here that UEFI is still young, the latest revision having been released in May 2021, so as it matures we face issues with how it affects applications that work at low levels, like backup apps in general.

Not sure how old your board is but it is obvious newer boards will better implement the UEFI standard.

I do have suggestions for you: if the disk you have that is formatted MBR can be converted to GPT, I recommend doing that.  There really isn't any reason not to.  I also recommend that you use bcdedit to remove that Ubuntu boot entry and any others that are not generic, like the HDD or USB entries you might find.  This will act as a hedge against the board firmware thinking it should boot something it should not.  I am working with another person right now in that position, but his issue is he needs to boot MBR and his board wants to boot UEFI.  Go figure!

Yes, it is starting to dawn on me that there may be more to this UEFI boot menu thing than I originally thought. I seem to have a mix of entries coming from UEFI boot configuration (NVRAM) and from disk boot sectors because of CSM being enabled.

I'm afraid I will break something if I start changing things up now. So I will have to make a plan and do this in baby steps. I will probably save the MBR to GPT conversion for last.

Regarding the Acronis system report I mentioned, this is what one of the logs has to say about my UEFI firmware and its boot entries. This was collected earlier when I still had not made a successful recovery.

Boot variables are allowed on this system

***

Boot variables information:

***

---- 13 ----
Description: USB
Path: Unknown
GUID: Unknown
Partition number: Unknown
Begin: Unknown
Partition Size: Unknown

----------------

Hex dump
01 00 00 00 0D 00 55 00 53 00 42 00 00 00 05 01 09 00 02 00 00 00 00 7F FF 04 00 00 00 47 4F 00 00 4E 4F BF 00 00 00 01 00 00 00 73 00 4B 00 69 00 6E 00 67 00 73 00 74 00 6F 00 6E 00 44 00 61 00 74 00 61 00 54 00 72 00 61 00 76 00 65 00 6C 00 65 00 72 00 20 00 33 00 2E 00 30 00 50 00 4D 00 41 00 50 00 00 00 05 01 09 00 02 00 00 00 00 7F FF 04 00 02 01 0C 00 D0 41 03 0A 00 00 00 00 01 01 06 00 00 14 03 05 06 00 05 00 7F FF 04 00 01 04 46 00 EF 47 64 2D C9 3B A0 41 AC 19 4D 51 D0 1B 4C E6 36 00 30 00 41 00 34 00 34 00 43 00 42 00 34 00 36 00 34 00 34 00 41 00 42 00 45 00 42 00 30 00 37 00 33 00 35 00 45 00 32 00 44 00 38 00 42 00 00 00 7F FF 04 00 00 00 42 4F 00 00 4E 4F BB 00 00 00 01 00 00 00 79 00 57 00 44 00 20 00 4D 00 79 00 20 00 50 00 61 00 73 00 73 00 70 00 6F 00 72 00 74 00 20 00 32 00 35 00 45 00 32 00 34 00 30 00 30 00 35 00 00 00 05 01 09 00 02 00 00 00 00 7F FF 04 00 02 01 0C 00 D0 41 03 0A 00 00 00 00 01 01 06 00 04 1C 01 01 06 00 00 00 03 05 06 00 01 00 7F FF 04 00 01 04 46 00 EF 47 64 2D C9 3B A0 41 AC 19 4D 51 D0 1B 4C E6 35 00 37 00 35 00 38 00 33 00 34 00 33 00 31 00 34 00 34 00 33 00 31 00 33 00 38 00 34 00 41 00 33 00 38 00 33 00 37 00 34 00 35 00 33 00 31 00 00 00 7F FF 04 00 00 00 42 4F

***

---- 12 ----
Description: UEFI: KingstonDataTraveler 3.0PMAP, Partition 1
Path: Unknown
GUID: Unknown
Partition number: 1
Begin: 2048
Partition Size: 15153152

----------------

Hex dump
01 00 00 00 90 00 55 00 45 00 46 00 49 00 3A 00 20 00 4B 00 69 00 6E 00 67 00 73 00 74 00 6F 00 6E 00 44 00 61 00 74 00 61 00 54 00 72 00 61 00 76 00 65 00 6C 00 65 00 72 00 20 00 33 00 2E 00 30 00 50 00 4D 00 41 00 50 00 2C 00 20 00 50 00 61 00 72 00 74 00 69 00 74 00 69 00 6F 00 6E 00 20 00 31 00 00 00 02 01 0C 00 D0 41 03 0A 00 00 00 00 01 01 06 00 00 14 03 05 06 00 05 00 04 01 2A 00 01 00 00 00 00 08 00 00 00 00 00 00 00 38 E7 00 00 00 00 00 1E F2 BD B6 00 00 00 00 00 00 00 00 00 00 00 00 01 01 7F FF 04 00 01 04 46 00 EF 47 64 2D C9 3B A0 41 AC 19 4D 51 D0 1B 4C E6 36 00 30 00 41 00 34 00 34 00 43 00 42 00 34 00 36 00 34 00 34 00 41 00 42 00 45 00 42 00 30 00 37 00 33 00 35 00 45 00 32 00 44 00 38 00 42 00 00 00 7F FF 04 00 00 00 42 4F

***

---- 4 ----
Description: Hard Drive
Path: Unknown
GUID: Unknown
Partition number: Unknown
Begin: Unknown
Partition Size: Unknown

----------------

Hex dump
01 00 00 00 0D 00 48 00 61 00 72 00 64 00 20 00 44 00 72 00 69 00 76 00 65 00 00 00 05 01 09 00 02 00 00 00 00 7F FF 04 00 00 00 47 4F 00 00 4E 4F C1 00 00 00 01 00 00 00 71 00 53 00 61 00 6D 00 73 00 75 00 6E 00 67 00 20 00 53 00 53 00 44 00 20 00 39 00 37 00 30 00 20 00 45 00 56 00 4F 00 20 00 50 00 6C 00 75 00 73 00 20 00 35 00 30 00 30 00 47 00 42 00 00 00 05 01 09 00 02 00 00 00 00 7F FF 04 00 02 01 0C 00 D0 41 03 0A 00 00 00 00 01 01 06 00 00 1B 01 01 06 00 00 00 03 17 10 00 01 00 00 00 00 25 38 51 91 50 97 C7 7F FF 04 00 01 04 34 00 EF 47 64 2D C9 3B A0 41 AC 19 4D 51 D0 1B 4C E6 53 00 34 00 45 00 56 00 

Boot variables in Boot Order:

***

4 12 13

***

I'm not sure what these are. Are they boot entries? Why is almost everything "Unknown"? Converted to ASCII, the hex dumps for numbers 13 and 12 say "Kingston" and number 4 says "Samsung". So those would be the target disk and the USB flash drive I was using to run the recovery media off of.
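As far as I can tell, these dumps look like raw UEFI Boot#### variables, which per the UEFI specification start with an EFI_LOAD_OPTION header: a 4-byte Attributes field, a 2-byte FilePathListLength, then a null-terminated UTF-16LE description string, followed by the device path list. Assuming that layout, a minimal Python sketch can pull the readable description out of a dump (shown here on the start of entry 12 from the log above):

```python
def parse_load_option(hex_dump: str):
    """Parse the leading fields of a UEFI EFI_LOAD_OPTION structure:
    4-byte Attributes, 2-byte FilePathListLength, then a
    null-terminated UTF-16LE Description string."""
    data = bytes.fromhex(hex_dump.replace(" ", ""))
    attributes = int.from_bytes(data[0:4], "little")
    file_path_list_length = int.from_bytes(data[4:6], "little")
    # The description runs from offset 6 to the first UTF-16 null terminator.
    end = 6
    while data[end:end + 2] != b"\x00\x00":
        end += 2
    description = data[6:end].decode("utf-16-le")
    return attributes, file_path_list_length, description

# Prefix of boot entry 12 as pasted from "uefi_vars.txt":
entry_12 = (
    "01 00 00 00 90 00 55 00 45 00 46 00 49 00 3A 00 20 00 4B 00 69 00 "
    "6E 00 67 00 73 00 74 00 6F 00 6E 00 44 00 61 00 74 00 61 00 54 00 "
    "72 00 61 00 76 00 65 00 6C 00 65 00 72 00 20 00 33 00 2E 00 30 00 "
    "50 00 4D 00 41 00 50 00 2C 00 20 00 50 00 61 00 72 00 74 00 69 00 "
    "74 00 69 00 6F 00 6E 00 20 00 31 00 00 00"
)
print(parse_load_option(entry_12))
# Attributes 0x00000001 means LOAD_OPTION_ACTIVE; the description decodes
# to "UEFI: KingstonDataTraveler 3.0PMAP, Partition 1".
```

That at least explains why the "Description" fields in the report are readable while the rest of each entry is an opaque device path that Acronis prints as "Unknown".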

Now that I have a fully working Windows system again, I will compare these to the output of the BCDEdit program and see if I get a match. Then maybe delete them. But not without making a backup first.

Samir,

Your boot variables to me say that 13 is a standard USB device (Kingston, non-UEFI), 12 is obviously the UEFI Kingston entry, and 4, "Hard Drive", is presumably an unallocated disk having no unique identifier.

Not sure when the system report was generated, but I would assume on the first attempt that failed?

If that is the case then I think this suggests that TI was picking up the GRUB info from the MBR disk, and that is what the corruption error was all about.  Error reporting is really very generic, even with Windows.  Most of the time an error will not be specific, unfortunately.

These reports were collected maybe after the third failed attempt at recovering the whole disk, when I decided that the main log was not enough and so I ran a system report in Acronis. It produced a ZIP archive (AcronisSystemReport_Jun_19__2021_3_09_37_PM.zip) which I then had to unpack on another PC so I could look at it. There are several files in there, but the text above is pasted from the "uefi_vars.txt" file, with the only modification being that I stripped out the serial number of the "Hard Drive" (Samsung SSD!).

After decoding those hex values, this is what I got.

Number 13:

USB KingstonDataTraveler 3.0PMAP

60A44CB4644ABEB0735E2D8B

WD My Passport 25E24005

575834314431384A38374531

This looks damaged! How can a single boot variable or boot entry hold two disk references?

I understand there is only supposed to be one per entry. This is an interesting clue not only because of that, but because of what disks are mentioned in here. The Kingston refers to the flash drive that Acronis was running off of, and the WD disk is the disk that holds the backup files.

The numbers are some sort of ID, it's the last part of the "Parent" property in Device Manager. In fact the WD disk is currently connected and its Parent property value is as follows.

USB\VID_1058&PID_25E2\575834314431384A38374531
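Side note: that WD identifier is itself hex-encoded ASCII. Decoding it (a plain hex-to-bytes conversion, nothing Acronis-specific) yields a WD-style serial number, which would make the "Parent" property value simply the drive's USB serial:

```python
# The WD identifier from the boot variable (and from the Device Manager
# "Parent" property) is a hex encoding of an ASCII string:
wd_id = "575834314431384A38374531"
serial = bytes.fromhex(wd_id).decode("ascii")
print(serial)  # WX41D18J87E1 - the WD drive's serial number
```

The Kingston identifier (60A44CB4644ABEB0735E2D8B) doesn't decode the same way; it appears to already be the flash drive's serial string as-is.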

Number 12:

UEFI: KingstonDataTraveler 3.0PMAP, Partition 1

60A44CB4644ABEB0735E2D8B

Number 4:

Hard Drive

Samsung SSD 970 EVO Plus 500GB

S4EVNG0******** (REDACTED)

The "hard drive" (Samsung SSD) did appear unallocated more than once, whenever I wiped it clean using DriveCleanser. It also appeared uninitialized at one point. Whenever I tried to recover this disk to its former state using the Acronis backup, I would get that message saying that the file is corrupted and then the recovery operation would fail. Regardless of whether these boot entries are stored in NVRAM or on a disk like on an old BIOS system, I am convinced that it's Acronis that kept putting them in there. As I said, I think it may have run into some trouble with the MBR disk and possibly the CSM mode that my UEFI firmware was in (not to forget that the Kingston flash drive itself was being booted in UEFI mode).

 

Here is my current boot configuration with Windows 10 up and running. I have made a backup now but have not made any changes yet. I don't dare touch this at the moment.

The "Hard Drive" may be a reference to the Samsung SSD just like in the text above, which is the system disk.

All three of the lower identifiers are referenced under "fwbootmgr" with a GUID, except for "bootmgr". Only two of them have a device "HarddiskVolume16" reference, which seems strange, because no such volume exists. The highest volume number in DiskPart is 15, but the list of volumes there is zero-indexed. Assuming 15 and 16 both point to the same disk, that would be the WD external USB disk that's used for backups. What an odd place to install an operating system... unless Acronis itself did something funny with that disk.

C:\WINDOWS\system32>bcdedit /enum firmware

Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
                        {cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
                        {ecab0094-678e-11e8-9793-ffbca7d2a79b}
                        {847b4b91-d276-11eb-987c-806e6f6e6963}
timeout                 1

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume16
path                    \EFI\MICROSOFT\BOOT\BOOTMGFW.EFI
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
flightsigning           Yes
default                 {current}
resumeobject            {19f9ae88-cdbe-11eb-986c-ea1650f33281}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 0

Firmware Application (101fffff)
-------------------------------
identifier              {847b4b91-d276-11eb-987c-806e6f6e6963}
description             USB

Firmware Application (101fffff)
-------------------------------
identifier              {cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
device                  partition=\Device\HarddiskVolume16
path                    \EFI\UBUNTU\SHIMX64.EFI
description             ubuntu

Firmware Application (101fffff)
-------------------------------
identifier              {ecab0094-678e-11e8-9793-ffbca7d2a79b}
description             Hard Drive

Samir,

Your last post shows that the Ubuntu entry is coming from the same EFI System partition as your Windows Boot Manager entry (HarddiskVolume16).  You can delete the Ubuntu entry using bcdedit.exe. However, it will probably just come back. This is because the UEFI firmware has the ability to write entries to the BCD store. If you want to get rid of the Ubuntu entry for good, you need to delete the \EFI\UBUNTU folder from the EFI System partition. Then when the Ubuntu firmware entry is deleted using bcdedit.exe, it will stay gone.

The easiest way to delete the \EFI\Ubuntu folder is with WinPE because it runs with System level privileges. You need to boot WinPE and assign a drive letter to the EFI partition with the Ubuntu folder. The MVP media will help you do this. You can use diskpart.exe to identify and assign a drive letter to the EFI partition. Then use a provided file manager to locate and delete the Ubuntu folder from the EFI partition.

Be aware that if you restore that EFI System partition again, the unwanted Ubuntu folder will return.

 

Agree 100% with Paul to delete the Ubuntu folder to remove the entry completely.  I think that this suggests that this is one of the unfortunate side effects of dual booting.  Not sure what procedure you followed to remove Ubuntu from the system but that may have moved the boot folder into the EFI path.  I suppose the Grub reference would also be found in this same folder.

Paul, thanks for your suggestion. It sounds like a good plan. I will look into this. I did not want to go down this road at this time, but I might as well do it. Otherwise I'm afraid it will come back to haunt me again, sooner or later.

Bob, I agree that this is an unfortunate side effect of dual booting. This is one of the reasons I no longer do that, the other reason being that it's not very practical or convenient to have to reboot every time you want to get into a different OS.

Just out of curiosity, I wanted to see if the "displayorder" from output above is what I think it is.

{bootmgr}
{cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
{ecab0094-678e-11e8-9793-ffbca7d2a79b}
{847b4b91-d276-11eb-987c-806e6f6e6963}

I puzzled together the little pieces of info I have and translated this to the following.

Windows Boot Manager
ubuntu
Samsung SSD 970 EVO Plus 500GB
WD My Passport 25E24005

So this is how the boot entries should appear in the UEFI firmware, I thought.
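For anyone who wants to script this comparison rather than eyeball it, the identifiers under "displayorder" can be pulled out of the `bcdedit /enum firmware` text with a small parser. A sketch (the sample text mirrors the Firmware Boot Manager block pasted above; the parsing logic is my own, not a Microsoft or Acronis tool):

```python
bcd_output = """\
Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
                        {cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
                        {ecab0094-678e-11e8-9793-ffbca7d2a79b}
                        {847b4b91-d276-11eb-987c-806e6f6e6963}
timeout                 1
"""

def firmware_display_order(text: str):
    """Collect the identifiers listed under 'displayorder': the first
    one sits on the same line, continuation lines hold one each."""
    order, collecting = [], False
    for line in text.splitlines():
        if line.startswith("displayorder"):
            collecting = True
            order.append(line.split()[-1])
        elif collecting and line.strip().startswith("{"):
            order.append(line.strip())
        elif collecting:
            break  # first non-identifier line ends the list
    return order

print(firmware_display_order(bcd_output))
# ['{bootmgr}', '{cfe4b84d-...}', '{ecab0094-...}', '{847b4b91-...}']
```

Each GUID can then be looked up against the "Firmware Application" sections of the same output to get its description, which is essentially the translation done by hand here.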

For some reason, the order is different when I start the PC and press F8 to get to the boot menu. Also, I have a few more entries in that menu. Can someone explain this? Is it because I have CSM enabled, looking at disk boot sectors as opposed to NVRAM boot entries? Because not all of these are UEFI boot entries?

However, when I enter the UEFI setup interface, I get an exact match. I get the same number of entries and they are in precisely that order.

What you see does make sense. The order of things in the one time boot menu can be unpredictable. I've even seen the order changed from one boot to the next. Also, not all entries listed are actually bootable. If you select a non-bootable entry the one time boot menu will just return. 

It certainly does make a difference when CSM is enabled. I see no problem with having CSM enabled. I always keep it enabled so I can test booting from 32 bit media.

I tested each of those entries. I started with the two 1 TB drives, one of which Acronis was previously reporting to have found GRUB on.

Booting from the good drive gave me a "MBR Error". The error message is a bit misleading since this drive is using GPT rather than MBR partitioning.

Booting from the suspect drive dropped me to a "grub rescue" shell because it failed to locate device "6cd93098-0789-492d-9564-ad80fa11d931". This drive is an MBR disk (1 extended partition and 1 logical). Note that this disk is different from the one that Windows is running from, so it seems as if I have GRUB on more than one disk on this PC.

I also tested the 4 TB drive which was never used for any operating system installation, and it too is a GPT disk. It also gave me the "MBR Error" and this is expected.

The "Samsung SSD 970 EVO Plus 500GB" contains Windows 10 at the moment and it's the main system disk on this PC. However, if I just select this option it fails to start. I have to select the option where it says "Windows Boot Manager (Samsung SSD 970 EVO Plus 500GB)" to enter Windows.

The "WD My Passport" option is a bit weird in that when I select it, the menu disappears and the UEFI interface refreshes, and nothing more happens. Doing the same thing one more time does the same. Doing the same a third time then produces a "MBR Error" screen. It's weird like that. I have never used this disk drive for anything else but backups, there should be nothing on there that's related to operating systems. (Although... I seem to recall that True Image is capable of embedding its recovery media onto the same disk drive that holds the backup. I may be wrong. But I have never used that anyway.)

Selecting the "ubuntu" option drops me at a regular GRUB command shell.

The top most option is "Windows Boot Manager" and this is the default that's set in UEFI interface. It starts Windows 10.

So it would seem that I have GRUB installed in more than one place.

The 1 TB disk drive that Acronis indicated as having GRUB on does indeed have it. This is the one that drops me to a "grub rescue" shell rather than regular GRUB command shell. It seems to be looking for a device with ID of "6cd93098-0789-492d-9564-ad80fa11d931". I don't see this ID anywhere, not in the Acronis logs and not in the BCDEdit output. I suspect this is a leftover from the now old Samsung 850 EVO SATA SSD that was once used in this PC, with Ubuntu on it. At the time, Ubuntu was used independently. What its GRUB loader is doing now on a completely separate disk, I have no idea.

Could this be what Acronis was trying to fix, in regard to "bootability"? And then failed to find device "6cd93098-0789-492d-9564-ad80fa11d931" so it said "file is corrupted"? I'm just speculating, but this is one GRUB too many for Acronis to handle.

How do I get rid of this old GRUB leftover without wiping the disk?

Then there is that 500 GB Samsung NVMe SSD that also contains GRUB. This one appears to be in a better condition since it does not drop me to "grub rescue" but rather to the regular GRUB command shell. I wonder if it would be enough to just mount and then delete the "\EFI\Ubuntu" folder to get rid of this GRUB?

This is kind of funny... I'm looking for ways to get rid of GRUB, people commonly look for ways to reinstate it when it gets broken. At this time I am moving away from the dual boot and multiboot techniques and going forward with virtualization rather, so I won't need this anymore.

I used BootSect to wipe out GRUB on the 1 TB disk drive that I use to store photos. Hopefully, this should make Linux based Acronis recovery media happy, although I will only use WinPE based media going forward.

I no longer get to a "grub rescue" screen. If I try to boot off of this disk now I get the "MBR Error" so it's looking better already.

C:\WINDOWS\system32>BOOTSECT /NT60 J: /MBR
Target volumes will be updated with BOOTMGR compatible bootcode.

J: (\\?\Volume{240ad7d8-0000-0000-0000-200000000000})

    Updated NTFS filesystem bootcode.  The update may be unreliable since the
    volume could not be locked during the update:
        Access is denied.

\??\PhysicalDrive1

    Successfully updated disk bootcode.

Bootcode was successfully updated on all targeted volumes.

I did two passes because I got some "access is denied" message the first time. So I did it a second time, just for good measure and to be sure it's done correctly.

C:\WINDOWS\system32>BOOTSECT /NT60 J: /MBR
Target volumes will be updated with BOOTMGR compatible bootcode.

J: (\\?\Volume{240ad7d8-0000-0000-0000-200000000000})

    Successfully updated NTFS filesystem bootcode.

\??\PhysicalDrive1

    Successfully updated disk bootcode.

Bootcode was successfully updated on all targeted volumes.

One GRUB down, one more to go.

As for the second GRUB, since it is EFI GRUB on a GPT disk, it does not live in a boot sector. So I will need to mount the EFI partition, in order to delete the "\EFI\UBUNTU\SHIMX64.EFI" and the entire "UBUNTU" folder. Then I can safely remove the "ubuntu" boot entry from UEFI firmware (NVRAM).

For the record, here is what my system disk layout looks like.

 

Samir,

Correct.  You will need System admin level permissions for that so boot should be done from WinPE or an install disk.

You might find the link below useful when editing the BCD

https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/b…

I have yet to edit UEFI boot entries via BCDEdit, but I got rid of EFI GRUB now from the EFI partition on the system disk (Samsung NVMe SSD).

There used to be one "ubuntu" folder with four or five files in it, immediately under the EFI folder you see in the tree below. You can see that it's gone now. (I made another tree previously with those Ubuntu files included, but I dumped it in the wrong place and so I lost it.)

Z:.
\---EFI
    +---Microsoft
    |   +---Boot
    |   |   |   BCD
    |   |   |   boot.stl
    |   |   |   bootmgfw.efi
    |   |   |   bootmgr.efi
    |   |   |   kdnet_uart16550.dll
    |   |   |   kdstub.dll
    |   |   |   kd_02_10df.dll
    |   |   |   kd_02_10ec.dll
    |   |   |   kd_02_1137.dll
    |   |   |   kd_02_14e4.dll
    |   |   |   kd_02_15b3.dll
    |   |   |   kd_02_1969.dll
    |   |   |   kd_02_19a2.dll
    |   |   |   kd_02_1af4.dll
    |   |   |   kd_02_8086.dll
    |   |   |   kd_07_1415.dll
    |   |   |   kd_0C_8086.dll
    |   |   |   memtest.efi
    |   |   |   winsipolicy.p7b
    |   |   |   
    |   |   +---bg-BG
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---cs-CZ
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---da-DK
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---de-DE
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---el-GR
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---en-GB
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---en-US
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---es-ES
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---es-MX
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---et-EE
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---fi-FI
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---Fonts
    |   |   |       meiryo_boot.ttf
    |   |   |       msjhn_boot.ttf
    |   |   |       msjh_boot.ttf
    |   |   |       msyhn_boot.ttf
    |   |   |       msyh_boot.ttf
    |   |   |       segmono_boot.ttf
    |   |   |       segoen_slboot.ttf
    |   |   |       segoe_slboot.ttf
    |   |   |       wgl4_boot.ttf
    |   |   |       chs_boot.ttf
    |   |   |       cht_boot.ttf
    |   |   |       jpn_boot.ttf
    |   |   |       kor_boot.ttf
    |   |   |       malgunn_boot.ttf
    |   |   |       malgun_boot.ttf
    |   |   |       meiryon_boot.ttf
    |   |   |       
    |   |   +---fr-CA
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---fr-FR
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---hr-HR
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---hu-HU
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---it-IT
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---ja-JP
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---ko-KR
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---lt-LT
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---lv-LV
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---nb-NO
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---nl-NL
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---pl-PL
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---pt-BR
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---pt-PT
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---qps-ploc
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---Resources
    |   |   |   |   bootres.dll
    |   |   |   |   
    |   |   |   \---en-US
    |   |   |           bootres.dll.mui
    |   |   |           
    |   |   +---ro-RO
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---ru-RU
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---sk-SK
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---sl-SI
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---sr-Latn-RS
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---sv-SE
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---tr-TR
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   +---uk-UA
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       
    |   |   +---zh-CN
    |   |   |       bootmgfw.efi.mui
    |   |   |       bootmgr.efi.mui
    |   |   |       memtest.efi.mui
    |   |   |       
    |   |   \---zh-TW
    |   |           bootmgfw.efi.mui
    |   |           bootmgr.efi.mui
    |   |           memtest.efi.mui
    |   |           
    |   \---Recovery
    |           BCD
    |           
    \---Boot
            bootx64.efi
            fbx64.efi
            mmx64.efi
            fallback.efi

 

Here is updated output of what I see when I run the BCDEdit command to list all firmware applications (/enum firmware).

C:\WINDOWS\system32>bcdedit /enum firmware

Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
                        {cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
                        {ecab0094-678e-11e8-9793-ffbca7d2a79b}
timeout                 1

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume3
path                    \EFI\MICROSOFT\BOOT\BOOTMGFW.EFI
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
flightsigning           Yes
default                 {current}
resumeobject            {19f9ae88-cdbe-11eb-986c-ea1650f33281}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 0

Firmware Application (101fffff)
-------------------------------
identifier              {cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
device                  partition=\Device\HarddiskVolume3
path                    \EFI\UBUNTU\SHIMX64.EFI
description             ubuntu

Firmware Application (101fffff)
-------------------------------
identifier              {ecab0094-678e-11e8-9793-ffbca7d2a79b}
description             Hard Drive

I can now safely delete the last two entries using those identifiers, which should remove the UEFI boot entries I see both in the F8 boot menu and in the UEFI setup interface.

Most interesting here is "HarddiskVolume3". What was previously HarddiskVolume16 has changed to HarddiskVolume3. Why is that? Well, at least it's closer to the truth now, I think. However, volume 3 is indicated as a hidden NTFS partition, 555 MB in size, whereas the EFI partition I mounted and deleted the "ubuntu" folder from is only 99 MB and FAT32 formatted. That one was indicated as "Hidden", but only in WinPE; now it's "System".

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          465 GB  2048 KB        *
  Disk 1    Online         7400 MB      0 B

DISKPART> list vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     C   Win 10 Pro   NTFS   Partition    464 GB  Healthy    Boot
  Volume 1         Recovery     NTFS   Partition    499 MB  Healthy    Hidden
  Volume 2                      FAT32  Partition     99 MB  Healthy    System
  Volume 3                      NTFS   Partition    555 MB  Healthy    Hidden
  Volume 4     U   ACRONIS_MED  FAT32  Removable   7399 MB  Healthy

DISKPART> sel vol 3

Volume 3 is the selected volume.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Recovery           499 MB   129 MB
  Partition 3    System              99 MB   628 MB
  Partition 4    Primary            464 GB   727 MB
* Partition 5    Recovery           555 MB   465 GB

Note that I have only 2 disks at the moment, a system disk (Samsung NVMe SSD) and the Kingston USB flash drive. This is because I have disabled the ports for the other 3 disks.

Never mind that. I think I get it now. As it turns out, I was on the right track with my remark about zero indices. I just discovered that DiskPart starts indexing at 1 when listing partitions, but at 0 when listing volumes. Moreover, BCDEdit incorrectly uses the term volume: it says "HarddiskVolume3", but it does not mean volume 3, it means partition 3. And because volumes and partitions are indexed differently, you can easily arrive at different conclusions.

DISKPART> list vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
* Volume 0     C   Win 10 Pro   NTFS   Partition    464 GB  Healthy    Boot
  Volume 1         Recovery     NTFS   Partition    499 MB  Healthy    Hidden
  Volume 2                      FAT32  Partition     99 MB  Healthy    System
  Volume 3                      NTFS   Partition    555 MB  Healthy    Hidden
  Volume 4     U   ACRONIS_MED  FAT32  Removable   7399 MB  Healthy

DISKPART> sel vol 2

Volume 2 is the selected volume.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Recovery           499 MB   129 MB
* Partition 3    System              99 MB   628 MB
  Partition 4    Primary            464 GB   727 MB
  Partition 5    Recovery           555 MB   465 GB

In other words, volume 0 is partition 1, volume 1 is partition 2, volume 2 is partition 3, and so on. But not always, apparently: in my example, volume 0 is actually partition 4... very confusing.
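To keep the indices straight, here is a tiny Python sketch encoding the volume-to-partition mapping observed in my DiskPart output above, showing that the naive "partition number = volume number + 1" assumption only holds for some entries:

```python
# Volume-to-partition mapping observed in DiskPart on my disk 0:
# select a volume, then "list part" marks the matching partition with *.
vol_to_part = {0: 4, 1: 2, 2: 3, 3: 5}

# The naive assumption: partition number = volume number + 1.
naive = {vol: vol + 1 for vol in vol_to_part}

# Volumes for which the naive rule happens to hold.
matches = [vol for vol in vol_to_part if naive[vol] == vol_to_part[vol]]
print(matches)  # only volumes 1 and 2 follow the simple +1 rule
```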

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Recovery           499 MB   129 MB
* Partition 3    System              99 MB   628 MB
  Partition 4    Primary            464 GB   727 MB
  Partition 5    Recovery           555 MB   465 GB

DISKPART> sel part 4

Partition 4 is now the selected partition.

DISKPART> list vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
* Volume 0     C   Win 10 Pro   NTFS   Partition    464 GB  Healthy    Boot
  Volume 1         Recovery     NTFS   Partition    499 MB  Healthy    Hidden
  Volume 2                      FAT32  Partition     99 MB  Healthy    System
  Volume 3                      NTFS   Partition    555 MB  Healthy    Hidden
  Volume 4     U   ACRONIS_MED  FAT32  Removable   7399 MB  Healthy

The lesson is that you can't trust that "HarddiskVolume3" or "HarddiskVolume16" is what it says it is. It takes some additional detective work to know you're applying your fixes and changes to the right places. What a weird system!

Samir,

Glad to see that you have cleared up the troublesome Ubuntu boot entry, I think. Have you verified that all your work was in fact applied? I tend to do that to double-check.

You are correct that BCDEdit and DiskPart count volumes differently. In my view this comes down to who programmed what at MS. :-)

Weird, maybe, I suppose, but once you get it, it's not.

All good now I think.

C:\WINDOWS\system32>bcdedit /enum firmware

Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
                        {cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
                        {ecab0094-678e-11e8-9793-ffbca7d2a79b}
timeout                 1

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume3
path                    \EFI\MICROSOFT\BOOT\BOOTMGFW.EFI
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
flightsigning           Yes
default                 {current}
resumeobject            {19f9ae88-cdbe-11eb-986c-ea1650f33281}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 0

Firmware Application (101fffff)
-------------------------------
identifier              {cfe4b84d-d1e8-11eb-987a-806e6f6e6963}
device                  partition=\Device\HarddiskVolume3
path                    \EFI\UBUNTU\SHIMX64.EFI
description             ubuntu

Firmware Application (101fffff)
-------------------------------
identifier              {ecab0094-678e-11e8-9793-ffbca7d2a79b}
description             Hard Drive

C:\WINDOWS\system32>bcdedit /delete {ecab0094-678e-11e8-9793-ffbca7d2a79b} /cleanup
The operation completed successfully.

C:\WINDOWS\system32>bcdedit /delete {cfe4b84d-d1e8-11eb-987a-806e6f6e6963} /cleanup
The operation completed successfully.

C:\WINDOWS\system32>bcdedit /enum firmware

Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
timeout                 1

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume3
path                    \EFI\MICROSOFT\BOOT\BOOTMGFW.EFI
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
flightsigning           Yes
default                 {current}
resumeobject            {19f9ae88-cdbe-11eb-986c-ea1650f33281}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 0

This is the end of it. Thank you all for participating!

I will never know what file was corrupted. Just like I will never know why the WinPE based recovery media did what the Linux based recovery media could not, namely recover one of these Acronis True Image disk images without complaining. Regardless of the state of my BCD store, NVRAM contents, UEFI boot entries, and so on, all I know is that the WinPE based recovery media succeeded where the Linux based recovery media failed.

Bob, I never noticed the difference, and I have used DiskPart more than once. I only noticed it now because I needed to know something specific. It could be that it's due to who programmed what part of it. It's a good tool nonetheless.

Also, Disk Management says it's partition 3 (on disk 0). So it too is starting at 1. However, it only does so for partitions. For disks, it starts at 0... go figure!

I have just rebooted and I can confirm that the "ubuntu" entry is gone from the F8 boot menu.

However, now that I'm back in Windows I see that I have three new "Firmware Application (101fffff)" entries in BCDEdit with the same ID, but the ID is different from what I had previously.

C:\WINDOWS\system32>bcdedit /enum firmware

Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
                        {6f1dde56-d475-11eb-9888-806e6f6e6963}
                        {6f1dde57-d475-11eb-9888-806e6f6e6963}
                        {6f1dde58-d475-11eb-9888-806e6f6e6963}
timeout                 1

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume3
path                    \EFI\MICROSOFT\BOOT\BOOTMGFW.EFI
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
flightsigning           Yes
default                 {current}
resumeobject            {19f9ae88-cdbe-11eb-986c-ea1650f33281}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 0

Firmware Application (101fffff)
-------------------------------
identifier              {6f1dde56-d475-11eb-9888-806e6f6e6963}
description             Hard Drive

Firmware Application (101fffff)
-------------------------------
identifier              {6f1dde57-d475-11eb-9888-806e6f6e6963}
device                  partition=U:
description             UEFI: KingstonDataTraveler 3.0PMAP, Partition 1

Firmware Application (101fffff)
-------------------------------
identifier              {6f1dde58-d475-11eb-9888-806e6f6e6963}
description             USB

This appears to be related to the Kingston USB flash drive that I currently have plugged in. Maybe they will disappear if I unplug it. It doesn't bother me. I can post a photo later of what it looks like at startup now.

 

Looks good Samir.

I'd recommend one more check now that you've done so much work. Make sure your Windows Recovery Environment is still working. Open a command prompt as admin and enter:

reagentc /info

Make sure it reports that WinRE is Enabled.

Samir,

I agree with Paul, it looks good now.

With regard to the 3 new Firmware Applications - follow the Identifiers.  At the bottom of the output you will see the Firmware Apps listed, they are:

Firmware Application (101fffff)
-------------------------------
identifier              {6f1dde56-d475-11eb-9888-806e6f6e6963}
description             Hard Drive

Firmware Application (101fffff)
-------------------------------
identifier              {6f1dde57-d475-11eb-9888-806e6f6e6963}
device                  partition=U:
description             UEFI: KingstonDataTraveler 3.0PMAP, Partition 1

Firmware Application (101fffff)
-------------------------------
identifier              {6f1dde58-d475-11eb-9888-806e6f6e6963}
description             USB

The Hard Drive is a default entry for HDDs. The USB is the same, a default for USB devices. As you can see in the first number group, the sequence increases by one for each device application in the display order (56, 57, 58). On my latest build I disabled/removed the default USB. Why? Because I have a Windows Storage Spaces device I built that houses disks in a USB 3.0 enclosure. On occasion, UEFI would fail to find my boot disk, which sits outside the firmware's defined disk order (the boot disk is disk number 10, actually the 11th disk since the count starts at 0). When that happened, UEFI would attempt to find a boot loader on the USB Storage Spaces device, which resulted in a "Boot device not found" message. With the default USB entry disabled, I have solved that issue.

 

Paul,

It appears to be in good working condition. I will dump the output below. But I wonder why you suspected it might not be working?

C:\WINDOWS\system32>reagentc /info
Windows Recovery Environment (Windows RE) and system reset configuration
Information:

    Windows RE status:         Enabled
    Windows RE location:       \\?\GLOBALROOT\device\harddisk0\partition5\Recovery\WindowsRE
    Boot Configuration Data (BCD) identifier: 19f9ae8b-cdbe-11eb-986c-ea1650f33281
    Recovery image location:
    Recovery image index:      0
    Custom image location:
    Custom image index:        0

REAGENTC.EXE: Operation Successful.

 

Bob,

Good eye there! I missed that. They all looked the same to me, but they are not...

6f1dde56

6f1dde57

6f1dde58
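Lined up like that, the three identifiers differ only in the first hex group, which increases by one per entry. A quick Python sketch to confirm, using the identifiers from the output above:

```python
# The three firmware application identifiers from bcdedit /enum firmware.
ids = [
    "6f1dde56-d475-11eb-9888-806e6f6e6963",
    "6f1dde57-d475-11eb-9888-806e6f6e6963",
    "6f1dde58-d475-11eb-9888-806e6f6e6963",
]

# Parse the first group of each GUID as a hex number.
firsts = [int(guid.split("-")[0], 16) for guid in ids]

# Difference between consecutive entries.
diffs = [b - a for a, b in zip(firsts, firsts[1:])]
print(diffs)  # each entry is exactly one greater than the previous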

Some of those seem to be popping in and out on the fly. I think you may be right about "Hard Drive" being some kind of default. After safely removing the Kingston USB flash drive and rebooting, this "Hard Drive" entry is all that's left of the three. Note that my other disks have been disabled in UEFI, so at the moment I only have one internal disk connected – the Samsung NVMe SSD – and no external USB disk or USB flash drive connected.

C:\WINDOWS\system32>bcdedit /enum firmware

Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
                        {6f1dde56-d475-11eb-9888-806e6f6e6963}
timeout                 1

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume3
path                    \EFI\MICROSOFT\BOOT\BOOTMGFW.EFI
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
flightsigning           Yes
default                 {current}
resumeobject            {19f9ae88-cdbe-11eb-986c-ea1650f33281}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 0

Firmware Application (101fffff)
-------------------------------
identifier              {6f1dde56-d475-11eb-9888-806e6f6e6963}
description             Hard Drive

Bob, do you mean you have one of those USB disk enclosures with room for more than one disk? And you have them pooled in a Storage Spaces device in Windows?

On occasion, UEFI would fail to find my boot disk, which sits outside the firmware's defined disk order (the boot disk is disk number 10, actually the 11th disk since the count starts at 0). When that happened, UEFI would attempt to find a boot loader on the USB Storage Spaces device, which resulted in a "Boot device not found" message. With the default USB entry disabled, I have solved that issue.

Hm... I think I am beginning to understand the issue you're describing. Correct me if I'm wrong. But it sounds like the issue is caused by this "USB" entry as seen in my previous post?

You're not disabling the USB function itself within UEFI firmware are you? Just checking, because I had this impression at first, but I think I got it now.

Since this "USB" entry seems to be "popping in and popping out on the fly" (as I described above), this can cause the boot disk to get the wrong ID/index during enumeration, which results in UEFI not finding the boot disk. So by deleting the "USB" entry, you effectively save the day by forcing the firmware to give the correct ID/index to the boot disk?

Is that about right?

I don't know exactly what either of these two things are – "Hard Drive" and "USB" – but it sounds like it can cause a lot of trouble.

I would think that this is caused by its dynamic nature, rather than by what number or index enumeration starts at. I mean, if the boot disk needs to get an ID of 5, for example, in a zero-indexed list and it gets 4 instead, then that's equally wrong as getting 5 where 6 was expected in a one-indexed list. It's still off by one, regardless.

Of course, it does get easier and less prone to error if all systems involved in these processes count numbers the same way. (Who would have thought that counting can be so complicated?) At very least, it makes the job of the programmers much easier. (In a perfect world, everything is easy.)

(Given a zero-indexed list, number 11 is transposed to number 12 in a one-indexed list. In other words, when going from zero-indexed to one-indexed list you add 1, and when going the other way around you subtract 1.)
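That parenthetical is simple enough to write down. A trivial sketch of the two conversions:

```python
def to_one_indexed(i):
    """Convert a zero-based index to its one-based equivalent."""
    return i + 1

def to_zero_indexed(n):
    """Convert a one-based index back to zero-based."""
    return n - 1

print(to_one_indexed(11))   # 12, as in the example above
print(to_zero_indexed(12))  # 11
```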

On the count of counting, here is a screenshot of what the Disk Management tool looks like at the moment.

Note that I have 4 partitions in the diagram, but only 3 partitions in the volume list. What on Earth is going on here? I bet it has to do with how the counting is done in the list vs. the diagram. I can see everything in DiskPart.

Actually, I can see 5 partitions and 4 volumes in DiskPart. In Disk Management however, I see all 4 volumes in the diagram but only 3 partitions in the partition list.

Here is my DiskPart output for comparison.

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          465 GB  2048 KB        *

DISKPART> sel disk 0

Disk 0 is now the selected disk.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Recovery           499 MB   129 MB
  Partition 3    System              99 MB   628 MB
  Partition 4    Primary            464 GB   727 MB
  Partition 5    Recovery           555 MB   465 GB

DISKPART> list vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     C   Win 10 Pro   NTFS   Partition    464 GB  Healthy    Boot
  Volume 1         Recovery     NTFS   Partition    499 MB  Healthy    Hidden
  Volume 2                      FAT32  Partition     99 MB  Healthy    System
  Volume 3                      NTFS   Partition    555 MB  Healthy    Hidden

The volume-partition mappings look like this.

Vol 0 - Part 4

Vol 1 - Part 2

Vol 2 - Part 3

Vol 3 - Part 5

I'm not sure how I arrived at this configuration. But it looks damaged. Where is partition 1? What volume does it map to? None? But I can select it with success in DiskPart.

DISKPART> sel part 1

Partition 1 is now the selected partition.

DISKPART> list vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     C   Win 10 Pro   NTFS   Partition    464 GB  Healthy    Boot
  Volume 1         Recovery     NTFS   Partition    499 MB  Healthy    Hidden
  Volume 2                      FAT32  Partition     99 MB  Healthy    System
  Volume 3                      NTFS   Partition    555 MB  Healthy    Hidden

I know from before that I have what appears to be one too many recovery partitions – one 499 MB name-less and label-less and one 555 MB named and labeled – which is why, I speculate, Paul has asked me to verify that my Windows RE is still working.

This is a symptom I have seen before when running Windows 10, and if I recall correctly, I first saw it on Windows 8 systems. I have not been paying much attention to it, but it seemed to be caused by major Windows updates or upgrades.

According to Microsoft documentation, the Recovery partition needs to go to the end of the disk. As can be seen in the image below. At least that's the default configuration.

The default partition layout for UEFI-based PCs is: a system partition, an MSR, a Windows partition, and a recovery tools partition.

(Microsoft Docs link was here. I'm not allowed to post it by the forum rules it seems. You can do a web search for "gpt disk layout windows" to find it.)

This is the layout I have been using for the last couple of years, because I just let the Windows setup do it for me; I don't bother doing it manually anymore. Why bother? First of all, it's complicated with this many specialty partitions on modern PCs. But it seems like Microsoft itself is not practicing what it teaches its clients/users. I don't know how they manage to screw it up; as I recall, right after a fresh install the Recovery partition is at the end of the disk as advertised.

If I map this out to my partitions it looks like this. (I will prefix the items from the image with a number.)

0 System - Part 3

1 MSR - Part 1

2 Windows - Part 4

3 Recovery - Part 2 and Part 5

So it seems MSR is the missing partition that I don't see in Disk Management (128 MB).

If I map this out to my volumes rather, I get this.

0 System - Vol 2

1 MSR - NA/None

2 Windows - Vol 0

3 Recovery - Vol 1 and Vol 3

However I look at it, it looks damaged.

This raises two main points/questions.

  • Where is MSR volume?
  • Why is there more than one Recovery volume/partition?

The terms volume and partition are often used interchangeably in the Windows community. Perhaps an MSR is just a partition that does not require a volume? Why there is more than one Recovery partition, I have no idea. (One is not enough to recover Windows?)
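If that guess is right, the rule would roughly be: a partition only shows up as a volume if it carries a filesystem. A minimal Python sketch, using hypothetical records modeled on my disk 0 layout (not real DiskPart output):

```python
# Hypothetical partition records modeled on my disk 0 layout:
# (partition number, type, filesystem); None = unformatted (the MSR).
partitions = [
    (1, "Reserved", None),     # MSR: no filesystem, hence no volume
    (2, "Recovery", "NTFS"),
    (3, "System",   "FAT32"),
    (4, "Primary",  "NTFS"),
    (5, "Recovery", "NTFS"),
]

# Only formatted partitions would appear in the volume list.
volumes = [p for p in partitions if p[2] is not None]
print(len(partitions), len(volumes))  # 5 partitions, but only 4 volumes
```

This would explain the counts I see (5 partitions, 4 volumes), though not the order in which DiskPart numbers the volumes.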

It's just Windows...

Samir,

Welcome to the complicated world of partition and volume counting. It's more complicated than you thought because of the way the MSR (Microsoft Reserved) partition is counted. Disk Management ignores the MSR partition, and it isn't counted as a volume because it is an unformatted partition.

One tool I have found very helpful is MiniTool Partition Wizard. Say you want to verify the HarddiskVolume16 that was shown in one of your screenshots. You can look at the disk layout with MiniTool, start counting at one, and count your way across (including all MSR partitions) to the EFI System partition of the Windows disk. You should get 16.

It seems every Microsoft tool has its own method of counting. They have certainly made it very confusing.

As for your Disk Management screenshot with only your Windows disk, it is normal to have two Recovery partitions. This is a result of poor planning by Microsoft. The original Recovery partition at the beginning of the disk was not big enough to hold WinRE when the system was upgraded to a new version of Windows. Microsoft then created a new, larger Recovery partition at the end of the disk and left the original (now useless) Recovery partition in place. What is confusing is why Disk Management doesn't list the first Recovery partition. If I were you, I wouldn't try to do anything about it.

Thanks for the tip, Paul! This seems like a competent partition manager. I haven't used it before, and I really like it! It's easy to navigate, with well-laid-out options and menus and a good level of detail. I'm not a big fan of the Ribbon-style UI at the top, but that's my only objection. I have used Acronis Disk Director in the past, and I have one or two licenses. This is on par with that, if not better.

I can clearly see the 128 MB MSR partition using this tool.

I installed the latest Acronis Disk Director 12.5 in trial mode to do a quick comparison.

The MSR partition is invisible in Disk Director. But the start address of the next partition (129 MB) suggests that there is something else in front of that.

Also, if I right-click on either one of the Recovery partitions, I can peek inside to view its contents. It becomes immediately apparent that the 499 MB partition is the old one and that the 555 MB partition is the new Recovery partition. Just look at the creation dates of their contents. (If this is anything to go by, I may have installed Windows 10 on this PC in June of 2018, three years ago.)

The explanation of the double Recovery partitions seems plausible. So 499 MB was not enough, and instead of pushing the user data in my Primary partition over to the right to make more room, risking damage to my files, they decided to shrink my Primary partition and append the new Recovery partition at the end of it? So the new Recovery partition is 555 MB.

Do you think it's safe to delete one of these Recovery partitions so I can reclaim the space? It's not a lot, but every Megabyte counts. If it's possible to move the currently used Recovery partition back to the beginning of the disk and then expand it into that empty space, that would be perfect.

It's interesting to see how Microsoft conserves space... by doubling it! They set out to conserve space by reserving only 499 MB on a 500 GB disk. Then they decided that more space was required after all, so they created a 555 MB partition to replace the old one. But they don't bother to clean up after themselves?...

If not to reclaim that space for the Primary partition, they should have deleted it just to put our nerdy minds at ease, and possibly to save us future trouble if some system or tool comes along, starts scanning our disks for problems, and decides we have a "corrupted partition" or a "corrupted file" or something like that.

Anyway...

What I think is even more useful about this tool is that it can also explore the contents of the EFI partition without having to mount it first in WinPE. And it's free! So thanks again for the tip. I will keep this.

 

Not the best idea to try to delete the first Recovery partition to reclaim the space. You need to understand what you're doing. I did it once just to see if I could do it. It worked, but I decided to leave the rest of my systems with two Recovery partitions. Deleting the first Recovery partition will change the numbers of the EFI and Windows partitions. This will cause Windows to fail to boot until the BCD is corrected with the new partition numbers.

Microsoft took the easy way out when adding a larger WinRE, and I think that was a good decision. Would you really trust them to move all of your data on the disk? I wouldn't. The good news is that starting with Windows version 2004 and later, the Recovery partition is placed at the end of the disk on a clean install.

Samir, could you run the System Configuration component of the MVP Assistant? I would be curious to see how it shows the disks and partitions in your system.

Not to worry Paul, I will do it once just to see if I can do it. Just kidding! 😉 I'm not touching this, not at the moment. I plan on doing a clean install within the next month and I might as well give it a go and break a few things. A theory is always perfect and cuddly, but it means nothing without practice.

I didn't even want to mess around with BCD and UEFI entries. But I'm glad I did, and I'm grateful for everyone's help.

Bob, I know it took me some convincing that those boot entries were not coming from the boot sectors of my disks. Well, not all of them anyway. I also learned about "NVRAM". Thanks Bob! (Also, I later learned about how the CIA uses it for hacking PCs.) I have read the UEFI primer at HowToGeek that you linked to, but not the long post on UEFI by Adam Williamson of Red Hat. The very first thing I did this morning was to run a backup of the whole system disk so that I have a good fallback version.

Paul, how do you count volumes in Partition Wizard? I looked at the properties but could not find what partition number they were; it seems to only list disk numbers, starting from 1 (rather than 0 like DiskPart).

Bruno, I was able to start the program, but after clicking on System Configuration on the left, it crashed. It gave me some error messages and a detailed error log with references to DLL files, MSIL assemblies and whatnot. I did not save it, thinking it would crash again. But the second time, it did not crash. Go figure!

I have a little more than one disk connected now. But you can see from the screenshot below that it displays the MSR partition on the main system disk ("Samsung SSD 970").

Bruno, I restarted MVP Assistant once and it barked at me again, this time during startup.

Let me know if you want me to send you the log in a PM.

But the previous screenshot I think answers what you wanted to see (if MSR is displayed on the Samsung SSD).

Samir, are you selecting Continue or Quit when it hits an exception?

Yes, I would like to know what the exception was, but perhaps it manages to move past it well. The Assistant restarts in the same place it last left off, so it probably restarted in System Configuration, causing the exception again. Does Continue allow it to stay running?

I was also curious as to whether the various partitions you are seeing conform to what you also see in Disk Management or Disk Director, and whether those tools are able to provide you any additional information. The top portion should identify what Windows sees as the proper recovery partition.

Bruno, I used Quit first time to exit. This time I used Continue and it continued to run.

System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
Parameter name: index
   at System.Windows.Forms.TreeNodeCollection.get_Item(Int32 index)
   at MVPAssistant.SystemConfigForm.FillDriveTrees()
   at MVPAssistant.SystemConfigForm.InitializeControls()
   at MVPAssistant.SystemConfigComponent.Showing()
   at System.Windows.Forms.RadioButton.OnCheckedChanged(EventArgs e)
   at System.Windows.Forms.RadioButton.set_Checked(Boolean value)
   at MVPAssistant.FormMVPAssistant.FormMVPAssistant_Load(Object sender, EventArgs e)
   at System.EventHandler.Invoke(Object sender, EventArgs e)
   at System.Windows.Forms.Form.OnLoad(EventArgs e)
   at System.Windows.Forms.Form.OnCreateControl()
   at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
   at System.Windows.Forms.Control.CreateControl()
   at System.Windows.Forms.Control.WmShowWindow(Message& m)
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.Form.WmShowWindow(Message& m)
   at MVPAssistant.FormMVPAssistant.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

I will send you a PM of the complete log.

Yes, it remembers what section was in view last time and it resumes where it left off (System Configuration in this case). But I have since been able to both switch view to different sections and restart the program at the same view without crashing it. So it's not crashing every time.

What is "top portion"? In the screenshot above, "Recovery" and "(Disk 3 Partition 5)" are of type "GPT Microsoft Recovery". The "(Disk 3 Partition 1)" at the very top of the "Samsung SSD 970" section is of type "GPT Microsoft Reserved", or MSR. This is the same as I have seen in MiniTool Partition Wizard (first MSR, then Recovery, then EFI System, then Boot/Windows, then Recovery again). The last partition has no label.

 

Samir,

Bob, do you mean you have one of those USB disk enclosures with room for more than one disk? And you have them pooled in a Storage Spaces device in Windows?

Yes, a 5-drive USB 3.0 enclosure, Yottamaster branded, Orico manufactured.  Got a great deal on it.  Windows Storage Spaces configured using PS.  The configuration is an NVMe cache disk for a Tiered Storage pool, a Performance Tier of 2 Samsung 860 EVO 500GB SSDs mirrored, and a Capacity Tier of 5 Seagate FE8 Exos 8TB HDDs mirrored with Parity running the ReFS file system.

 

Hm... I think I am beginning to understand the issue you're describing. Correct me if I'm wrong. But it sounds like the issue is caused by this "USB" entry as seen in my previous post?

Yes that is correct.  

You're not disabling the USB function itself within UEFI firmware are you? Just checking, because I had this impression at first, but I think I got it now.

No I am not. Yes, I think you have got it now.  The USB entry causes UEFI to look at any attached storage device for a bootable disk.  When it doesn't find one it reports "No boot device found".

Since this "USB" entry seems to be "popping in and popping out on the fly" (as I described above), this can cause the boot disk to get the wrong ID/index during enumeration, which results in UEFI not finding the boot disk. So by deleting the "USB" entry, you effectively save the day by forcing the firmware to give the correct ID/index to the boot disk?

Is that about right?

Yes, that's pretty close.  The situation is this: the board I have is capable of supporting 10 disks, those being a combination of SATA and M.2 NVMe.  The board has 3 M.2 slots but I am not using any of them.  The board also has 10 SATA ports, 3 of which are disabled if any of the 3 M.2 slots are in use.  The board also supports booting from PCIe expansion slots, of which there are 4 total.  This board also includes a PEX PCIe chip that controls 2 of the 4 PCIe slots.  I managed to figure out which 2 slots the PEX chip supports, so I have used M.2 adapters in those 2 slots for 2 M.2 NVMe disks.  One of those is my boot device and the other is the cache for the Storage Spaces disk pool.

So I have all 10 SATA ports loaded (8 SSDs and 2 HDDs) and the 2 PCIe slots with the M.2 disks.  Since the PEX chip controls the 2 PCIe slots for the M.2s and boot has been enabled on those slots, none of the SATA ports are disabled.  The board numbers SATA ports beginning at 0, meaning that with all SATA ports filled we use disk numbers 0 - 9, making my boot device disk 10.  Since disk number 10 is beyond what UEFI normally looks at, there are occasions when UEFI does not find disk 10.  When that happens it then looks for USB devices, and not finding any that are bootable it coughs up the error.  Removing the USB entry from NVRAM fixes the issue.

Below is my bcdedit enum firmware output:

C:\Windows\system32>bcdedit /enum firmware

Firmware Boot Manager
---------------------
identifier              {fwbootmgr}
displayorder            {bootmgr}
                        {9ba0f089-d24e-11eb-bb6b-806e6f6e6963}
timeout                 1

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume2
path                    \EFI\MICROSOFT\BOOT\BOOTMGFW.EFI
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
default                 {current}
resumeobject            {d750b5a3-6a43-11eb-9587-9b331b674b67}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 30

Firmware Application (101fffff)
-------------------------------
identifier              {9ba0f089-d24e-11eb-bb6b-806e6f6e6963}
description             Hard Drive

C:\Windows\system32>

Notice that the boot device is HarddiskVolume2.  That simply indicates that it is Volume 2 on the boot disk that contains the path with the boot information.
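If you ever wanted to pull those fields out programmatically, a quick Python sketch could do it. This is just naive parsing of the text output above (there is no official machine-readable format from bcdedit, so treat this as fragile and illustrative only):

```python
# Naive parser for `bcdedit /enum firmware` text output (illustrative only).
# An entry header is a name line followed by a line of dashes; everything
# after that is "key<spaces>value" until the next blank line or header.
sample = """\
Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\\Device\\HarddiskVolume2
path                    \\EFI\\MICROSOFT\\BOOT\\BOOTMGFW.EFI
description             Windows Boot Manager
"""

def parse_bcdedit(text):
    entries = []
    current = None
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if line and set(line) == {"-"}:      # dashed underline = new entry
            current = {"name": lines[i - 1]}
            entries.append(current)
        elif current is not None and line.strip():
            key, _, value = line.partition(" ")
            current[key] = value.strip()
    return entries

entries = parse_bcdedit(sample)
print(entries[0]["identifier"])   # {bootmgr}
print(entries[0]["device"])       # partition=\Device\HarddiskVolume2
```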

 

You are welcome for my assistance in helping you understand UEFI.

Samir,

MiniTool displays the disks in the same order as Windows uses. To count partitions I look at the disk map at the bottom and just use my finger to count starting at the top left as number one and moving right to the end of the top line. Then I continue counting with the next lines until I reach the partition I'm interested in. The MSR partitions are counted along the way. 

I was thinking of an easy way for you to eliminate the extra Recovery partition and reclaim the space. You could delete all partitions from the disk and do a clean Windows install using Windows version 2004 or later. This will give you the partition layout you want with the Recovery partition at the end of the disk. Then you could restore ONLY the C: drive Windows partition on top of the new Windows partition. It should work.

Samir wrote:
 

What is "top portion"? In the screenshot above, "Recovery" and "(Disk 3 Partition 5)" are of type "GPT Microsoft Recovery". The "(Disk 3 Partition 1)" at the very top of the "Samsung SSD 970" section is of type "GPT Microsoft Reserved", or MSR. This is the same as I have seen in MiniTool Partition Wizard (first MSR, then Recovery, then EFI System, then Boot/Windows, then Recovery again). The last partition has no label.

 

By "top portion" I meant the Computer System tab at the top half of the screen. What shows in your screenshot is scrolled down to the bottom, but the part scrolled out of view shows the Boot partition and Recovery partition.

Bob, you nearly lost me there! I had to read that twice to get my head around it.


I'm still digesting...

Yottamaster 5 drive USB 3.0
===========================
Capacity Tier
    Seagate FE8 Exos 8TB HDD (RAID-5, ReFS)
    Seagate FE8 Exos 8TB HDD (RAID-5, ReFS)
    Seagate FE8 Exos 8TB HDD (RAID-5, ReFS)
    Seagate FE8 Exos 8TB HDD (RAID-5, ReFS)
    Seagate FE8 Exos 8TB HDD (RAID-5, ReFS)

PEX PCIe chip (boot enabled)
===========================
Performance Tier
    Disk 10 Samsung 860 EVO 500GB SSD (RAID-1, Windows boot partition)
    Disk 11 Samsung 860 EVO 500GB SSD (RAID-1, Storage Spaces (Capacity Tier) cache)

Intel/AMD chip SATA ports
===========================
    Disk 0 SSD
    Disk 1 SSD
    Disk 2 SSD
    Disk 3 SSD
    Disk 4 SSD
    Disk 5 SSD
    Disk 6 SSD
    Disk 7 SSD
    Disk 8 HDD
    Disk 9 HDD

Impressive setup!

You have 2 empty PCIe slots?

You have 3 empty M.2 slots?

What board is this?

Do you have 8 TB or 40 TB in that Yottamaster enclosure? I assumed 5 times 8 TB.

"Since disk number 10 is beyond what UEFI normally looks at, there are occasions when UEFI does not find disk 10."

Tell it to boot from PCIe first?

So it prioritizes SATA ports first, PCIe ports second, USB ports third. But sometimes it changes this to SATA ports first, USB ports second, PCIe ports third?

Can you change priority between the 4 PCIe slots?

"When that happens it then looks for USB devices and not finding any that are bootable it coughs up the error.  Removing the USB entry from NVRAM fixes the issue."

So after looking at SATA devices, it then looks at USB devices next? Does it ever look at PCIe devices? Perhaps it does, but it looks at those supported by the system chip rather than the supplementary PEX chip?

How did you get rid of the USB entry? I'm curious because on my PC it keeps popping back in, and it seems to do that whenever I have a USB storage device of some sort (HDD/Flash) connected during the boot process. It sounds like you have found a permanent solution to this? Also, does this mean you cannot boot from a USB device when you need to in order to reinstall Windows for example?

"Notice that the boot device is Harddisk Volume2.  That simply indicates that Volume 2 on the boot disk where the path is that contains the boot information."

That depends... are you counting from 0 or from 1? Just kidding! 😄

Is that the PEX chip attached Samsung 860 EVO?

Assuming we start at 1 and volume 1 is MSR, then yes, volume 2 should be EFI system partition? This is unlike the "HarddiskVolume16" I saw in my BCDEdit output.

I see you have some intimate knowledge of how the UEFI system works. Now I know who to turn to in case of boot entry troubles. 😉

Some timeout you have there! Did you manually set it to 30 seconds? So what happens at boot, you get a single boot entry for booting Windows and it's counting down from 30 before it starts up?

 

Bruno, I have a new symptom to report. When I start MVP Assistant and it throws that exception and I click Continue, it gets stuck displaying the splash screen, but I can see the program GUI running underneath and I can move it around. The splash screen is obstructing my view.

As for the "top portion", I only see a "Boot" line, there is no "Recovery" mentioned anywhere in that top portion.

It indicates "Disk 3  Partition 2" as the boot, which is correct. Given a count starting from 1 of course (1 MSR, 2 boot/EFI, Recovery partition at the end). I currently have more than one disk connected, so it says disk 3. This was previously disk 0.

Operating System:    Microsoft Windows 10 Pro,  Release 2009,  Version 10.0.21390.2025,  64-bit
    Build ID:    xxxxx.x.xxxxxxxx.xx_xxxxxxx.xxxxxx-xxxx
    Installed:    6/15/2021 1:32:19 PM
    Last Boot:    6/24/2021 8:06:20 PM
    Boot:    Disk 3  Partition 2
    Last Drive:    Z:
    Language:    Alien
    Computer Name:    Area-51
    User Name:    Neo
    Motherboard:    ASUSTeK COMPUTER INC.  ROG STRIX Z370-F GAMING,  Rev X.0x,  SN: 0000000000000
    Processor:    Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz  (64-bit)
    Memory:    15.87 GB Total  /   8.20 GB Available
    BIOS:    American Megatrends Inc.,  0614,  3/22/2018
    BIOS Mode:    UEFI  (Secure Boot Disabled)
    Networks:    Ethernet:  Intel(R) Ethernet Connection (2) I219-V   1000 Mb/s   Up
        Ethernet 2:  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
        Ethernet 10:  VirtualBox Host-Only Ethernet Adapter   1000 Mb/s   Up
        VMware Network Adapter VMnet1:  VMware Virtual Ethernet Adapter for VMnet1   100 Mb/s   Up
        VMware Network Adapter VMnet8:  VMware Virtual Ethernet Adapter for VMnet8   100 Mb/s   Up
    Running as Admin:    No

 

Assuming we start at 1 and volume 1 is MSR, then yes, volume 2 should be EFI system partition? This is unlike the "HarddiskVolume16" I saw in my BCDEdit output.

I have the answer for why "HarddiskVolume16" was indicated as EFI system partition. This is because I had all the disks connected at the time. I have practiced some counting now with MiniTool Partition Wizard and it checks out.

It's just that the Microsoft tools don't list the MSR partition as a volume, and while DiskPart can list both partitions and volumes, it starts counting volumes from 0 (rather than 1 like other popular third-party tools).

So with DiskPart, you have to select one disk at a time, list its partitions but count each of them as a volume, then select the next disk and do the same, continuing the sequence. That's how you arrive at "HarddiskVolume16" being the EFI system partition.

Also, in the case of MBR disks with "Extended" partitions, you only count the extended partition as one volume; you ignore any number of "Logical" partitions that sit on top of that extended partition.
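To double-check these rules, here is a small Python sketch of my reading of the procedure, applied to the DiskPart listings below: start numbering at 1, count Reserved/MSR partitions, count an Extended partition once, and skip its Logical children. (This is just my counting method as a simulation, not anything DiskPart itself guarantees.)

```python
# Partition type lists transcribed from the DiskPart output in this post,
# one list per disk, in disk order 0..4.
disks = [
    ["Reserved"] + ["Primary"] * 8,                             # Disk 0
    ["Extended", "Logical"],                                    # Disk 1 (MBR)
    ["Reserved", "Primary", "Primary"],                         # Disk 2
    ["Reserved", "Recovery", "System", "Primary", "Recovery"],  # Disk 3
    ["Primary"],                                                # Disk 4
]

volume = 0
numbering = []  # (volume number, disk index, partition type)
for disk_index, partitions in enumerate(disks):
    for ptype in partitions:
        if ptype == "Logical":
            continue  # Logical partitions sit inside the Extended one,
                      # which already received a number
        volume += 1
        numbering.append((volume, disk_index, ptype))

system = next(n for n, d, t in numbering if t == "System")
print(f"EFI system partition is HarddiskVolume{system}")  # HarddiskVolume16
```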

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online         3726 GB   891 GB        *
  Disk 1    Online          931 GB  1024 KB
  Disk 2    Online          931 GB  1024 KB        *
  Disk 3    Online          465 GB  2048 KB        *
  Disk 4    Online         3725 GB      0 B        *

DISKPART> sel disk 0

Disk 0 is now the selected disk.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Primary             70 GB   129 MB
  Partition 3    Primary             90 GB    91 GB
  Partition 4    Primary            689 GB   340 GB
  Partition 5    Primary           1494 GB  1479 GB
  Partition 6    Primary            100 GB  2977 GB
  Partition 7    Primary            350 GB  3087 GB
  Partition 8    Primary             20 GB  3437 GB
  Partition 9    Primary             20 GB  3457 GB

DISKPART> sel disk 1

Disk 1 is now the selected disk.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 0    Extended           931 GB  1024 KB
  Partition 1    Logical            931 GB  2048 KB

DISKPART> sel disk 2

Disk 2 is now the selected disk.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved            15 MB    17 KB
  Partition 2    Primary            100 GB    16 MB
  Partition 3    Primary            831 GB   100 GB

DISKPART> sel disk 3

Disk 3 is now the selected disk.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Recovery           499 MB   129 MB
  Partition 3    System              99 MB   628 MB
  Partition 4    Primary            464 GB   727 MB
  Partition 5    Recovery           555 MB   465 GB

DISKPART> sel disk 4

Disk 4 is now the selected disk.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Primary           3725 GB  1024 KB

After some cleaning up...

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Primary             70 GB   129 MB
  Partition 3    Primary             90 GB    91 GB
  Partition 4    Primary            689 GB   340 GB
  Partition 5    Primary           1494 GB  1479 GB
  Partition 6    Primary            100 GB  2977 GB
  Partition 7    Primary            350 GB  3087 GB
  Partition 8    Primary             20 GB  3437 GB
  Partition 9    Primary             20 GB  3457 GB

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 0    Extended           931 GB  1024 KB
  Partition 1    Logical            931 GB  2048 KB

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved            15 MB    17 KB
  Partition 2    Primary            100 GB    16 MB
  Partition 3    Primary            831 GB   100 GB

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
  Partition 2    Recovery           499 MB   129 MB
  Partition 3    System              99 MB   628 MB
  Partition 4    Primary            464 GB   727 MB
  Partition 5    Recovery           555 MB   465 GB

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Primary           3725 GB  1024 KB

And then some polish...

 1    Partition 1    Reserved           128 MB    17 KB
 2    Partition 2    Primary             70 GB   129 MB
 3    Partition 3    Primary             90 GB    91 GB
 4    Partition 4    Primary            689 GB   340 GB
 5    Partition 5    Primary           1494 GB  1479 GB
 6    Partition 6    Primary            100 GB  2977 GB
 7    Partition 7    Primary            350 GB  3087 GB
 8    Partition 8    Primary             20 GB  3437 GB
 9    Partition 9    Primary             20 GB  3457 GB
10    Partition 0    Extended           931 GB  1024 KB
      Partition 1    Logical            931 GB  2048 KB
11    Partition 1    Reserved            15 MB    17 KB
12    Partition 2    Primary            100 GB    16 MB
13    Partition 3    Primary            831 GB   100 GB
14    Partition 1    Reserved           128 MB    17 KB
15    Partition 2    Recovery           499 MB   129 MB
16    Partition 3    System              99 MB   628 MB
17    Partition 4    Primary            464 GB   727 MB
18    Partition 5    Recovery           555 MB   465 GB
19    Partition 1    Primary           3725 GB  1024 KB

Tada! Number 16 is the system "volume". Same result, just different level of difficulty arriving at it.

It would have been easier with only one disk connected!

Note to self! Disconnect or disable all but the most essential disks from the PC before you start counting volumes, and also do the same before you start restoring a True Image backup or begin to reinstall Windows.

 

Samir,

Note to self! Disconnect or disable all but the most essential disks from the PC before you start counting volumes, and also do the same before you start restoring a True Image backup or begin to reinstall Windows.

Exactly!  If more users would practice that simple rule there would be a lot less issues with restore problems posted here on the forum.

 

PEX PCIe chip (boot enabled)
===========================
Performance Tier
    Disk 10 Samsung 860 EVO 500GB SSD (RAID-1, Windows boot partition)
    Disk 11 Samsung 860 EVO 500GB SSD (RAID-1, Storage Spaces (Capacity Tier) cache)

Not quite right.  The Windows boot partition resides on a Samsung M.2 NVMe disk installed on a PCIe adapter in one of the PEX supported PCIe slots.

So I will attempt to diagram the Storage Spaces for you below:

Performance or Fast tier:

  • Samsung 860 EVO 500GB (raid 1 disk 1)
  • Samsung 860 EVO 500GB (raid 1 disk 2)

Capacity or Slow tier:

  • Seagate FE8 Exos 8TB HDD (ReFS with Parity)
  • Seagate FE8 Exos 8TB HDD (ReFS with Parity)
  • Seagate FE8 Exos 8TB HDD (ReFS with Parity)
  • Seagate FE8 Exos 8TB HDD (ReFS with Parity)
  • Seagate FE8 Exos 8TB HDD (ReFS with Parity)

Fast Cache:

  • 16.4GB RAM (Level 1)
  • 232 GB NVME (Level 2) installed on second PCIe PEX slot

What board is this?

Do you have 8 TB or 40 TB in that Yottamaster enclosure? I assumed 5 times 8 TB.

Board is ASRock SuperCarrier running a Z270 chipset.

Total storage capacity of Storage Spaces is 39.4TB due to tiered parity configuration.

Tell it to boot from PCIe first?  Not possible.

So it prioritizes SATA ports first, PCIe ports second, USB ports third. But sometimes it changes this to SATA ports first, USB ports second, PCIe ports third?   SATA ports are always first regardless of board.  PCIe lanes are mapped to SATA ports under normal circumstances, which is not the case here.  You then have other HDD boot followed by USB boot.

Can you change priority between the 4 PCIe slots?  No, not possible

So after looking at SATA devices, it then looks at USB devices next? Does it ever look at PCIe devices? Perhaps it does, but it looks at those supported by the system chip rather than the supplementary PEX chip?

Because of SATA port mapping on the Intel controller for M.2 boot, the PEX boot is on a secondary controller, and unfortunately ASRock did not provide an adequate UEFI firmware implementation for my configuration.

 How did you get rid of the USB entry? I'm curious because on my PC it keeps popping back in, and it seems to do that whenever I have a USB storage device of some sort (HDD/Flash) connected during the boot process. It sounds like you have found a permanent solution to this? Also, does this mean you cannot boot from a USB device when you need to in order to reinstall Windows for example?

Use bcdedit /delete {identifier}.  This is not persistent however.  I do it every now and again.  Most of the time it stays gone but it does come back on its own and I expect that.  I have found that if I Sign Out of Windows and then use Shift+Shutdown I avoid the boot issue.  So I practice that but I do remove the USB entry from the UEFI after having used a flash drive or other USB device as that always triggers the entry to return.

 That depends... are you counting from 0 or from 1? Just kidding! 😄 Lol

Is that the PEX chip attached Samsung 860 EVO?  Answered previously

Assuming we start at 1 and volume 1 is MSR, then yes, volume 2 should be EFI system partition? This is unlike the "HarddiskVolume16" I saw in my BCDEdit output.  The boot device is Disk #10; the boot file location is volume 2 on the disk, behind the MSR.

I see you have some intimate knowledge of how the UEFI system works. Now I know who to turn to in case of boot entry troubles. 😉  Thanks for the credit 😉

Some timeout you have there! Did you manually set it to 30 seconds? So what happens at boot, you get a single boot entry for booting Windows and it's counting down from 30 before it starts up?  I am not clear on the timeout reference.  Boot is no different than a normal single boot Win install.  After POST I think boot is around 10 seconds, not really sure as I never timed it.

Bob,

Congratulations! You lost me! 😄

Not quite right. The Windows boot partition resides on a Samsung M.2 NVMe disk installed on a PCIe adapter in one of the PEX supported PCIe slots.

So you don't use any of the M.2 slots on the motherboard?

Performance or Fast tier
Capacity or Slow tier
Fast Cache

Is each of these a Storage Space?

I'm a bit confused about the term "tier". I hardly ever use it or see it used in the English language. If I read this correctly, "Performance or Fast tier" is just a way for you to categorize the different disk types based on their performance?

16.4GB RAM (Level 1)
232 GB NVME (Level 2) installed on second PCIe PEX slot

You only use 1 of the 2 PCIe PEX slots?

Is that RAM on PCIe slot?

Total storage capacity of Storage Spaces is 39.4TB due to tiered parity configuration.

So the external 5 x 8 TB Seagate disks are used for Storage Spaces?

But you have also filled the 10 internal SATA ports if I'm not mistaken?

SATA ports are always first regardless of board.  PCIe lanes are mapped to SATA ports under normal circumstances, which is not the case here.  You then have other HDD boot followed by USB boot.

I don't quite understand how this mapping works. But I assume that there are physical SATA ports and then the PCIe slots can be seen as virtual SATA ports so to speak, followed by USB.

1. SATA
2. SATA <-> PCIe
3. USB

I'm not quite sure what "other HDD" refers to? Do you by any chance mean to say PATA? 😄 I'm having a hard time figuring out what other kinds of disks there are besides SATA these days. They are the most common. Then there are exotic things like M.2 NVMe, and there is also mSATA.

Because of SATA port mapping on the Intel controller for M.2 boot, the PEX boot is on a secondary controller, and unfortunately ASRock did not provide an adequate UEFI firmware implementation for my configuration.

Why not use one of the provided M.2 slots on the motherboard? Because you then lose one of the SATA ports?

Is this problem specific to this motherboard, or do all Z270 boards suffer the same?

I have a Z370 myself. I think it has only two M.2 slots, but I only use one and I use it for boot, so I'm good.

Use bcdedit /delete {identifier}.  This is not persistent however.  I do it every now and again.  Most of the time it stays gone but it does come back on its own and I expect that.  I have found that if I Sign Out of Windows and then use Shift+Shutdown I avoid the boot issue.  So I practice that but I do remove the USB entry from the UEFI after having used a flash drive or other USB device as that always triggers the entry to return.

Isn't that key combination used to bypass Fast Startup in Windows? Or is it for accessing Advanced options menu on UEFI based systems?

Interesting problem you got there. The workaround is even more interesting! 😉

I am not clear on the timeout reference.  Boot is no different than a normal single boot Win install.  After POST I think boot is around 10 seconds, not really sure as I never timed it.

I was referring to one of the parameters in your BCDEdit output above, where it says "timeout 30".

I know at least in my UEFI setup there are options for setting a delay, for how long you want to look at POST for example. I have bumped that up to 5 seconds on one of my PCs, because otherwise I miss the chance to enter setup, I always miss it.

I don't care much for bootup times to be honest, I never complain about that. But 30 seconds on such a system?... come on! 😃

Samir,

Congratulations! You lost me! 😄  Lol, not to worry, this is really advanced stuff.  Trust me, learning to set this up in PowerShell took me a ton of time.  Further, the setup is akin to what you would find in a server-farm storage array, where what is known as clustered storage is the norm.  The whole thing really shouldn't work in Windows 10, but it can and does if you devote the time to learn how.

So you don't use any of the M.2 slots on the motherboard?  Correct.

Performance or Fast tier
Capacity or Slow tier
Fast Cache

Is each of these a Storage Space?  Yes.  All three assets are combined into a single storage pool.

I'm a bit confused about the term "tier". I hardly ever use it or see it used in the English language. If I read this correctly, "Performance or Fast tier" is just a way for you to categorize the different disk types based on their performance?  So the configuration we have here is called Tiered Storage.  The idea is to use large, slow spinning rust (HDDs) for the Capacity (slow) tier, smaller but fast flash-based SSDs for the Performance tier to improve reads/writes, and a large, very fast cache to feed these tiers.  It works like this: you create a Storage Spaces pool, and then within that pool you create a Performance Tier using SSDs and a Capacity Tier using HDDs.  To that you add a large cache to write/read data from the Performance Tier.  The Performance Tier will store often-used data for fast reading, but data that is not frequently accessed is moved to the Capacity Tier for storage purposes.  The link below has a good basic explanation of how this works.  My version is modified from what you'll see in the link:

http://joe.blog.freemansoft.com/2020/04/accelerate-storage-spaces-with-…
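As a toy sketch only (this is not how Storage Spaces actually moves data internally; real tiering works on heat maps and scheduled optimization), the hot/cold idea can be illustrated with a simple least-recently-used model in Python:

```python
from collections import OrderedDict

class TieredStore:
    """Toy model of tiered storage: a small fast tier holds the most
    recently used items; everything else is demoted to a capacity tier.
    Purely illustrative, not a model of the real Storage Spaces engine."""

    def __init__(self, fast_slots):
        self.fast_slots = fast_slots
        self.fast = OrderedDict()   # hot data (think SSD mirror)
        self.capacity = {}          # cold data (think parity HDDs)

    def write(self, key, value):
        self._touch(key, value)

    def read(self, key):
        value = self.fast.pop(key, None)
        if value is None:
            value = self.capacity[key]   # slow path: fetch from capacity tier
        self._touch(key, value)          # accessing data promotes it
        return value

    def _touch(self, key, value):
        self.capacity.pop(key, None)
        self.fast[key] = value
        self.fast.move_to_end(key)
        while len(self.fast) > self.fast_slots:
            cold_key, cold_val = self.fast.popitem(last=False)
            self.capacity[cold_key] = cold_val   # demote least recently used

store = TieredStore(fast_slots=2)
for name in ["a", "b", "c"]:
    store.write(name, name.upper())
store.read("a")                # "a" becomes hot again, "b" gets demoted
print(sorted(store.fast))      # ['a', 'c']  -- hottest two items
print(sorted(store.capacity))  # ['b']       -- demoted to capacity tier
```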

16.4GB RAM (Level 1)
232 GB NVME (Level 2) installed on second PCIe PEX slot

You only use 1 of the 2 PCIe PEX slots?  Well, one is used for Windows 10 OS and one for an NVMe cache device.

Is that RAM on a PCIe slot?  No, the system has 32GB of installed RAM, so I am using a software app to configure 16.4GB of that RAM as a level 1 cache and using an entire 250GB NVMe disk as a level 2 cache.  This all looks like a single cache device to the OS and the Windows NTFS filesystem.

 

Total storage capacity of Storage Spaces is 39.4TB due to tiered parity configuration. I should add that the usable storage of the Capacity Tier is 18.4TB.  Even though the RAM, NVMe, and SSDs exist, they are not viewable in Windows and their capacities are not calculated in that figure.

So the external 5 x 8 TB Seagate disks are used for Storage Spaces?  Answered above.

But you have also filled the 10 internal SATA ports if I'm not mistaken? Yes

 

I don't quite understand how this mapping works. But I assume that there are physical SATA ports and then the PCIe slots can be seen as virtual SATA ports so to speak, followed by USB.  More or less, yes.  What happens is that the PCIe lanes are mapped internally on the board to a SATA port, which disables the use of that SATA port.

1. SATA
2. SATA <-> PCIe
3. USB

 

I'm not quite sure what "other HDD" refers to? Do you by any chance mean to say PATA? 😄 I'm having a hard time figuring out what other kinds of disks there are besides SATA these days. They are the most common. Then there are exotic things like M.2 NVMe, and there is also mSATA.  A better word choice would have been secondary controllers, like an add-in RAID card or SATA expansion card, rather than "other".

 

Because of SATA port mapping on the Intel controller for M.2 boot, the PEX boot is on a secondary controller, and unfortunately ASRock did not provide an adequate UEFI firmware implementation for my configuration.

Why not use one of the provided M.2 slots on the motherboard? Because you then lose one of the SATA ports?  Right

Is this problem specific to this motherboard, or do all Z270 boards suffer the same?  MB specific

I have a Z370 myself. I think it has only two M.2 slots, but I only use one and I use it for boot, so I'm good.  That you are.

 

Use bcdedit /delete {identifier}.  This is not persistent however.  I do it every now and again.  Most of the time it stays gone but it does come back on its own and I expect that.  I have found that if I Sign Out of Windows and then use Shift+Shutdown I avoid the boot issue.  So I practice that but I do remove the USB entry from the UEFI after having used a flash drive or other USB device as that always triggers the entry to return.

Isn't that key combination used to bypass Fast Startup in Windows? Or is it for accessing Advanced options menu on UEFI based systems?  Shift+Shutdown is a way of forcing Windows 10 to shut down all the apps and sign out all the users for a full shutdown. It allows the system to bypass hybrid shutdown or hibernation, which would otherwise let the system pick up from where you left off.

Interesting problem you got there. The workaround is even more interesting! 😉  Agreed!  I sign out the user first, which I may be able to skip; however, I know that signing out will force-close all apps, and then the Shift+Shutdown only bypasses hibernation.

 

 I am not clear on the timeout reference.  Boot is no different than a normal single boot Win install.  After POST I think boot is around 10 seconds, not really sure as I never timed it.

I was referring to one of the parameters in your BCDEdit output above, where it says "timeout 30". Can you be more specific?

I know at least in my UEFI setup there are options for setting a delay, for how long you want to look at POST for example. I have bumped that up to 5 seconds on one of my PCs, because otherwise I miss the chance to enter setup, I always miss it. I have not set anything like that and I do not think I even can.

I don't care much for bootup times to be honest, I never complain about that. But 30 seconds on such a system?... come on! 😃  As I said, boot after POST is about 8 to 10 seconds.  Counting POST, I would add another 10 or maybe 15 seconds tops.

Just for kicks, below is a CrystalDiskMark benchmark of the Storage Spaces pool.  Bear in mind that the cache heavily influences the results shown:

[screenshot: CrystalDiskMark results]