Cloning OSes: This needs to be addressed {GUIDE on Windows Cloning}

Yeah, it's free, I guess.
Samsung Magician, or whatever it's called. :-)

Sorry, I didn't make it clear enough when I said you can't transfer it.

Microsoft effectively consider the motherboard the computer. You can change any other component and keep the same licence (and often the same installation).

OEM licences are activated against the motherboard and often check in with Microsoft's servers on activation to make sure they are for that "computer".

I used Clonezilla last week, at my job, to migrate Windows 10 to an SSD. It didn't want to boot, but using a Windows recovery disk and the BOOTREC commands fixed it. Other than that it seemed to work fine.
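For reference, the BOOTREC repair from a recovery disk usually looks something like this, run from the recovery environment's command prompt (the exact commands needed vary per system, so treat this as a sketch rather than a fixed recipe):

```bat
:: From the Windows Recovery Environment command prompt
bootrec /fixmbr
bootrec /fixboot
:: /scanos lists Windows installations the BCD store is missing
bootrec /scanos
bootrec /rebuildbcd
```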

Details?

Might need to add a BOOTREC portion to this guide

Or you could use dd
kek
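Half-joking aside, dd really is the canonical byte-for-byte cloner. A minimal sketch, demonstrated on image files so nothing real gets overwritten (on actual disks the `if=`/`of=` arguments would be whole devices, e.g. `/dev/sda` to `/dev/sdb`, which destroys everything on the target):

```shell
# Create a stand-in "source disk" image
printf 'boot sector + partition table + data' > source.img
# Raw byte-for-byte copy; dd has no filesystem awareness whatsoever
dd if=source.img of=clone.img bs=4M status=none
# Verify the clone is bit-identical
cmp -s source.img clone.img && echo "identical"
```

Note this inherits every limitation discussed below: the clone carries the source's partition table and boot structures verbatim.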

I did not really want to get too into this, but...

Clonezilla creates images byte-per-byte, so it is not a good idea to use those images on any other hard drive, let alone any other computer. There are workarounds, but they are workarounds, which signifies something fundamentally wrong with the workflow.

For anything end-user facing, I would recommend Macrium Reflect instead, since it at least has a GUI. I have not used Acronis, but I assume it is roughly the same thing.

For transferring images to different hardware or different systems enterprise-style (such as when imaging multiple computers at once), I would recommend Microsoft's Windows 10 ADK, or this older one, with the MDT extension.

  • Free
  • File-by-file based
  • Industry Standard
  • Scalable (SCCM is based on it)
    • The ADK Tools are mostly command line (e.g. scriptable)
    • The MDT extension wraps the ADK tools in GUIs.
  • Well supported
  • Perfect for small deployments

I'll add that if you are cloning from HDD to SSD, you should check that Windows 10 has correctly identified the new disk as an SSD. This will ensure it changes the preset optimisation from defrag to trim.

In Windows 10 you can just check from the disk tools when you right-click → Properties. The SSDs should be correctly identified in the second column:

If they are not, just running the winsat disk command should be enough for Windows to now correctly identify the new disk as an SSD.
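If you'd rather check from a console, PowerShell's storage cmdlets report the detected media type (available on Windows 8/10 and later); a quick sketch:

```powershell
# Shows whether Windows detects each disk as HDD or SSD
Get-PhysicalDisk | Select-Object FriendlyName, MediaType
# If a cloned SSD shows up wrong, re-run the disk assessment
winsat disk
```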

Which is why we have the CHKDSK command, which will move the bytes to readable positions accordingly. I don't think there is much wrong here unless you have specific end users; otherwise this is a generic setup.
If you want a GUI, you can also try Parted Magic, a free Linux-based tool.

I agree that enterprise deployment is entirely different, and I will link your guide here. However, for what I'm doing (retail) you cannot use enterprise deployment with the MS Refurbished licenses.
So I'll make a note. But many doing the same deployments still aren't using enterprise COAs, unless you mean something different.

Good reminder! I should add that the system pagefile should be turned off on the SSD as well; I'll add that later tonight.

Here is a guide. I didn't do the /scanos and it still worked for me. You just need a recovery disk or Windows installation disk/usb to boot from.

As far as I know, chkdsk checks FAT32 and NTFS file systems for corruption; it does not extend partitions or file systems. Running it on a byte-per-byte clone will not extend/shrink the partitions when transferring from a 120GB -> 500GB disk or vice versa.

The image will show 120GB utilized with ~380GB unused. There are workarounds to get it to work, including shrinking, and the imaging has to be done a specific way (not including the partition table format), but that's what they are: workarounds.
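For completeness, the post-clone expand step under discussion is typically done with Disk Management's "Extend Volume" or with diskpart. Roughly (the volume number is an example and varies per system):

```bat
rem At the DISKPART> prompt, after booting the cloned 500GB disk:
list volume
rem Select the cloned Windows volume (number varies)
select volume 2
rem Grow it into the adjacent unallocated space
extend
```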

So... the idea is... to enter the key on the CoA present on each individual target system after imaging (slmgr.vbs /ipk xxxx... + slmgr.vbs /ato). And then it activates. And then you're done. o.o...

This is covered in the guide, dude. I never said that chkdsk does any of that. The point of it is to verify data was transferred without corruption. If you run it with /r it will attempt to fix/relocate the unreadable portions. I feel like if I had issues I would've seen them by now.

I'm not sure you can count 'steps' as 'workarounds.' I don't think expanding a partition is a 'workaround.'
If I had had issues in the 3 times I've done this for my last system image, including one transfer from a bad drive, I would have mentioned it. I've never had any issues, and my most recent image is a copy from a 250GB > 500GB > 240GB SSD. All of the space is accounted for and I haven't had any issues in the last year.
So idk where you're going with that, but you will have to elaborate.

Can you explain? The only information I have regarding this deals with enterprise deployment/windows server, something that a normal everyday user isn't going to have. I feel like if you're doing that, this guide is already beyond you and not needed. slmgr.vbs is for verifying, so again, I really don't know where you're going with that.

Gah, I missed that. Thanks for pointing it out.

It is a workaround because byte-per-byte imaging technology was not designed for this and alternative imaging methods exist where this step is not necessary.

Windows did not even support expanding FAT32/NTFS partitions until Vista, and not all file systems can be expanded.

Shrinking also requires unallocated space towards the "end" of the drive. If any files happen to sit there, they must be moved towards the "beginning" with defrag software, which in turn can require running chkdsk before the defrag software will agree to move them. Defragging your hard disk should not be part of imaging it, and neither should chkdsk. Sometimes doing this is necessary as a workaround, but then it is exactly that: a workaround.

In addition, partitions like the "system reserved" one or EFI system partitions are relatively easy to recreate, and, in the case of GPT disks, the partition table is designed to be instance-specific (GPT stands for GUID, i.e. Globally Unique Identifier, Partition Table). This means weird boot-related problems tend to happen if these structures are copied directly: not being able to access the RE tools partition, Windows insisting on running the fixboot tool after imaging, the BCD store's pointer to the GUID of the boot partition getting messed up because that partition no longer exists, the target system not supporting the old boot mode (BIOS/UEFI), etc.

File-by-file technology bypasses all of these issues completely by recreating the intermediary structures dynamically. There is no expectation of transferring the structures because only file data is captured and hence the myriad of issues created by trying to transfer the structures are all avoided.

The method pointed out above works because:

  • NTFS happens to support being expanded
  • Windows happens to have a built in tool for it
  • The Acronis/Clonezilla software happens to be aware of the intricacies of the Windows boot process and silently fixes, in the background, the problems that byte-per-byte technology creates around these issues.

Uh... the software for the file-by-file based technology is free and well supported by Microsoft and third party vendors. The minimum requirement for it is a working Windows computer and perhaps a flash drive to put the WinPE booting software on.

There are also open source projects for automating the building of the boot disks dynamically (ADKTools), closed source GUI ones (MDT), and both command-line (murphy's diskpart script) and graphical (gimagex) tools for the actual imaging and dynamic creation of disk structures. In addition, it can be extended by MDT for true enterprise-style deployment that can be scaled further using SCCM. All of that builds on the basic functionality of the ADK, which is free.
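As a concrete sketch of the file-by-file flow with the in-box DISM tool (the paths, drive letters, and image name here are examples, not fixed values): capture the Windows volume into a WIM, recreate the partitions on the target however you like, apply the image, and regenerate the boot files.

```bat
:: From WinPE: capture the Windows volume file-by-file into a WIM
dism /Capture-Image /ImageFile:D:\capture.wim /CaptureDir:C:\ /Name:"Win10"
:: After recreating partitions on the target (e.g. with diskpart), apply it
dism /Apply-Image /ImageFile:D:\capture.wim /Index:1 /ApplyDir:W:\
:: Recreate the boot files for the applied image
bcdboot W:\Windows
```

Because the partitions are created fresh on the target, none of the instance-specific structures from the source disk are carried over.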

My post was mostly to let people know that if they are doing imaging on any significant scale, to switch to file-by-file based techniques for improved flexibility, specifically pointing out the one supported by Microsoft.

Is there a particular reason why sysprep is not mentioned in this guide, or the thread in general? There really isn't any mention of unique SIDs for each clone, which is important for domain environments. It also has the option to generalize the state of Windows to avoid any issues with using a different chipset.

This is also in the guide. Again, I feel like if this were an issue, I would've seen it by now.

I have used the one-button clone tool (also the byte-by-byte method) and the Clonezilla method listed above countless times for myself and customers; no issues yet. The whole point of this guide was to avoid reinstalling. I'm pretty sure the one-button clone does not have anything for fixing file structures. I've never had to run any type of defragging either.

? So what's your point? I feel like you just contradicted yourself here.

I use MDT at my work. Yeah, this is for larger-scale enterprise deployment. It's still more work than just using a completed active image, because you have to actually install the programs/policies you want. So again, if you're doing enterprise deployment, this guide is probably beyond you. For what I'm doing (retail with several machines of the exact same type) this is a waste of time. I would still have to image/install Windows and wait for the apps to install, as opposed to just cloning and changing the COA.

That's fine, and I will link you here accordingly. But really, some of the stuff you're saying seems kind of irrelevant. Like I said, I would believe you if I had had actual issues with my systems in the years I've been doing this, but, alas, I have had none.

This is meant to be a generalized guide for beginners at home, like those I mention at the start of the guide.

Domain environments are something different entirely, and I would think, again, that this guide is probably beyond you if you're running a domain.

If you are migrating to a different platform/chipset just sysprep the computer before you capture your image...
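The generalize pass is a one-liner, run on the reference machine before capturing the image:

```bat
:: /generalize strips machine-specific state (SID, driver bindings, activation);
:: /oobe makes the clone boot to first-run setup; /shutdown powers off for capture
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```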

I've used both dd and dism to take images of a windows installation and then deployed it to desktops and laptops without a problem.

I can include a sysprep portion in this guide.

Again though, you're not supposed to clone an image of Windows that is OEM for a device. So if it's a laptop with OEM licensing, you're supposed to buy a new copy of Windows. As mentioned before, Windows defines the computer as a motherboard + CPU, so changing a drive is fine, but generally changing the motherboard + CPU calls for a new copy, which I will include in the guide as well.

Guys, keep in mind that by no means am I Windows certified or anything.
Sysprep is good for switching chipsets, and I can include it later. I'll cite/source someone if they would like to write it. We will have to include that it's supposed to be for retail copies of Windows.

Also I feel like you guys are really taking this to the next level when I wrote this for the standard at home user. Again, if you're running a domain or using something like Active Directory, this guide is beyond you and probably not needed.

Some of the things mentioned like enterprise deployment really are beyond what at home people are doing.

I appreciate the feedback regardless. I can include/expand on enterprise portions of the guide as need be.

The point of most of my previous post was to explain the fundamental limitations of disk byte-per-byte based imaging technology. If you do not acknowledge the limitations I listed, as limitations, by saying "I feel like if this were an issue, I would've seen it by now," then I am not convinced you understand the technology.

The point, which you somehow missed, was that the technology works because there are workarounds for the numerous problems that arise when using it. Just because there is a fix for it, does not mean there was never a problem in the first place.

The more accurate way to look at it is that because there was a fix for the problem created, the technology becomes usable via workarounds. A workaround solves a problem that need not have been created in order to solve another problem, and is thus inelegant by definition. Having to modify what are intended to be easily re-created, instance-specific structures fits this definition of a workaround. Just because you are calling it a "step" does not negate the fundamental needlessness of that step when a better solution to the underlying problem exists that does not require it.

My experience is to the contrary, including the defrag issue, which is why it is important to discuss the appropriateness of the underlying technology instead of relying on experience.

So we agree that this functionality is not required, so why bring it up?

My point is that just as you wrote a guide for using byte-per-byte imaging technology + workarounds that can successfully image a completed system, an equivalent guide could be written using file-by-file based technology, that omitted any of the workaround steps and hence would be simpler for the end user.

You could be a bit less ad hom on your shit instead of being a technology elitist.
I'm not responding with the intention of creating hostility; I'm simply asking for clarification as the point is to make a guide that works for everyone.
Essentially I understand what you've said, but you really haven't made a mention as to how it's a problem. Which is why I said that if it were an issue, I feel like I would've seen it by now. The blocks definitely differ on different-size drives like a 250GB, 500GB, and 240GB SSD, so if there were issues with that method, they definitely would have come up at some point.

Ugh, again, you could be a bit less ad hom on your shit.

This more or less seems like nitpicking. You wrote that with the assumption that I knew the expand/shrink tools are 'fixes' rather than the 'tools' they more clearly seem to be. This was written for a simple end user, not someone deploying enterprise-level machines. It doesn't really seem like a 'fundamental problem,' as you're putting it.

If you could actually give more specific reasons why this isn't going to work, I would agree with you. But you have only alluded to the fact that the block-by-block method differs and can create issues. Additionally, file-to-file only works on one partition at a time, and requires you to reinstall the OS. The point of this particular guide was to avoid that.
You could have better cited your point with something like this.

Once again, I can't stress enough that this is not a workaround but a method that differs from yours. A file-by-file clone requires you to reinstall the base of the OS. This method does not, and that has been the point of the entire guide: to avoid reinstalling the OS. If anything, I would call your method a 'workaround,' but that is my opinion, as yours is yours.
You brought up these tools and I only commented on them.
At this point, you are creating arguments where there should be none; the point of the guide is to help simple home users with a simple clone on their home machines.