Running Pop!_OS Alongside Ubuntu (Are systemd-boot and GRUB compatible?)

I’ve finally switched fully to Linux on my main rig (still need Windows 10 for school). I am running Pop!_OS 20.04 and am loving it! My problem is I want to be able to run Ubuntu alongside Pop!_OS on the same physical drive. Is it possible to have two bootloaders on the same drive (i.e. systemd-boot and GRUB)? Also, I was wondering if any of you have been able to get Linux From Scratch working on Pop!_OS as the 10.0 instructions talk about setting up GRUB.

What do you want to accomplish by running two different Debian-based distros?

You could install Ubuntu and run Pop inside a “box”, or inside a full blown virtual machine for that matter.

1 Like

I understand that. There are some programs that look specifically for Ubuntu and won’t run on Pop!_OS. Also, I just kind of want to see whether I prefer the documentation and compatibility of Ubuntu over the extra features of Pop!_OS. Back to my main question: is it possible to have two bootloaders on a single drive? Or do I have to switch to GRUB on Pop!_OS?

One bootloader, multiple root partitions no problem. I’d try to stick to legacy boot instead of EFI for less headache.

1 Like

I’d say the opposite: use EFI boot! In my eyes it’s much easier to handle. You’ll then just have three folders in the EFI partition: grub, boot, and one called something else. The installer will then create two entries for your two distributions, so you can choose which one to boot into.

Yeah, I want to learn EFI. How hard is that to set up? From what I understand of your comment, you can have two bootloaders, one for each OS? Are there any good guides on how to set this up?

The installer should just do that automatically. All it will do is create a grub folder (when installing Ubuntu afterwards) and add EFI/ubuntu/grubx64.efi to the boot menu. Be aware that the EFI/BOOT folder will most likely contain the last installed OS’s bootloader, since that directory (or more precisely EFI/BOOT/BOOTX64.EFI) is the default boot target. If for whatever reason the boot entry doesn’t get added, or your mainboard is too stupid to figure out that there’s a different bootloader on that partition, then you can either use the efibootmgr tool in Linux, or boot into the Clover bootloader, enter its EFI shell, and after figuring out the right partition run bcfg boot add.
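If you do end up in that situation, a minimal efibootmgr sketch would look like this (assuming the ESP is partition 1 on /dev/sda and Ubuntu’s default loader path; adjust for your actual layout):

      $ sudo efibootmgr -v   ## list the firmware's current boot entries
      $ sudo efibootmgr --create --disk /dev/sda --part 1 \
            --label "Ubuntu" --loader '\EFI\ubuntu\grubx64.efi'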

Also, if the second installer for whatever reason deletes the EFI partition, using grub-install from within a chroot will fix the problem quite “easily”. But it shouldn’t come to that! :slight_smile:
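For reference, that repair would look roughly like this from a live USB (a sketch only; /dev/sda2 as the broken root and /dev/sda1 as the ESP are placeholders):

      $ sudo mount /dev/sda2 /mnt                ## the broken install's root
      $ sudo mount /dev/sda1 /mnt/boot/efi       ## its ESP
      $ for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
      $ sudo chroot /mnt
      # grub-install --target=x86_64-efi --efi-directory=/boot/efi
      # update-grub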

At first this sounds insanely difficult, but once you understand the principles it’s actually quite good! If you don’t understand something, due to my bad English or whatever, don’t hesitate to ask.

The difference in software compatibility should really be none. They are the same underlying OS. Similarly, the documentation is more or less the same, since Pop!_OS uses Ubuntu and its software as a base.

I’m all for trying different things, but I question the reason for both here too. Distro hopping between Ubuntu and its derivatives is not the way to solve issues you run across, and tutorials for Ubuntu apply to Pop!_OS for the most part.

3 Likes

I agree, I’m struggling to imagine what would run on Ubuntu but not Pop!_OS.
Please tell.

1 Like

Could be that for a beginner, some how-to specifically lists Ubuntu rather than calling it a Debian-based distro, and translating the differences between the distros might be too much of a task.

1 Like

Yeah, you’re obviously right, but still, if that’s what he wants? More power to him. I mean, I’d do the same if Ubuntu, for example, were my main OS and Pop!_OS my second, to try and find the differences, of which there are some. I used to use Ubuntu LTS on my work laptop but switched to Pop!_OS because of its better handling of Nvidia cards. To my surprise it’s more different than I originally thought.

Telling him he shouldn’t do it isn’t going to change his mind. :joy: Giving the advice as a side note is definitely worth doing, but anything beyond that is counterproductive.

I never said he shouldn’t do it. I’m trying to understand why he thinks he needs to.

1 Like

Preface: I learned about the forum’s “Hide Details” bbcode; my apologies in advance… Also, this is more of a dump of the concepts needed to consider possible configurations, not a direct “yes sure, here are the exact steps”, since I don’t know all the moving pieces that will be used or how they are configured by default for the given distros.

I wouldn’t say that EFI is hard to set up or learn, though it is different (something to learn), has a few more moving parts, and is the more recent approach; sometimes that means more places to make a mistake or get confused.

If you’re working in a VM you won’t really need EFI boot for the VM (I assume a VM will just have one OS inside it), though it has more uses on a physical system (and can be more convenient for dual booting and disk swapping, depending on how you go about it). Further below I’ve included some quick notes on EFI vs MBR, but it might be good to clarify now that they are just methods for the firmware to find a bootloader/EFI binary, which in turn is in charge of finding a list of kernels and where they are located (usually by reading a config file that the OS/you have created).

Verbosity that is a tl;dr repeat of what's further below (too condensed here, I suspect)


In the case of grub it’s either bits of grub stored at the “top of the disk” and within /boot for MBR, or a grubx64.efi (or similar EFI binary) inside a separate EFI partition. Beyond that stage, grub reads a grub.cfg for the list of kernels, which can point grub towards any /boot partition, and then the kernel+initramfs eventually get your OS disk mounted up and ready for you. So kernels and boot configuration for other OSes can live in a single grub.cfg, but depending on how that is done it can become messy to set up and maintain. Instead of pointing at the other OS’s kernels, it is also possible to point at another bootloader and “chainload” it.

The EFI/MBR choice comes into play when making those decisions. I’ve not provided exact details below on how to dual boot, but general concepts/approaches that may help you realize what is involved and what may be easier or more convenient, or whether it’s worth the effort at this time just to get Ubuntu, which Pop!_OS is based on.


TL;DR: For the above question -

  • If by “GRUB” we mean grub installed in EFI mode (i.e. a grubx64.efi binary), then from my limited reading of the systemd-boot intro docs (upstream/archlinux) it is usually (only?) installed as an EFI binary; thus a system that is already booting in EFI mode is a good start, and it seems systemd-boot is also able to boot other EFI binaries (i.e. other EFI boot managers; the example given on systemd’s page for it is “like GRUB”). Even if systemd-boot ends up unable to boot the other OS’s grubx64.efi binary, the system is still in EFI mode, so it’s simple enough to add the second OS’s EFI binary as another boot option in the motherboard’s firmware. In that case you would choose which to boot using the firmware’s quick boot device selection menu, by pressing the correct key to interrupt boot early on. (I.e. multiple EFI-bootable OSes should be relatively simple if done that way.)
  • If by “GRUB” we mean legacy MBR grub, and we are also talking about the same disk… unless it’s a hybridized thing, I’m not sure I can say they are “compatible”. (I mention this bullet in case it’s about LFS testing, not Ubuntu, since older Ubuntu should still have EFI boot support? Or else we use the newer EFI setup and just point off to the older Ubuntu grub.cfg, maybe.) I did not address the exact differences of msdos vs gpt partition tables in this comment (MBR boot would need extra bits to read existing GPT partitions).
    • If you must use MBR on the same disk, you uh… (I should not suggest this…) could have a USB key that holds only the bootloader and bios-boot partition, set up so it can read the GPT partitions to get at everything else; but then, if it’s just for the result of LFS, why not put it all on the USB key or in a VM.

To make more sense of the above ‘tl;dr’ there are notes below to provide context/concepts, but also that “let’s pause and think about the goals/needs” moment. Sorry about that.


First, I would agree that a VM would be a bit simpler and may offer some conveniences, such as messing it all up and then rolling it back, and being able to try and fix it while booted in a working OS. This really is a good question to consider upfront.

Some reasoning ordered by relevance
  • It’s also good training for getting used to making an application work on your preferred/existing system, as that may be needed in the future. Since Pop!_OS is based upon Ubuntu (and thus both are Debian based), there is a good chance that most things will be similar and will work (versions depending). Small variations between distros can be annoying, though that is how we get variety, and often there is a way to do something similar on each. … In a less ideal situation, if Pop!_OS and Ubuntu are different enough, and you’re following a guide for Debian or for a different version of Ubuntu/Pop!_OS, you may find yourself adapting the guide differently for two different OS installs in different states (in general they should be fairly similar, though).

  • Though (for a single Pop!_OS with [some-app], and/or a VM) it depends on what applications you’re looking to run in Ubuntu; I’d expect the two OSes to be quite similar from the base quite far up. With a VM, if you need to pass through peripherals and other hardware, that may not work; but a VM is really convenient: its disk image sits in the current Pop!_OS filesystem, and you can look up info with the web browser on Pop!_OS while the VM is in shambles in its own window, without rebooting the system, etc.

  • “Distro hopping”, as they might call it, is also a minor potential concern; you may find that you’re not really using one of them because it’s a pain to shut down and boot into the other, and you become “unfamiliar” with it (or in this case they’ll be so similar that the second install becomes unneeded). … When that happens we start to lose focus on learning something, and instead just ignore a problem for convenience’s sake and switch to a different distro/install as soon as the next problem comes up. I’m not suggesting it will happen here, but it’s a common concern that comes up.


@Adubs: regarding your and others’ pondering… I may have also overlooked this LFS part, though I agree it may not be clear without the background reasoning (I’ve not looked at LFS in over a decade myself, but could make guesses; just not ones about why a VM would not work, virtio disks and such aside).

< Is the question/need here… >

…about dual booting (needing Ubuntu) directly due to:

A Linux From Scratch cross-compilation/dev environment? Where the need is to have existing instructions be possible/easier?

If so, then I suspect that installing the older Ubuntu within a VM could be much more convenient for experimenting with Linux From Scratch. I may be wrong, as it’s been well over a decade since I played with it, but if it’s not for gaming/multimedia or something else that might not fare as well within a VM, then I’d definitely try starting the process in a VM, especially if you are concerned about compatibility with newer Pop!_OS/Ubuntu (or the chance of messing up the current Pop!_OS install).

Is the Ubuntu version you need to install so old that it only supports MBR boot? (I could not tell from the opening comment and the others whether this is the case, or whether this is the reason to dual boot Ubuntu, or whether the “GRUB” in “Are systemd-boot and GRUB compatible” is older grub 1 / a non-EFI-capable grub/OS.)


Regardless, to address the question about UEFI/MBR, this quick summary may help clarify the initial concepts:

Overall Quick Summary of EFI/MBR boot concepts
  • With MBR, the motherboard BIOS reads a small bit off the top of the disk, then a bit more from the top of the disk depending on the bootloader; that determines where /boot is, and the bootloader finds a list of kernels and loads one from whatever partition the list points to (that last part is important).
  • With UEFI, the motherboard’s UEFI firmware looks at its own list of boot entries (ones that you or the installer added), which tell it which EFI system partition to look in for an EFI binary (the “bootloader”, such as grubx64.efi, usually associated with /boot/efi); that binary then runs and looks in the same EFI partition for the list of kernels, but usually the kernels themselves are not there and instead live in a familiar /boot partition.
  • EFI has a special EFI partition that MBR does not use/need, but you can put many different OSes’ EFI binaries in that partition if you like, and tell the motherboard’s UEFI firmware where each one is, then pick the one you want at the BIOS prompts when booting. Those EFI binaries can mostly co-exist in the one EFI partition, or live in multiple separate EFI partitions (one for each OS). | With MBR you can’t put “multiple bootloaders” in the same top part of the disk. This is not exactly a concern (except when you tell it to clobber the current working OS’s bootloader), but there are more details to discuss below.
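As a concrete illustration of that EFI point, this is roughly how you’d peek at who is sharing the ESP on an installed system (the mount point and directory names are just the typical defaults; Pop!_OS normally keeps its kernels under EFI/Pop_OS-&lt;uuid&gt; while Ubuntu’s grub lives under EFI/ubuntu):

      $ ls /boot/efi/EFI          ## one sub-directory per OS/bootloader sharing the ESP
      $ sudo efibootmgr -v        ## which of those the firmware actually has boot entries for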

< Initial Considerations >

  1. If Pop!_OS is currently installed with MBR (not UEFI) and does not have a GPT partition table (vs msdos), and you do not plan to reinstall everything, then it can be a pain to convert from MBR to EFI while also being sure you won’t destroy all the data/OS on the drive (usually by messing up the partition table and losing track of the filesystems). In most cases I wouldn’t recommend converting an existing system without a backup and an OS reinstall.

    • Whether the current OS is booted via EFI or MBR can be checked with things like
      $ lsmod | grep efi  ## look for efi kernel modules
      $ efibootmgr -v  ## for any non-error output / a PopOS entry, etc
      $ mount | grep -i efi  ## distros often mount the efi partition on boot
      $ blkid | grep vfat  ## for any vfat filesystem that is not your usb key
      $ gdisk -l /dev/<your-disk>  ## partition table type (gpt vs mbr) and any EFI System Partition
      
      and similar, to see if the output of everything agrees that there is an EFI partition. (It may depend on whether the OS was installed while UEFI booting was enabled in the bios.)
    • If Pop!_OS (your system) is using systemd-boot then I believe it is EFI based, and so it is likely booting via EFI (unless, I guess, it falls back to installing legacy MBR grub if UEFI boot is not on in the bios?). If it’s already EFI based then that is convenient. systemd-boot, which I’m not that familiar with, looks like it can load other EFI binaries (i.e. it may allow handing off to another boot manager), so you’d add a config file in the main systemd-boot ESP, point it at the other OS’s EFI boot manager, and let the magic happen (a minimal example entry is sketched just after this list).
    • If the resulting Linux From Scratch system is going to be booted on this machine and needs MBR on the same disk, then I would not try to hybridize; booting it in a VM, or building it with EFI boot support, would be more appropriate. If LFS is not really a concern and we are only thinking of Ubuntu+Pop!_OS, then both as EFI boot is possible on the same disk at least.
  2. As gordonthree has noted, it is possible to have one bootloader with a config file customized to point at kernel entries in any OS’s /boot directory, or to point at another bootloader (“chainloading”) installed at the top of the secondary OS’s /boot partition. EFI simplifies this somewhat: if there is only one disk for multiple OSes, and you are willing to use the EFI (bios) boot select menu to choose which OS to boot, then all that is needed is to install each OS into its own separate set of partitions (the EFI partition can, with caveats, be shared if desired). With MBR and chainloading one might say it is also fairly simple, but it requires you to edit the grub.cfg of the main OS that owns (was installed to) the MBR, and to use the grub/bootloader menu to choose which OS to boot (versus the quick-select bios/firmware menu).
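For that systemd-boot hand-off idea from point 1, the entry files live on the ESP under loader/entries/; a minimal sketch (the filename is made up, and the path assumes Ubuntu’s default grub location) could be:

      # hypothetical /boot/efi/loader/entries/ubuntu-grub.conf
      title   Ubuntu (via its own GRUB)
      efi     /EFI/ubuntu/grubx64.efi

systemd-boot should then list it as an extra menu item alongside the Pop!_OS kernels; the alternative is to skip this entirely and just register Ubuntu’s grubx64.efi with the firmware via efibootmgr as above.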

I do still agree with gordonthree and others that a VM is often much more convenient, and that the two OSes are close enough that figuring out how to adapt any instructions or software to one or the other should be quite similar and not need much variation. Of course there can be things like drivers and versions that differ between them and that may help initially, without requiring you to set them up yourself (on one vs the other). Also, whether a VM can work or not may depend a lot on the software and how you need to use it.

Others are simply looking out for you with past experience, trying to help with the decision-making process (as opposed to trying it, destroying the working OS, or later realizing the effort wasn’t worth results that could have been had by just using Pop!_OS). Whereas in some cases I’ll gladly provide some background to help with the decision, if you’re looking into it further to determine exact steps for your OS choice with new awareness of the parts, and maybe let you “shoot yourself in the foot”, as they might call it… :slight_smile:

Below I try to outline some common ways of dual booting (or booting in general with MBR/EFI) to provide context for decision making; later there is a question about why not a VM (just for Linux From Scratch?), and maybe the beginning of an answer regarding how systemd-boot may fit into this setup.


<Ways of booting multiple OS (generalized)>

Choosing the disk(OS) to boot by -BIOS- quick boot menu:

(This relies on your bios’s quick select/boot menu each time you boot as it defaults to one and you must interrupt it to pick the other.)

Two separate (self contained) disks (one motherboard)
  • Variant(MBR)- Have two separated self contained OS disks, each with their own bootloader (MBR at the top of disk) on their respective disk. It’s almost as if you only had one or the other disk, and each could boot itself if you removed the other.
  • Variant(EFI)- Same as above, though it’s separate EFI partitions on each disk, and you use the UEFI firmware (bios) quick select/boot screen just as above.
One shared disk (partitioned for multiple OS)
  • Variant(EFI)- Have a single EFI partition, or several, containing each OS’s EFI binary (“bootloader”), and add multiple boot entries to the EFI firmware (bios) NVRAM list of bootable EFI binaries (i.e. bootable OSes). Likely here you will also have separate /boot partitions for each OS.
  • Variant(MBR)- Now we have an issue, since there is only one “top of the disk”. There are several approaches (one is in the other category below); one is to install the bootloader for the second OS into the beginning of its own /boot partition, but then we must get the bios/main bootloader pointed at that one and not the main MBR. With MBR you usually end up switching to using the bootloader, not the BIOS/firmware menu, to choose which OS to boot (the category below).

Choosing the OS to boot via bootloader’s menus

(This relies on the grub/other bootloader from one OS being aware of the other OS options you have installed; the OS is then picked from that menu.)

Two separate (self contained) disks (joke removed)

This is mostly for the convenience or security of not having to use the BIOS boot selection menus, or for when you must use the other OS’s bootloader (such as Windows) to boot that OS due to incompatibility/annoyances/self-containment. The bios/efi is told to boot only one bootloader, and that bootloader is configured to let you choose which OS to boot.

  • Chainloading- in grub terms, this tells grub to load the bootloader from another disk/partition and let it take over.
  • Single grub.cfg- which either loads the other OS’s grub.cfg or directly contains entries for both the original OS’s kernels and ALSO the other kernels sitting in the other OS’s /boot partition. (This has some pain points if maintaining a single grub.cfg for multiple OSes, depending on how it is done.)
  • There are many minor variants of this approach (a sketch of the chainloading style follows this list).
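As a hedged sketch of the chainloading style for the EFI case, a grub.cfg stanza could look roughly like this, with (hd0,gpt1) standing in for wherever the other OS’s bootloader actually lives:

      menuentry "Other OS (chainload its bootloader)" {
          insmod part_gpt
          insmod fat
          insmod chain
          # placeholder: the partition holding the other OS's bootloader
          set root=(hd0,gpt1)
          # for legacy MBR-style chainloading this would instead be "chainloader +1"
          chainloader /EFI/ubuntu/grubx64.efi
      }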
One shared disk (partitioned for multiple OS)

This is where it gets messy as there are many variants when using a single bootloader to find multiple OS.

  • The main concern is that bootloaders like grub will read a configuration file that has a list of kernels and where to get them from.
  • The list of kernels (in grub.cfg) can have entries that point at a kernel+initrd on any /boot partition you want, like the /boot partition of the other OS. (Thus one bootloader, and possibly one grub.cfg.)
    • However… when you do OS updates, each OS normally just updates its own grub.cfg to add an entry for the new kernel; depending on how that is normally done, they may not exactly play nice trying to update a shared grub.cfg in a single main /boot partition (or /boot/efi for that matter; yes, for EFI a grub.cfg goes in the /boot/efi partition, or at least a small one does, which may point to the one with the full list of kernels in its associated /boot, depending on your OS type/version of debian/fedora/etc).
    • Coordinating whether the main grub.cfg simply has a line that loads the other OS’s grub.cfg, or whether all kernels get added to a single grub.cfg (and other variants), is the crux of the issue/solution.

So, one ‘simple’ way in the case of grub (either EFI or MBR) is to have the OS that is in charge of the main bootloader / grub.cfg always include an entry that loads the other OS’s grub.cfg as a submenu. With grub, that can usually be placed in one of the /etc/grub.d/ files that are used to build the final grub.cfg. There are other variants for OS detection and updating of the grub.cfg, but if they involve placing entries in a shared grub.cfg then being sure of how each OS functions is fairly critical (using grubby to add entries, or grub-mkconfig and clobbering it, or whether we can search multiple /boot partitions yet also add the correct OS root volume details to each kernel line, etc, etc).
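For that load-the-other-grub.cfg approach, a hypothetical stanza added via /etc/grub.d/40_custom on the OS that owns grub might look like the below (the UUID is a placeholder, and the path assumes the other OS has a separate /boot partition; run update-grub afterwards):

      menuentry "Ubuntu (use its own grub.cfg)" {
          insmod part_gpt
          insmod ext2
          search --no-floppy --fs-uuid --set=root <UUID-of-the-other-/boot-filesystem>
          # use /boot/grub/grub.cfg instead if /boot is not a separate partition there
          configfile /grub/grub.cfg
      }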

This is to say (together with the other variations above, which clarify what may be needed and what you get out of any given method) that which method you may want to use depends partly on your awareness of the boot process, and it helps to understand those variations before deciding how to move forward.

Having two separate physical disks and dedicating each OS to one can simplify things greatly, especially if you’re willing to use the motherboard’s quick boot device select menu to pick which one to boot from. If sharing a disk, then with MBR there is only one “top of the disk” so to speak, so we need one main bootloader, and it needs a way to find either a) more kernels for the other OS (and to be kept aware of them), or b) a way to chainload the other OS’s bootloader installed into a later partition (like the /boot of that secondary OS).

With EFI we simply add entries into the firmware as each OS is installed, and the BIOS firmware lets you pick whichever one you want, pointing at the EFI partition for that OS (usually the OSes can share a single EFI partition, but it can be safest to have separate ones, more so if Pop!_OS and Ubuntu were to use the same ‘debian’ name for their EFI directory… I’m unsure about that). In all cases, if using grub, the grub.cfg usually must be maintained separately so that one OS isn’t clobbering and making a mess of the grub.cfg the other one sees, so each is loading the right filesystem/device modules for grub, etc.


After all of the above and with it in mind, the below question may still depend on the specific way things are set up for each involved OS, but…

…the first point is that it looks like systemd-boot is EFI based, so you could use it separately from another grub EFI binary, or even (given its features) have it load other EFI binaries etc, and its configuration files are broken out a bit more (possibly more convenient); but, if the system is already booting via UEFI, then you could just as easily pick any other installed OS’s EFI grub/boot manager (“bootloader”) directly from the motherboard firmware’s quick boot device selection list.

…so, as others noted, with EFI it is absolutely possible to “have two bootloaders on the same drive”, which in that case means either sharing an EFI partition (you could avoid this for safety, with really no issue/difference in doing so) or having a separate EFI partition for each OS. As long as each EFI binary (its disk and path) is registered with the motherboard firmware, you can just pick whichever one you want from the bios quick boot menu.

It looks as though systemd-boot (archwiki)(official) “is a simple UEFI boot manager”, the key word being UEFI, and the arch docs say to place it in your existing EFI partition. And:

systemd-boot operates on the EFI System Partition (ESP) only. Configuration file fragments, kernels, initrds, other EFI images need to reside on the ESP. Linux kernels need to be built with CONFIG_EFI_STUB to be able to be directly executed as an EFI image.

systemd-boot reads simple and entirely generic boot loader configuration files; one file per boot loader entry to select from. All files need to reside on the ESP.

So it launches as an EFI binary (UEFI boot) and can then be used either to load kernels built with CONFIG_EFI_STUB or to load other EFI images (like grubx64.efi). If it’s EFI based and loads EFI things, at first it would look superfluous, though it offers more management and features that can avoid needing other bootloaders, it seems. EDIT: I should also say I’m unsure whether systemd-boot has features that may break if it is loaded by another EFI binary rather than it loading another EFI binary, or whether there is integration with the OS-side systemd such that the OS may have an issue if it’s not booted via systemd-boot (more reading/testing would be appropriate if concerned)… and a VM can be good for that too.

If you do dual boot and share a single disk-

  • Will you use EFI or MBR? (Can all the OSes read GPT partitioning and be EFI booted? If it’s already GPT & EFI then go with that, else you may need to reinstall.)
  • For EFI, will you use the motherboard/bios firmware quick boot device selection menus to pick a non-default entry?
  • For EFI, will you try the equivalent of “chainloading” and have one boot manager load another EFI binary? (like what systemd-boot can do)
  • Will you (and how will you) maintain a single grub.cfg, or the systemd-boot config files, for a single bootloader menu of OSes/kernels? (One which contains lists of kernels, or of other EFI binaries, i.e. possibly other OSes’ boot loaders like a grubx64.efi; or a grub.cfg that loads the other OS’s grub.cfg as a submenu?) This is not required, but otherwise you have to find the boot manager (and every OS’s kernels) via the firmware device menus or one of the other boot managers’ menus in some fashion.
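On the Pop!_OS side, systemd-boot’s own tool can answer some of that checklist quickly (standard systemd commands, shown here just as a starting point):

      $ bootctl status   ## which ESP it uses and whether systemd-boot is the active boot loader
      $ bootctl list     ## the boot entries (including auto-detected EFI binaries) it will offer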
1 Like

Thanks for the awesome reply! I understand the many different levels involved in loading the OS now! Currently I’m using UEFI in the BIOS and GPT for partitioning (I switched to this for all my drives after I accidentally set up Windows 10 with MBR back when I used that).

So it seems like I can just dual boot Ubuntu and it’ll do the setup for me, as long as I have a separate partition. I’ll definitely try out a VM first.

As for LFS, is it possible to do it in a VM? I’ve heard that it was not possible. If I need to make a new thread for this question I can.

PS: 99.99% of Ubuntu apps run on Pop!_OS. The particular app I was talking about searched for the OS’s name as a string and ran an install script based on that. Pop!_OS wasn’t on the list, and the install script was long enough that I didn’t bother changing all the checks for the string. Sorry for being a little misleading about compatibility.
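(For context, such installers typically just read /etc/os-release; a sketch of that kind of check, not the actual script, would be something like the below, and it trips up because Pop!_OS reports ID=pop rather than ubuntu.)

      . /etc/os-release
      case "$ID" in
          ubuntu|debian) echo "detected supported distro: $ID" ;;
          *) echo "unsupported distro: $ID" >&2; exit 1 ;;
      esac

A friendlier installer would also look at ID_LIKE (which on Pop!_OS includes “ubuntu”), but plenty of scripts only check ID.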

No problem.

My initial concerns would be drivers for storage or video, though if you are building the kernel yourself, and it’s a kernel that has that support, then that may not be as much of an initial issue. (I suspect it will not be an issue for Ubuntu if you put it inside the VM, or it may need minor tweaks; the LFS side is mostly what I refer to here, but the above can apply to both when looking at an older/foreign OS in a VM.)

  • For Ubuntu in a VM (ignoring LFS), I’d just start with the default virt-manager settings and change the OS type and anything else you think it needs. If virtio storage is an issue, the installer will fail to detect the disk to install on and you’ll know fairly quickly; if needed, the “Disk Bus” type can be switched to sata or ide. For networking you may find it does not work, but it’s easy enough after install to swap the VM’s NIC “device model” to e1000e or something similar that is not virtio. These may well be non-issues.
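If you prefer the command line, the rough virt-install equivalent of those GUI choices would be something like this (the ISO path, name, and sizes are placeholders; sata/e1000e are only needed if virtio gives you trouble):

      $ virt-install \
          --name ubuntu-test \
          --memory 4096 --vcpus 2 \
          --disk size=40,bus=sata \
          --cdrom ~/isos/ubuntu-20.04-desktop-amd64.iso \
          --network network=default,model=e1000e \
          --os-variant ubuntu20.04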

It seems that some kind traveler has left some breadcrumbs here, which link out to another post from a professor who has been using VirtualBox.

  • Thus I might assume that plain qemu-kvm, which is what the virt-manager GUI (libvirt underneath) uses, will be enough. When trying to have a VM boot the resulting LFS itself, you may need to add the LFS disk image with a storage interface type that is not virtio but sata, and similarly set the network device to e1000 or some more generic device than virtio; though it may be worth leaving them as virtio if the kernel is built with those drivers. (This would depend partly on LFS and whether it covers what is needed, or whether diverging a bit is needed.)

  • I’ve not read over the second post that the first links to, but there might be some hints regarding VirtualBox that translate over to qemu-kvm, or more documents online. It just happened to be the first good one that popped up on a quick search when looking to confirm whether others are doing this.

  • Hmm… the professor’s old page is also down and the VM page is missing in the archive, but it shows that it has been done in the past.
    http://sun0.cs.uca.edu/~administrator/LinuxClass/lfs/ (archive.org)


(EDIT3: For the moment a separate thread may not be needed, but it may actually be good, since others who have tried LFS recently (or have other recommendations in a similar vein) are more likely to see the topic subject and chime in; so after you do a little searching it may be worth doing.)


Ugh, those can be a pain. On the one hand it might be protecting you from some of its assumptions that could not work, or could be harmful, on another platform; or it could just be an oversight in OS variant names/version numbering. That, and software certification that vendor support won’t help with unless it’s on the correct platform, which can be understandable but often ends up feeling like an excuse when it’s not “directly” related to the issue (though usually the point is that ‘they’ may not be able to tell whether it is related or not; and it is equally likely to be the same for you, since you can’t look into the code or the types of issues seen). Sometimes the installer/environment just needs a little convincing in the way of modifications or extra package dependencies.


EDIT2: Seems that reddit’s cdn is having issues at the moment, at least on my side.

However, the professor’s notes referred to by the reddit posts (though reddit has other useful comments) end in a 404, at least for the VM page:

Seems the 'Preface: The VM' is not on archive.org, though

So if Google’s cache helps for the reddit posts, then searching Google for the URL (and choosing the Cached version from the drop-down arrow next to the URL in the search results) may partly help for now.

Original post (with useful comments):
http://www.reddit.com/r/linuxfromscratch/comments/1k8qsr/the_credits_dont_go_to_me_but_heres_my_input/

Links off to the following from 7 years ago:
http://www.reddit.com/r/linuxfromscratch/comments/1k8qsr/the_credits_dont_go_to_me_but_heres_my_input/

Which has several links, but links out to a page that’s down, from 2012:
http://sun0.cs.uca.edu/~administrator/LinuxClass/lfs/ (archive.org)

That professor’s page being down and old likely means that parts of it may not directly apply, but at least it’s a note that this was being done within a VM, at least in the past, with VirtualBox.

Looking over the reddit posts later and more searching would be good :slight_smile:

Continuing some checking of the LFS document, since that reddit source ended up less useful beyond confirming that a professor got it sorted out in a VirtualBox VM in the past.

This mostly borrows from some sections in the LFS doc that are relevant to the questions about the VM, and also to how they seem to handle the bootloader side and kernel building.


Regarding older vs newer Ubuntu/OS host side:

Checking over the LFS 10.0 doc it seems there are some sections:


2.2. Host System Requirements

Your host system should have the following software with the minimum versions indicated. This should not be an
issue for most modern Linux distributions.

Earlier versions of the listed software packages may work, but have not been tested.

  • Linux Kernel-3.2
    The reason for the kernel version requirement is that we specify that version when building glibc in Chapter 6 at the recommendation of the developers. It is also required by udev.

    If the host kernel is earlier than 3.2 you will need to replace the kernel with a more up to date version. If your vendor doesn’t offer an acceptable kernel package, or you would prefer not to install it, you can compile a kernel yourself. Instructions for compiling the kernel and configuring the boot loader (assuming the host uses GRUB) are located in Chapter 10.

5.4. Linux-5.8.3 API Headers

The Linux API Headers (in linux-5.8.3.tar.xz) expose the kernel’s API for use by Glibc.

5.4.1. Installation of Linux API Headers
The Linux kernel needs to expose an Application Programming Interface (API) for the system’s C library (Glibc in LFS) to use. This is done by way of sanitizing various C header files that are shipped in the Linux kernel source tarball.

8.8. Glibc-2.32

The Glibc package contains the main C library. This library provides the basic routines for allocating memory, searching directories, opening and closing files, reading and writing files, string handling, pattern matching, arithmetic, and so on

8.8.1. Installation of Glibc

Prepare Glibc for compilation:

../configure --prefix=/usr \
 --disable-werror \
 --enable-kernel=3.2 \
 --enable-stack-protector=strong \
 --with-headers=/usr/include \
 libc_cv_slibdir=/lib

The meaning of the configure options:

--enable-kernel=3.2

  • This option tells the build system that this glibc may be used with kernels as old as 3.2. This means generating workarounds in case a system call introduced in a later version cannot be used.

The doc appears to be giving minimum versions while mostly discussing Linux kernel 5.8 in its sections (and assuming at least 3.2, per the host requirements above and the glibc flags).
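A few quick host-side sanity checks against those minimums (just illustrative; if I recall, the book’s host-requirements section also ships a fuller version-check script):

      $ uname -r                    ## host kernel, the doc wants >= 3.2
      $ ldd --version | head -n1    ## host glibc version
      $ gcc --version | head -n1    ## host toolchain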


LFS currently still uses SysV to run its boot scripts. (worth noting)

(SysV instead of systemd)

8.75. Sysvinit-2.97
The Sysvinit package contains programs for controlling the startup, running, and shutdown of the system.

9.1.1. System V
System V is the classic boot process that has been used in Unix and Unix-like systems such as Linux since about 1983. It consists of a small program, init, that sets up basic programs such as login (via getty) and runs a script. This script, usually named rc, controls the execution of a set of additional scripts that perform the tasks required to initialize the system.

This is the first thing the kernel will run (i.e. usually init=/sbin/init), so it comes after the bootloader etc., and it runs the startup scripts.


Regarding booting LFS in a VM (general thoughts)

10.1. Introduction

It is time to make the LFS system bootable. This chapter discusses creating the /etc/fstab file, building a kernel for the new LFS system, and installing the GRUB boot loader so that the LFS system can be selected for booting at startup.


Regarding EFI and LFS - if you were not to do LFS in a VM (you can do EFI in a VM, though it’s usually not enabled by default), there is a section note and a link out to more details:

10.3.1. Installation of the kernel

Building the kernel involves a few steps—configuration, compilation, and installation. Read the README file in the kernel source tree for alternative methods to the way this book configures the kernel.

Note

If your host hardware is using UEFI, then the ‘make defconfig’ above should automatically add in some EFI related kernel options.

In order to allow your LFS kernel to be booted from within your host’s UEFI boot environment, your kernel must have this option selected:

Processor type and features --->
[*] EFI stub support [CONFIG_EFI_STUB]

A fuller description of managing UEFI environments from within LFS is covered by the lfs-uefi.txt hint at http://www.linuxfromscratch.org/hints/downloads/files/lfs-uefi.txt


In that section above there is also an important hint:

If desired, skip kernel configuration by copying the kernel config file, .config, from the host system (assuming it is available) to the unpacked linux-5.8.3 directory. However, we do not recommend this option. It is often better to explore all the configuration menus and create the kernel configuration from scratch.


Also, there is the section about grub and where to put the kernel images:

10.3.1. Installation of the kernel


After kernel compilation is complete, additional steps are required to complete the installation. Some files need to be copied to the /boot directory.
Caution

If the host system has a separate /boot partition, the files copied below should go there. The easiest way to do that is to bind /boot on the host (outside chroot) to /mnt/lfs/boot before proceeding. As the root user in
the host system:

mount --bind /boot /mnt/lfs/boot

It is referring to the -host- system; I believe the plan in that case is to use the existing host’s grub/bootloader to load the kernels (and since the config file for that contains a kernel line saying where the root OS can be found, the LFS kernel will then happily go off and use that disk, not the host (Ubuntu/etc) OS).

  • You could also put them in their own /boot partition for LFS, or in some cases inside the main LFS root volume if desired, as long as the filesystem type is something grub/the bootloader can read, so it can find the kernel image to load and its grub.cfg list of kernels etc.
  • If LFS is in a VM then you don’t have to worry as much about it. If the VM has the LFS disk as a separate disk, you can always detach it from the Ubuntu host VM later and try booting it in a separate VM, or just boot it and not the Ubuntu host OS, etc.
  • It may not need to share a /boot partition if you go through the steps of installing grub on the LFS-dedicated disk (MBR likely).
  • (Technically you can pass the kernel and any initramfs directly to qemu-kvm/virt-install, booting from kernel+initramfs files located -outside- of the actual VM, and probably forego any bootloader at all; a rough sketch follows this list. There are likely many options/approaches.)
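A rough sketch of that last bullet (the kernel filename matches what the LFS 10.0 book uses when copying the image; the disk image name and root= are placeholders for however your LFS disk ends up laid out):

      $ qemu-system-x86_64 -enable-kvm -m 2048 \
          -kernel /mnt/lfs/boot/vmlinuz-5.8.3-lfs-10.0 \
          -append "root=/dev/sda1 ro" \
          -drive file=lfs-disk.img,format=raw,if=ide

With if=ide the kernel only needs generic ATA support rather than virtio drivers, which fits the no-initramfs discussion below.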

Continuing the above (this applies to LFS in a VM or on hardware)

10.4. Using GRUB to Set Up the Boot Process

10.4.1. Introduction

Warning

Configuring GRUB incorrectly can render your system inoperable without an alternate boot device such as a CD-ROM or bootable USB drive. This section is not required to boot your LFS system. You may just want to modify your current boot loader, e.g. Grub-Legacy, GRUB2, or LILO.

To boot LFS on host systems that have UEFI enabled, the kernel needs to have been built with the CONFIG_EFI_STUB capability described in the previous section. However, LFS can be booted using GRUB2 without such an addition. To do this, the UEFI Mode and Secure Boot capabilities in the host system’s BIOS need to be turned off. For details, see the lfs-uefi.txt hint at http://www.linuxfromscratch.org/hints/downloads/files/lfs-uefi.txt.


Also important to distinguish - LFS does not default to having an initramfs:

2.4. Creating a New Partition


Note

For experienced users, other partitioning schemes are possible. The new LFS system can be on a software RAID array or an LVM logical volume. However, some of these options require an initramfs, which is an advanced topic. These partitioning methodologies are not recommended for first time LFS users.


This means that there is no stage between when the kernel loads and when it must figure out how to access the root OS volume. Many modern distros have the initramfs do things like load extra drivers needed to access your type of storage, activate LVM volumes, handle early systemd initialization, trigger udev, and other initial steps, all in preparation for switching over to the main OS root volume. It’s all contained in an initramfs image file, a small filesystem with the needed bits.

Usually booting is something like:
  • Power on ->
  • BIOS/UEFI ->
  • Bootloader/EFI-bootmanager(grub) ->
    (which loads kernel+initramfs into memory)
  • kernel starts, reaches into a disk to run /sbin/init
    (or whatever init=... you put on the kernel cmdline)
    • if there is an initramfs it’s running /sbin/init from there.
    • if no initramfs image then it’s going to just use the root OS volume you pointed it at.

So without an initramfs the concern is just whether the kernel will have (or can load) the modules needed to access the disk, such as a virtio-type disk, or to use LVM etc. This may or may not become relevant to the VM, depending on the storage type of its disk, the video, etc., and whether anything extra needs loading before that point (hence it is also related to how the kernel is compiled).


So it’s to say that:

  • It may be fine to install Ubuntu/Pop!_OS in a VM as the host system (newer versions are probably fine); you’ll just need to sort out which devel packages and such are needed to compile the software, which the LFS 10.0 doc does discuss, though it does not give exact host OS package names, so that it can apply to any Linux distro as the host OS.
  • LFS itself (the part you’ll end up running/booting) ends up being a filesystem containing everything you built, so while building it, and building a kernel specific to the VM, may tie it slightly to that x86_64 architecture and ‘hardware’, it can be pulled out if needed.
  • You could boot the LFS kernel image using the host OS’s grub (in a VM or on hardware), or you can grub-install non-EFI (MBR); but if you do that, you want to do it onto a dedicated LFS disk. On hardware there will not be one in the normal sense (if you have only one disk for everything), though you may have made a separate partition for it on your normal disk (you don’t want to grub-install an old-style MBR onto your EFI system).
  • It does seem that there are supplemental LFS documents on making LFS bootable via EFI, in which case it could live on your normal system, but it’s always safer to do this in a VM.
  • A key point is to make sure the kernel is built with the drivers it needs for a VM, or just test it and see if it works (a quick check is sketched after this list); the LFS doc hints that you could copy the kernel config file from your host distro and use that when you build your kernel; this is the part where we get to learn about building kernels for specific hardware :slight_smile:
  • You may want to check with the LFS mailing list or IRC for others who have tried it in a qemu-kvm VM, to see if they have any tips or caveats; ultimately they will be much more familiar with LFS and with others who have popped in asking similar questions.
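The quick check mentioned above could be as simple as grepping the LFS kernel’s .config (from the unpacked source tree) for the VM-relevant drivers; these are the standard kconfig symbol names, and only options that are enabled (=y/=m) will show up, so adjust the list for whichever disk/NIC models the VM actually presents:

      $ grep -E 'CONFIG_(VIRTIO_BLK|VIRTIO_NET|VIRTIO_PCI|SATA_AHCI|ATA_PIIX|E1000E?)=' .config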