CPU-to-peripheral communication

I want to know how a CPU is connected to peripherals in a system, and how a system knows which peripheral occupies a certain slot, at the hardware level.

Meaning, suppose I’m designing a computing unit: how would I connect the CPU to an Ethernet controller, drive controllers, etc.? Is it I2C, SPI, etc.?

And after the system gets power and performs POST, how does it know that the device in one PCIe slot is a graphics card, another is a USB controller, a third is a 10 Gbps Ethernet controller, etc.?

Is there some special “code” on each peripheral that the CPU/system reads in order to know what it is?

It’d be nice if you mentioned x86 CPUs as well as ARM CPUs, and where exactly I can find detailed information about this topic and about how to connect a CPU (articles, documentation, or books).

I’m a hardware design engineer, and I’m thinking of designing a computing unit
based on an ARM CPU, but I’ve never made any computing hardware before.

Thanks a lot


Is IOMMU the thing you are looking for?

One of the expansion buses connected to your CPU’s memory controller: mostly PCIe these days, historically PCI/PCI-X, ISA, etc.

You can connect a peripheral over other interfaces, but it will sit behind a bridge or controller that runs on one of those expansion buses. USB, for example.

There is:

For PCI and PCIe, each device has a chunk of registers called the Configuration Space, which is accessed differently than the regular memory space. When a system boots, it queries each device’s configuration space for the device type, IDs, and so on.

USB has a similar structure (device descriptors) that contains the device’s VENDOR_ID and DEVICE_ID.


This should be a good starting point for you.

Logically speaking, a lot of the low-level stuff (practically hardware level) is based on the old PCI (not PCIe) standards, whether it’s physically USB, PCI, PCIe, etc.
Take note of the “plug and play BIOS” link too. Also, UEFI is essentially just an operating system.


First of all, you need to split the buses into two categories: enumerable and non-enumerable.

If a bus is enumerable (such as PCIe or USB), the SoC will detect if a device is present, and the device itself can provide some info on what it is - usually in the form of vendor and product IDs which are matched to a driver.

For buses you can’t enumerate (such as SPI, I2C, SDIO, and probably others), the info is hardcoded somewhere. I’m not familiar with x86 and UEFI booting; I think it’s in ACPI, as Marc linked. For ARM devices, it’s usually in something called a device tree, which the bootloader passes to the kernel. At least it works that way on Linux; no idea about other OSes.


Yeah, the difference with phones (as far as I understand) is that instead of the BIOS/UEFI relaying info to the OS, the hardware config is essentially hardcoded into the OS: pre-applied to the OS instead of detected at boot and passed to it.


Here’s more good stuff.


I don’t know exactly, but I think the IOMMU is a Linux OS thing for handling peripherals.
I’m now concerned with the hardware side, because I want to design one, and I want to learn about this before I start and make a mistake.

What I know now is that I want to use either I2C or SPI to connect the CPU to the peripherals (I don’t think UART is suitable, because drives and Ethernet are quite high-speed devices).

Then I want to know whether ARM processors need a BIOS like x86 does.

And finally, I want to know how to carry audio, video, and Ethernet over PCIe.

hmmm, nice.

So the system will check what is written in the IC/controller’s memory and recognize what it is.

And for the connections, I’d use whatever buses are available on the CPU’s I/O interfaces, right?

I will read the wiki article you provided carefully. Thanks for your time.

Great info. I’ve had a quick look at device tree and it seems to provide a lot of info.
Also, I’m only interested in Linux and Android (de-googled is best).

Thanks to all of you.
I also think I should get the documentation for the CPU I want to use (an ARM CPU) and read it carefully alongside its application notes.

Thanks again, you healthy community :innocent: . I’ll make sure to update you with the result.


Part of my day job is bringing up Linux SoMs on custom baseboards :wink: What I actually need to do is:

  1. Make sense of SoM vendor BSP
  2. Patch U-Boot (the bootloader) with info about the board
  3. Write the part of device tree which describes our baseboard

Sounds simple; it isn’t. That said, I don’t handle USB: the SoM part of the device tree (provided by the vendor) configures the USB host peripheral in the SoC and the pins, and the kernel handles whatever device is connected by itself. Worst case, I need to enable the drivers in the kernel config.


Device segregation to make virtualization stuff easier.


Um, what are SoM and BSP, please?

SoM - system on module. Basically, you buy a small board with the CPU, RAM, and eMMC (and sometimes WiFi), which you then socket into your own baseboard. This way you skip the hardest parts of the design. Examples would be the Raspberry Pi CM3 or CM4.

BSP - board support package - software the vendor gives you to make writing code for their hardware easier, or sometimes even possible.


@Abode091, one more thing I just remembered: many, many SoMs on the market don’t follow any standard, which, in these times of shortage, sucks. There are SoMs that follow standards, although they’re a bit more expensive. For those standards, see https://sget.org


Also, keep in mind that x86/amd64/… PC platforms have a ton of legacy stuff that modern devices and modern ARM/PowerPC/RISC-V platforms don’t need to concern themselves with. And there are lots of PC-specific configuration mechanisms that are generally not strictly required, but were a good idea at the time.

e.g. a separate I/O address space, distinct from the memory address space (modern stuff is all memory-mapped).

e.g. things like BIOS and UEFI and so on aren’t really a thing on most TVs, cameras, traffic lights, cars, smart lights, or washing machines. After very low-level CPU power-on and bring-up, they usually just start executing instructions from SPI NOR flash: first disabling debugging, configuring clocks, and initializing some CPU cache; then copying parts of the bootloader from the SPI NOR (usually) into the CPU caches and jumping into the addresses backed by the cache in order to initialize RAM. Although this all varies too.

PCIe communication, on both the CPU side and the device side, mostly boils down to something that looks like reading/writing memory.


how would I connect the CPU to an Ethernet controller, drive controllers, etc.? Is it I2C, SPI, etc.?

Most devices on a modern x86 PC are connected, directly or indirectly, via PCIe physically, but much of the older PCI software standard remains in use; really, all that changed between PCI and PCIe is the electrical interface.

Like you mention, some devices are connected by different buses, but those buses hang off PCI devices which the processor communicates with.

For example, my machine has a PCI device which acts as an SMBus (a subset of I2C) interface:

00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)

On most small/cheap machines (like early Raspberry Pis), there is no PCI interface at all: devices use a mixture of different interfaces which have no plug-n-play/autodetection standards defined. These usually use a data structure called a devicetree, which is written specifically for the particular board. The devicetree binary may be passed from the bootloader to the kernel at boot time to dynamically configure the kernel for the hardware, or be statically compiled into the kernel.
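For a feel of what a devicetree describes, here is a hypothetical fragment declaring a non-enumerable I2C device. The sensor node is invented for illustration ("ti,tmp102" is a real compatible string for a TI I2C temperature sensor), and the i2c1 controller is assumed to be defined in the SoC’s own include file:

```
&i2c1 {
    status = "okay";

    /* Hypothetical temperature sensor at I2C address 0x48 */
    sensor@48 {
        compatible = "ti,tmp102";
        reg = <0x48>;
    };
};
```

The kernel can’t probe the I2C bus to discover this device, so the tree tells it exactly what is wired where, and the compatible string picks the driver.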

after the system gets power and performs POST, how does it know that the device in one PCIe slot is a graphics card, another is a USB controller, a third is a 10 Gbps Ethernet controller, etc.?

(About the best resource for this is PCI - OSDev Wiki.)

PCI can be implemented on any architecture, but there are parts which are architecture specific.

The standard part begins with a PCI bus scan. Every PCI device has a bus number, device number, and function number. You see this in Linux lspci output, like the example above: 00:14.0 is bus 0, device 14h (hex), function 0.

The BIOS or UEFI firmware configures the PCI controller (in PCIe nomenclature, the “root complex”) to assign device numbers to physical slots, or partial slots (as with PCIe bifurcation), at system boot, using its board-specific configuration along with an electrical-level probe of the physical slots to determine which devices are physically connected and which PCI capabilities they have.

When the operating system / kernel boots, it performs a PCI scan using the software interface.

For every possible bus/device/function number, the kernel reads the first dword of the device’s configuration space header. The first 16 bits are the vendor ID of the PCI device.

If a device doesn’t exist, the read comes back as all 1s (on legacy PCI the bus would physically float, so reads in the high-impedance state return binary 1s), so a vendor ID of 0xFFFF indicates the device is not present and is skipped.

If the vendor ID is valid, the rest of the header is read to get the 16-bit device ID, which is allocated by the device vendor to indicate a particular model, and then the class code, which indicates any standard PCI device classes the device implements (such as a PCI bridge, VGA controller, or network device).

Once the list of devices is discovered, then the task of resource allocation must be done.

For each PCI function found, an all-1s value (32- or 64-bit) is written to each memory BAR (Base Address Register), the register that indicates where in the address space the device’s memory is mapped. The device returns zeros in all the low address bits it does not decode, which reveals the size of the address range the device uses.

The OS builds a list of the memory sizes of all the devices (a device may have multiple regions of different types) and decides how to map them into the physical address space. Once the OS has allocated resources, it writes the base address into each BAR of each device and configures memory paging so that write-combining or caching is set appropriately (some types of region might be fine to cache; some may be pure MMIO, which would be a disaster to cache).

How to actually implement this in the kernel depends on the architecture.

On x86, before UEFI and ACPI, the BIOS and OS would write a device address to I/O port 0xCF8, then read/write I/O port 0xCFC to access that device’s configuration space.

Since UEFI and ACPI (and PCIe), a special ACPI table called MCFG is created by the firmware and made available to the kernel; it contains the physical address of the MMIO interface of the PCI controller (PCI Express - OSDev Wiki). This mechanism is 64-bit capable and available on architectures which do not implement port I/O (some RISCs had to emulate port I/O in the PCI controller for legacy PCI).

To get started with examples in Linux, check pci_legacy_init (arch/x86/pci/legacy.c), which calls PCI functions common to all architectures: pci_scan_root_bus / pci_scan_child_bus / pci_scan_child_bus_extend in drivers/pci/probe.c.


First of all, let me thank you for the time and effort you’ve put into this answer; it was quite informative.

Second, I have a question: if I choose a well-known CPU package (say an ARM CPU with well-known Cortex cores and an off-the-shelf SoC), then I don’t have to get into the firmware/kernel stuff; everything should already exist in the Linux kernel, right?

With ARM, realistically, you’ll be getting an NXP or Rockchip SoC, or more likely a SoM with one (do you want to deal with memory timings?). All the drivers are there in the kernel. What you will still need to do (and it’s unavoidable) is:

  1. Adapt vendor code in the bootloader from a devkit to your board
  2. Add the non-enumerable stuff from your baseboard to the device tree
  3. Maybe mess with kernel config to enable drivers you need

Yes, an STM MPU is an option, but those are old, slow cores, and they’re new to the market.
