Porting Linux to a new processor architecture, part 2: The early code

In part 1 of this series, we laid the groundwork for porting Linux to a new processor architecture by explaining the (non-code-related) preliminary steps. This article continues from there to delve into the boot code. This includes what code needs to be written in order to get from the early assembly boot code to the creation of the first kernel thread.

The header files

As briefly mentioned in the previous article, the arch header files (in my case, located under linux/arch/tsar/include/) constitute the two interfaces between the architecture-specific and architecture-independent code required by Linux.

The first portion of these headers (subdirectory asm/) is part of the kernel interface and is used internally by the kernel source code. The second portion (uapi/asm/) is part of the user interface and is meant to be exported to user space—even though the various standard C libraries tend to reimplement the headers instead of including the exported ones. These interfaces are not completely airtight, as many of the asm headers are used by user space.

Together, both interfaces typically comprise more than a hundred header files, which is why headers represent one of the biggest tasks in porting Linux to a new processor architecture. Fortunately, over the past few years, developers noticed that many processor architectures were sharing similar code (because they often exhibited the same behaviors), so the majority of this code has been aggregated into a generic layer of header files (in linux/include/asm-generic/ and linux/include/uapi/asm-generic/).

The real benefit is that it is possible to refer to these generic header files, instead of providing custom versions, by simply writing appropriate Kbuild files. For example, the first few lines of a typical include/asm/Kbuild look like:

    generic-y += atomic.h
    generic-y += barrier.h
    generic-y += bitops.h
    ...

When porting Linux, I'm afraid there is no other choice than to make a list of all of the possible headers and examine them one by one in order to decide whether the generic version can be used or if it requires customization. Such a list can be created from the generic headers already provided by Linux as well as the customized ones implemented by other architectures.

Basically, a specific version must be developed for all of the headers that are related to the details of an architecture, as defined by the hardware or by the software through the ABI: cache (asm/cache.h) and TLB management (asm/tlbflush.h), the ELF format (asm/elf.h), interrupt enabling/disabling (asm/irqflags.h), page table management (asm/page.h, asm/pgalloc.h, asm/pgtable.h), context switching (asm/mmu_context.h, asm/ptrace.h), byte ordering (uapi/asm/byteorder.h), and so on.
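
As a taste of what the simplest of these custom headers look like, here is a hedged sketch of an asm/cache.h, assuming 64-byte cache lines (the value is an illustration, not a property of any particular architecture):

    #ifndef _ASM_TSAR_CACHE_H
    #define _ASM_TSAR_CACHE_H

    /* used by the kernel to align hot data structures on cache-line
       boundaries and avoid false sharing */
    #define L1_CACHE_SHIFT  6       /* assuming 64-byte cache lines */
    #define L1_CACHE_BYTES  (1 << L1_CACHE_SHIFT)

    #endif /* _ASM_TSAR_CACHE_H */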

Boot sequence

As explained in part 1, figuring out the boot sequence helps to understand the minimal set of architecture-specific functions that must be implemented—and in which order.

The boot sequence always starts with a function that must be written manually, usually in assembly code (in my case, this function is called kernel_entry() and is located in arch/tsar/kernel/head.S). It is defined as the main entry point of the kernel image, which indicates to the bootloader where to jump after loading the image in memory.

The following trace shows an excerpt of the sequence of functions that is executed during the boot (starred functions are the architecture-specific ones that will be discussed later in this article):

    kernel_entry*
    start_kernel
        setup_arch*
        trap_init*
        mm_init
            mem_init*
        init_IRQ*
        time_init*
        rest_init
            kernel_thread
            kernel_thread
            cpu_startup_entry

Early assembly boot code

The early assembly boot code has this special aura that scared me at first (as I'm sure it did many other programmers), since it is often considered one of the most complex pieces of code in a port. But even though writing assembly code is usually not an easy ride, this early boot code is not magic. It is merely a trampoline to the first architecture-independent C function and, to this end, only needs to perform a short and defined list of tasks.

When the early boot code begins execution, it knows nothing about what has happened before: Has the system been rebooted or just been powered on? Which bootloader has just loaded the kernel in memory? And so forth. For this reason, it is safer to put the processor into a known state. Resetting one or several system registers usually does the trick, making sure that the processor is operating in kernel mode with interrupts disabled.

Similarly, not much is known about the state of the memory. In particular, there is no guarantee that the portion of memory representing the kernel’s bss section (the section containing uninitialized data) was reset to zero, which is why this section must be explicitly cleared.
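
On most ports this clearing is a simple loop in the assembly code, but it can also be done in an early C helper, as x86 does. A minimal sketch, assuming the linker script exports the __bss_start and __bss_stop symbols delimiting the section:

    extern char __bss_start[], __bss_stop[];

    static void __init clear_bss(void)
    {
        /* zero everything between the two linker-provided symbols */
        memset(__bss_start, 0, __bss_stop - __bss_start);
    }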

Linux often receives arguments from the bootloader (in the same way that a program receives arguments when it is launched). For example, this could be the memory address of a flattened device tree (on ARM, MicroBlaze, OpenRISC, etc.) or some other architecture-specific structure. Such arguments are typically passed in registers and need to be saved into proper kernel variables.

At this point, virtual memory has not been activated and it is interesting to note that kernel symbols, which are all defined in the kernel's virtual address space, have to be accessed through a special macro: __pa() in x86, tophys() in OpenRISC, etc. Such a macro translates the virtual memory address of a symbol into its corresponding physical memory address, thus acting as a temporary software-based translation mechanism.
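
With the direct-mapping strategy described in part 1, such a macro boils down to simple address arithmetic. A minimal sketch, assuming the kernel is linked at the virtual address PAGE_OFFSET (0xC0000000 here) and loaded at physical address 0x00000000:

    #define PAGE_OFFSET     0xC0000000UL

    /* translate a kernel virtual address into its physical counterpart */
    #define __pa(vaddr)     ((unsigned long)(vaddr) - PAGE_OFFSET)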

Now, in order to enable virtual memory, a page table structure must be set up from scratch. This structure usually exists as a static variable in the kernel image, since at this stage it is nearly impossible to allocate memory. For the same reason, only the kernel image can be mapped by the page table at first, using huge pages if possible. By convention, this initial page table structure is called swapper_pg_dir and is thereafter used as the reference page table structure throughout the execution of the system.

On many processor architectures, including TSAR, there is an interesting twist to mapping the kernel: it actually needs to be mapped twice. The first mapping implements the expected direct-mapping strategy described in part 1 (i.e. an access to virtual address 0xC0000000 redirects to physical address 0x00000000). However, another mapping is temporarily required for the window when virtual memory has just been enabled but the code execution flow hasn't yet jumped to a virtually mapped location. This second mapping is a simple identity mapping (i.e. an access to virtual address 0x00000000 redirects to physical address 0x00000000).
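
In practice, this part of the boot code is written in assembly and accesses swapper_pg_dir through the translation macro mentioned above, but its effect can be summarized by the following hedged C sketch, assuming huge-page entries at the first level of the page table and a hypothetical pgd_huge_entry() helper that forges such an entry from a physical address:

    /* the reference page table, statically allocated in the kernel image */
    pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__((aligned(PAGE_SIZE)));

    static void __init setup_initial_page_table(void)
    {
        /* identity mapping: VA 0x00000000 -> PA 0x00000000, used only
           until execution jumps to a virtually mapped location */
        swapper_pg_dir[pgd_index(0x00000000UL)] = pgd_huge_entry(0x00000000UL);

        /* direct mapping: VA 0xC0000000 -> PA 0x00000000, where the
           kernel will execute for the rest of the system's life */
        swapper_pg_dir[pgd_index(0xC0000000UL)] = pgd_huge_entry(0x00000000UL);
    }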

With an initialized page table structure, it is now possible to enable virtual memory, meaning that the kernel is fully executing in the virtual address space and that all of the kernel symbols can be accessed normally by their name, without having to use the translation macro mentioned earlier.

One of the last steps is to set up the stack register with the address of the initial kernel stack so that C functions can be properly called. In most processor architectures (SPARC, Alpha, OpenRISC, etc.), another register is also dedicated to containing a pointer to the current thread's information (struct thread_info). Setting up such a pointer is optional, since it can be derived from the current kernel stack pointer (the thread_info structure is usually located at the bottom of the kernel stack) but, when allowed by the architecture, it enables much faster and more convenient access.
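
When no register can be dedicated to it, the derivation is a simple mask operation, because kernel stacks are THREAD_SIZE-aligned and thread_info sits at their bottom. A hedged sketch of such a fallback, as it could appear in asm/thread_info.h (the inline assembly reading the stack pointer register is a MIPS-like placeholder):

    static inline struct thread_info *current_thread_info(void)
    {
        unsigned long sp;

        /* read the stack pointer register (architecture-specific) */
        asm volatile ("move %0, $sp" : "=r" (sp));

        /* thread_info lives at the bottom of the THREAD_SIZE-aligned stack */
        return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
    }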

The last step of the early boot code is to jump to the first architecture-independent C function that Linux provides: start_kernel().

En route to the first kernel thread

start_kernel() is where many subsystems are initialized, from the various virtual filesystem (VFS) caches and the security framework to time management, the console layer, and so on. Here, we will look at the main architecture-specific functions that start_kernel() calls during boot before it finally calls rest_init(), which creates the first two kernel threads and morphs into the boot idle thread.

setup_arch()

While it has a rather generic name, setup_arch() can actually do quite a bit, depending on the architecture. Yet examining the code for different ports reveals that it generally performs the same tasks, albeit never in the same order nor the same way. For a simple port (with device tree support), there is a simple skeleton that setup_arch() can follow.

One of the first steps is to discover the memory ranges in the system. A device-tree-based system can quickly skim through the flattened device tree given by the bootloader (using early_init_devtree()) to discover the physical memory banks available and to register them into the memblock layer. Then, parsing the early arguments (using parse_early_param()) that were either given by the bootloader or directly included in the device tree can activate useful features such as early_printk(). The order is important here, as the device tree might contain the physical address of the terminal device used for printing and thus needs to be scanned first.

Next, the memblock layer needs some more configuration before it is possible to map the low memory region, which enables memory to be allocated. First, the regions of memory occupied by the kernel image and the device tree are marked as reserved in order to remove them from the pool of free memory, which is later released to the buddy allocator. Then, the boundary between low memory and high memory (i.e. which portion of the physical memory should be included in the direct mapping region) needs to be determined. Finally, the page table structure can be cleaned up (by removing the identity mapping created by the early boot code) and the low memory mapped.

The last step of the memory initialization is to configure the memory zones. Physical memory pages can be associated with different zones: ZONE_DMA for pages compatible with the old ISA 24-bit DMA address limitation, and ZONE_NORMAL and ZONE_HIGHMEM for low- and high-memory pages, respectively. Further reading on memory allocation in Linux can be found in Linux Device Drivers [PDF].
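
Concretely, configuring the zones mostly amounts to computing the highest page frame number of each zone and handing the resulting array over to the memory-management core. A hedged sketch for a 32-bit system without ZONE_DMA (note that the name of the final helper has varied across kernel versions; recent kernels use free_area_init()):

    static void __init zone_sizes_init(void)
    {
        unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

        max_zone_pfn[ZONE_NORMAL] = max_low_pfn;    /* end of low memory */
    #ifdef CONFIG_HIGHMEM
        max_zone_pfn[ZONE_HIGHMEM] = max_pfn;       /* end of physical memory */
    #endif
        free_area_init(max_zone_pfn);
    }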

Finally, the kernel memory segments are registered using the resource API and a tree of struct device_node entries is created from the flattened device tree.
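
Putting these steps together, a minimal device-tree-based setup_arch() could follow a skeleton like the sketch below; early_init_devtree(), parse_early_param(), and unflatten_device_tree() are the generic functions mentioned above, while dt_ptr (the device tree address saved by the early boot code) and the other helpers are hypothetical arch-specific pieces wrapping the steps just described:

    void __init setup_arch(char **cmdline_p)
    {
        /* discover the physical memory banks and register them in memblock */
        early_init_devtree(dt_ptr);     /* FDT address saved by head.S */

        /* parse early arguments, possibly enabling early_printk() */
        parse_early_param();

        /* reserve the kernel image and device tree, set the lowmem/highmem
           boundary, clean up the page table, and map low memory */
        bootmem_init();                 /* hypothetical */

        /* configure the memory zones */
        zone_sizes_init();              /* hypothetical, sketched above */

        /* register the kernel memory segments with the resource API and
           build the tree of struct device_node entries */
        resources_init();               /* hypothetical */
        unflatten_device_tree();

        *cmdline_p = boot_command_line;
    }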

If early_printk() is enabled, here is an example of what appears on the terminal at this stage:

Linux version 3.13.0-00201-g7b7e42b-dirty (joel@joel-zenbook) 
    (gcc version 4.8.3 (GCC) ) #329 SMP Thu Sep 25 14:17:56 CEST 2014
Model: UPMC/LIP6/SoC - Tsar
bootconsole [early_tty_cons0] enabled
Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 65024
Kernel command line: console=tty0 console=ttyVTTY0 earlyprintk

trap_init()

The role of trap_init() is to configure the hardware and software architecture-specific parts involved in the interrupt/exception infrastructure. Up to this point, an exception would either cause the system to crash immediately or it would be caught by a handler that the bootloader might have set up (which would eventually result in a crash as well, but perhaps with more information).

Behind (the actually simple) trap_init() hides another of the more complex pieces of code in a Linux port: the interrupt/exception handling manager. A big part of it has to be written in assembly code because, as with the early boot code, it deals with specifics that are unique to the targeted processor architecture. On a typical processor, a possible overview of what happens on an interrupt is as follows:

The processor automatically switches to kernel mode, disables interrupts, and its execution flow is diverted to a special address that leads to the main interrupt handler.

This main handler retrieves the exact cause of the interrupt and usually jumps to a sub-handler specialized for this cause. Often an interrupt vector table is used to associate an interrupt sub-handler with a specific cause, and on some architectures there is no need for a main interrupt handler, as the routing between the actual interrupt event and the interrupt vector is done transparently by hardware.

The sub-handler saves the current context, which is the state of the processor that can later be restored in order to resume exactly where it stopped. It may also re-enable the interrupts (thus making Linux re-entrant) and usually jumps to a C function that is better able to handle the cause of the exception. For example, such a C function can, in the case of an access to an illegal memory address, terminate the faulty user program with a SIGBUS signal.
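
For example, here is a hedged sketch of what the C function for a hypothetical bus-error exception could look like; force_sig_fault() is the modern helper for delivering such signals (older kernels used force_sig_info()):

    asmlinkage void do_bus_error(struct pt_regs *regs, unsigned long address)
    {
        if (user_mode(regs)) {
            /* terminate the faulty user program with a SIGBUS signal */
            force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)address);
            return;
        }

        /* a bus error in kernel mode is not recoverable at this point */
        panic("bus error in kernel mode, address: %08lx", address);
    }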

Once all of this interrupt infrastructure is in place, trap_init() merely initializes the interrupt vector table and configures the processor via one of its system registers to reflect the address of the main interrupt handler (or of the interrupt vector table directly).

mem_init()

The main role of mem_init() is to release the free memory from the memblock layer to the buddy allocator (aka the page allocator). This represents the last memory-related task before the slab allocator (i.e. the cache of commonly used objects, accessible via kmalloc()) and the vmalloc infrastructure can be started, as both are based on the buddy allocator.
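
A minimal mem_init() can therefore be quite short; this hedged sketch assumes the memblock layer was fully populated in setup_arch() (the helper names have changed over time: older kernels used free_all_bootmem(), and mem_init_print_info() once took an argument):

    void __init mem_init(void)
    {
        /* hand every free memblock region over to the buddy allocator */
        memblock_free_all();

        /* print the "Memory: ..." banner shown below */
        mem_init_print_info();
    }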

Often mem_init() also prints some information about the memory system:
mem_init() 通常还会打印一些关于内存系统的信息:

Memory: 257916k/262144k available (1412k kernel code, 
    4228k reserved, 267k data, 84k bss, 169k init, 0k highmem)
Virtual kernel memory layout:
    vmalloc : 0xd0800000 - 0xfffff000 ( 759 MB)
    lowmem  : 0xc0000000 - 0xd0000000 ( 256 MB)
      .init : 0xc01a5000 - 0xc01ba000 (  84 kB)
      .data : 0xc01621f8 - 0xc01a4fe0 ( 267 kB)
      .text : 0xc00010c0 - 0xc01621f8 (1412 kB)

init_IRQ()

Interrupt networks can be of very different sizes and complexities. In a simple system, the interrupt lines of a few hardware devices are directly connected to the interrupt inputs of the processor. In complex systems, the numerous hardware devices are connected to multiple programmable interrupt controllers (PICs) and these PICs are often cascaded to each other, forming a multilayer interrupt network. The device tree helps a great deal by easily describing such networks (and especially the routing) instead of having to specify them directly in the source code.

In init_IRQ(), the main task is to call irqchip_init() in order to scan the device tree and find all the nodes identified as interrupt controllers (e.g. PICs). It then finds the associated driver for each node and initializes it. Unless the targeted system uses an already-supported interrupt controller, that typically means the first device driver will need to be written.
init_IRQ() 中,主要任务是调用 irqchip_init() 来扫描设备树,找出所有标识为中断控制器(如 PIC)的节点。接着,它会为每个节点找到相应的驱动并进行初始化。除非目标系统使用的是已被支持的中断控制器,否则通常意味着必须为其编写第一个设备驱动程序。

Such a driver contains a few major functions: an initialization function that maps the device in the kernel address space and maps the controller-local interrupt lines to the Linux IRQ number space (through the irq_domain mapping library); a mask/unmask function that can configure the controller in order to mask or unmask the specified Linux IRQ number; and, finally, a controller-specific interrupt handler that can find out which of its inputs is active and call the interrupt handler registered with this input (for example, this is how the interrupt handler of a block device connected to a PIC ends up being called after the device has raised an interrupt).
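
Below is a hedged skeleton of such a driver for a hypothetical PIC with 32 inputs matching an "acme,pic" device tree node; the register offset is an assumption and the controller-specific interrupt handler is omitted for brevity:

    #include <linux/io.h>
    #include <linux/irq.h>
    #include <linux/irqchip.h>
    #include <linux/irqdomain.h>
    #include <linux/of_address.h>

    #define PIC_MASK        0x00    /* hypothetical mask register offset */

    static void __iomem *pic_base;
    static struct irq_domain *pic_domain;

    static void pic_mask_irq(struct irq_data *d)
    {
        u32 val = readl(pic_base + PIC_MASK);
        writel(val & ~BIT(d->hwirq), pic_base + PIC_MASK);
    }

    static void pic_unmask_irq(struct irq_data *d)
    {
        u32 val = readl(pic_base + PIC_MASK);
        writel(val | BIT(d->hwirq), pic_base + PIC_MASK);
    }

    static struct irq_chip pic_chip = {
        .name           = "acme-pic",
        .irq_mask       = pic_mask_irq,
        .irq_unmask     = pic_unmask_irq,
    };

    static int pic_map(struct irq_domain *d, unsigned int virq,
                       irq_hw_number_t hw)
    {
        irq_set_chip_and_handler(virq, &pic_chip, handle_level_irq);
        return 0;
    }

    static const struct irq_domain_ops pic_domain_ops = {
        .map    = pic_map,
        .xlate  = irq_domain_xlate_onecell,
    };

    static int __init pic_init(struct device_node *node,
                               struct device_node *parent)
    {
        /* map the device into the kernel address space */
        pic_base = of_iomap(node, 0);

        /* map the 32 controller-local lines to the Linux IRQ number space */
        pic_domain = irq_domain_add_linear(node, 32, &pic_domain_ops, NULL);
        return 0;
    }
    IRQCHIP_DECLARE(acme_pic, "acme,pic", pic_init);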

time_init()

The purpose of time_init() is to initialize the architecture-specific aspects of the timekeeping infrastructure. A minimal version of this function, which relies on the use of a device tree, only involves two function calls.

First, of_clk_init() will scan the device tree and find all the nodes identified as clock providers in order to initialize the clock framework. A very simple clock-provider node only has to define a fixed frequency directly specified as one of its properties.

Then, clocksource_of_init() will parse the clock-source nodes of the device tree and initialize their associated drivers. As described in the kernel documentation, Linux actually needs two types of timekeeping abstraction (often both provided by the same device): a clock-source device provides the basic timeline by monotonically counting (for example, it can count system cycles), and a clock-event device raises interrupts at certain points on this timeline, typically by being programmed to count periods of time. Combined with the clock provider, this allows for precise timekeeping.

The driver of a clock-source device can be extremely simple, especially for a memory-mapped device for which the generic MMIO clock-source driver only needs to know the address of the device register containing the counter. For the clock event, it is slightly more complicated as the driver needs to define how to program a period and how to acknowledge it when it is over, as well as provide an interrupt handler for when a timer interrupt is raised.
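
For instance, registering such a counter through the generic MMIO clock-source driver is a one-liner; in this hedged sketch, counter_base (the mapped address of a 32-bit up-counting register) and rate (its frequency in Hz) are assumed to have been obtained from the device tree:

    /* register a free-running 32-bit up-counter with rating 300 */
    clocksource_mmio_init(counter_base, "acme-counter", rate,
                          300, 32, clocksource_mmio_readl_up);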

Conclusion

One of the main tasks performed by start_kernel() later on is to calibrate the number of loops per jiffy, which is the number of times the processor can execute an internal delay loop in one jiffy—an internal timer period that normally ranges from one to ten milliseconds. Succeeding in performing this calibration should mean that the different infrastructures and drivers set up by the architecture-specific functions we just presented are working, since the calibration makes use of most of them.

In the next article, we will present the last portion of the port: from the creation of the first kernel thread to the init process.