Tue, 13 Mar 2007

Andi has resisted the PDA->percpu conversion for i386, partly because x86_64 has a PDA. So the obvious answer is to convert the x86_64 PDA to the percpu section as well.
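
For background (this isn't the patch itself): the x86_64 PDA is a small per-processor structure reached through %gs and accessed with the read_pda()/write_pda() macros, so the conversion amounts to turning its fields into ordinary per-cpu variables. Here's a minimal sketch of the idea, assuming the 2.6.21-era macro names (read_pda, DEFINE_PER_CPU, __get_cpu_var); the per-cpu variable name is just illustrative:

  /* Sketch only (kernel context): moving one PDA field into the percpu
   * section.  Macro names follow the 2.6.21-era headers; "cpu_number"
   * is an illustrative variable name, not the actual patch. */

  #include <linux/percpu.h>
  #include <asm/pda.h>

  /* Before: the field lives in struct x8664_pda, read %gs-relative. */
  static inline int cpu_number_from_pda(void)
  {
          return read_pda(cpunumber);
  }

  /* After: the same data as an ordinary per-cpu variable. */
  DEFINE_PER_CPU(int, cpu_number);

  static inline int cpu_number_from_percpu(void)
  {
          return __get_cpu_var(cpu_number);
  }

The point being that once the data is a normal per-cpu variable, generic code can use the same accessors on i386 and x86_64 instead of special-casing the PDA.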

This is my first x86_64 experience, and I hit a serious snag when I did "make ARCH=x86_64":

In file included from include/asm/system.h:4,
                 from include/asm/processor.h:18,
                 from include/asm/atomic.h:5,
                 from include/linux/crypto.h:20,
                 from arch/x86_64/kernel/asm-offsets.c:7:
include/linux/kernel.h:115: warning: conflicting types for built-in function 'snprintf'
include/linux/kernel.h:117: warning: conflicting types for built-in function 'vsnprintf'
...
And lots of similar errors. It turns out my include/asm symlink was still pointing to include/asm-i386; "make distclean" fixed that.

Once booted into my x86_64 kernel, kernel compiles seemed faster, so I thought I'd benchmark it: compiling similarly-configured 32-bit and 64-bit kernels, running under both 32-bit and 64-bit kernels. This is a 2.13GHz Core 2 Duo with 4G of RAM running 2.6.21-rc3-mm2 (HIGHMEM4G enabled on i386), compiling 2.6.21-rc3-git1 with "make -j4".

Running a 64-bit kernel:

  • Compiling a 64-bit kernel (median of three): 6m17s
  • Compiling a 32-bit kernel (median of three): 6m50s

Running a 32-bit kernel:

  • Compiling a 64-bit kernel (median of three): 6m19s
  • Compiling a 32-bit kernel (median of three): 6m54s

In a nutshell: no performance difference between running a 32-bit and a 64-bit kernel, it's just that compiling an x86-64 kernel is faster. Given that there are almost exactly the same number of .o files in each case, and the x86-64 vmlinux is 5% bigger, I'd suspect that gcc is having an easier time compiling x86_64 code.


[/tech] permanent link