Subject: Make section names compatible with -ffunction-sections -fdata-sections
Date: Fri, 5 Dec 2008 19:03:54 -0500
From: Denys Vlasenko <vda.linux@googlemail.com>

This patch was originally written by Denys Vlasenko, but it has been
substantially modified by Anders Kaseorg and Tim Abbott to update it
to apply to the current kernel and fix bugs.  Any bugs were added by
Anders Kaseorg and Tim Abbott.

The purpose of this patch is to make it possible to build the kernel
with "gcc -ffunction-sections -fdata-sections".

The problem is that with -ffunction-sections -fdata-sections, gcc
creates sections like .text.head and .data.nosave whenever someone has
innocuous code like this:

static void head(...) {...}

or this:

static int nosave = 1;

somewhere in the kernel.

The kernel linker scripts match such names with their wildcard input-section
rules, and thus place these sections in the wrong output sections.

This patch renames all "magic" section names used by the kernel to not
have this format, eliminating the possibility of such collisions.

Ksplice's 'run-pre matching' process is much simpler if the original
kernel was compiled with -ffunction-sections and -fdata-sections.

Changelog since Denys Vlasenko's latest patch:
* Renamed .kernel.exit.text.refok to .kernel.text.exit.refok, for
better analogy with .kernel.*.init.refok.
* Updated from 2.6.27-rc4-git3 to 2.6.28-rc5.
* Added some missing section flags for some of the .kernel.* sections
(gcc assumes "ax" for sections with names starting with ".text", but
does not for sections with names starting with ".kernel.text").

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
[andersk@mit.edu] Add missing section flags for .kernel.* sections.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
[tabbott@mit.edu] Update from 2.6.27-rc4-git3 to 2.6.28-rc5.
[tabbott@mit.edu] Rename .kernel.exit.text.refok.
Signed-off-by: Tim Abbott <tabbott@mit.edu>
Tested-by: Waseem Daher <wdaher@mit.edu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
---
 Documentation/mutex-design.txt              |    4 +-
 arch/alpha/kernel/head.S                    |    2 -
 arch/alpha/kernel/init_task.c               |    2 -
 arch/alpha/kernel/vmlinux.lds.S             |   14 ++++----
 arch/arm/kernel/head-nommu.S                |    3 +
 arch/arm/kernel/head.S                      |    3 +
 arch/arm/kernel/init_task.c                 |    2 -
 arch/arm/kernel/vmlinux.lds.S               |   14 ++++----
 arch/arm/mm/proc-v6.S                       |    2 -
 arch/arm/mm/proc-v7.S                       |    2 -
 arch/arm/mm/tlb-v6.S                        |    2 -
 arch/arm/mm/tlb-v7.S                        |    2 -
 arch/avr32/kernel/init_task.c               |    2 -
 arch/avr32/kernel/vmlinux.lds.S             |    6 +--
 arch/avr32/mm/init.c                        |    2 -
 arch/blackfin/kernel/vmlinux.lds.S          |    2 -
 arch/cris/kernel/process.c                  |    2 -
 arch/frv/kernel/break.S                     |    4 +-
 arch/frv/kernel/entry.S                     |    2 -
 arch/frv/kernel/head-mmu-fr451.S            |    2 -
 arch/frv/kernel/head-uc-fr401.S             |    2 -
 arch/frv/kernel/head-uc-fr451.S             |    2 -
 arch/frv/kernel/head-uc-fr555.S             |    2 -
 arch/frv/kernel/head.S                      |    4 +-
 arch/frv/kernel/init_task.c                 |    2 -
 arch/frv/kernel/vmlinux.lds.S               |   18 +++++-----
 arch/frv/mm/tlb-miss.S                      |    2 -
 arch/h8300/boot/compressed/head.S           |    2 -
 arch/h8300/boot/compressed/vmlinux.lds      |    2 -
 arch/h8300/kernel/init_task.c               |    2 -
 arch/h8300/kernel/vmlinux.lds.S             |    2 -
 arch/ia64/include/asm/asmmacro.h            |   12 +++----
 arch/ia64/include/asm/cache.h               |    2 -
 arch/ia64/include/asm/percpu.h              |    2 -
 arch/ia64/kernel/Makefile                   |    2 -
 arch/ia64/kernel/gate-data.S                |    2 -
 arch/ia64/kernel/gate.S                     |    8 ++--
 arch/ia64/kernel/gate.lds.S                 |   10 ++---
 arch/ia64/kernel/head.S                     |    2 -
 arch/ia64/kernel/init_task.c                |    4 +-
 arch/ia64/kernel/ivt.S                      |    2 -
 arch/ia64/kernel/minstate.h                 |    4 +-
 arch/ia64/kernel/paravirtentry.S            |   12 +++----
 arch/ia64/kernel/vmlinux.lds.S              |   48 ++++++++++++++--------------
 arch/ia64/kvm/vmm_ivt.S                     |    2 -
 arch/ia64/xen/xensetup.S                    |    2 -
 arch/m32r/kernel/init_task.c                |    2 -
 arch/m32r/kernel/vmlinux.lds.S              |    6 +--
 arch/m68k/kernel/head.S                     |    2 -
 arch/m68k/kernel/process.c                  |    2 -
 arch/m68k/kernel/sun3-head.S                |    2 -
 arch/m68k/kernel/vmlinux-std.lds            |    6 +--
 arch/m68k/kernel/vmlinux-sun3.lds           |    4 +-
 arch/m68knommu/kernel/init_task.c           |    2 -
 arch/m68knommu/kernel/vmlinux.lds.S         |    6 +--
 arch/m68knommu/platform/68360/head-ram.S    |    2 -
 arch/m68knommu/platform/68360/head-rom.S    |    2 -
 arch/mips/kernel/init_task.c                |    2 -
 arch/mips/kernel/vmlinux.lds.S              |    8 ++--
 arch/mips/lasat/image/head.S                |    2 -
 arch/mips/lasat/image/romscript.normal      |    2 -
 arch/mn10300/kernel/head.S                  |    2 -
 arch/mn10300/kernel/init_task.c             |    2 -
 arch/mn10300/kernel/vmlinux.lds.S           |   16 ++++-----
 arch/parisc/include/asm/cache.h             |    2 -
 arch/parisc/include/asm/system.h            |    2 -
 arch/parisc/kernel/head.S                   |    2 -
 arch/parisc/kernel/init_task.c              |    8 ++--
 arch/parisc/kernel/vmlinux.lds.S            |   26 +++++++--------
 arch/powerpc/include/asm/cache.h            |    2 -
 arch/powerpc/include/asm/page_64.h          |    2 -
 arch/powerpc/include/asm/ppc_asm.h          |    4 +-
 arch/powerpc/kernel/head_32.S               |    2 -
 arch/powerpc/kernel/head_40x.S              |    2 -
 arch/powerpc/kernel/head_44x.S              |    2 -
 arch/powerpc/kernel/head_8xx.S              |    2 -
 arch/powerpc/kernel/head_fsl_booke.S        |    2 -
 arch/powerpc/kernel/init_task.c             |    2 -
 arch/powerpc/kernel/machine_kexec_64.c      |    2 -
 arch/powerpc/kernel/vdso.c                  |    2 -
 arch/powerpc/kernel/vdso32/vdso32_wrapper.S |    2 -
 arch/powerpc/kernel/vdso64/vdso64_wrapper.S |    2 -
 arch/powerpc/kernel/vmlinux.lds.S           |   28 ++++++++--------
 arch/s390/include/asm/cache.h               |    2 -
 arch/s390/kernel/head.S                     |    2 -
 arch/s390/kernel/init_task.c                |    2 -
 arch/s390/kernel/vmlinux.lds.S              |   20 +++++------
 arch/sh/include/asm/cache.h                 |    2 -
 arch/sh/kernel/cpu/sh5/entry.S              |    4 +-
 arch/sh/kernel/head_32.S                    |    2 -
 arch/sh/kernel/head_64.S                    |    2 -
 arch/sh/kernel/init_task.c                  |    2 -
 arch/sh/kernel/irq.c                        |    4 +-
 arch/sh/kernel/vmlinux_32.lds.S             |   14 ++++----
 arch/sh/kernel/vmlinux_64.lds.S             |   14 ++++----
 arch/sparc/boot/btfixupprep.c               |    4 +-
 arch/sparc/include/asm/cache.h              |    2 -
 arch/sparc/kernel/head_32.S                 |    2 -
 arch/sparc/kernel/head_64.S                 |    2 -
 arch/sparc/kernel/vmlinux.lds.S             |   12 +++----
 arch/um/include/asm/common.lds.S            |    4 +-
 arch/um/kernel/dyn.lds.S                    |    4 +-
 arch/um/kernel/init_task.c                  |    4 +-
 arch/um/kernel/uml.lds.S                    |    4 +-
 arch/x86/boot/compressed/head_32.S          |    2 -
 arch/x86/boot/compressed/head_64.S          |    2 -
 arch/x86/boot/compressed/vmlinux.scr        |    2 -
 arch/x86/boot/compressed/vmlinux_32.lds     |   14 +++++---
 arch/x86/boot/compressed/vmlinux_64.lds     |   10 +++--
 arch/x86/include/asm/cache.h                |    4 +-
 arch/x86/kernel/acpi/wakeup_32.S            |    2 -
 arch/x86/kernel/head_32.S                   |    6 +--
 arch/x86/kernel/head_64.S                   |    4 +-
 arch/x86/kernel/init_task.c                 |    4 +-
 arch/x86/kernel/traps.c                     |    2 -
 arch/x86/kernel/vmlinux_32.lds.S            |   36 ++++++++++-----------
 arch/x86/kernel/vmlinux_64.lds.S            |   26 +++++++--------
 arch/xtensa/kernel/head.S                   |    2 -
 arch/xtensa/kernel/init_task.c              |    2 -
 arch/xtensa/kernel/vmlinux.lds.S            |    6 +--
 include/asm-frv/init.h                      |    8 ++--
 include/asm-generic/vmlinux.lds.h           |   14 ++++----
 include/linux/cache.h                       |    2 -
 include/linux/init.h                        |    8 ++--
 include/linux/linkage.h                     |    4 +-
 include/linux/percpu.h                      |   12 +++----
 include/linux/spinlock.h                    |    2 -
 kernel/module.c                             |    2 -
 scripts/mod/modpost.c                       |   12 +++----
 scripts/recordmcount.pl                     |    6 +--
 130 files changed, 353 insertions(+), 343 deletions(-)

diff --git a/Documentation/mutex-design.txt b/Documentation/mutex-design.txt
--- a/Documentation/mutex-design.txt
+++ b/Documentation/mutex-design.txt
@@ -66,14 +66,14 @@ of advantages of mutexes:
 
     c0377ccb <mutex_lock>:
     c0377ccb:       f0 ff 08                lock decl (%eax)
-    c0377cce:       78 0e                   js     c0377cde <.text.lock.mutex>
+    c0377cce:       78 0e                   js     c0377cde <.kernel.text.lock.mutex>
     c0377cd0:       c3                      ret
 
    the unlocking fastpath is equally tight:
 
     c0377cd1 <mutex_unlock>:
     c0377cd1:       f0 ff 00                lock incl (%eax)
-    c0377cd4:       7e 0f                   jle    c0377ce5 <.text.lock.mutex+0x7>
+    c0377cd4:       7e 0f                   jle    c0377ce5 <.kernel.text.lock.mutex+0x7>
     c0377cd6:       c3                      ret
 
  - 'struct mutex' semantics are well-defined and are enforced if
diff --git a/arch/alpha/kernel/head.S b/arch/alpha/kernel/head.S
--- a/arch/alpha/kernel/head.S
+++ b/arch/alpha/kernel/head.S
@@ -10,7 +10,7 @@
 #include <asm/system.h>
 #include <asm/asm-offsets.h>
 
-.section .text.head, "ax"
+.section .kernel.text.head, "ax"
 .globl swapper_pg_dir
 .globl _stext
 swapper_pg_dir=SWAPPER_PGD
diff --git a/arch/alpha/kernel/init_task.c b/arch/alpha/kernel/init_task.c
--- a/arch/alpha/kernel/init_task.c
+++ b/arch/alpha/kernel/init_task.c
@@ -17,5 +17,5 @@ EXPORT_SYMBOL(init_task);
 EXPORT_SYMBOL(init_task);
 
 union thread_union init_thread_union
-	__attribute__((section(".data.init_thread")))
+	__attribute__((section(".kernel.data.init_thread")))
 	= { INIT_THREAD_INFO(init_task) };
diff --git a/arch/alpha/kernel/vmlinux.lds.S b/arch/alpha/kernel/vmlinux.lds.S
--- a/arch/alpha/kernel/vmlinux.lds.S
+++ b/arch/alpha/kernel/vmlinux.lds.S
@@ -16,7 +16,7 @@ SECTIONS
 
 	_text = .;	/* Text and read-only data */
 	.text : {
-	*(.text.head)
+	*(.kernel.text.head)
 		TEXT_TEXT
 		SCHED_TEXT
 		LOCK_TEXT
@@ -93,18 +93,18 @@ SECTIONS
 	/* Freed after init ends here */
 
 	/* Note 2 page alignment above.  */
-	.data.init_thread : {
-		*(.data.init_thread)
+	.kernel.data.init_thread : {
+		*(.kernel.data.init_thread)
 	}
 
 	. = ALIGN(PAGE_SIZE);
-	.data.page_aligned : {
-		*(.data.page_aligned)
+	.kernel.data.page_aligned : {
+		*(.kernel.data.page_aligned)
 	}
 
 	. = ALIGN(64);
-	.data.cacheline_aligned : {
-		*(.data.cacheline_aligned)
+	.kernel.data.cacheline_aligned : {
+		*(.kernel.data.cacheline_aligned)
 	}
 
 	_data = .;
diff --git a/arch/arm/kernel/head-nommu.S b/arch/arm/kernel/head-nommu.S
--- a/arch/arm/kernel/head-nommu.S
+++ b/arch/arm/kernel/head-nommu.S
@@ -32,7 +32,8 @@
  * numbers for r1.
  *
  */
-	.section ".text.head", "ax"
+	.section ".kernel.text.head", "ax"
+	.type	stext, %function
 ENTRY(stext)
 	msr	cpsr_c, #PSR_F_BIT | PSR_I_BIT | SVC_MODE @ ensure svc mode
 						@ and irqs disabled
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -74,7 +74,8 @@
  * crap here - that's what the boot loader (or in extreme, well justified
  * circumstances, zImage) is for.
  */
-	.section ".text.head", "ax"
+	.section ".kernel.text.head", "ax"
+	.type	stext, %function
 ENTRY(stext)
 	msr	cpsr_c, #PSR_F_BIT | PSR_I_BIT | SVC_MODE @ ensure svc mode
 						@ and irqs disabled
diff --git a/arch/arm/kernel/init_task.c b/arch/arm/kernel/init_task.c
--- a/arch/arm/kernel/init_task.c
+++ b/arch/arm/kernel/init_task.c
@@ -29,7 +29,7 @@ EXPORT_SYMBOL(init_mm);
  * The things we do for performance..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -23,10 +23,10 @@ SECTIONS
 #else
 	. = PAGE_OFFSET + TEXT_OFFSET;
 #endif
-	.text.head : {
+	.kernel.text.head : {
 		_stext = .;
 		_sinittext = .;
-		*(.text.head)
+		*(.kernel.text.head)
 	}
 
 	.init : {			/* Init code and data		*/
@@ -65,8 +65,8 @@ SECTIONS
 #endif
 		. = ALIGN(4096);
 		__per_cpu_start = .;
-			*(.data.percpu)
-			*(.data.percpu.shared_aligned)
+			*(.kernel.data.percpu)
+			*(.kernel.data.percpu.shared_aligned)
 		__per_cpu_end = .;
 #ifndef CONFIG_XIP_KERNEL
 		__init_begin = _stext;
@@ -125,7 +125,7 @@ SECTIONS
 		 * first, the init task union, aligned
 		 * to an 8192 byte boundary.
 		 */
-		*(.data.init_task)
+		*(.kernel.data.init_task)
 
 #ifdef CONFIG_XIP_KERNEL
 		. = ALIGN(4096);
@@ -137,7 +137,7 @@ SECTIONS
 
 		. = ALIGN(4096);
 		__nosave_begin = .;
-		*(.data.nosave)
+		*(.kernel.data.nosave)
 		. = ALIGN(4096);
 		__nosave_end = .;
 
@@ -145,7 +145,7 @@ SECTIONS
 		 * then the cacheline aligned data
 		 */
 		. = ALIGN(32);
-		*(.data.cacheline_aligned)
+		*(.kernel.data.cacheline_aligned)
 
 		/*
 		 * The exception fixup table (might need resorting at runtime)
diff --git a/arch/arm/mm/proc-v6.S b/arch/arm/mm/proc-v6.S
--- a/arch/arm/mm/proc-v6.S
+++ b/arch/arm/mm/proc-v6.S
@@ -132,7 +132,7 @@ cpu_v6_name:
 	.asciz	"ARMv6-compatible processor"
 	.align
 
-	.section ".text.init", #alloc, #execinstr
+	.section ".kernel.text.init", #alloc, #execinstr
 
 /*
  *	__v6_setup
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -153,7 +153,7 @@ cpu_v7_name:
 	.ascii	"ARMv7 Processor"
 	.align
 
-	.section ".text.init", #alloc, #execinstr
+	.section ".kernel.text.init", #alloc, #execinstr
 
 /*
  *	__v7_setup
diff --git a/arch/arm/mm/tlb-v6.S b/arch/arm/mm/tlb-v6.S
--- a/arch/arm/mm/tlb-v6.S
+++ b/arch/arm/mm/tlb-v6.S
@@ -87,7 +87,7 @@ 1:
 	mcr	p15, 0, r2, c7, c5, 4		@ prefetch flush
 	mov	pc, lr
 
-	.section ".text.init", #alloc, #execinstr
+	.section ".kernel.text.init", #alloc, #execinstr
 
 	.type	v6wbi_tlb_fns, #object
 ENTRY(v6wbi_tlb_fns)
diff --git a/arch/arm/mm/tlb-v7.S b/arch/arm/mm/tlb-v7.S
--- a/arch/arm/mm/tlb-v7.S
+++ b/arch/arm/mm/tlb-v7.S
@@ -80,7 +80,7 @@ 1:
 	mov	pc, lr
 ENDPROC(v7wbi_flush_kern_tlb_range)
 
-	.section ".text.init", #alloc, #execinstr
+	.section ".kernel.text.init", #alloc, #execinstr
 
 	.type	v7wbi_tlb_fns, #object
 ENTRY(v7wbi_tlb_fns)
diff --git a/arch/avr32/kernel/init_task.c b/arch/avr32/kernel/init_task.c
--- a/arch/avr32/kernel/init_task.c
+++ b/arch/avr32/kernel/init_task.c
@@ -23,7 +23,7 @@ EXPORT_SYMBOL(init_mm);
  * Initial thread structure. Must be aligned on an 8192-byte boundary.
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/avr32/kernel/vmlinux.lds.S b/arch/avr32/kernel/vmlinux.lds.S
--- a/arch/avr32/kernel/vmlinux.lds.S
+++ b/arch/avr32/kernel/vmlinux.lds.S
@@ -95,15 +95,15 @@ SECTIONS
 		/*
 		 * First, the init task union, aligned to an 8K boundary.
 		 */
-		*(.data.init_task)
+		*(.kernel.data.init_task)
 
 		/* Then, the page-aligned data */
 		. = ALIGN(PAGE_SIZE);
-		*(.data.page_aligned)
+		*(.kernel.data.page_aligned)
 
 		/* Then, the cacheline aligned data */
 		. = ALIGN(L1_CACHE_BYTES);
-		*(.data.cacheline_aligned)
+		*(.kernel.data.cacheline_aligned)
 
 		/* And the rest... */
 		*(.data.rel*)
diff --git a/arch/avr32/mm/init.c b/arch/avr32/mm/init.c
--- a/arch/avr32/mm/init.c
+++ b/arch/avr32/mm/init.c
@@ -24,7 +24,7 @@
 #include <asm/setup.h>
 #include <asm/sections.h>
 
-#define __page_aligned	__attribute__((section(".data.page_aligned")))
+#define __page_aligned	__attribute__((section(".kernel.data.page_aligned")))
 
 DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
 
diff --git a/arch/blackfin/kernel/vmlinux.lds.S b/arch/blackfin/kernel/vmlinux.lds.S
--- a/arch/blackfin/kernel/vmlinux.lds.S
+++ b/arch/blackfin/kernel/vmlinux.lds.S
@@ -94,7 +94,7 @@ SECTIONS
 		__sdata = .;
 		/* This gets done first, so the glob doesn't suck it in */
 		. = ALIGN(32);
-		*(.data.cacheline_aligned)
+		*(.kernel.data.cacheline_aligned)
 
 #if !L1_DATA_A_LENGTH
 		. = ALIGN(32);
diff --git a/arch/cris/kernel/process.c b/arch/cris/kernel/process.c
--- a/arch/cris/kernel/process.c
+++ b/arch/cris/kernel/process.c
@@ -51,7 +51,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union 
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/frv/kernel/break.S b/arch/frv/kernel/break.S
--- a/arch/frv/kernel/break.S
+++ b/arch/frv/kernel/break.S
@@ -21,7 +21,7 @@
 #
 # the break handler has its own stack
 #
-	.section	.bss.stack
+	.section	.bss.kernel.stack
 	.globl		__break_user_context
 	.balign		THREAD_SIZE
 __break_stack:
@@ -63,7 +63,7 @@ __break_trace_through_exceptions:
 # entry point for Break Exceptions/Interrupts
 #
 ###############################################################################
-	.section	.text.break
+	.section	.kernel.text.break,"ax",@progbits
 	.balign		4
 	.globl		__entry_break
 __entry_break:
diff --git a/arch/frv/kernel/entry.S b/arch/frv/kernel/entry.S
--- a/arch/frv/kernel/entry.S
+++ b/arch/frv/kernel/entry.S
@@ -38,7 +38,7 @@
 
 #define nr_syscalls ((syscall_table_size)/4)
 
-	.section	.text.entry
+	.section	.kernel.text.entry,"ax",@progbits
 	.balign		4
 
 .macro LEDS val
diff --git a/arch/frv/kernel/head-mmu-fr451.S b/arch/frv/kernel/head-mmu-fr451.S
--- a/arch/frv/kernel/head-mmu-fr451.S
+++ b/arch/frv/kernel/head-mmu-fr451.S
@@ -31,7 +31,7 @@
 #define __400_LCR	0xfe000100
 #define __400_LSBR	0xfe000c00
 
-	.section	.text.init,"ax"
+	.section	.kernel.text.init,"ax"
 	.balign		4
 
 ###############################################################################
diff --git a/arch/frv/kernel/head-uc-fr401.S b/arch/frv/kernel/head-uc-fr401.S
--- a/arch/frv/kernel/head-uc-fr401.S
+++ b/arch/frv/kernel/head-uc-fr401.S
@@ -30,7 +30,7 @@
 #define __400_LCR	0xfe000100
 #define __400_LSBR	0xfe000c00
 
-	.section	.text.init,"ax"
+	.section	.kernel.text.init,"ax"
 	.balign		4
 
 ###############################################################################
diff --git a/arch/frv/kernel/head-uc-fr451.S b/arch/frv/kernel/head-uc-fr451.S
--- a/arch/frv/kernel/head-uc-fr451.S
+++ b/arch/frv/kernel/head-uc-fr451.S
@@ -30,7 +30,7 @@
 #define __400_LCR	0xfe000100
 #define __400_LSBR	0xfe000c00
 
-	.section	.text.init,"ax"
+	.section	.kernel.text.init,"ax"
 	.balign		4
 
 ###############################################################################
diff --git a/arch/frv/kernel/head-uc-fr555.S b/arch/frv/kernel/head-uc-fr555.S
--- a/arch/frv/kernel/head-uc-fr555.S
+++ b/arch/frv/kernel/head-uc-fr555.S
@@ -29,7 +29,7 @@
 #define __551_LCR	0xfeff1100
 #define __551_LSBR	0xfeff1c00
 
-	.section	.text.init,"ax"
+	.section	.kernel.text.init,"ax"
 	.balign		4
 
 ###############################################################################
diff --git a/arch/frv/kernel/head.S b/arch/frv/kernel/head.S
--- a/arch/frv/kernel/head.S
+++ b/arch/frv/kernel/head.S
@@ -27,7 +27,7 @@
 #   command line string
 #
 ###############################################################################
-	.section	.text.head,"ax"
+	.section	.kernel.text.head,"ax"
 	.balign		4
 
 	.globl		_boot, __head_reference
@@ -541,7 +541,7 @@ __head_end:
 	.size		_boot, .-_boot
 
 	# provide a point for GDB to place a break
-	.section	.text.start,"ax"
+	.section	.kernel.text.start,"ax"
 	.globl		_start
 	.balign		4
 _start:
diff --git a/arch/frv/kernel/init_task.c b/arch/frv/kernel/init_task.c
--- a/arch/frv/kernel/init_task.c
+++ b/arch/frv/kernel/init_task.c
@@ -24,7 +24,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/frv/kernel/vmlinux.lds.S b/arch/frv/kernel/vmlinux.lds.S
--- a/arch/frv/kernel/vmlinux.lds.S
+++ b/arch/frv/kernel/vmlinux.lds.S
@@ -26,7 +26,7 @@ SECTIONS
 
   _sinittext = .;
   .init.text : {
-	*(.text.head)
+	*(.kernel.text.head)
 #ifndef CONFIG_DEBUG_INFO
 	INIT_TEXT
 	EXIT_TEXT
@@ -71,13 +71,13 @@ SECTIONS
 
   /* put sections together that have massive alignment issues */
   . = ALIGN(THREAD_SIZE);
-  .data.init_task : {
+  .kernel.data.init_task : {
 	  /* init task record & stack */
-	  *(.data.init_task)
+	  *(.kernel.data.init_task)
   }
 
   . = ALIGN(L1_CACHE_BYTES);
-  .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+  .kernel.data.cacheline_aligned : { *(.kernel.data.cacheline_aligned) }
 
   .trap : {
 	/* trap table management - read entry-table.S before modifying */
@@ -94,10 +94,10 @@ SECTIONS
   _text = .;
   _stext = .;
   .text : {
-	*(.text.start)
-	*(.text.entry)
-	*(.text.break)
-	*(.text.tlbmiss)
+	*(.kernel.text.start)
+	*(.kernel.text.entry)
+	*(.kernel.text.break)
+	*(.kernel.text.tlbmiss)
 	TEXT_TEXT
 	SCHED_TEXT
 	LOCK_TEXT
@@ -152,7 +152,7 @@ SECTIONS
 
   .sbss		: { *(.sbss .sbss.*) }
   .bss		: { *(.bss .bss.*) }
-  .bss.stack	: { *(.bss) }
+  .bss.kernel.stack	: { *(.bss) }
 
   __bss_stop = .;
   _end = . ;
diff --git a/arch/frv/mm/tlb-miss.S b/arch/frv/mm/tlb-miss.S
--- a/arch/frv/mm/tlb-miss.S
+++ b/arch/frv/mm/tlb-miss.S
@@ -16,7 +16,7 @@
 #include <asm/highmem.h>
 #include <asm/spr-regs.h>
 
-	.section	.text.tlbmiss
+	.section	.kernel.text.tlbmiss,"ax",@progbits
 	.balign		4
 
 	.globl		__entry_insn_mmu_miss
diff --git a/arch/h8300/boot/compressed/head.S b/arch/h8300/boot/compressed/head.S
--- a/arch/h8300/boot/compressed/head.S
+++ b/arch/h8300/boot/compressed/head.S
@@ -9,7 +9,7 @@
 
 #define SRAM_START 0xff4000
 
-	.section	.text.startup
+	.section	.kernel.text.startup,"ax",@progbits
 	.global	startup
 startup:
 	mov.l	#SRAM_START+0x8000, sp
diff --git a/arch/h8300/boot/compressed/vmlinux.lds b/arch/h8300/boot/compressed/vmlinux.lds
--- a/arch/h8300/boot/compressed/vmlinux.lds
+++ b/arch/h8300/boot/compressed/vmlinux.lds
@@ -4,7 +4,7 @@ SECTIONS
         {
         __stext = . ;
 	__text = .;
-	       *(.text.startup)
+	       *(.kernel.text.startup)
 	       *(.text)
         __etext = . ;
         }
diff --git a/arch/h8300/kernel/init_task.c b/arch/h8300/kernel/init_task.c
--- a/arch/h8300/kernel/init_task.c
+++ b/arch/h8300/kernel/init_task.c
@@ -36,6 +36,6 @@ EXPORT_SYMBOL(init_task);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
diff --git a/arch/h8300/kernel/vmlinux.lds.S b/arch/h8300/kernel/vmlinux.lds.S
--- a/arch/h8300/kernel/vmlinux.lds.S
+++ b/arch/h8300/kernel/vmlinux.lds.S
@@ -101,7 +101,7 @@ SECTIONS
 	___data_start = . ;
 
 	. = ALIGN(0x2000) ;
-		*(.data.init_task)
+		*(.kernel.data.init_task)
 	. = ALIGN(0x4) ;
 		DATA_DATA
 	. = ALIGN(0x4) ;
diff --git a/arch/ia64/include/asm/asmmacro.h b/arch/ia64/include/asm/asmmacro.h
--- a/arch/ia64/include/asm/asmmacro.h
+++ b/arch/ia64/include/asm/asmmacro.h
@@ -70,12 +70,12 @@ name:
  * path (ivt.S - TLB miss processing) or in places where it might not be
  * safe to use a "tpa" instruction (mca_asm.S - error recovery).
  */
-	.section ".data.patch.vtop", "a"	// declare section & section attributes
+	.section ".kernel.data.patch.vtop", "a"	// declare section & section attributes
 	.previous
 
 #define	LOAD_PHYSICAL(pr, reg, obj)		\
 [1:](pr)movl reg = obj;				\
-	.xdata4 ".data.patch.vtop", 1b-.
+	.xdata4 ".kernel.data.patch.vtop", 1b-.
 
 /*
  * For now, we always put in the McKinley E9 workaround.  On CPUs that don't need it,
@@ -84,11 +84,11 @@ name:
 #define DO_MCKINLEY_E9_WORKAROUND
 
 #ifdef DO_MCKINLEY_E9_WORKAROUND
-	.section ".data.patch.mckinley_e9", "a"
+	.section ".kernel.data.patch.mckinley_e9", "a"
 	.previous
 /* workaround for Itanium 2 Errata 9: */
 # define FSYS_RETURN					\
-	.xdata4 ".data.patch.mckinley_e9", 1f-.;	\
+	.xdata4 ".kernel.data.patch.mckinley_e9", 1f-.;	\
 1:{ .mib;						\
 	nop.m 0;					\
 	mov r16=ar.pfs;					\
@@ -107,11 +107,11 @@ 2:{ .mib;						\
  * If physical stack register size is different from DEF_NUM_STACK_REG,
  * dynamically patch the kernel for correct size.
  */
-	.section ".data.patch.phys_stack_reg", "a"
+	.section ".kernel.data.patch.phys_stack_reg", "a"
 	.previous
 #define LOAD_PHYS_STACK_REG_SIZE(reg)			\
 [1:]	adds reg=IA64_NUM_PHYS_STACK_REG*8+8,r0;	\
-	.xdata4 ".data.patch.phys_stack_reg", 1b-.
+	.xdata4 ".kernel.data.patch.phys_stack_reg", 1b-.
 
 /*
  * Up until early 2004, use of .align within a function caused bad unwind info.
diff --git a/arch/ia64/include/asm/cache.h b/arch/ia64/include/asm/cache.h
--- a/arch/ia64/include/asm/cache.h
+++ b/arch/ia64/include/asm/cache.h
@@ -24,6 +24,6 @@
 # define SMP_CACHE_BYTES	(1 << 3)
 #endif
 
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".kernel.data.read_mostly")))
 
 #endif /* _ASM_IA64_CACHE_H */
diff --git a/arch/ia64/include/asm/percpu.h b/arch/ia64/include/asm/percpu.h
--- a/arch/ia64/include/asm/percpu.h
+++ b/arch/ia64/include/asm/percpu.h
@@ -27,7 +27,7 @@ extern void *per_cpu_init(void);
 
 #else /* ! SMP */
 
-#define PER_CPU_ATTRIBUTES	__attribute__((__section__(".data.percpu")))
+#define PER_CPU_ATTRIBUTES	__attribute__((__section__(".kernel.data.percpu")))
 
 #define per_cpu_init()				(__phys_per_cpu_start)
 
diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile
--- a/arch/ia64/kernel/Makefile
+++ b/arch/ia64/kernel/Makefile
@@ -72,7 +72,7 @@ GATECFLAGS_gate-syms.o = -r
 $(obj)/gate-syms.o: $(obj)/gate.lds $(obj)/gate.o FORCE
 	$(call if_changed,gate)
 
-# gate-data.o contains the gate DSO image as data in section .data.gate.
+# gate-data.o contains the gate DSO image as data in section .kernel.data.gate.
 # We must build gate.so before we can assemble it.
 # Note: kbuild does not track this dependency due to usage of .incbin
 $(obj)/gate-data.o: $(obj)/gate.so
diff --git a/arch/ia64/kernel/gate-data.S b/arch/ia64/kernel/gate-data.S
--- a/arch/ia64/kernel/gate-data.S
+++ b/arch/ia64/kernel/gate-data.S
@@ -1,3 +1,3 @@
-	.section .data.gate, "aw"
+	.section .kernel.data.gate, "aw"
 
 	.incbin "arch/ia64/kernel/gate.so"
diff --git a/arch/ia64/kernel/gate.S b/arch/ia64/kernel/gate.S
--- a/arch/ia64/kernel/gate.S
+++ b/arch/ia64/kernel/gate.S
@@ -20,18 +20,18 @@
  * to targets outside the shared object) and to avoid multi-phase kernel builds, we
  * simply create minimalistic "patch lists" in special ELF sections.
  */
-	.section ".data.patch.fsyscall_table", "a"
+	.section ".kernel.data.patch.fsyscall_table", "a"
 	.previous
 #define LOAD_FSYSCALL_TABLE(reg)			\
 [1:]	movl reg=0;					\
-	.xdata4 ".data.patch.fsyscall_table", 1b-.
+	.xdata4 ".kernel.data.patch.fsyscall_table", 1b-.
 
-	.section ".data.patch.brl_fsys_bubble_down", "a"
+	.section ".kernel.data.patch.brl_fsys_bubble_down", "a"
 	.previous
 #define BRL_COND_FSYS_BUBBLE_DOWN(pr)			\
 [1:](pr)brl.cond.sptk 0;				\
 	;;						\
-	.xdata4 ".data.patch.brl_fsys_bubble_down", 1b-.
+	.xdata4 ".kernel.data.patch.brl_fsys_bubble_down", 1b-.
 
 GLOBAL_ENTRY(__kernel_syscall_via_break)
 	.prologue
diff --git a/arch/ia64/kernel/gate.lds.S b/arch/ia64/kernel/gate.lds.S
--- a/arch/ia64/kernel/gate.lds.S
+++ b/arch/ia64/kernel/gate.lds.S
@@ -32,21 +32,21 @@ SECTIONS
 	 */
 	. = GATE_ADDR + 0x600;
 
-	.data.patch		: {
+	.kernel.data.patch	: {
 		__start_gate_mckinley_e9_patchlist = .;
-		*(.data.patch.mckinley_e9)
+		*(.kernel.data.patch.mckinley_e9)
 		__end_gate_mckinley_e9_patchlist = .;
 
 		__start_gate_vtop_patchlist = .;
-		*(.data.patch.vtop)
+		*(.kernel.data.patch.vtop)
 		__end_gate_vtop_patchlist = .;
 
 		__start_gate_fsyscall_patchlist = .;
-		*(.data.patch.fsyscall_table)
+		*(.kernel.data.patch.fsyscall_table)
 		__end_gate_fsyscall_patchlist = .;
 
 		__start_gate_brl_fsys_bubble_down_patchlist = .;
-		*(.data.patch.brl_fsys_bubble_down)
+		*(.kernel.data.patch.brl_fsys_bubble_down)
 		__end_gate_brl_fsys_bubble_down_patchlist = .;
 	}						:readable
 
diff --git a/arch/ia64/kernel/head.S b/arch/ia64/kernel/head.S
--- a/arch/ia64/kernel/head.S
+++ b/arch/ia64/kernel/head.S
@@ -181,7 +181,7 @@ halt_msg:
 halt_msg:
 	stringz "Halting kernel\n"
 
-	.section .text.head,"ax"
+	.section .kernel.text.head,"ax"
 
 	.global start_ap
 
diff --git a/arch/ia64/kernel/init_task.c b/arch/ia64/kernel/init_task.c
--- a/arch/ia64/kernel/init_task.c
+++ b/arch/ia64/kernel/init_task.c
@@ -27,7 +27,7 @@ EXPORT_SYMBOL(init_mm);
  * Initial task structure.
  *
  * We need to make sure that this is properly aligned due to the way process stacks are
- * handled. This is done by having a special ".data.init_task" section...
+ * handled. This is done by having a special ".kernel.data.init_task" section...
  */
 #define init_thread_info	init_task_mem.s.thread_info
 
@@ -37,7 +37,7 @@ union {
 		struct thread_info thread_info;
 	} s;
 	unsigned long stack[KERNEL_STACK_SIZE/sizeof (unsigned long)];
-} init_task_mem asm ("init_task") __attribute__((section(".data.init_task"))) = {{
+} init_task_mem asm ("init_task") __attribute__((section(".kernel.data.init_task"))) = {{
 	.task =		INIT_TASK(init_task_mem.s.task),
 	.thread_info =	INIT_THREAD_INFO(init_task_mem.s.task)
 }};
diff --git a/arch/ia64/kernel/ivt.S b/arch/ia64/kernel/ivt.S
--- a/arch/ia64/kernel/ivt.S
+++ b/arch/ia64/kernel/ivt.S
@@ -83,7 +83,7 @@
 	mov r19=n;;			/* prepare to save predicates */		\
 	br.sptk.many dispatch_to_fault_handler
 
-	.section .text.ivt,"ax"
+	.section .kernel.text.ivt,"ax"
 
 	.align 32768	// align on 32KB boundary
 	.global ia64_ivt
diff --git a/arch/ia64/kernel/minstate.h b/arch/ia64/kernel/minstate.h
--- a/arch/ia64/kernel/minstate.h
+++ b/arch/ia64/kernel/minstate.h
@@ -16,7 +16,7 @@
 #define ACCOUNT_SYS_ENTER
 #endif
 
-.section ".data.patch.rse", "a"
+.section ".kernel.data.patch.rse", "a"
 .previous
 
 /*
@@ -215,7 +215,7 @@
 (pUStk) extr.u r17=r18,3,6;			\
 (pUStk)	sub r16=r18,r22;			\
 [1:](pKStk)	br.cond.sptk.many 1f;		\
-	.xdata4 ".data.patch.rse",1b-.		\
+	.xdata4 ".kernel.data.patch.rse",1b-.	\
 	;;					\
 	cmp.ge p6,p7 = 33,r17;			\
 	;;					\
diff --git a/arch/ia64/kernel/paravirtentry.S b/arch/ia64/kernel/paravirtentry.S
--- a/arch/ia64/kernel/paravirtentry.S
+++ b/arch/ia64/kernel/paravirtentry.S
@@ -24,12 +24,12 @@
 #include <asm/asm-offsets.h>
 #include "entry.h"
 
-#define DATA8(sym, init_value)			\
-	.pushsection .data.read_mostly ;	\
-	.align 8 ;				\
-	.global sym ;				\
-	sym: ;					\
-	data8 init_value ;			\
+#define DATA8(sym, init_value)					\
+	.pushsection .kernel.data.read_mostly,"aw",@progbits ;	\
+	.align 8 ;						\
+	.global sym ;						\
+	sym: ;							\
+	data8 init_value ;					\
 	.popsection
 
 #define BRANCH(targ, reg, breg)		\
diff --git a/arch/ia64/kernel/vmlinux.lds.S b/arch/ia64/kernel/vmlinux.lds.S
--- a/arch/ia64/kernel/vmlinux.lds.S
+++ b/arch/ia64/kernel/vmlinux.lds.S
@@ -8,7 +8,7 @@
 
 #define IVT_TEXT							\
 		VMLINUX_SYMBOL(__start_ivt_text) = .;			\
-		*(.text.ivt)						\
+		*(.kernel.text.ivt)					\
 		VMLINUX_SYMBOL(__end_ivt_text) = .;
 
 OUTPUT_FORMAT("elf64-ia64-little")
@@ -51,13 +51,13 @@ SECTIONS
 	KPROBES_TEXT
 	*(.gnu.linkonce.t*)
     }
-  .text.head : AT(ADDR(.text.head) - LOAD_OFFSET)
-	{ *(.text.head) }
+  .kernel.text.head : AT(ADDR(.kernel.text.head) - LOAD_OFFSET)
+	{ *(.kernel.text.head) }
   .text2 : AT(ADDR(.text2) - LOAD_OFFSET)
 	{ *(.text2) }
 #ifdef CONFIG_SMP
-  .text.lock : AT(ADDR(.text.lock) - LOAD_OFFSET)
-	{ *(.text.lock) }
+  .kernel.text.lock : AT(ADDR(.kernel.text.lock) - LOAD_OFFSET)
+	{ *(.kernel.text.lock) }
 #endif
   _etext = .;
 
@@ -84,10 +84,10 @@ SECTIONS
 	  __stop___mca_table = .;
 	}
 
-  .data.patch.phys_stack_reg : AT(ADDR(.data.patch.phys_stack_reg) - LOAD_OFFSET)
+  .kernel.data.patch.phys_stack_reg : AT(ADDR(.kernel.data.patch.phys_stack_reg) - LOAD_OFFSET)
 	{
 	  __start___phys_stack_reg_patchlist = .;
-	  *(.data.patch.phys_stack_reg)
+	  *(.kernel.data.patch.phys_stack_reg)
 	  __end___phys_stack_reg_patchlist = .;
 	}
 
@@ -148,24 +148,24 @@ SECTIONS
 	  __initcall_end = .;
 	}
 
-  .data.patch.vtop : AT(ADDR(.data.patch.vtop) - LOAD_OFFSET)
+  .kernel.data.patch.vtop : AT(ADDR(.kernel.data.patch.vtop) - LOAD_OFFSET)
 	{
 	  __start___vtop_patchlist = .;
-	  *(.data.patch.vtop)
+	  *(.kernel.data.patch.vtop)
 	  __end___vtop_patchlist = .;
 	}
 
-  .data.patch.rse : AT(ADDR(.data.patch.rse) - LOAD_OFFSET)
+  .kernel.data.patch.rse : AT(ADDR(.kernel.data.patch.rse) - LOAD_OFFSET)
 	{
 	  __start___rse_patchlist = .;
-	  *(.data.patch.rse)
+	  *(.kernel.data.patch.rse)
 	  __end___rse_patchlist = .;
 	}
 
-  .data.patch.mckinley_e9 : AT(ADDR(.data.patch.mckinley_e9) - LOAD_OFFSET)
+  .kernel.data.patch.mckinley_e9 : AT(ADDR(.kernel.data.patch.mckinley_e9) - LOAD_OFFSET)
 	{
 	  __start___mckinley_e9_bundles = .;
-	  *(.data.patch.mckinley_e9)
+	  *(.kernel.data.patch.mckinley_e9)
 	  __end___mckinley_e9_bundles = .;
 	}
 
@@ -193,34 +193,34 @@ SECTIONS
   __init_end = .;
 
   /* The initial task and kernel stack */
-  .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET)
-	{ *(.data.init_task) }
+  .kernel.data.init_task : AT(ADDR(.kernel.data.init_task) - LOAD_OFFSET)
+	{ *(.kernel.data.init_task) }
 
-  .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET)
+  .kernel.data.page_aligned : AT(ADDR(.kernel.data.page_aligned) - LOAD_OFFSET)
         { *(__special_page_section)
 	  __start_gate_section = .;
-	  *(.data.gate)
+	  *(.kernel.data.gate)
 	  __stop_gate_section = .;
 	}
   . = ALIGN(PAGE_SIZE);		/* make sure the gate page doesn't expose
   				 * kernel data
 				 */
 
-  .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET)
-        { *(.data.read_mostly) }
+  .kernel.data.read_mostly : AT(ADDR(.kernel.data.read_mostly) - LOAD_OFFSET)
+        { *(.kernel.data.read_mostly) }
 
-  .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET)
-        { *(.data.cacheline_aligned) }
+  .kernel.data.cacheline_aligned : AT(ADDR(.kernel.data.cacheline_aligned) - LOAD_OFFSET)
+        { *(.kernel.data.cacheline_aligned) }
 
   /* Per-cpu data: */
   percpu : { } :percpu
   . = ALIGN(PERCPU_PAGE_SIZE);
   __phys_per_cpu_start = .;
-  .data.percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET)
+  .kernel.data.percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET)
 	{
 		__per_cpu_start = .;
-		*(.data.percpu)
-		*(.data.percpu.shared_aligned)
+		*(.kernel.data.percpu)
+		*(.kernel.data.percpu.shared_aligned)
 		__per_cpu_end = .;
 	}
   . = __phys_per_cpu_start + PERCPU_PAGE_SIZE;	/* ensure percpu data fits
diff --git a/arch/ia64/kvm/vmm_ivt.S b/arch/ia64/kvm/vmm_ivt.S
--- a/arch/ia64/kvm/vmm_ivt.S
+++ b/arch/ia64/kvm/vmm_ivt.S
@@ -104,7 +104,7 @@ GLOBAL_ENTRY(kvm_vmm_panic)
 	br.call.sptk.many b6=vmm_panic_handler;
 END(kvm_vmm_panic)
 
-    .section .text.ivt,"ax"
+    .section .kernel.text.ivt,"ax"
 
     .align 32768    // align on 32KB boundary
     .global kvm_ia64_ivt
diff --git a/arch/ia64/xen/xensetup.S b/arch/ia64/xen/xensetup.S
--- a/arch/ia64/xen/xensetup.S
+++ b/arch/ia64/xen/xensetup.S
@@ -14,7 +14,7 @@
 #include <linux/init.h>
 #include <xen/interface/elfnote.h>
 
-	.section .data.read_mostly
+	.section .kernel.data.read_mostly, "aw"
 	.align 8
 	.global xen_domain_type
 xen_domain_type:
diff --git a/arch/m32r/kernel/init_task.c b/arch/m32r/kernel/init_task.c
--- a/arch/m32r/kernel/init_task.c
+++ b/arch/m32r/kernel/init_task.c
@@ -25,7 +25,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/m32r/kernel/vmlinux.lds.S b/arch/m32r/kernel/vmlinux.lds.S
--- a/arch/m32r/kernel/vmlinux.lds.S
+++ b/arch/m32r/kernel/vmlinux.lds.S
@@ -57,17 +57,17 @@ SECTIONS
 
   . = ALIGN(4096);
   __nosave_begin = .;
-  .data_nosave : { *(.data.nosave) }
+  .data_nosave : { *(.kernel.data.nosave) }
   . = ALIGN(4096);
   __nosave_end = .;
 
   . = ALIGN(32);
-  .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+  .kernel.data.cacheline_aligned : { *(.kernel.data.cacheline_aligned) }
 
   _edata = .;			/* End of data section */
 
   . = ALIGN(8192);		/* init_task */
-  .data.init_task : { *(.data.init_task) }
+  .kernel.data.init_task : { *(.kernel.data.init_task) }
 
   /* will be freed after init */
   . = ALIGN(4096);		/* Init code and data */
diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
--- a/arch/m68k/kernel/head.S
+++ b/arch/m68k/kernel/head.S
@@ -577,7 +577,7 @@ func_define	putn,1
 #endif
 .endm
 
-.section ".text.head","ax"
+.section ".kernel.text.head","ax"
 ENTRY(_stext)
 /*
  * Version numbers of the bootinfo interface
diff --git a/arch/m68k/kernel/process.c b/arch/m68k/kernel/process.c
--- a/arch/m68k/kernel/process.c
+++ b/arch/m68k/kernel/process.c
@@ -47,7 +47,7 @@ EXPORT_SYMBOL(init_mm);
 EXPORT_SYMBOL(init_mm);
 
 union thread_union init_thread_union
-__attribute__((section(".data.init_task"), aligned(THREAD_SIZE)))
+__attribute__((section(".kernel.data.init_task"), aligned(THREAD_SIZE)))
        = { INIT_THREAD_INFO(init_task) };
 
 /* initial task structure */
diff --git a/arch/m68k/kernel/sun3-head.S b/arch/m68k/kernel/sun3-head.S
--- a/arch/m68k/kernel/sun3-head.S
+++ b/arch/m68k/kernel/sun3-head.S
@@ -29,7 +29,7 @@ kernel_pmd_table:              .skip 0x2
 .globl kernel_pg_dir
 .equ    kernel_pg_dir,kernel_pmd_table
 
-	.section .text.head
+	.section .kernel.text.head, "ax"
 ENTRY(_stext)
 ENTRY(_start)
 
diff --git a/arch/m68k/kernel/vmlinux-std.lds b/arch/m68k/kernel/vmlinux-std.lds
--- a/arch/m68k/kernel/vmlinux-std.lds
+++ b/arch/m68k/kernel/vmlinux-std.lds
@@ -12,7 +12,7 @@ SECTIONS
   . = 0x1000;
   _text = .;			/* Text and read-only data */
   .text : {
-	*(.text.head)
+	*(.kernel.text.head)
 	TEXT_TEXT
 	SCHED_TEXT
 	LOCK_TEXT
@@ -35,7 +35,7 @@ SECTIONS
 	}
 
   . = ALIGN(16);
-  .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+  .kernel.data.cacheline_aligned : { *(.kernel.data.cacheline_aligned) } :data
 
   .bss : { *(.bss) }		/* BSS */
 
@@ -78,7 +78,7 @@ SECTIONS
   . = ALIGN(8192);
   __init_end = .;
 
-  .data.init_task : { *(.data.init_task) }	/* The initial task and kernel stack */
+  .kernel.data.init_task : { *(.kernel.data.init_task) }	/* The initial task and kernel stack */
 
   _end = . ;
 
diff --git a/arch/m68k/kernel/vmlinux-sun3.lds b/arch/m68k/kernel/vmlinux-sun3.lds
--- a/arch/m68k/kernel/vmlinux-sun3.lds
+++ b/arch/m68k/kernel/vmlinux-sun3.lds
@@ -12,7 +12,7 @@ SECTIONS
   . = 0xE002000;
   _text = .;			/* Text and read-only data */
   .text : {
-	*(.text.head)
+	*(.kernel.text.head)
 	TEXT_TEXT
 	SCHED_TEXT
 	LOCK_TEXT
@@ -70,7 +70,7 @@ __init_begin = .;
 #endif
 	. = ALIGN(PAGE_SIZE);
 	__init_end = .;
-	.data.init.task : { *(.data.init_task) }
+	.kernel.data.init.task : { *(.kernel.data.init_task) }
 
 
   .bss : { *(.bss) }		/* BSS */
diff --git a/arch/m68knommu/kernel/init_task.c b/arch/m68knommu/kernel/init_task.c
--- a/arch/m68knommu/kernel/init_task.c
+++ b/arch/m68knommu/kernel/init_task.c
@@ -36,6 +36,6 @@ EXPORT_SYMBOL(init_task);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
diff --git a/arch/m68knommu/kernel/vmlinux.lds.S b/arch/m68knommu/kernel/vmlinux.lds.S
--- a/arch/m68knommu/kernel/vmlinux.lds.S
+++ b/arch/m68knommu/kernel/vmlinux.lds.S
@@ -55,7 +55,7 @@ SECTIONS {
 	.romvec : {
 		__rom_start = . ;
 		_romvec = .;
-		*(.data.initvect)
+		*(.kernel.data.initvect)
 	} > romvec
 #endif
 
@@ -66,7 +66,7 @@ SECTIONS {
 		TEXT_TEXT
 		SCHED_TEXT
 		LOCK_TEXT
-        	*(.text.lock)
+        	*(.kernel.text.lock)
 
 		. = ALIGN(16);          /* Exception table              */
 		__start___ex_table = .;
@@ -148,7 +148,7 @@ SECTIONS {
 		_sdata = . ;
 		DATA_DATA
 		. = ALIGN(8192) ;
-		*(.data.init_task)
+		*(.kernel.data.init_task)
 		_edata = . ;
 	} > DATA
 
diff --git a/arch/m68knommu/platform/68360/head-ram.S b/arch/m68knommu/platform/68360/head-ram.S
--- a/arch/m68knommu/platform/68360/head-ram.S
+++ b/arch/m68knommu/platform/68360/head-ram.S
@@ -280,7 +280,7 @@ _dprbase:
      * and then overwritten as needed.
      */
  
-.section ".data.initvect","awx"
+.section ".kernel.data.initvect","awx"
     .long   RAMEND	/* Reset: Initial Stack Pointer                 - 0.  */
     .long   _start      /* Reset: Initial Program Counter               - 1.  */
     .long   buserr      /* Bus Error                                    - 2.  */
diff --git a/arch/m68knommu/platform/68360/head-rom.S b/arch/m68knommu/platform/68360/head-rom.S
--- a/arch/m68knommu/platform/68360/head-rom.S
+++ b/arch/m68knommu/platform/68360/head-rom.S
@@ -291,7 +291,7 @@ _dprbase:
      * and then overwritten as needed.
      */
  
-.section ".data.initvect","awx"
+.section ".kernel.data.initvect","awx"
     .long   RAMEND	/* Reset: Initial Stack Pointer                 - 0.  */
     .long   _start      /* Reset: Initial Program Counter               - 1.  */
     .long   buserr      /* Bus Error                                    - 2.  */
diff --git a/arch/mips/kernel/init_task.c b/arch/mips/kernel/init_task.c
--- a/arch/mips/kernel/init_task.c
+++ b/arch/mips/kernel/init_task.c
@@ -26,7 +26,7 @@ EXPORT_SYMBOL(init_mm);
  * The things we do for performance..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"),
+	__attribute__((__section__(".kernel.data.init_task"),
 	               __aligned__(THREAD_SIZE))) =
 		{ INIT_THREAD_INFO(init_task) };
 
diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
--- a/arch/mips/kernel/vmlinux.lds.S
+++ b/arch/mips/kernel/vmlinux.lds.S
@@ -77,7 +77,7 @@ SECTIONS
 		 * object file alignment.  Using 32768
 		 */
 		. = ALIGN(_PAGE_SIZE);
-		*(.data.init_task)
+		*(.kernel.data.init_task)
 
 		DATA_DATA
 		CONSTRUCTORS
@@ -99,14 +99,14 @@ SECTIONS
 	. = ALIGN(_PAGE_SIZE);
 	.data_nosave : {
 		__nosave_begin = .;
-		*(.data.nosave)
+		*(.kernel.data.nosave)
 	}
 	. = ALIGN(_PAGE_SIZE);
 	__nosave_end = .;
 
 	. = ALIGN(1 << CONFIG_MIPS_L1_CACHE_SHIFT);
-	.data.cacheline_aligned : {
-		*(.data.cacheline_aligned)
+	.kernel.data.cacheline_aligned : {
+		*(.kernel.data.cacheline_aligned)
 	}
 	_edata =  .;			/* End of data section */
 
diff --git a/arch/mips/lasat/image/head.S b/arch/mips/lasat/image/head.S
--- a/arch/mips/lasat/image/head.S
+++ b/arch/mips/lasat/image/head.S
@@ -1,7 +1,7 @@
 #include <asm/lasat/head.h>
 
 	.text
-	.section .text.start, "ax"
+	.section .kernel.text.start, "ax"
 	.set noreorder
 	.set mips3
 
diff --git a/arch/mips/lasat/image/romscript.normal b/arch/mips/lasat/image/romscript.normal
--- a/arch/mips/lasat/image/romscript.normal
+++ b/arch/mips/lasat/image/romscript.normal
@@ -4,7 +4,7 @@ SECTIONS
 {
   .text :
   {
-    *(.text.start)
+    *(.kernel.text.start)
   }
 
   /* Data in ROM */
diff --git a/arch/mn10300/kernel/head.S b/arch/mn10300/kernel/head.S
--- a/arch/mn10300/kernel/head.S
+++ b/arch/mn10300/kernel/head.S
@@ -19,7 +19,7 @@
 #include <asm/param.h>
 #include <asm/unit/serial.h>
 
-	.section .text.head,"ax"
+	.section .kernel.text.head,"ax"
 
 ###############################################################################
 #
diff --git a/arch/mn10300/kernel/init_task.c b/arch/mn10300/kernel/init_task.c
--- a/arch/mn10300/kernel/init_task.c
+++ b/arch/mn10300/kernel/init_task.c
@@ -31,7 +31,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/mn10300/kernel/vmlinux.lds.S b/arch/mn10300/kernel/vmlinux.lds.S
--- a/arch/mn10300/kernel/vmlinux.lds.S
+++ b/arch/mn10300/kernel/vmlinux.lds.S
@@ -28,7 +28,7 @@ SECTIONS
   _text = .;			/* Text and read-only data */
   .text : {
 	*(
-	.text.head
+	.kernel.text.head
 	.text
 	)
 	TEXT_TEXT
@@ -58,25 +58,25 @@ SECTIONS
 
   . = ALIGN(PAGE_SIZE);
   __nosave_begin = .;
-  .data_nosave : { *(.data.nosave) }
+  .data_nosave : { *(.kernel.data.nosave) }
   . = ALIGN(PAGE_SIZE);
   __nosave_end = .;
 
   . = ALIGN(PAGE_SIZE);
-  .data.page_aligned : { *(.data.idt) }
+  .kernel.data.page_aligned : { *(.kernel.data.idt) }
 
   . = ALIGN(32);
-  .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+  .kernel.data.cacheline_aligned : { *(.kernel.data.cacheline_aligned) }
 
   /* rarely changed data like cpu maps */
   . = ALIGN(32);
-  .data.read_mostly : AT(ADDR(.data.read_mostly)) {
-	*(.data.read_mostly)
+  .kernel.data.read_mostly : AT(ADDR(.kernel.data.read_mostly)) {
+	*(.kernel.data.read_mostly)
 	_edata = .;		/* End of data section */
   }
 
   . = ALIGN(THREAD_SIZE);	/* init_task */
-  .data.init_task : { *(.data.init_task) }
+  .kernel.data.init_task : { *(.kernel.data.init_task) }
 
   /* might get freed after init */
   . = ALIGN(PAGE_SIZE);
@@ -134,7 +134,7 @@ SECTIONS
 
   __bss_start = .;		/* BSS */
   .bss : {
-	*(.bss.page_aligned)
+	*(.bss.kernel.page_aligned)
 	*(.bss)
   }
   . = ALIGN(4);
diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h
--- a/arch/parisc/include/asm/cache.h
+++ b/arch/parisc/include/asm/cache.h
@@ -28,7 +28,7 @@
 
 #define SMP_CACHE_BYTES L1_CACHE_BYTES
 
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".kernel.data.read_mostly")))
 
 void parisc_cache_init(void);	/* initializes cache-flushing */
 void disable_sr_hashing_asm(int); /* low level support for above */
diff --git a/arch/parisc/include/asm/system.h b/arch/parisc/include/asm/system.h
--- a/arch/parisc/include/asm/system.h
+++ b/arch/parisc/include/asm/system.h
@@ -174,7 +174,7 @@ static inline void set_eiem(unsigned lon
 })
 
 #ifdef CONFIG_SMP
-# define __lock_aligned __attribute__((__section__(".data.lock_aligned")))
+# define __lock_aligned __attribute__((__section__(".kernel.data.lock_aligned")))
 #endif
 
 #define arch_align_stack(x) (x)
diff --git a/arch/parisc/kernel/head.S b/arch/parisc/kernel/head.S
--- a/arch/parisc/kernel/head.S
+++ b/arch/parisc/kernel/head.S
@@ -345,7 +345,7 @@ ENDPROC(stext)
 ENDPROC(stext)
 
 #ifndef CONFIG_64BIT
-	.section .data.read_mostly
+	.section .kernel.data.read_mostly,"aw",@progbits
 
 	.align	4
 	.export	$global$,data
diff --git a/arch/parisc/kernel/init_task.c b/arch/parisc/kernel/init_task.c
--- a/arch/parisc/kernel/init_task.c
+++ b/arch/parisc/kernel/init_task.c
@@ -48,7 +48,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((aligned(128))) __attribute__((__section__(".data.init_task"))) =
+	__attribute__((aligned(128))) __attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 #if PT_NLEVELS == 3
@@ -57,11 +57,11 @@ union thread_union init_thread_union
  * guarantee that global objects will be laid out in memory in the same order 
  * as the order of declaration, so put these in different sections and use
  * the linker script to order them. */
-pmd_t pmd0[PTRS_PER_PMD] __attribute__ ((__section__ (".data.vm0.pmd"), aligned(PAGE_SIZE)));
+pmd_t pmd0[PTRS_PER_PMD] __attribute__ ((__section__ (".kernel.data.vm0.pmd"), aligned(PAGE_SIZE)));
 #endif
 
-pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__ ((__section__ (".data.vm0.pgd"), aligned(PAGE_SIZE)));
-pte_t pg0[PT_INITIAL * PTRS_PER_PTE] __attribute__ ((__section__ (".data.vm0.pte"), aligned(PAGE_SIZE)));
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__ ((__section__ (".kernel.data.vm0.pgd"), aligned(PAGE_SIZE)));
+pte_t pg0[PT_INITIAL * PTRS_PER_PTE] __attribute__ ((__section__ (".kernel.data.vm0.pte"), aligned(PAGE_SIZE)));
 
 /*
  * Initial task structure.
diff --git a/arch/parisc/kernel/vmlinux.lds.S b/arch/parisc/kernel/vmlinux.lds.S
--- a/arch/parisc/kernel/vmlinux.lds.S
+++ b/arch/parisc/kernel/vmlinux.lds.S
@@ -94,8 +94,8 @@ SECTIONS
 
 	/* rarely changed data like cpu maps */
 	. = ALIGN(16);
-	.data.read_mostly : {
-		*(.data.read_mostly)
+	.kernel.data.read_mostly : {
+		*(.kernel.data.read_mostly)
 	}
 
 	. = ALIGN(L1_CACHE_BYTES);
@@ -106,14 +106,14 @@ SECTIONS
 	}
 
 	. = ALIGN(L1_CACHE_BYTES);
-	.data.cacheline_aligned : {
-		*(.data.cacheline_aligned)
+	.kernel.data.cacheline_aligned : {
+		*(.kernel.data.cacheline_aligned)
 	}
 
 	/* PA-RISC locks requires 16-byte alignment */
 	. = ALIGN(16);
-	.data.lock_aligned : {
-		*(.data.lock_aligned)
+	.kernel.data.lock_aligned : {
+		*(.kernel.data.lock_aligned)
 	}
 
 	/* nosave data is really only used for software suspend...it's here
@@ -122,7 +122,7 @@ SECTIONS
 	. = ALIGN(PAGE_SIZE);
 	__nosave_begin = .;
 	.data_nosave : {
-		*(.data.nosave)
+		*(.kernel.data.nosave)
 	}
 	. = ALIGN(PAGE_SIZE);
 	__nosave_end = .;
@@ -134,10 +134,10 @@ SECTIONS
 	__bss_start = .;
 	/* page table entries need to be PAGE_SIZE aligned */
 	. = ALIGN(PAGE_SIZE);
-	.data.vmpages : {
-		*(.data.vm0.pmd)
-		*(.data.vm0.pgd)
-		*(.data.vm0.pte)
+	.kernel.data.vmpages : {
+		*(.kernel.data.vm0.pmd)
+		*(.kernel.data.vm0.pgd)
+		*(.kernel.data.vm0.pte)
 	}
 	.bss : {
 		*(.bss)
@@ -149,8 +149,8 @@ SECTIONS
 	/* assembler code expects init_task to be 16k aligned */
 	. = ALIGN(16384);
 	/* init_task */
-	.data.init_task : {
-		*(.data.init_task)
+	.kernel.data.init_task : {
+		*(.kernel.data.init_task)
 	}
 
 #ifdef CONFIG_64BIT
diff --git a/arch/powerpc/include/asm/cache.h b/arch/powerpc/include/asm/cache.h
--- a/arch/powerpc/include/asm/cache.h
+++ b/arch/powerpc/include/asm/cache.h
@@ -38,7 +38,7 @@ extern struct ppc64_caches ppc64_caches;
 #endif /* __powerpc64__ && ! __ASSEMBLY__ */
 
 #if !defined(__ASSEMBLY__)
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".kernel.data.read_mostly")))
 #endif
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
--- a/arch/powerpc/include/asm/page_64.h
+++ b/arch/powerpc/include/asm/page_64.h
@@ -157,7 +157,7 @@ do {						\
 #else
 #define __page_aligned \
 	__attribute__((__aligned__(PAGE_SIZE), \
-		__section__(".data.page_aligned")))
+		__section__(".kernel.data.page_aligned")))
 #endif
 
 #define VM_DATA_DEFAULT_FLAGS \
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -193,7 +193,7 @@ GLUE(.,name):
 GLUE(.,name):
 
 #define _INIT_GLOBAL(name) \
-	.section ".text.init.refok"; \
+	.section ".kernel.text.init.refok","ax",@progbits; \
 	.align 2 ; \
 	.globl name; \
 	.globl GLUE(.,name); \
@@ -233,7 +233,7 @@ GLUE(.,name):
 GLUE(.,name):
 
 #define _INIT_STATIC(name) \
-	.section ".text.init.refok"; \
+	.section ".kernel.text.init.refok","ax",@progbits; \
 	.align 2 ; \
 	.section ".opd","aw"; \
 name: \
diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -50,7 +50,7 @@
 	mtspr	SPRN_DBAT##n##L,RB;	\
 1:
 
-	.section	.text.head, "ax"
+	.section	.kernel.text.head, "ax"
 	.stabs	"arch/powerpc/kernel/",N_SO,0,0,0f
 	.stabs	"head_32.S",N_SO,0,0,0f
 0:
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -52,7 +52,7 @@
  *
  * This is all going to change RSN when we add bi_recs.......  -- Dan
  */
-	.section	.text.head, "ax"
+	.section	.kernel.text.head, "ax"
 _ENTRY(_stext);
 _ENTRY(_start);
 
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -50,7 +50,7 @@
  *   r7 - End of kernel command line string
  *
  */
-	.section	.text.head, "ax"
+	.section	.kernel.text.head, "ax"
 _ENTRY(_stext);
 _ENTRY(_start);
 	/*
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -38,7 +38,7 @@
 #else
 #define DO_8xx_CPU6(val, reg)
 #endif
-	.section	.text.head, "ax"
+	.section	.kernel.text.head, "ax"
 _ENTRY(_stext);
 _ENTRY(_start);
 
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -53,7 +53,7 @@
  *   r7 - End of kernel command line string
  *
  */
-	.section	.text.head, "ax"
+	.section	.kernel.text.head, "ax"
 _ENTRY(_stext);
 _ENTRY(_start);
 	/*
diff --git a/arch/powerpc/kernel/init_task.c b/arch/powerpc/kernel/init_task.c
--- a/arch/powerpc/kernel/init_task.c
+++ b/arch/powerpc/kernel/init_task.c
@@ -21,7 +21,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union 
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -250,7 +250,7 @@ static void kexec_prepare_cpus(void)
  * current, but that audit has not been performed.
  */
 static union thread_union kexec_stack
-	__attribute__((__section__(".data.init_task"))) = { };
+	__attribute__((__section__(".kernel.data.init_task"))) = { };
 
 /* Our assembly helper, in kexec_stub.S */
 extern NORET_TYPE void kexec_sequence(void *newstack, unsigned long start,
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -74,7 +74,7 @@ static union {
 static union {
 	struct vdso_data	data;
 	u8			page[PAGE_SIZE];
-} vdso_data_store __attribute__((__section__(".data.page_aligned")));
+} vdso_data_store __attribute__((__section__(".kernel.data.page_aligned")));
 struct vdso_data *vdso_data = &vdso_data_store.data;
 
 /* Format of the patch table */
diff --git a/arch/powerpc/kernel/vdso32/vdso32_wrapper.S b/arch/powerpc/kernel/vdso32/vdso32_wrapper.S
--- a/arch/powerpc/kernel/vdso32/vdso32_wrapper.S
+++ b/arch/powerpc/kernel/vdso32/vdso32_wrapper.S
@@ -1,7 +1,7 @@
 #include <linux/init.h>
 #include <asm/page.h>
 
-	.section ".data.page_aligned"
+	.section ".kernel.data.page_aligned","aw",@progbits
 
 	.globl vdso32_start, vdso32_end
 	.balign PAGE_SIZE
diff --git a/arch/powerpc/kernel/vdso64/vdso64_wrapper.S b/arch/powerpc/kernel/vdso64/vdso64_wrapper.S
--- a/arch/powerpc/kernel/vdso64/vdso64_wrapper.S
+++ b/arch/powerpc/kernel/vdso64/vdso64_wrapper.S
@@ -1,7 +1,7 @@
 #include <linux/init.h>
 #include <asm/page.h>
 
-	.section ".data.page_aligned"
+	.section ".kernel.data.page_aligned","aw",@progbits
 
 	.globl vdso64_start, vdso64_end
 	.balign PAGE_SIZE
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -52,9 +52,9 @@ SECTIONS
 	/* Text and gots */
 	.text : AT(ADDR(.text) - LOAD_OFFSET) {
 		ALIGN_FUNCTION();
-		*(.text.head)
+		*(.kernel.text.head)
 		_text = .;
-		*(.text .fixup .text.init.refok .exit.text.refok __ftr_alt_*)
+		*(.text .fixup .kernel.text.init.refok .kernel.text.exit.refok __ftr_alt_*)
 		SCHED_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
@@ -182,10 +182,10 @@ SECTIONS
 	}
 #endif
 	. = ALIGN(PAGE_SIZE);
-	.data.percpu  : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
+	.kernel.data.percpu  : AT(ADDR(.kernel.data.percpu) - LOAD_OFFSET) {
 		__per_cpu_start = .;
-		*(.data.percpu)
-		*(.data.percpu.shared_aligned)
+		*(.kernel.data.percpu)
+		*(.kernel.data.percpu.shared_aligned)
 		__per_cpu_end = .;
 	}
 
@@ -259,28 +259,28 @@ SECTIONS
 #else
 	. = ALIGN(16384);
 #endif
-	.data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
-		*(.data.init_task)
+	.kernel.data.init_task : AT(ADDR(.kernel.data.init_task) - LOAD_OFFSET) {
+		*(.kernel.data.init_task)
 	}
 
 	. = ALIGN(PAGE_SIZE);
-	.data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
-		*(.data.page_aligned)
+	.kernel.data.page_aligned : AT(ADDR(.kernel.data.page_aligned) - LOAD_OFFSET) {
+		*(.kernel.data.page_aligned)
 	}
 
-	.data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
-		*(.data.cacheline_aligned)
+	.kernel.data.cacheline_aligned : AT(ADDR(.kernel.data.cacheline_aligned) - LOAD_OFFSET) {
+		*(.kernel.data.cacheline_aligned)
 	}
 
 	. = ALIGN(L1_CACHE_BYTES);
-	.data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
-		*(.data.read_mostly)
+	.kernel.data.read_mostly : AT(ADDR(.kernel.data.read_mostly) - LOAD_OFFSET) {
+		*(.kernel.data.read_mostly)
 	}
 
 	. = ALIGN(PAGE_SIZE);
 	.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
 		__nosave_begin = .;
-		*(.data.nosave)
+		*(.kernel.data.nosave)
 		. = ALIGN(PAGE_SIZE);
 		__nosave_end = .;
 	}
diff --git a/arch/s390/include/asm/cache.h b/arch/s390/include/asm/cache.h
--- a/arch/s390/include/asm/cache.h
+++ b/arch/s390/include/asm/cache.h
@@ -14,6 +14,6 @@
 #define L1_CACHE_BYTES     256
 #define L1_CACHE_SHIFT     8
 
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".kernel.data.read_mostly")))
 
 #endif
diff --git a/arch/s390/kernel/head.S b/arch/s390/kernel/head.S
--- a/arch/s390/kernel/head.S
+++ b/arch/s390/kernel/head.S
@@ -35,7 +35,7 @@
 #define ARCH_OFFSET	0
 #endif
 
-.section ".text.head","ax"
+.section ".kernel.text.head","ax"
 #ifndef CONFIG_IPL
 	.org   0
 	.long  0x00080000,0x80000000+startup	# Just a restart PSW
diff --git a/arch/s390/kernel/init_task.c b/arch/s390/kernel/init_task.c
--- a/arch/s390/kernel/init_task.c
+++ b/arch/s390/kernel/init_task.c
@@ -30,7 +30,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union 
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
--- a/arch/s390/kernel/vmlinux.lds.S
+++ b/arch/s390/kernel/vmlinux.lds.S
@@ -29,7 +29,7 @@ SECTIONS
 	. = 0x00000000;
 	.text : {
 	_text = .;		/* Text and read-only data */
-		*(.text.head)
+		*(.kernel.text.head)
 	TEXT_TEXT
 		SCHED_TEXT
 		LOCK_TEXT
@@ -66,30 +66,30 @@ SECTIONS
 	. = ALIGN(PAGE_SIZE);
 	.data_nosave : {
 	__nosave_begin = .;
-		*(.data.nosave)
+		*(.kernel.data.nosave)
 	}
 	. = ALIGN(PAGE_SIZE);
 	__nosave_end = .;
 
 	. = ALIGN(PAGE_SIZE);
-	.data.page_aligned : {
-		*(.data.idt)
+	.kernel.data.page_aligned : {
+		*(.kernel.data.idt)
 	}
 
 	. = ALIGN(0x100);
-	.data.cacheline_aligned : {
-		*(.data.cacheline_aligned)
+	.kernel.data.cacheline_aligned : {
+		*(.kernel.data.cacheline_aligned)
 	}
 
 	. = ALIGN(0x100);
-	.data.read_mostly : {
-		*(.data.read_mostly)
+	.kernel.data.read_mostly : {
+		*(.kernel.data.read_mostly)
 	}
 	_edata = .;		/* End of data section */
 
 	. = ALIGN(THREAD_SIZE);	/* init_task */
-	.data.init_task : {
-		*(.data.init_task)
+	.kernel.data.init_task : {
+		*(.kernel.data.init_task)
 	}
 
 	/* will be freed after init */
diff --git a/arch/sh/include/asm/cache.h b/arch/sh/include/asm/cache.h
--- a/arch/sh/include/asm/cache.h
+++ b/arch/sh/include/asm/cache.h
@@ -14,7 +14,7 @@
 
 #define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
 
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".kernel.data.read_mostly")))
 
 #ifndef __ASSEMBLY__
 struct cache_info {
diff --git a/arch/sh/kernel/cpu/sh5/entry.S b/arch/sh/kernel/cpu/sh5/entry.S
--- a/arch/sh/kernel/cpu/sh5/entry.S
+++ b/arch/sh/kernel/cpu/sh5/entry.S
@@ -2058,10 +2058,10 @@ asm_uaccess_end:
 
 
 /*
- * --- .text.init Section
+ * --- .kernel.text.init Section
  */
 
-	.section	.text.init, "ax"
+	.section	.kernel.text.init, "ax"
 
 /*
  * void trap_init (void)
diff --git a/arch/sh/kernel/head_32.S b/arch/sh/kernel/head_32.S
--- a/arch/sh/kernel/head_32.S
+++ b/arch/sh/kernel/head_32.S
@@ -40,7 +40,7 @@ 1:
 1:
 	.skip	PAGE_SIZE - empty_zero_page - 1b
 
-	.section	.text.head, "ax"
+	.section	.kernel.text.head, "ax"
 
 /*
  * Condition at the entry of _stext:
diff --git a/arch/sh/kernel/head_64.S b/arch/sh/kernel/head_64.S
--- a/arch/sh/kernel/head_64.S
+++ b/arch/sh/kernel/head_64.S
@@ -110,7 +110,7 @@ fpu_in_use:	.quad	0
 fpu_in_use:	.quad	0
 
 
-	.section	.text.head, "ax"
+	.section	.kernel.text.head, "ax"
 	.balign L1_CACHE_BYTES
 /*
  * Condition at the entry of __stext:
diff --git a/arch/sh/kernel/init_task.c b/arch/sh/kernel/init_task.c
--- a/arch/sh/kernel/init_task.c
+++ b/arch/sh/kernel/init_task.c
@@ -21,7 +21,7 @@ EXPORT_SYMBOL(init_mm);
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
--- a/arch/sh/kernel/irq.c
+++ b/arch/sh/kernel/irq.c
@@ -158,10 +158,10 @@ asmlinkage int do_IRQ(unsigned int irq, 
 
 #ifdef CONFIG_IRQSTACKS
 static char softirq_stack[NR_CPUS * THREAD_SIZE]
-		__attribute__((__section__(".bss.page_aligned")));
+		__attribute__((__section__(".bss.kernel.page_aligned")));
 
 static char hardirq_stack[NR_CPUS * THREAD_SIZE]
-		__attribute__((__section__(".bss.page_aligned")));
+		__attribute__((__section__(".bss.kernel.page_aligned")));
 
 /*
  * allocate per-cpu stacks for hardirq and for softirq processing
diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -28,7 +28,7 @@ SECTIONS
 	} = 0
 
 	.text : {
-		*(.text.head)
+		*(.kernel.text.head)
 		TEXT_TEXT
 		SCHED_TEXT
 		LOCK_TEXT
@@ -58,19 +58,19 @@ SECTIONS
 
 	. = ALIGN(THREAD_SIZE);
 	.data : {			/* Data */
-		*(.data.init_task)
+		*(.kernel.data.init_task)
 
 		. = ALIGN(L1_CACHE_BYTES);
-		*(.data.cacheline_aligned)
+		*(.kernel.data.cacheline_aligned)
 
 		. = ALIGN(L1_CACHE_BYTES);
-		*(.data.read_mostly)
+		*(.kernel.data.read_mostly)
 
 		. = ALIGN(PAGE_SIZE);
-		*(.data.page_aligned)
+		*(.kernel.data.page_aligned)
 
 		__nosave_begin = .;
-		*(.data.nosave)
+		*(.kernel.data.nosave)
 		. = ALIGN(PAGE_SIZE);
 		__nosave_end = .;
 
@@ -128,7 +128,7 @@ SECTIONS
 	.bss : {
 		__init_end = .;
 		__bss_start = .;		/* BSS */
-		*(.bss.page_aligned)
+		*(.bss.kernel.page_aligned)
 		*(.bss)
 		*(COMMON)
 		. = ALIGN(4);
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -42,7 +42,7 @@ SECTIONS
 	} = 0
 
 	.text : C_PHYS(.text) {
-		*(.text.head)
+		*(.kernel.text.head)
 		TEXT_TEXT
 		*(.text64)
 		*(.text..SHmedia32)
@@ -70,19 +70,19 @@ SECTIONS
 
 	. = ALIGN(THREAD_SIZE);
 	.data : C_PHYS(.data) {			/* Data */
-		*(.data.init_task)
+		*(.kernel.data.init_task)
 
 		. = ALIGN(L1_CACHE_BYTES);
-		*(.data.cacheline_aligned)
+		*(.kernel.data.cacheline_aligned)
 
 		. = ALIGN(L1_CACHE_BYTES);
-		*(.data.read_mostly)
+		*(.kernel.data.read_mostly)
 
 		. = ALIGN(PAGE_SIZE);
-		*(.data.page_aligned)
+		*(.kernel.data.page_aligned)
 
 		__nosave_begin = .;
-		*(.data.nosave)
+		*(.kernel.data.nosave)
 		. = ALIGN(PAGE_SIZE);
 		__nosave_end = .;
 
@@ -140,7 +140,7 @@ SECTIONS
 	.bss : C_PHYS(.bss) {
 		__init_end = .;
 		__bss_start = .;		/* BSS */
-		*(.bss.page_aligned)
+		*(.bss.kernel.page_aligned)
 		*(.bss)
 		*(COMMON)
 		. = ALIGN(4);
diff --git a/arch/sparc/boot/btfixupprep.c b/arch/sparc/boot/btfixupprep.c
--- a/arch/sparc/boot/btfixupprep.c
+++ b/arch/sparc/boot/btfixupprep.c
@@ -171,7 +171,7 @@ main1:
 			}
 		} else if (buffer[nbase+4] != '_')
 			continue;
-		if (!strcmp (sect, ".text.exit"))
+		if (!strcmp (sect, ".kernel.text.exit"))
 			continue;
 		if (strcmp (sect, ".text") &&
 		    strcmp (sect, ".init.text") &&
@@ -325,7 +325,7 @@ main1:
 		(*rr)->next = NULL;
 	}
 	printf("! Generated by btfixupprep. Do not edit.\n\n");
-	printf("\t.section\t\".data.init\",#alloc,#write\n\t.align\t4\n\n");
+	printf("\t.section\t\".kernel.data.init\",#alloc,#write\n\t.align\t4\n\n");
 	printf("\t.global\t___btfixup_start\n___btfixup_start:\n\n");
 	for (i = 0; i < last; i++) {
 		f = array + i;
diff --git a/arch/sparc/include/asm/cache.h b/arch/sparc/include/asm/cache.h
--- a/arch/sparc/include/asm/cache.h
+++ b/arch/sparc/include/asm/cache.h
@@ -19,7 +19,7 @@
 
 #define SMP_CACHE_BYTES (1 << SMP_CACHE_BYTES_SHIFT)
 
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".kernel.data.read_mostly")))
 
 #ifdef CONFIG_SPARC32
 #include <asm/asi.h>
diff --git a/arch/sparc/kernel/head_32.S b/arch/sparc/kernel/head_32.S
--- a/arch/sparc/kernel/head_32.S
+++ b/arch/sparc/kernel/head_32.S
@@ -735,7 +735,7 @@ go_to_highmem:
 		 nop
 
 /* The code above should be at beginning and we have to take care about
- * short jumps, as branching to .text.init section from .text is usually
+ * short jumps, as branching to .kernel.text.init section from .text is usually
  * impossible */
 		__INIT
 /* Acquire boot time privileged register values, this will help debugging.
diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -467,7 +467,7 @@ jump_to_sun4u_init:
 	jmpl    %g2 + %g0, %g0
 	 nop
 
-	.section	.text.init.refok
+	.section	.kernel.text.init.refok,"ax",@progbits
 sun4u_init:
 	BRANCH_IF_SUN4V(g1, sun4v_init)
 
diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -59,20 +59,20 @@ SECTIONS
 		*(.data1)
 	}
 	. = ALIGN(SMP_CACHE_BYTES);
-	.data.cacheline_aligned : {
-		*(.data.cacheline_aligned)
+	.kernel.data.cacheline_aligned : {
+		*(.kernel.data.cacheline_aligned)
 	}
 	. = ALIGN(SMP_CACHE_BYTES);
-	.data.read_mostly : {
-		*(.data.read_mostly)
+	.kernel.data.read_mostly : {
+		*(.kernel.data.read_mostly)
 	}
 	/* End of data section */
 	_edata = .;
 
 	/* init_task */
 	. = ALIGN(THREAD_SIZE);
-	.data.init_task : {
-		*(.data.init_task)
+	.kernel.data.init_task : {
+		*(.kernel.data.init_task)
 	}
 	.fixup : {
 		__start___fixup = .;
diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S
--- a/arch/um/include/asm/common.lds.S
+++ b/arch/um/include/asm/common.lds.S
@@ -49,9 +49,9 @@
   }
 
   . = ALIGN(32);
-  .data.percpu : {
+  .kernel.data.percpu : {
 	__per_cpu_start = . ;
-	*(.data.percpu)
+	*(.kernel.data.percpu)
 	__per_cpu_end = . ;
   }
 	
diff --git a/arch/um/kernel/dyn.lds.S b/arch/um/kernel/dyn.lds.S
--- a/arch/um/kernel/dyn.lds.S
+++ b/arch/um/kernel/dyn.lds.S
@@ -97,9 +97,9 @@ SECTIONS
   .fini_array     : { *(.fini_array) }
   .data           : {
     . = ALIGN(KERNEL_STACK_SIZE);		/* init_task */
-    *(.data.init_task)
+    *(.kernel.data.init_task)
     . = ALIGN(KERNEL_STACK_SIZE);
-    *(.data.init_irqstack)
+    *(.kernel.data.init_irqstack)
     DATA_DATA
     *(.data.* .gnu.linkonce.d.*)
     SORT(CONSTRUCTORS)
diff --git a/arch/um/kernel/init_task.c b/arch/um/kernel/init_task.c
--- a/arch/um/kernel/init_task.c
+++ b/arch/um/kernel/init_task.c
@@ -34,9 +34,9 @@ EXPORT_SYMBOL(init_task);
  */
 
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 union thread_union cpu0_irqstack
-	__attribute__((__section__(".data.init_irqstack"))) =
+	__attribute__((__section__(".kernel.data.init_irqstack"))) =
 		{ INIT_THREAD_INFO(init_task) };
diff --git a/arch/um/kernel/uml.lds.S b/arch/um/kernel/uml.lds.S
--- a/arch/um/kernel/uml.lds.S
+++ b/arch/um/kernel/uml.lds.S
@@ -53,9 +53,9 @@ SECTIONS
   .data    :
   {
     . = ALIGN(KERNEL_STACK_SIZE);		/* init_task */
-    *(.data.init_task)
+    *(.kernel.data.init_task)
     . = ALIGN(KERNEL_STACK_SIZE);
-    *(.data.init_irqstack)
+    *(.kernel.data.init_irqstack)
     DATA_DATA
     *(.gnu.linkonce.d*)
     CONSTRUCTORS
diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -29,7 +29,7 @@
 #include <asm/boot.h>
 #include <asm/asm-offsets.h>
 
-.section ".text.head","ax",@progbits
+.section ".kernel.text.head","ax",@progbits
 	.globl startup_32
 
 startup_32:
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -33,7 +33,7 @@
 #include <asm/processor-flags.h>
 #include <asm/asm-offsets.h>
 
-.section ".text.head"
+.section ".kernel.text.head","ax",@progbits
 	.code32
 	.globl startup_32
 
diff --git a/arch/x86/boot/compressed/vmlinux.scr b/arch/x86/boot/compressed/vmlinux.scr
--- a/arch/x86/boot/compressed/vmlinux.scr
+++ b/arch/x86/boot/compressed/vmlinux.scr
@@ -1,6 +1,6 @@ SECTIONS
 SECTIONS
 {
-  .rodata.compressed : {
+  .kernel.rodata.compressed : {
 	input_len = .;
 	LONG(input_data_end - input_data) input_data = .;
 	*(.data)
diff --git a/arch/x86/boot/compressed/vmlinux_32.lds b/arch/x86/boot/compressed/vmlinux_32.lds
--- a/arch/x86/boot/compressed/vmlinux_32.lds
+++ b/arch/x86/boot/compressed/vmlinux_32.lds
@@ -7,19 +7,23 @@ SECTIONS
 	 * address 0.
 	 */
 	. = 0;
-	.text.head : {
+	.kernel.text.head : {
 		_head = . ;
-		*(.text.head)
+		*(.kernel.text.head)
 		_ehead = . ;
 	}
-	.rodata.compressed : {
-		*(.rodata.compressed)
+	.kernel.rodata.compressed : {
+		*(.kernel.rodata.compressed)
 	}
 	.text :	{
 		_text = .; 	/* Text */
 		*(.text)
 		*(.text.*)
 		_etext = . ;
+	}
+	.got : {
+		*(.got)
+		*(.got.*)
 	}
 	.rodata : {
 		_rodata = . ;
@@ -40,4 +44,6 @@ SECTIONS
 		*(COMMON)
 		_end = . ;
 	}
+	/* Be bold, and discard everything not explicitly mentioned */
+	/DISCARD/ : { *(*) }
 }
diff --git a/arch/x86/boot/compressed/vmlinux_64.lds b/arch/x86/boot/compressed/vmlinux_64.lds
--- a/arch/x86/boot/compressed/vmlinux_64.lds
+++ b/arch/x86/boot/compressed/vmlinux_64.lds
@@ -7,13 +7,13 @@ SECTIONS
 	 * address 0.
 	 */
 	. = 0;
-	.text.head : {
+	.kernel.text.head : {
 		_head = . ;
-		*(.text.head)
+		*(.kernel.text.head)
 		_ehead = . ;
 	}
-	.rodata.compressed : {
-		*(.rodata.compressed)
+	.kernel.rodata.compressed : {
+		*(.kernel.rodata.compressed)
 	}
 	.text :	{
 		_text = .; 	/* Text */
@@ -45,4 +45,6 @@ SECTIONS
 		. = . + 4096 * 6;
 		_ebss = .;
 	}
+	/* Be bold, and discard everything not explicitly mentioned */
+	/DISCARD/ : { *(*) }
 }
diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -5,7 +5,7 @@
 #define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
 #define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
 
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".kernel.data.read_mostly")))
 
 #ifdef CONFIG_X86_VSMP
 /* vSMP Internode cacheline shift */
@@ -13,7 +13,7 @@
 #ifdef CONFIG_SMP
 #define __cacheline_aligned_in_smp					\
 	__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))	\
-	__attribute__((__section__(".data.page_aligned")))
+	__attribute__((__section__(".kernel.data.page_aligned")))
 #endif
 #endif
 
diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
--- a/arch/x86/kernel/acpi/wakeup_32.S
+++ b/arch/x86/kernel/acpi/wakeup_32.S
@@ -1,4 +1,4 @@
-	.section .text.page_aligned
+	.section .kernel.text.page_aligned,"ax",@progbits
 #include <linux/linkage.h>
 #include <asm/segment.h>
 #include <asm/page.h>
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -81,7 +81,7 @@ INIT_MAP_BEYOND_END = BOOTBITMAP_SIZE + 
  * any particular GDT layout, because we load our own as soon as we
  * can.
  */
-.section .text.head,"ax",@progbits
+.section .kernel.text.head,"ax",@progbits
 ENTRY(startup_32)
 	/* test KEEP_SEGMENTS flag to see if the bootloader is asking
 		us to not reload segments */
@@ -609,7 +609,7 @@ ENTRY(_stext)
 /*
  * BSS section
  */
-.section ".bss.page_aligned","wa"
+.section ".bss.kernel.page_aligned","wa"
 	.align PAGE_SIZE_asm
 #ifdef CONFIG_X86_PAE
 swapper_pg_pmd:
@@ -626,7 +626,7 @@ ENTRY(empty_zero_page)
  * This starts the data section.
  */
 #ifdef CONFIG_X86_PAE
-.section ".data.page_aligned","wa"
+.section ".kernel.data.page_aligned","wa"
 	/* Page-aligned for the benefit of paravirt? */
 	.align PAGE_SIZE_asm
 ENTRY(swapper_pg_dir)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -40,7 +40,7 @@ L3_START_KERNEL = pud_index(__START_KERN
 L3_START_KERNEL = pud_index(__START_KERNEL_map)
 
 	.text
-	.section .text.head
+	.section .kernel.text.head,"ax",@progbits
 	.code64
 	.globl startup_64
 startup_64:
@@ -414,7 +414,7 @@ ENTRY(idt_table)
 ENTRY(idt_table)
 	.skip 256 * 16
 
-	.section .bss.page_aligned, "aw", @nobits
+	.section .bss.kernel.page_aligned, "aw", @nobits
 	.align PAGE_SIZE
 ENTRY(empty_zero_page)
 	.skip PAGE_SIZE
diff --git a/arch/x86/kernel/init_task.c b/arch/x86/kernel/init_task.c
--- a/arch/x86/kernel/init_task.c
+++ b/arch/x86/kernel/init_task.c
@@ -22,7 +22,7 @@ struct mm_struct init_mm = INIT_MM(init_
  * "init_task" linker map entry..
  */
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 		{ INIT_THREAD_INFO(init_task) };
 
 /*
@@ -36,7 +36,7 @@ EXPORT_SYMBOL(init_task);
 /*
  * per-CPU TSS segments. Threads are completely 'soft' on Linux,
  * no more per-task TSS's. The TSS size is kept cacheline-aligned
- * so they are allowed to end up in the .data.cacheline_aligned
+ * so they are allowed to end up in the .kernel.data.cacheline_aligned
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -78,7 +78,7 @@ char ignore_fpu_irq;
  * for this.
  */
 gate_desc idt_table[256]
-	__attribute__((__section__(".data.idt"))) = { { { { 0, 0 } } }, };
+	__attribute__((__section__(".kernel.data.idt"))) = { { { { 0, 0 } } }, };
 #endif
 
 DECLARE_BITMAP(used_vectors, NR_VECTORS);
diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
--- a/arch/x86/kernel/vmlinux_32.lds.S
+++ b/arch/x86/kernel/vmlinux_32.lds.S
@@ -31,15 +31,15 @@ SECTIONS
   . = LOAD_OFFSET + LOAD_PHYSICAL_ADDR;
   phys_startup_32 = startup_32 - LOAD_OFFSET;
 
-  .text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
+  .kernel.text.head : AT(ADDR(.kernel.text.head) - LOAD_OFFSET) {
   	_text = .;			/* Text and read-only data */
-	*(.text.head)
+	*(.kernel.text.head)
   } :text = 0x9090
 
   /* read-only */
   .text : AT(ADDR(.text) - LOAD_OFFSET) {
 	. = ALIGN(PAGE_SIZE); /* not really needed, already page aligned */
-	*(.text.page_aligned)
+	*(.kernel.text.page_aligned)
 	TEXT_TEXT
 	SCHED_TEXT
 	LOCK_TEXT
@@ -71,32 +71,32 @@ SECTIONS
   . = ALIGN(PAGE_SIZE);
   .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
   	__nosave_begin = .;
-	*(.data.nosave)
+	*(.kernel.data.nosave)
   	. = ALIGN(PAGE_SIZE);
   	__nosave_end = .;
   }
 
   . = ALIGN(PAGE_SIZE);
-  .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
-	*(.data.page_aligned)
-	*(.data.idt)
+  .kernel.data.page_aligned : AT(ADDR(.kernel.data.page_aligned) - LOAD_OFFSET) {
+	*(.kernel.data.page_aligned)
+	*(.kernel.data.idt)
   }
 
   . = ALIGN(32);
-  .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
-	*(.data.cacheline_aligned)
+  .kernel.data.cacheline_aligned : AT(ADDR(.kernel.data.cacheline_aligned) - LOAD_OFFSET) {
+	*(.kernel.data.cacheline_aligned)
   }
 
   /* rarely changed data like cpu maps */
   . = ALIGN(32);
-  .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
-	*(.data.read_mostly)
+  .kernel.data.read_mostly : AT(ADDR(.kernel.data.read_mostly) - LOAD_OFFSET) {
+	*(.kernel.data.read_mostly)
 	_edata = .;		/* End of data section */
   }
 
   . = ALIGN(THREAD_SIZE);	/* init_task */
-  .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
-	*(.data.init_task)
+  .kernel.data.init_task : AT(ADDR(.kernel.data.init_task) - LOAD_OFFSET) {
+	*(.kernel.data.init_task)
   }
 
   /* might get freed after init */
@@ -179,11 +179,11 @@ SECTIONS
   }
 #endif
   . = ALIGN(PAGE_SIZE);
-  .data.percpu  : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
+  .kernel.data.percpu  : AT(ADDR(.kernel.data.percpu) - LOAD_OFFSET) {
 	__per_cpu_start = .;
-	*(.data.percpu.page_aligned)
-	*(.data.percpu)
-	*(.data.percpu.shared_aligned)
+	*(.kernel.data.percpu.page_aligned)
+	*(.kernel.data.percpu)
+	*(.kernel.data.percpu.shared_aligned)
 	__per_cpu_end = .;
   }
   . = ALIGN(PAGE_SIZE);
@@ -192,7 +192,7 @@ SECTIONS
   .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
 	__init_end = .;
 	__bss_start = .;		/* BSS */
-	*(.bss.page_aligned)
+	*(.bss.kernel.page_aligned)
 	*(.bss)
 	. = ALIGN(4);
 	__bss_stop = .;
diff --git a/arch/x86/kernel/vmlinux_64.lds.S b/arch/x86/kernel/vmlinux_64.lds.S
--- a/arch/x86/kernel/vmlinux_64.lds.S
+++ b/arch/x86/kernel/vmlinux_64.lds.S
@@ -28,7 +28,7 @@ SECTIONS
   _text = .;			/* Text and read-only data */
   .text :  AT(ADDR(.text) - LOAD_OFFSET) {
 	/* First the code that has to be first for bootstrapping */
-	*(.text.head)
+	*(.kernel.text.head)
 	_stext = .;
 	/* Then the rest */
 	TEXT_TEXT
@@ -63,17 +63,17 @@ SECTIONS
 
   . = ALIGN(PAGE_SIZE);
   . = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
-  .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
-	*(.data.cacheline_aligned)
+  .kernel.data.cacheline_aligned : AT(ADDR(.kernel.data.cacheline_aligned) - LOAD_OFFSET) {
+	*(.kernel.data.cacheline_aligned)
   }
   . = ALIGN(CONFIG_X86_INTERNODE_CACHE_BYTES);
-  .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
-  	*(.data.read_mostly)
+  .kernel.data.read_mostly : AT(ADDR(.kernel.data.read_mostly) - LOAD_OFFSET) {
+  	*(.kernel.data.read_mostly)
   }
 
 #define VSYSCALL_ADDR (-10*1024*1024)
-#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
-#define VSYSCALL_VIRT_ADDR ((ADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
+#define VSYSCALL_PHYS_ADDR ((LOADADDR(.kernel.data.read_mostly) + SIZEOF(.kernel.data.read_mostly) + 4095) & ~(4095))
+#define VSYSCALL_VIRT_ADDR ((ADDR(.kernel.data.read_mostly) + SIZEOF(.kernel.data.read_mostly) + 4095) & ~(4095))
 
 #define VLOAD_OFFSET (VSYSCALL_ADDR - VSYSCALL_PHYS_ADDR)
 #define VLOAD(x) (ADDR(x) - VLOAD_OFFSET)
@@ -122,13 +122,13 @@ SECTIONS
 #undef VVIRT
 
   . = ALIGN(THREAD_SIZE);	/* init_task */
-  .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
-	*(.data.init_task)
+  .kernel.data.init_task : AT(ADDR(.kernel.data.init_task) - LOAD_OFFSET) {
+	*(.kernel.data.init_task)
   }:data.init
 
   . = ALIGN(PAGE_SIZE);
-  .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
-	*(.data.page_aligned)
+  .kernel.data.page_aligned : AT(ADDR(.kernel.data.page_aligned) - LOAD_OFFSET) {
+	*(.kernel.data.page_aligned)
   }
 
   /* might get freed after init */
@@ -215,13 +215,13 @@ SECTIONS
 
   . = ALIGN(PAGE_SIZE);
   __nosave_begin = .;
-  .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) { *(.data.nosave) }
+  .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) { *(.kernel.data.nosave) }
   . = ALIGN(PAGE_SIZE);
   __nosave_end = .;
 
   __bss_start = .;		/* BSS */
   .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
-	*(.bss.page_aligned)
+	*(.bss.kernel.page_aligned)
 	*(.bss)
 	}
   __bss_stop = .;
diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S
--- a/arch/xtensa/kernel/head.S
+++ b/arch/xtensa/kernel/head.S
@@ -234,7 +234,7 @@ should_never_return:
  * BSS section
  */
 	
-.section ".bss.page_aligned", "w"
+.section ".bss.kernel.page_aligned", "w"
 ENTRY(swapper_pg_dir)
 	.fill	PAGE_SIZE, 1, 0
 ENTRY(empty_zero_page)
diff --git a/arch/xtensa/kernel/init_task.c b/arch/xtensa/kernel/init_task.c
--- a/arch/xtensa/kernel/init_task.c
+++ b/arch/xtensa/kernel/init_task.c
@@ -28,7 +28,7 @@ EXPORT_SYMBOL(init_mm);
 EXPORT_SYMBOL(init_mm);
 
 union thread_union init_thread_union
-	__attribute__((__section__(".data.init_task"))) =
+	__attribute__((__section__(".kernel.data.init_task"))) =
 { INIT_THREAD_INFO(init_task) };
 
 struct task_struct init_task = INIT_TASK(init_task);
diff --git a/arch/xtensa/kernel/vmlinux.lds.S b/arch/xtensa/kernel/vmlinux.lds.S
--- a/arch/xtensa/kernel/vmlinux.lds.S
+++ b/arch/xtensa/kernel/vmlinux.lds.S
@@ -121,14 +121,14 @@ SECTIONS
     DATA_DATA
     CONSTRUCTORS
     . = ALIGN(XCHAL_ICACHE_LINESIZE);
-    *(.data.cacheline_aligned)
+    *(.kernel.data.cacheline_aligned)
   }
 
   _edata = .;
 
   /* The initial task */
   . = ALIGN(8192);
-  .data.init_task : { *(.data.init_task) }
+  .kernel.data.init_task : { *(.kernel.data.init_task) }
 
   /* Initialization code and data: */
 
@@ -259,7 +259,7 @@ SECTIONS
 
   /* BSS section */
   _bss_start = .;
-  .bss : { *(.bss.page_aligned) *(.bss) }
+  .bss : { *(.bss.kernel.page_aligned) *(.bss) }
   _bss_end = .;
 
   _end = .;
diff --git a/include/asm-frv/init.h b/include/asm-frv/init.h
--- a/include/asm-frv/init.h
+++ b/include/asm-frv/init.h
@@ -1,12 +1,12 @@
 #ifndef _ASM_INIT_H
 #define _ASM_INIT_H
 
-#define __init __attribute__ ((__section__ (".text.init")))
-#define __initdata __attribute__ ((__section__ (".data.init")))
+#define __init __attribute__ ((__section__ (".kernel.text.init")))
+#define __initdata __attribute__ ((__section__ (".kernel.data.init")))
 /* For assembly routines */
-#define __INIT		.section	".text.init",#alloc,#execinstr
+#define __INIT		.section	".kernel.text.init",#alloc,#execinstr
 #define __FINIT		.previous
-#define __INITDATA	.section	".data.init",#alloc,#write
+#define __INITDATA	.section	".kernel.data.init",#alloc,#write
 
 #endif
 
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -64,7 +64,7 @@
 /* .data section */
 #define DATA_DATA							\
 	*(.data)							\
-	*(.data.init.refok)						\
+	*(.kernel.data.init.refok)					\
 	*(.ref.data)							\
 	DEV_KEEP(init.data)						\
 	DEV_KEEP(exit.data)						\
@@ -255,8 +255,8 @@
 		*(.text.hot)						\
 		*(.text)						\
 		*(.ref.text)						\
-		*(.text.init.refok)					\
-		*(.exit.text.refok)					\
+		*(.kernel.text.init.refok)				\
+		*(.kernel.text.exit.refok)				\
 	DEV_KEEP(init.text)						\
 	DEV_KEEP(exit.text)						\
 	CPU_KEEP(init.text)						\
@@ -433,9 +433,9 @@
 #define PERCPU(align)							\
 	. = ALIGN(align);						\
 	VMLINUX_SYMBOL(__per_cpu_start) = .;				\
-	.data.percpu  : AT(ADDR(.data.percpu) - LOAD_OFFSET) {		\
-		*(.data.percpu.page_aligned)				\
-		*(.data.percpu)						\
-		*(.data.percpu.shared_aligned)				\
+	.kernel.data.percpu  : AT(ADDR(.kernel.data.percpu) - LOAD_OFFSET) { \
+		*(.kernel.data.percpu.page_aligned)				\
+		*(.kernel.data.percpu)					\
+		*(.kernel.data.percpu.shared_aligned)			\
 	}								\
 	VMLINUX_SYMBOL(__per_cpu_end) = .;
diff --git a/include/linux/cache.h b/include/linux/cache.h
--- a/include/linux/cache.h
+++ b/include/linux/cache.h
@@ -31,7 +31,7 @@
 #ifndef __cacheline_aligned
 #define __cacheline_aligned					\
   __attribute__((__aligned__(SMP_CACHE_BYTES),			\
-		 __section__(".data.cacheline_aligned")))
+		 __section__(".kernel.data.cacheline_aligned")))
 #endif /* __cacheline_aligned */
 
 #ifndef __cacheline_aligned_in_smp
diff --git a/include/linux/init.h b/include/linux/init.h
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -62,9 +62,9 @@
 
 /* backward compatibility note
  *  A few places hardcode the old section names:
- *  .text.init.refok
- *  .data.init.refok
- *  .exit.text.refok
+ *  .kernel.text.init.refok
+ *  .kernel.data.init.refok
+ *  .kernel.text.exit.refok
  *  They should be converted to use the defines from this file
  */
 
@@ -301,7 +301,7 @@ void __init parse_early_param(void);
 #endif
 
 /* Data marked not to be saved by software suspend */
-#define __nosavedata __section(.data.nosave)
+#define __nosavedata __section(.kernel.data.nosave)
 
 /* This means "can be init if no module support, otherwise module load
    may call it." */
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -18,8 +18,8 @@
 # define asmregparm
 #endif
 
-#define __page_aligned_data	__section(.data.page_aligned) __aligned(PAGE_SIZE)
-#define __page_aligned_bss	__section(.bss.page_aligned) __aligned(PAGE_SIZE)
+#define __page_aligned_data	__section(.kernel.data.page_aligned) __aligned(PAGE_SIZE)
+#define __page_aligned_bss	__section(.bss.kernel.page_aligned) __aligned(PAGE_SIZE)
 
 /*
  * This is used by architectures to keep arguments on the stack
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -10,13 +10,13 @@
 
 #ifdef CONFIG_SMP
 #define DEFINE_PER_CPU(type, name)					\
-	__attribute__((__section__(".data.percpu")))			\
+	__attribute__((__section__(".kernel.data.percpu")))		\
 	PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
 
 #ifdef MODULE
-#define SHARED_ALIGNED_SECTION ".data.percpu"
+#define SHARED_ALIGNED_SECTION ".kernel.data.percpu"
 #else
-#define SHARED_ALIGNED_SECTION ".data.percpu.shared_aligned"
+#define SHARED_ALIGNED_SECTION ".kernel.data.percpu.shared_aligned"
 #endif
 
 #define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)			\
@@ -24,14 +24,14 @@
 	PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name		\
 	____cacheline_aligned_in_smp
 
-#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name)			\
-	__attribute__((__section__(".data.percpu.page_aligned")))	\
+#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name)				\
+	__attribute__((__section__(".kernel.data.percpu.page_aligned")))\
 	PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
 #else
 #define DEFINE_PER_CPU(type, name)					\
 	PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
 
-#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)		      \
+#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)			\
 	DEFINE_PER_CPU(type, name)
 
 #define DEFINE_PER_CPU_PAGE_ALIGNED(type, name)		      \
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -60,7 +60,7 @@
 /*
  * Must define these before including other files, inline functions need them
  */
-#define LOCK_SECTION_NAME ".text.lock."KBUILD_BASENAME
+#define LOCK_SECTION_NAME ".kernel.text.lock."KBUILD_BASENAME
 
 #define LOCK_SECTION_START(extra)               \
         ".subsection 1\n\t"                     \
diff --git a/kernel/module.c b/kernel/module.c
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -484,7 +484,7 @@ static unsigned int find_pcpusec(Elf_Ehd
 				 Elf_Shdr *sechdrs,
 				 const char *secstrings)
 {
-	return find_sec(hdr, sechdrs, secstrings, ".data.percpu");
+	return find_sec(hdr, sechdrs, secstrings, ".kernel.data.percpu");
 }
 
 static void percpu_modcopy(void *pcpudest, const void *from, unsigned long size)
diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -794,9 +794,9 @@ static const char *data_sections[] = { D
 /* sections that may refer to an init/exit section with no warning */
 static const char *initref_sections[] =
 {
-	".text.init.refok*",
-	".exit.text.refok*",
-	".data.init.refok*",
+	".kernel.text.init.refok*",
+	".kernel.text.exit.refok*",
+	".kernel.data.init.refok*",
 	NULL
 };
 
@@ -915,7 +915,7 @@ static int section_mismatch(const char *
  * Pattern 0:
  *   Do not warn if funtion/data are marked with __init_refok/__initdata_refok.
  *   The pattern is identified by:
- *   fromsec = .text.init.refok* | .data.init.refok*
+ *   fromsec = .kernel.text.init.refok* | .kernel.data.init.refok*
  *
  * Pattern 1:
  *   If a module parameter is declared __initdata and permissions=0
@@ -939,8 +939,8 @@ static int section_mismatch(const char *
  *           *probe_one, *_console, *_timer
  *
  * Pattern 3:
- *   Whitelist all refereces from .text.head to .init.data
- *   Whitelist all refereces from .text.head to .init.text
+ *   Whitelist all refereces from .kernel.text.head to .init.data
+ *   Whitelist all refereces from .kernel.text.head to .init.text
  *
  * Pattern 4:
  *   Some symbols belong to init section but still it is ok to reference
diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
--- a/scripts/recordmcount.pl
+++ b/scripts/recordmcount.pl
@@ -26,7 +26,7 @@
 # which will also be the location of that section after final link.
 # e.g.
 #
-#  .section ".text.sched"
+#  .section ".sched.text"
 #  .globl my_func
 #  my_func:
 #        [...]
@@ -39,7 +39,7 @@
 #        [...]
 #
 # Both relocation offsets for the mcounts in the above example will be
-# offset from .text.sched. If we make another file called tmp.s with:
+# offset from .sched.text. If we make another file called tmp.s with:
 #
 #  .section __mcount_loc
 #  .quad  my_func + 0x5
@@ -51,7 +51,7 @@
 # But this gets hard if my_func is not globl (a static function).
 # In such a case we have:
 #
-#  .section ".text.sched"
+#  .section ".sched.text"
 #  my_func:
 #        [...]
 #        call mcount  (offset: 0x5)
