Symptoms
- Reproduction steps
- x86 phone (Android 7.0)
- After upgrading to a newer kernel 4.4 point release, the x86 phone fails to boot.
Triage
- Relevant logs
- The phone fails to boot because the zygote process hits a native crash.
- tombstone
01-01 00:14:31.868 3484 3484 F DEBUG : Revision: '0'
01-01 00:14:31.868 3484 3484 F DEBUG : ABI: 'x86'
01-01 00:14:31.869 3484 3484 F DEBUG : pid: 2679, tid: 2679, name: zygote >>> zygote <<<
01-01 00:14:31.869 3140 3140 I dex2oat : dex2oat took 60.206s (threads: 8) arena alloc=22MB (23156320B) java alloc=6MB (7266936B) native alloc=90MB (95111512B) free=11MB (11974312B)
01-01 00:14:31.869 3484 3484 F DEBUG : signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xff99c000
01-01 00:14:31.871 3484 3484 F DEBUG :     eax ff99d000  ebx e4d7bacc  ecx 20fd0600  edx e4d7d440
01-01 00:14:31.871 3484 3484 F DEBUG :     esi ff9b9000  edi ff1bf000
01-01 00:14:31.871 3484 3484 F DEBUG :     xcs 00000023  xds 0000002b  xes 0000002b  xfs 00000003  xss 0000002b
01-01 00:14:31.871 3484 3484 F DEBUG :     eip e4bc7320  ebp ff9b93a8  esp ff9b9340  flags 00010206
01-01 00:14:31.889 3484 3484 F DEBUG :
01-01 00:14:31.889 3484 3484 F DEBUG : backtrace:
01-01 00:14:31.889 3484 3484 F DEBUG :     #00 pc 005f1320  /system/lib/libart.so (_ZN3art6Thread25InstallImplicitProtectionEv+192)
01-01 00:14:31.889 3484 3484 F DEBUG :     #01 pc 005f3129  /system/lib/libart.so (_ZN3art6Thread12InitStackHwmEv+281)
01-01 00:14:31.889 3484 3484 F DEBUG :     #02 pc 005f0fa9  /system/lib/libart.so (_ZN3art6Thread4InitEPNS_10ThreadListEPNS_9JavaVMExtEPNS_9JNIEnvExtE+297)
01-01 00:14:31.890 3484 3484 F DEBUG :     #03 pc 005f36fe  /system/lib/libart.so (_ZN3art6Thread6AttachEPKcbP8_jobjectb+494)
01-01 00:14:31.890 3484 3484 F DEBUG :     #04 pc 005d2188  /system/lib/libart.so (_ZN3art7Runtime4InitEONS_18RuntimeArgumentMapE+14472)
01-01 00:14:31.890 3484 3484 F DEBUG :     #05 pc 005d587d  /system/lib/libart.so (_ZN3art7Runtime6CreateERKNSt3__16vectorINS1_4pairINS1_12basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEPKvEENS7_ISC_EEEEb+157)
01-01 00:14:31.890 3484 3484 F DEBUG :     #06 pc 00446551  /system/lib/libart.so (JNI_CreateJavaVM+609)
01-01 00:14:31.890 3484 3484 F DEBUG :     #07 pc 0000518b  /system/lib/libnativehelper.so (JNI_CreateJavaVM+59)
01-01 00:14:31.890 3484 3484 F DEBUG :     #08 pc 00074856  /system/lib/libandroid_runtime.so (_ZN7android14AndroidRuntime7startVmEPP7_JavaVMPP7_JNIEnvb+2966)
01-01 00:14:31.890 3484 3484 F DEBUG :     #09 pc 00074f73  /system/lib/libandroid_runtime.so (_ZN7android14AndroidRuntime5startEPKcRKNS_6VectorINS_7String8EEEb+403)
01-01 00:14:31.890 3484 3484 F DEBUG :     #10 pc 00001b67  /system/bin/app_process32
01-01 00:14:31.890 3484 3484 F DEBUG :     #11 pc 00016ddd  /system/lib/libc.so (__libc_init+125)
01-01 00:14:31.890 3484 3484 F DEBUG :     #12 pc 0000144c  /system/bin/app_process32
01-01 00:14:31.890 3484 3484 F DEBUG :     #13 pc 00000004  <unknown>
Initial analysis
- gdb backtrace
(gdb) bt
#0  0xe4bc7320 in art::Thread::InstallImplicitProtection (this=<optimized out>) at vendor/intel/art-extension/runtime/thread.cc:572
#1  0xe4bc912a in art::Thread::InitStackHwm (this=0xe4e0b400) at vendor/intel/art-extension/runtime/thread.cc:935
#2  0xe4bc6faa in art::Thread::Init (this=0xe4e0b400, thread_list=0x20fd0601, java_vm=0xff99d000, jni_env_ext=<optimized out>) at vendor/intel/art-extension/runtime/thread.cc:702
#3  0xe4bc96ff in art::Thread::Attach (thread_name=<optimized out>, as_daemon=<optimized out>, thread_group=0xff99d000, create_peer=<optimized out>) at vendor/intel/art-extension/runtime/thread.cc:752
#4  0xe4ba8189 in art::Runtime::Init(art::RuntimeArgumentMap&&) (this=<optimized out>, runtime_options_in=<optimized out>) at vendor/intel/art-extension/runtime/runtime.cc:1208
#5  0xe4bab87e in art::Runtime::Create(art::RuntimeArgumentMap&&) (runtime_options=<optimized out>) at vendor/intel/art-extension/runtime/runtime.cc:495
#6  art::Runtime::Create (raw_options=..., ignore_unrecognized=<optimized out>) at vendor/intel/art-extension/runtime/runtime.cc:508
#7  0xe4a1c552 in JNI_CreateJavaVM (p_vm=0x20fd0600, p_env=0xff99d000, vm_args=<optimized out>) at vendor/intel/art-extension/runtime/java_vm_ext.cc:966
#8  0xe6b3b18c in JniInvocation::JNI_CreateJavaVM (this=<optimized out>, p_vm=0xe87db3c0 <android::AndroidRuntime::mJavaVM>, p_env=0xe4d7d440 <art::gLogVerbosity>, vm_args=<optimized out>) at libnativehelper/JniInvocation.cpp:152
#9  JNI_CreateJavaVM (p_vm=0xe87db3c0 <android::AndroidRuntime::mJavaVM>, p_env=0xe4d7d440 <art::gLogVerbosity>, vm_args=0x20fd0600) at libnativehelper/JniInvocation.cpp:181
#10 0xe868f857 in android::AndroidRuntime::startVm (this=this@entry=0xff9bae28, pJavaVM=pJavaVM@entry=0xe87db3c0 <android::AndroidRuntime::mJavaVM>, pEnv=pEnv@entry=0xff9bacf4, zygote=zygote@entry=true) at frameworks/base/core/jni/AndroidRuntime.cpp:934
#11 0xe868ff74 in android::AndroidRuntime::start (this=0xff9bae28, className=0x5c2b7cf1 "com.android.internal.os.ZygoteInit", options=..., zygote=true) at frameworks/base/core/jni/AndroidRuntime.cpp:1011
#12 0x5c2b6b68 in ?? ()
#13 0xe881edde in __libc_init (raw_args=0xff9bbef0, onexit=0x0, slingshot=0x5c2b6510, structors=0xff9bbed0) at bionic/libc/bionic/libc_init_dynamic.cpp:109
#14 0x5c2b644d in ?? ()
#15 0x00000005 in ?? ()
- Relevant code (vendor/intel/art-extension/runtime/thread.cc; line numbers kept because the gdb frames reference thread.cc:572)
520 // Install a protected region in the stack. This is used to trigger a SIGSEGV if a stack
521 // overflow is detected. It is located right below the stack_begin_.
522 ATTRIBUTE_NO_SANITIZE_ADDRESS
523 void Thread::InstallImplicitProtection() {
524   uint8_t* pregion = tlsPtr_.stack_begin - kStackOverflowProtectedSize;
525   uint8_t* stack_himem = tlsPtr_.stack_end;
526   uint8_t* stack_top = reinterpret_cast<uint8_t*>(reinterpret_cast<uintptr_t>(&stack_himem) &
527       ~(kPageSize - 1));  // Page containing current top of stack.
528
529   // Try to directly protect the stack.
530   VLOG(threads) << "installing stack protected region at " << std::hex <<
531       static_cast<void*>(pregion) << " to " <<
532       static_cast<void*>(pregion + kStackOverflowProtectedSize - 1);
533   if (ProtectStack(/* fatal_on_error */ false)) {
534     // Tell the kernel that we won't be needing these pages any more.
535     // NB. madvise will probably write zeroes into the memory (on linux it does).
536     uint32_t unwanted_size = stack_top - pregion - kPageSize;
537     madvise(pregion, unwanted_size, MADV_DONTNEED);
538     return;
539   }
540
541   // There is a little complexity here that deserves a special mention. On some
542   // architectures, the stack is created using a VM_GROWSDOWN flag
543   // to prevent memory being allocated when it's not needed. This flag makes the
544   // kernel only allocate memory for the stack by growing down in memory. Because we
545   // want to put an mprotected region far away from that at the stack top, we need
546   // to make sure the pages for the stack are mapped in before we call mprotect.
547   //
548   // The failed mprotect in UnprotectStack is an indication of a thread with VM_GROWSDOWN
549   // with a non-mapped stack (usually only the main thread).
550   //
551   // We map in the stack by reading every page from the stack bottom (highest address)
552   // to the stack top. (We then madvise this away.) This must be done by reading from the
553   // current stack pointer downwards. Any access more than a page below the current SP
554   // might cause a segv.
555   // TODO: This comment may be out of date. It seems possible to speed this up. As
556   // this is normally done once in the zygote on startup, ignore for now.
557   //
558   // AddressSanitizer does not like the part of this functions that reads every stack page.
559   // Looks a lot like an out-of-bounds access.
560
561   // (Defensively) first remove the protection on the protected region as will want to read
562   // and write it. Ignore errors.
563   UnprotectStack();
564
565   VLOG(threads) << "Need to map in stack for thread at " << std::hex <<
566       static_cast<void*>(pregion);
567
568   // Read every page from the high address to the low.
569   volatile uint8_t dont_optimize_this;
570   UNUSED(dont_optimize_this);
571   for (uint8_t* p = stack_top; p >= pregion; p -= kPageSize) {
572     dont_optimize_this = *p;
573   }
574
575   VLOG(threads) << "(again) installing stack protected region at " << std::hex <<
576       static_cast<void*>(pregion) << " to " <<
577       static_cast<void*>(pregion + kStackOverflowProtectedSize - 1);
578
579   // Protect the bottom of the stack to prevent read/write to it.
580   ProtectStack(/* fatal_on_error */ true);
581
582   // Tell the kernel that we won't be needing these pages any more.
583   // NB. madvise will probably write zeroes into the memory (on linux it does).
584   uint32_t unwanted_size = stack_top - pregion - kPageSize;
585   madvise(pregion, unwanted_size, MADV_DONTNEED);
586 }
- Analysis
The backtrace shows that the zygote main thread takes a SIGSEGV while installing the implicit stack-overflow guard page for the thread.
- The ART stack-overflow protection mechanism
- The relevant fields of the current Thread*, printed in gdb:
(gdb) f 1
(gdb) p /x this->tlsPtr_.stack_begin
$2 = 0xff1bf000
(gdb) p /x this->tlsPtr_.stack_end
$3 = 0xff1c1000
(gdb) p /x this->tlsPtr_.stack_size
$4 = 0x800000
ART needs to turn the page just below stack_begin (0xff1be000) into a guard page. To do that it first walks the stack page by page from the current ESP down toward the stack top (the lowest stack address), and only then mprotects the page containing 0xff1be000 (no read/write). But a SIGSEGV is raised on the access to [%eax - 0x1000] = [0xff99d000 - 0x1000] = [0xff99c000].
- Inspecting the maps file
... ...
e97f1000-e97f2000 r--p 00000000 00:00 0
e97f2000-e97f4000 rw-p 00000000 00:00 0
ff99d000-ff9be000 rw-p 00000000 00:00 0          [stack]
... ...
As the excerpt shows, no mapping covers the address 0xff99c000.
Isolation experiments
- With the upgraded kernel on the same branch, x86 phones fail to boot while arm/arm64 phones are unaffected.
Deeper analysis
- Based on the analysis of the user-space logic, kernel colleagues reviewed the patches brought in by the upgrade and converged on the following commit:
commit 4b359430674caa2c98d0049a6941f157d2a33741
Author: Hugh Dickins <hughd@google.com>
Date: Mon Jun 19 04:03:24 2017 -0700
mm: larger stack guard gap, between vmas
commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
Stack guard page is a useful feature to reduce a risk of stack smashing
into a different mapping. We have been using a single page gap which
is sufficient to prevent having stack adjacent to a different mapping.
But this seems to be insufficient in the light of the stack usage in
userspace. E.g. glibc uses as large as 64kB alloca() in many commonly
used functions. Others use constructs liks gid_t buffer[NGROUPS_MAX]
which is 256kB or stack strings with MAX_ARG_STRLEN.
This will become especially dangerous for suid binaries and the default
no limit for the stack size limit because those applications can be
tricked to consume a large portion of the stack and a single glibc call
could jump over the guard page. These attacks are not theoretical,
unfortunatelly.
Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size
because systems with larger base pages might cap stack allocations in
the PAGE_SIZE units) which should cover larger alloca() and VLA stack
allocations. It is obviously not a full fix because the problem is
somehow inherent, but it should reduce attack space a lot.
One could argue that the gap size should be configurable from userspace,
but that can be done later when somebody finds that the new 1MB is wrong
for some special case applications. For now, add a kernel command line
option (stack_guard_gap) to specify the stack gap size (in page units).
Implementation wise, first delete all the old code for stack guard page:
because although we could get away with accounting one extra page in a
stack vma, accounting a larger gap can break userspace - case in point,
a program run with "ulimit -S -v 20000" failed when the 1MB gap was
counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
and strict non-overcommit mode.
Instead of keeping gap inside the stack vma, maintain the stack guard
gap as a gap between vmas: using vm_start_gap() in place of vm_start
(or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
places which need to respect the gap - mainly arch_get_unmapped_area(),
and and the vma tree's subtree_gap support for that.
Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[wt: backport to 4.11: adjust context]
[wt: backport to 4.9: adjust context ; kernel doc was not in admin-guide]
[wt: backport to 4.4: adjust context ; drop ppc hugetlb_radix changes]
Signed-off-by: Willy Tarreau <w@1wt.eu>
[gkh: minor build fixes for 4.4]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index ca64ca566099..7c77d7edb851 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -3580,6 +3580,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
spia_pedr=
spia_peddr=
+ stack_guard_gap= [MM]
+ override the default stack gap protection. The value
+ is in page units and it defines how many pages prior
+ to (for stacks growing down) resp. after (for stacks
+ growing up) the main stack are reserved for no other
+ mapping. Default value is 256 pages.
+
stacktrace [FTRACE]
Enabled the stack tracer on boot up.
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 2e06d56e987b..cf4ae6958240 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -64,7 +64,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 407dc786583a..c469c0665752 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -89,7 +89,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -140,7 +140,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/frv/mm/elf-fdpic.c b/arch/frv/mm/elf-fdpic.c
index 836f14707a62..efa59f1f8022 100644
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -74,7 +74,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
addr = PAGE_ALIGN(addr);
vma = find_vma(current->mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
goto success;
}
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 5c81fdd032c3..025cb31aa0a2 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -92,7 +92,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index 5aba01ac457f..4dda73c44fee 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -88,7 +88,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
unsigned long task_size = TASK_SIZE;
int do_color_align, last_mmap;
struct vm_unmapped_area_info info;
@@ -115,9 +115,10 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
else
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
goto found_addr;
}
@@ -141,7 +142,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
const unsigned long len, const unsigned long pgoff,
const unsigned long flags)
{
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct mm_struct *mm = current->mm;
unsigned long addr = addr0;
int do_color_align, last_mmap;
@@ -175,9 +176,11 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = COLOR_ALIGN(addr, last_mmap, pgoff);
else
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
goto found_addr;
}
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 0f432a702870..6ad12b244770 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -105,7 +105,7 @@ static int slice_area_is_free(struct mm_struct *mm, unsigned long addr,
if ((mm->task_size - len) < addr)
return 0;
vma = find_vma(mm, addr);
- return (!vma || (addr + len) <= vma->vm_start);
+ return (!vma || (addr + len) <= vm_start_gap(vma));
}
static int slice_low_has_vma(struct mm_struct *mm, unsigned long slice)
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index f2b6b1d9c804..126c4a9b9bf9 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -97,7 +97,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -135,7 +135,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 6777177807c2..7df7d5944188 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -63,7 +63,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -113,7 +113,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index c690c8e16a96..7f0f7c01b297 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -118,7 +118,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -181,7 +181,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index da1142401bf4..ffa842b4d7d4 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -115,7 +115,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, HPAGE_SIZE);
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (mm->get_unmapped_area == arch_get_unmapped_area)
diff --git a/arch/tile/mm/hugetlbpage.c b/arch/tile/mm/hugetlbpage.c
index c034dc3fe2d4..c97ee6c7f949 100644
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -232,7 +232,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (current->mm->get_unmapped_area == arch_get_unmapped_area)
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 10e0272d789a..136ad7c1ce7b 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -143,7 +143,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (end - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -186,7 +186,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 42982b26e32b..39bdaf3ac44a 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -144,7 +144,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (mm->get_unmapped_area == arch_get_unmapped_area)
diff --git a/arch/xtensa/kernel/syscall.c b/arch/xtensa/kernel/syscall.c
index 83cf49685373..3aaaae18417c 100644
--- a/arch/xtensa/kernel/syscall.c
+++ b/arch/xtensa/kernel/syscall.c
@@ -87,7 +87,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
/* At this point: (!vmm || addr < vmm->vm_end). */
if (TASK_SIZE - len < addr)
return -ENOMEM;
- if (!vmm || addr + len <= vmm->vm_start)
+ if (!vmm || addr + len <= vm_start_gap(vmm))
return addr;
addr = vmm->vm_end;
if (flags & MAP_SHARED)
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 595ebdb41846..a17da8b57fc6 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -191,7 +191,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index db1a1427c27a..07ef85e19fbc 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -295,11 +295,7 @@ show_map_vma(struct seq_file *m, struct vm_area_struct *vma, int is_pid)
/* We don't show the stack guard page in /proc/maps */
start = vma->vm_start;
- if (stack_guard_page_start(vma, start))
- start += PAGE_SIZE;
end = vma->vm_end;
- if (stack_guard_page_end(vma, end))
- end -= PAGE_SIZE;
seq_setwidth(m, 25 + sizeof(void *) * 6 - 1);
seq_printf(m, "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu ",
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f0ffa01c90d9..55f950afb60d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1278,39 +1278,11 @@ int clear_page_dirty_for_io(struct page *page);
int get_cmdline(struct task_struct *task, char *buffer, int buflen);
-/* Is the vma a continuation of the stack vma above it? */
-static inline int vma_growsdown(struct vm_area_struct *vma, unsigned long addr)
-{
- return vma && (vma->vm_end == addr) && (vma->vm_flags & VM_GROWSDOWN);
-}
-
static inline bool vma_is_anonymous(struct vm_area_struct *vma)
{
return !vma->vm_ops;
}
-static inline int stack_guard_page_start(struct vm_area_struct *vma,
- unsigned long addr)
-{
- return (vma->vm_flags & VM_GROWSDOWN) &&
- (vma->vm_start == addr) &&
- !vma_growsdown(vma->vm_prev, addr);
-}
-
-/* Is the vma a continuation of the stack vma below it? */
-static inline int vma_growsup(struct vm_area_struct *vma, unsigned long addr)
-{
- return vma && (vma->vm_start == addr) && (vma->vm_flags & VM_GROWSUP);
-}
-
-static inline int stack_guard_page_end(struct vm_area_struct *vma,
- unsigned long addr)
-{
- return (vma->vm_flags & VM_GROWSUP) &&
- (vma->vm_end == addr) &&
- !vma_growsup(vma->vm_next, addr);
-}
-
int vma_is_stack_for_task(struct vm_area_struct *vma, struct task_struct *t);
extern unsigned long move_page_tables(struct vm_area_struct *vma,
@@ -2012,6 +1984,7 @@ void page_cache_async_readahead(struct address_space *mapping,
pgoff_t offset,
unsigned long size);
+extern unsigned long stack_guard_gap;
/* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
@@ -2040,6 +2013,30 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
return vma;
}
+static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+{
+ unsigned long vm_start = vma->vm_start;
+
+ if (vma->vm_flags & VM_GROWSDOWN) {
+ vm_start -= stack_guard_gap;
+ if (vm_start > vma->vm_start)
+ vm_start = 0;
+ }
+ return vm_start;
+}
+
+static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+{
+ unsigned long vm_end = vma->vm_end;
+
+ if (vma->vm_flags & VM_GROWSUP) {
+ vm_end += stack_guard_gap;
+ if (vm_end < vma->vm_end)
+ vm_end = -PAGE_SIZE;
+ }
+ return vm_end;
+}
+
static inline unsigned long vma_pages(struct vm_area_struct *vma)
{
return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
diff --git a/mm/gup.c b/mm/gup.c
index 4b0b7e7d1136..b599526db9f7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -312,11 +312,6 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
/* mlock all present pages, but do not fault in new pages */
if ((*flags & (FOLL_POPULATE | FOLL_MLOCK)) == FOLL_MLOCK)
return -ENOENT;
- /* For mm_populate(), just skip the stack guard page. */
- if ((*flags & FOLL_POPULATE) &&
- (stack_guard_page_start(vma, address) ||
- stack_guard_page_end(vma, address + PAGE_SIZE)))
- return -ENOENT;
if (*flags & FOLL_WRITE)
fault_flags |= FAULT_FLAG_WRITE;
if (nonblocking)
diff --git a/mm/memory.c b/mm/memory.c
index 76dcee317714..e6fa13484447 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2662,40 +2662,6 @@ out_release:
}
/*
- * This is like a special single-page "expand_{down|up}wards()",
- * except we must first make sure that 'address{-|+}PAGE_SIZE'
- * doesn't hit another vma.
- */
-static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
-{
- address &= PAGE_MASK;
- if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
- struct vm_area_struct *prev = vma->vm_prev;
-
- /*
- * Is there a mapping abutting this one below?
- *
- * That's only ok if it's the same stack mapping
- * that has gotten split..
- */
- if (prev && prev->vm_end == address)
- return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
-
- return expand_downwards(vma, address - PAGE_SIZE);
- }
- if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
- struct vm_area_struct *next = vma->vm_next;
-
- /* As VM_GROWSDOWN but s/below/above/ */
- if (next && next->vm_start == address + PAGE_SIZE)
- return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
-
- return expand_upwards(vma, address + PAGE_SIZE);
- }
- return 0;
-}
-
-/*
* We enter with non-exclusive mmap_sem (to exclude vma changes,
* but allow concurrent faults), and pte mapped but not yet locked.
* We return with mmap_sem still held, but pte unmapped and unlocked.
@@ -2715,10 +2681,6 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
if (vma->vm_flags & VM_SHARED)
return VM_FAULT_SIGBUS;
- /* Check if we need to add a guard page to the stack */
- if (check_stack_guard_page(vma, address) < 0)
- return VM_FAULT_SIGSEGV;
-
/* Use the zero-page for reads */
if (!(flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(mm)) {
entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
diff --git a/mm/mmap.c b/mm/mmap.c
index 455772a05e54..5e043dd1de2b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -288,6 +288,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
unsigned long retval;
unsigned long newbrk, oldbrk;
struct mm_struct *mm = current->mm;
+ struct vm_area_struct *next;
unsigned long min_brk;
bool populate;
@@ -332,7 +333,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
}
/* Check against existing mmap mappings. */
- if (find_vma_intersection(mm, oldbrk, newbrk+PAGE_SIZE))
+ next = find_vma(mm, oldbrk);
+ if (next && newbrk + PAGE_SIZE > vm_start_gap(next))
goto out;
/* Ok, looks good - let it rip. */
@@ -355,10 +357,22 @@ out:
static long vma_compute_subtree_gap(struct vm_area_struct *vma)
{
- unsigned long max, subtree_gap;
- max = vma->vm_start;
- if (vma->vm_prev)
- max -= vma->vm_prev->vm_end;
+ unsigned long max, prev_end, subtree_gap;
+
+ /*
+ * Note: in the rare case of a VM_GROWSDOWN above a VM_GROWSUP, we
+ * allow two stack_guard_gaps between them here, and when choosing
+ * an unmapped area; whereas when expanding we only require one.
+ * That's a little inconsistent, but keeps the code here simpler.
+ */
+ max = vm_start_gap(vma);
+ if (vma->vm_prev) {
+ prev_end = vm_end_gap(vma->vm_prev);
+ if (max > prev_end)
+ max -= prev_end;
+ else
+ max = 0;
+ }
if (vma->vm_rb.rb_left) {
subtree_gap = rb_entry(vma->vm_rb.rb_left,
struct vm_area_struct, vm_rb)->rb_subtree_gap;
@@ -451,7 +465,7 @@ static void validate_mm(struct mm_struct *mm)
anon_vma_unlock_read(anon_vma);
}
- highest_address = vma->vm_end;
+ highest_address = vm_end_gap(vma);
vma = vma->vm_next;
i++;
}
@@ -620,7 +634,7 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
if (vma->vm_next)
vma_gap_update(vma->vm_next);
else
- mm->highest_vm_end = vma->vm_end;
+ mm->highest_vm_end = vm_end_gap(vma);
/*
* vma->vm_prev wasn't known when we followed the rbtree to find the
@@ -866,7 +880,7 @@ again: remove_next = 1 + (end > next->vm_end);
vma_gap_update(vma);
if (end_changed) {
if (!next)
- mm->highest_vm_end = end;
+ mm->highest_vm_end = vm_end_gap(vma);
else if (!adjust_next)
vma_gap_update(next);
}
@@ -909,7 +923,7 @@ again: remove_next = 1 + (end > next->vm_end);
else if (next)
vma_gap_update(next);
else
- mm->highest_vm_end = end;
+ VM_WARN_ON(mm->highest_vm_end != vm_end_gap(vma));
}
if (insert && file)
uprobe_mmap(insert);
@@ -1741,7 +1755,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
while (true) {
/* Visit left subtree if it looks promising */
- gap_end = vma->vm_start;
+ gap_end = vm_start_gap(vma);
if (gap_end >= low_limit && vma->vm_rb.rb_left) {
struct vm_area_struct *left =
rb_entry(vma->vm_rb.rb_left,
@@ -1752,7 +1766,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
}
}
- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
check_current:
/* Check if current node has a suitable gap */
if (gap_start > high_limit)
@@ -1779,8 +1793,8 @@ check_current:
vma = rb_entry(rb_parent(prev),
struct vm_area_struct, vm_rb);
if (prev == vma->vm_rb.rb_left) {
- gap_start = vma->vm_prev->vm_end;
- gap_end = vma->vm_start;
+ gap_start = vm_end_gap(vma->vm_prev);
+ gap_end = vm_start_gap(vma);
goto check_current;
}
}
@@ -1844,7 +1858,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
while (true) {
/* Visit right subtree if it looks promising */
- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
if (gap_start <= high_limit && vma->vm_rb.rb_right) {
struct vm_area_struct *right =
rb_entry(vma->vm_rb.rb_right,
@@ -1857,7 +1871,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
check_current:
/* Check if current node has a suitable gap */
- gap_end = vma->vm_start;
+ gap_end = vm_start_gap(vma);
if (gap_end < low_limit)
return -ENOMEM;
if (gap_start <= high_limit && gap_end - gap_start >= length)
@@ -1883,7 +1897,7 @@ check_current:
struct vm_area_struct, vm_rb);
if (prev == vma->vm_rb.rb_right) {
gap_start = vma->vm_prev ?
- vma->vm_prev->vm_end : 0;
+ vm_end_gap(vma->vm_prev) : 0;
goto check_current;
}
}
@@ -1921,7 +1935,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct vm_unmapped_area_info info;
if (len > TASK_SIZE - mmap_min_addr)
@@ -1932,9 +1946,10 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
if (addr) {
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
return addr;
}
@@ -1957,7 +1972,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
const unsigned long len, const unsigned long pgoff,
const unsigned long flags)
{
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct mm_struct *mm = current->mm;
unsigned long addr = addr0;
struct vm_unmapped_area_info info;
@@ -1972,9 +1987,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
/* requesting a specific address */
if (addr) {
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
return addr;
}
@@ -2099,21 +2115,19 @@ find_vma_prev(struct mm_struct *mm, unsigned long addr,
* update accounting. This is shared with both the
* grow-up and grow-down cases.
*/
-static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, unsigned long grow)
+static int acct_stack_growth(struct vm_area_struct *vma,
+ unsigned long size, unsigned long grow)
{
struct mm_struct *mm = vma->vm_mm;
struct rlimit *rlim = current->signal->rlim;
- unsigned long new_start, actual_size;
+ unsigned long new_start;
/* address space limit tests */
if (!may_expand_vm(mm, grow))
return -ENOMEM;
/* Stack limit test */
- actual_size = size;
- if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
- actual_size -= PAGE_SIZE;
- if (actual_size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur))
+ if (size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur))
return -ENOMEM;
/* mlock limit tests */
@@ -2151,17 +2165,30 @@ static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, uns
int expand_upwards(struct vm_area_struct *vma, unsigned long address)
{
struct mm_struct *mm = vma->vm_mm;
+ struct vm_area_struct *next;
+ unsigned long gap_addr;
int error = 0;
if (!(vma->vm_flags & VM_GROWSUP))
return -EFAULT;
/* Guard against wrapping around to address 0. */
- if (address < PAGE_ALIGN(address+4))
- address = PAGE_ALIGN(address+4);
- else
+ address &= PAGE_MASK;
+ address += PAGE_SIZE;
+ if (!address)
return -ENOMEM;
+ /* Enforce stack_guard_gap */
+ gap_addr = address + stack_guard_gap;
+ if (gap_addr < address)
+ return -ENOMEM;
+ next = vma->vm_next;
+ if (next && next->vm_start < gap_addr) {
+ if (!(next->vm_flags & VM_GROWSUP))
+ return -ENOMEM;
+ /* Check that both stack segments have the same anon_vma? */
+ }
+
/* We must make sure the anon_vma is allocated. */
if (unlikely(anon_vma_prepare(vma)))
return -ENOMEM;
@@ -2206,7 +2233,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
if (vma->vm_next)
vma_gap_update(vma->vm_next);
else
- mm->highest_vm_end = address;
+ mm->highest_vm_end = vm_end_gap(vma);
spin_unlock(&mm->page_table_lock);
perf_event_mmap(vma);
@@ -2227,6 +2254,8 @@ int expand_downwards(struct vm_area_struct *vma,
unsigned long address)
{
struct mm_struct *mm = vma->vm_mm;
+ struct vm_area_struct *prev;
+ unsigned long gap_addr;
int error;
address &= PAGE_MASK;
@@ -2234,6 +2263,17 @@ int expand_downwards(struct vm_area_struct *vma,
if (error)
return error;
+ /* Enforce stack_guard_gap */
+ gap_addr = address - stack_guard_gap;
+ if (gap_addr > address)
+ return -ENOMEM;
+ prev = vma->vm_prev;
+ if (prev && prev->vm_end > gap_addr) {
+ if (!(prev->vm_flags & VM_GROWSDOWN))
+ return -ENOMEM;
+ /* Check that both stack segments have the same anon_vma? */
+ }
+
/* We must make sure the anon_vma is allocated. */
if (unlikely(anon_vma_prepare(vma)))
return -ENOMEM;
@@ -2289,28 +2329,25 @@ int expand_downwards(struct vm_area_struct *vma,
return error;
}
-/*
- * Note how expand_stack() refuses to expand the stack all the way to
- * abut the next virtual mapping, *unless* that mapping itself is also
- * a stack mapping. We want to leave room for a guard page, after all
- * (the guard page itself is not added here, that is done by the
- * actual page faulting logic)
- *
- * This matches the behavior of the guard page logic (see mm/memory.c:
- * check_stack_guard_page()), which only allows the guard page to be
- * removed under these circumstances.
- */
+/* enforced gap between the expanding stack and other mappings. */
+unsigned long stack_guard_gap = 256UL<<PAGE_SHIFT;
+
+static int __init cmdline_parse_stack_guard_gap(char *p)
+{
+ unsigned long val;
+ char *endptr;
+
+ val = simple_strtoul(p, &endptr, 10);
+ if (!*endptr)
+ stack_guard_gap = val << PAGE_SHIFT;
+
+ return 0;
+}
+__setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
+
#ifdef CONFIG_STACK_GROWSUP
int expand_stack(struct vm_area_struct *vma, unsigned long address)
{
- struct vm_area_struct *next;
-
- address &= PAGE_MASK;
- next = vma->vm_next;
- if (next && next->vm_start == address + PAGE_SIZE) {
- if (!(next->vm_flags & VM_GROWSUP))
- return -ENOMEM;
- }
return expand_upwards(vma, address);
}
@@ -2332,14 +2369,6 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
#else
int expand_stack(struct vm_area_struct *vma, unsigned long address)
{
- struct vm_area_struct *prev;
-
- address &= PAGE_MASK;
- prev = vma->vm_prev;
- if (prev && prev->vm_end == address) {
- if (!(prev->vm_flags & VM_GROWSDOWN))
- return -ENOMEM;
- }
return expand_downwards(vma, address);
}
@@ -2437,7 +2466,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
vma->vm_prev = prev;
vma_gap_update(vma);
} else
- mm->highest_vm_end = prev ? prev->vm_end : 0;
+ mm->highest_vm_end = prev ? vm_end_gap(prev) : 0;
tail_vma->vm_next = NULL;
/* Kill the cache */
- Before the kernel upgrade, the kernel expanded the stack through do_anonymous_page() -> check_stack_guard_page() -> expand_downwards(), which let the user-space ART runtime access the page it reserves as a guard page.
2671 static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
2672 {
2673 address &= PAGE_MASK;
2674 if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
2675 struct vm_area_struct *prev = vma->vm_prev;
2676
2677 /*
2678 * Is there a mapping abutting this one below?
2679 *
2680 * That's only ok if it's the same stack mapping
2681 * that has gotten split..
2682 */
2683 if (prev && prev->vm_end == address)
2684 return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
2685
2686 return expand_downwards(vma, address - PAGE_SIZE);
2687 }
2688 if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
2689 struct vm_area_struct *next = vma->vm_next;
2690
2691 /* As VM_GROWSDOWN but s/below/above/ */
2692 if (next && next->vm_start == address + PAGE_SIZE)
2693 return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
2694
2695 return expand_upwards(vma, address + PAGE_SIZE);
2696 }
2697 return 0;
2698 }
- After the kernel upgrade, this expansion path was removed.
- x86 code
Because of potential security risks, the kernel rejects user-mode accesses that fall more than 65536 + 32 * sizeof(unsigned long) bytes below the current %sp.
kernel/arch/x86/mm/fault.c
1064 static noinline void
1065 __do_page_fault(struct pt_regs *regs, unsigned long error_code,
1066                 unsigned long address)
1067 {
... ...
1212         if (error_code & PF_USER) {
1213                 /*
1214                  * Accessing the stack below %sp is always a bug.
1215                  * The large cushion allows instructions like enter
1216                  * and pusha to work. ("enter $65535, $31" pushes
1217                  * 32 pointers and then decrements %sp by 65535.)
1218                  */
1219                 if (unlikely(address + 65536 + 32 * sizeof(unsigned long) < regs->sp)) {
1220                         bad_area(regs, error_code, address);
1221                         return;
1222                 }
1223         }
1224         if (unlikely(expand_stack(vma, address))) {
1225                 bad_area(regs, error_code, address);
1226                 return;
1227         }
... ...
}
From the above: %esp - fault address = 0xff9b9340 - 0xff99c000 = 0x1d340 = 119616 bytes, which far exceeds the 65536 + 32 * 4 = 65664-byte cushion, so the kernel takes the bad_area() path and delivers SIGSEGV.
- arm code
222 static int __kprobes
223 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
224                 unsigned int flags, struct task_struct *tsk)
225 {
226         struct vm_area_struct *vma;
227         int fault;
228
229         vma = find_vma(mm, addr);
230         fault = VM_FAULT_BADMAP;
231         if (unlikely(!vma))
232                 goto out;
233         if (unlikely(vma->vm_start > addr))
234                 goto check_stack;
235
236         /*
237          * Ok, we have a good vm_area for this
238          * memory access, so we can handle it.
239          */
240 good_area:
241         if (access_error(fsr, vma)) {
242                 fault = VM_FAULT_BADACCESS;
243                 goto out;
244         }
245
246         return handle_mm_fault(mm, vma, addr & PAGE_MASK, flags);
247
248 check_stack:
249         /* Don't allow expansion below FIRST_USER_ADDRESS */
250         if (vma->vm_flags & VM_GROWSDOWN &&
251             addr >= FIRST_USER_ADDRESS && !expand_stack(vma, addr))
252                 goto good_area;
253 out:
254         return fault;
255 }
- arm64 code
213 static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
214                            unsigned int mm_flags, unsigned long vm_flags,
215                            struct task_struct *tsk)
216 {
217         struct vm_area_struct *vma;
218         int fault;
219
220         vma = find_vma(mm, addr);
221         fault = VM_FAULT_BADMAP;
222         if (unlikely(!vma))
223                 goto out;
224         if (unlikely(vma->vm_start > addr))
225                 goto check_stack;
226
227         /*
228          * Ok, we have a good vm_area for this memory access, so we can handle
229          * it.
230          */
231 good_area:
232         /*
233          * Check that the permissions on the VMA allow for the fault which
234          * occurred. If we encountered a write or exec fault, we must have
235          * appropriate permissions, otherwise we allow any permission.
236          */
237         if (!(vma->vm_flags & vm_flags)) {
238                 fault = VM_FAULT_BADACCESS;
239                 goto out;
240         }
241
242         return handle_mm_fault(mm, vma, addr & PAGE_MASK, mm_flags);
243
244 check_stack:
245         if (vma->vm_flags & VM_GROWSDOWN && !expand_stack(vma, addr))
246                 goto good_area;
247 out:
248         return fault;
249 }
- Schematic diagram (figure not reproduced)
Root Cause
- The kernel upgrade broke the way the user-space program had been touching the top page of the thread stack.
- x86 behaves differently from arm/arm64 because x86 runs the %sp proximity check before calling expand_stack().
- The old page-by-page probing no longer works: the faulting address lies far more than 65535 bytes below ESP. ESP must first be brought close to the stack top, for example via deep recursion or a function with a large stack frame.
Solution
- In late June this year, AOSP landed a patch in the ART repository addressing this class of problem.
- ART patch
- Commit message
ART: Change main-thread thread paging scheme

Modify the code that ensures we can install a stack guard page into
the main thread. A recent kernel change means that our previous
approach of using a free pointer does not work. It is important to
actually extend the stack properly. For portability, use a function
with a large stack frame (suggested by and adapted from hboehm).

Bug: 62952017
Test: m
Test: m test-art-host
Test: Device boots (x86_64 emulator)
[change_type ] AOB --> google_original
[tag_product ] common
Test: Device boots (bullhead)
Change-Id: Ic2a0c3d6d05a1ea9f655329d147b46949e1b9db3
- Verification
- With the AOSP patch merged, both the x86 and arm devices boot normally.
- Inspecting the maps file confirms that ART's stack-overflow guard page is now in place.
- zygote64 maps
... ...
7ffff6487000-7ffff6488000 ---p 00000000 00:00 0
7ffff6488000-7ffff6c87000 rw-p 00000000 00:00 0          [stack]
... ...
- zygote32 maps
... ...
ff6e3000-ff6e4000 ---p 00000000 00:00 0
ff6e4000-ffee3000 rw-p 00000000 00:00 0                  [stack]
... ...