What decides which structure a process has in memory?


Problem description


I've learned that a process has the following structure in memory:


(Image from Operating System Concepts, page 82)

However, it is not clear to me what decides that a process looks like this. I guess processes could (and do?) look different on non-standard OSes / architectures.


Is this structure decided by the OS? By the compiler of the program? By the computer architecture? A combination of those?

Answer


Related and possible duplicate: Why do stacks typically grow downwards?.

On some ISAs (like x86), a downward-growing stack is baked in: call decrements SP/ESP/RSP before pushing a return address, and exceptions / interrupts push a return context onto the stack. So even if you wrote inefficient code that avoided the call instruction, you couldn't escape hardware use of at least the kernel stack, although user-space stacks can do whatever they want.

On others (like MIPS, where there's no implicit stack usage), it's a software convention.

The rest of the layout follows from that: you want as much room as possible for downward stack growth and/or upward heap growth before they collide (or to let you set larger limits on their growth).


Depending on the OS and executable file format, the linker may get to choose the layout, like whether text is above or below BSS and read-write data. The OS's program loader must respect where the linker asks for sections to be loaded (at least relative to each other, for executables that support ASLR of their static code/data/BSS). Normally such executables use PC-relative addressing to access static data, so ASLRing the text relative to the data or bss would require runtime fixups (and isn't done).


Or position-dependent executables have all their segments loaded at fixed (virtual) addresses, with only the stack address randomized.

The "heap" isn't normally a real thing, especially in systems with virtual memory, where each process has its own private virtual address space. Normally you have some space reserved for the stack, and everything outside that which isn't already mapped is fair game for malloc (actually its underlying mmap(MAP_ANONYMOUS) system calls) to choose from when allocating new pages. But yes, even modern glibc's malloc on modern Linux still uses brk() to move the "program break" upward for small allocations, increasing the size of "the heap" the way your diagram shows.

