Faster way to move memory page than mremap()?
Question
I've been experimenting with mremap(). I'd like to be able to move virtual memory pages around at high speeds. At least higher speeds than copying them. I have some ideas for algorithms which could make use of being able to move memory pages really fast. Problem is that the program below shows that mremap() is very slow -- at least on my i7 laptop -- compared to actually copying the same memory pages byte by byte.
How does the test source code work? mmap() 256 MB of RAM, which is bigger than the on-CPU caches. Iterate 200,000 times. On each iteration, swap two random memory pages using a particular swap method. Run once and time it using the mremap()-based page-swap method. Run again and time it using the byte-by-byte copy swap method. It turns out that mremap() only manages 71,577 page swaps per second, whereas the byte-by-byte copy manages a whopping 287,879 page swaps per second. So mremap() is 4 times slower than a byte-by-byte copy!
Questions:
Why is mremap() so slow?
Is there another user-land or kernel-land callable page mapping manipulation API which might be faster?
Is there another user-land or kernel-land callable page mapping manipulation API allowing multiple, non-consecutive pages to be remapped in one call?
Are there any kernel extensions that support this sort of thing?
#define _GNU_SOURCE     /* for mremap() MREMAP_FIXED / MREMAP_MAYMOVE */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>     /* getenv(), random() */
#include <unistd.h>     /* getpagesize(), syscall() */
#include <errno.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <asm/ldt.h>
#include <asm/unistd.h> /* __NR_modify_ldt */
// gcc mremap.c && perl -MTime::HiRes -e '$t1=Time::HiRes::time;system(q[TEST_MREMAP=1 ./a.out]);$t2=Time::HiRes::time;printf qq[%u per second\n],(1/($t2-$t1))*200_000;'
// page size = 4096
// allocating 256 MB
// before 0x7f8e060bd000=0
// before 0x7f8e060be000=1
// before 0x7f8e160bd000
// after 0x7f8e060bd000=41
// after 0x7f8e060be000=228
// 71577 per second
// gcc mremap.c && perl -MTime::HiRes -e '$t1=Time::HiRes::time;system(q[TEST_COPY=1 ./a.out]);$t2=Time::HiRes::time;printf qq[%u per second\n],(1/($t2-$t1))*200_000;'
// page size = 4096
// allocating 256 MB
// before 0x7f1a9efa5000=0
// before 0x7f1a9efa6000=1
// before 0x7f1aaefa5000
// sizeof(i)=8
// after 0x7f1a9efa5000=41
// after 0x7f1a9efa6000=228
// 287879 per second
// gcc mremap.c && perl -MTime::HiRes -e '$t1=Time::HiRes::time;system(q[TEST_MEMCPY=1 ./a.out]);$t2=Time::HiRes::time;printf qq[%u per second\n],(1/($t2-$t1))*200_000;'
// page size = 4096
// allocating 256 MB
// before 0x7faf7c979000=0
// before 0x7faf7c97a000=1
// before 0x7faf8c979000
// sizeof(i)=8
// after 0x7faf7c979000=41
// after 0x7faf7c97a000=228
// 441911 per second
/*
* Algorithm:
* - Allocate 256 MB of memory
* - loop 200,000 times
* - swap a random 4k block for a random 4k block
* Run the test twice; once for swapping using page table, once for swapping using CPU copying!
*/
#define PAGES (1024*64)
int main() {
    int PAGE_SIZE = getpagesize();
    char* m = NULL;
    unsigned char* p[PAGES];
    void* t;
    printf("page size = %d\n", PAGE_SIZE);
    printf("allocating %u MB\n", PAGE_SIZE*PAGES / 1024 / 1024);
    m = (char*)mmap(0, PAGE_SIZE*(1+PAGES), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (m == MAP_FAILED) { perror("mmap"); return 1; }
    t = &m[PAGES*PAGE_SIZE]; /* spare page used as swap scratch space */
    {
        unsigned long i;
        for (i=0; i<PAGES; i++) {
            p[i] = &m[i*PAGE_SIZE];
            memset(p[i], i & 255, PAGE_SIZE);
        }
    }
    printf("before %p=%u\n", p[0], p[0][0]);
    printf("before %p=%u\n", p[1], p[1][0]);
    printf("before %p\n", t);
    if (getenv("TEST_MREMAP")) {
        unsigned i;
        for (i=0; i<200001; i++) {
            unsigned p1 = random() % PAGES;
            unsigned p2 = random() % PAGES;
            // mremap(void *old_address, size_t old_size, size_t new_size, int flags, /* void *new_address */);
            mremap(p[p2], PAGE_SIZE, PAGE_SIZE, MREMAP_FIXED | MREMAP_MAYMOVE, t    );
            mremap(p[p1], PAGE_SIZE, PAGE_SIZE, MREMAP_FIXED | MREMAP_MAYMOVE, p[p2]);
            mremap(t    , PAGE_SIZE, PAGE_SIZE, MREMAP_FIXED | MREMAP_MAYMOVE, p[p1]); // the spare page at t is unmapped again after this!
        } /* for() */
    }
    else if (getenv("TEST_MEMCPY")) {
        unsigned long * pu[PAGES];
        unsigned long i;
        for (i=0; i<PAGES; i++) {
            pu[i] = (unsigned long *)p[i];
        }
        printf("sizeof(i)=%lu\n", sizeof(i));
        for (i=0; i<200001; i++) {
            unsigned p1 = random() % PAGES;
            unsigned p2 = random() % PAGES;
            unsigned long * pa = pu[p1];
            unsigned long * pb = pu[p2];
            unsigned char t[PAGE_SIZE];
            // memcpy(void *dest, const void *src, size_t n);
            memcpy(t , pb, PAGE_SIZE);
            memcpy(pb, pa, PAGE_SIZE);
            memcpy(pa, t , PAGE_SIZE);
        } /* for() */
    }
    else if (getenv("TEST_MODIFY_LDT")) {
        unsigned long * pu[PAGES];
        unsigned long i;
        for (i=0; i<PAGES; i++) {
            pu[i] = (unsigned long *)p[i];
        }
        printf("sizeof(i)=%lu\n", sizeof(i));
        // int modify_ldt(int func, void *ptr, unsigned long bytecount);
        //
        // modify_ldt() reads or writes the local descriptor table (LDT) for a process. The LDT is a per-process memory management table used by the i386 processor. For more information on this table, see an Intel 386 processor handbook.
        //
        // When func is 0, modify_ldt() reads the LDT into the memory pointed to by ptr. The number of bytes read is the smaller of bytecount and the actual size of the LDT.
        //
        // When func is 1, modify_ldt() modifies one LDT entry. ptr points to a user_desc structure and bytecount must equal the size of this structure.
        //
        // The user_desc structure is defined in <asm/ldt.h> as:
        //
        // struct user_desc {
        //     unsigned int  entry_number;
        //     unsigned long base_addr;
        //     unsigned int  limit;
        //     unsigned int  seg_32bit:1;
        //     unsigned int  contents:2;
        //     unsigned int  read_exec_only:1;
        //     unsigned int  limit_in_pages:1;
        //     unsigned int  seg_not_present:1;
        //     unsigned int  useable:1;
        // };
        //
        // On success, modify_ldt() returns either the actual number of bytes read (for reading) or 0 (for writing). On failure, modify_ldt() returns -1 and sets errno to indicate the error.
        unsigned char ptr[20000];
        int result;
        // glibc exports no modify_ldt() wrapper, so the call must go through syscall(2):
        result = syscall(__NR_modify_ldt, 0, &ptr[0], sizeof(ptr)); printf("result=%d, errno=%u\n", result, errno);
        // todo: how to get these calls returning a non-zero value?
    }
    else {
        unsigned long * pu[PAGES];
        unsigned long i;
        for (i=0; i<PAGES; i++) {
            pu[i] = (unsigned long *)p[i];
        }
        printf("sizeof(i)=%lu\n", sizeof(i));
        for (i=0; i<200001; i++) {
            unsigned long j;
            unsigned p1 = random() % PAGES;
            unsigned p2 = random() % PAGES;
            unsigned long * pa = pu[p1];
            unsigned long * pb = pu[p2];
            unsigned long t;
            for (j=0; j<(4096/8/8); j++) { /* 64 iterations x 8 unrolled 8-byte swaps = 4096 bytes */
                t = *pa; *pa ++ = *pb; *pb ++ = t;
                t = *pa; *pa ++ = *pb; *pb ++ = t;
                t = *pa; *pa ++ = *pb; *pb ++ = t;
                t = *pa; *pa ++ = *pb; *pb ++ = t;
                t = *pa; *pa ++ = *pb; *pb ++ = t;
                t = *pa; *pa ++ = *pb; *pb ++ = t;
                t = *pa; *pa ++ = *pb; *pb ++ = t;
                t = *pa; *pa ++ = *pb; *pb ++ = t;
            }
        } /* for() */
    }
    printf("after %p=%u\n", p[0], p[0][0]);
    printf("after %p=%u\n", p[1], p[1][0]);
    return 0;
}
Update: So that we don't need to question how fast 'round-trip to kernelspace' is, here's a further performance test program that shows that we can call getpid() 3 times in a row, 81,916,192 times per second on the same i7 laptop:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
// gcc getpid.c && perl -MTime::HiRes -e '$t1=Time::HiRes::time;system(q[TEST_COPY=1 ./a.out]);$t2=Time::HiRes::time;printf qq[%u per second\n],(1/($t2-$t1))*100_000_000;'
// running_total=8545800085458
// 81916192 per second
/*
* Algorithm:
* - Call getpid() 100 million times.
*/
int main() {
    unsigned i;
    unsigned long running_total = 0;
    for (i=0; i<100000001; i++) {
        running_total += getpid();
        running_total += getpid();
        running_total += getpid();
    } /* for() */
    printf("running_total=%lu\n", running_total);
    return 0;
}
Update 2: I added WIP code to call a function I discovered called modify_ldt(). The man page hints that page manipulation might be possible. However, no matter what I try, the function always returns zero when I'm expecting it to return the number of bytes read. 'man modify_ldt' says: "On success, modify_ldt() returns either the actual number of bytes read (for reading) or 0 (for writing). On failure, modify_ldt() returns -1 and sets errno to indicate the error." Any ideas (a) whether modify_ldt() could be an alternative to mremap(), and (b) how to get modify_ldt() working?
Answer
It appears that there is no faster user-land mechanism to re-order memory pages than memcpy(). mremap() is far slower and is therefore only useful for re-sizing an area of memory previously allocated using mmap().
But page tables must be extremely fast I hear you saying! And it's possible for user-land to call kernel functions millions of times per second! The following references help explain why mremap() is so slow:
"An Introduction to Intel Memory Management" is a nice introduction to the theory of memory page mapping.
"Key concepts of Intel virtual memory" shows how it all works in more detail, in case you plan on writing your own OS :-)
"Sharing Page Tables in the Linux Kernel" shows some of the difficult Linux memory page mapping architectural decisions and their effect on performance.
Looking at all three references together, we can see that there has been little effort so far from kernel architects to expose an efficient memory-page-mapping mechanism to user-land. Even in the kernel, manipulating the page table requires taking up to three locks, which is slow.
Going forward, since the page table itself is made up of 4k pages, it may be possible to change the kernel so that particular page-table pages are unique to a particular thread and can be assumed to have lock-less access for the duration of the process. This would allow very efficient manipulation of that particular page-table page from user-land. But that moves outside the scope of the original question.