Virtual Memory Page Replacement Algorithms


Question


I have a project where I am asked to develop an application to simulate how different page replacement algorithms perform (with varying working set size and stability period). My results:

  • Vertical axis: page faults
  • Horizontal axis: working set size
  • Depth axis: stable period

Are my results reasonable? I expected LRU to have better results than FIFO. Here, they are approximately the same.

For random, stability period and working set size don't seem to affect the performance at all. I expected graphs similar to FIFO & LRU, just with worse performance. If the reference string is highly stable (little branching) and has a small working set size, shouldn't it still have fewer page faults than an application with many branches and a big working set size?

More Info

My Python Code | The Project Question

  • Length of reference string (RS): 200,000
  • Size of virtual memory (P): 1000
  • Size of main memory (F): 100
  • Number of times a page is referenced (m): 100
  • Size of working set (e): 2 - 100
  • Stability (t): 0 - 1

Working set size (e) and stable period (t) affect how the reference string is generated.

|-----------|--------|------------------------------------|
0           p        p+e                                  P-1

So assume the above is the virtual memory of size P. To generate reference strings, the following algorithm is used (a Python sketch follows the list):

  • Repeat until the reference string is generated
    • pick m numbers in [p, p+e]; m simulates or refers to the number of times a page is referenced
    • pick random number, 0 <= r < 1
    • if r < t
      • generate new p
      • else (++p)%P
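
A minimal sketch of this generation loop in Python is below. The function and variable names are illustrative stand-ins for the parameters above, not the actual project code, and the choice of a uniformly random new p is an assumption:

    import random

    def generate_reference_string(num_references, P, e, m, t):
        # Sketch of the generation scheme described above (not the original code).
        # P: virtual memory size, e: working set size, m: references drawn per locus,
        # t: stability parameter (higher t = more jumps, i.e. less stable).
        rs = []
        p = random.randint(0, P - 1)  # start of the current working set window
        while len(rs) < num_references:
            # pick m page numbers in [p, p+e], wrapping around the address space
            for _ in range(m):
                rs.append(random.randint(p, p + e) % P)
            # with probability t, jump to a new locus (assumed uniform here);
            # otherwise slide the window forward by one page
            if random.random() < t:
                p = random.randint(0, P - 1)
            else:
                p = (p + 1) % P
        return rs[:num_references]

For example, rs = generate_reference_string(200000, 1000, 50, 100, 0.5) would match the parameters listed above with a working set size of 50; a higher t means more frequent jumps and therefore less locality.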

UPDATE (In response to @MrGomez's answer)

Quoting the answer: "However, recall how you seeded your input data: using random.random, thus giving you a uniform distribution of data with your controllable level of entropy. Because of this, all values are equally likely to occur, and because you've constructed this in floating point space, recurrences are highly improbable."

I am using random, but it is not totally random either; references are generated with some locality through the use of the working set size and number-of-pages-referenced parameters, aren't they?
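
One way to check this point empirically would be to measure how often the generated string re-references a recently used page. A rough sketch follows; the function name and window size are assumptions for illustration, not part of the project:

    from collections import deque

    def rereference_rate(reference_string, window=100):
        # Fraction of references that hit a page already seen in the last `window` references.
        recent = deque(maxlen=window)
        hits = 0
        for page in reference_string:
            if page in recent:
                hits += 1
            recent.append(page)
        return hits / len(reference_string)

A rate well above zero would confirm that the string does contain recurrences, which is the property LRU needs in order to beat FIFO.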

I tried increasing numPageReferenced relative to numFrames in the hope that it would reference pages currently in memory more often, thus showing the performance benefit of LRU over FIFO, but that didn't give me a clear result. Just FYI, I tried the same app with the following parameters (the pages/frames ratio is kept the same; I reduced the size of the data to make things faster):

--numReferences 1000 --numPages 100 --numFrames 10 --numPageReferenced 20

The result is shown in a chart (not reproduced here).

Still not such a big difference. Am I right to say that if I increase numPageReferenced relative to numFrames, LRU should perform better, since it would be referencing pages already in memory more often? Or perhaps I am misunderstanding something?

For random, I am thinking along the lines of:

  • Suppose there's high stability and a small working set. That means the pages referenced are very likely to already be in memory, so the need for the page replacement algorithm to run is lower?

Hmm maybe I got to think about this more :)

UPDATE: Thrashing less obvious at lower stability

Here, I am trying to show thrashing as the working set size exceeds the number of frames (100) in memory. However, notice that thrashing appears less obvious with lower stability (high t). Why might that be? Is the explanation that as stability becomes low, page faults approach their maximum, so it does not matter as much what the working set size is?

Solution

These results are reasonable given your current implementation. The rationale behind that, however, bears some discussion.

When considering algorithms in general, it's most important to consider the properties of the algorithms currently under inspection. Specifically, note their corner cases and best and worst case conditions. You're probably already familiar with this terse method of evaluation, so this is mostly for the benefit of those reading here who may not have an algorithmic background.

Let's break your question down by algorithm and explore their component properties in context:

  1. FIFO shows an increase in page faults as the size of your working set (length axis) increases.

    This is correct behavior, consistent with Bélády's anomaly for FIFO replacement. As the size of your working page set increases, the number of page faults should also increase.

  2. FIFO shows an increase in page faults as system stability (1 - depth axis) decreases.

    Noting your algorithm for seeding stability (if random.random() < stability), your results become less stable as stability (S) approaches 1. As you sharply increase the entropy in your data, the number of page faults also increases sharply, propagating Bélády's anomaly.

    So far, so good.

  3. LRU shows consistency with FIFO. Why?

    Note your seeding algorithm. Standard LRU performs best when you have paging requests that are structured into smaller operational frames. For ordered, predictable lookups, it improves upon FIFO by aging off results that no longer exist in the current execution frame, which is a very useful property for staged execution and encapsulated, modal operation. Again, so far, so good.

    However, recall how you seeded your input data: using random.random, thus giving you a uniform distribution of data with your controllable level of entropy. Because of this, all values are equally likely to occur, and because you've constructed this in floating point space, recurrences are highly improbable.

    As a result, your LRU perceives each element as occurring a small number of times and then being completely discarded when the next value is calculated. It thus correctly pages each value out as it falls out of the window, giving you performance exactly comparable to FIFO. If your system properly accounted for recurrence or a compressed character space, you would see markedly different results (a small comparison sketch follows this breakdown).

  4. For random, stability period and working set size don't seem to affect the performance at all. Why are we seeing this scribble all over the graph instead of a relatively smooth manifold?

    In the case of a random paging scheme, you age off each entry stochastically. Purportedly, this should give us some form of a manifold bound to the entropy and size of our working set... right?

    Or should it? For each set of entries, you randomly assign a subset to page out as a function of time. This should give relatively even paging performance, regardless of stability and regardless of your working set, as long as your access profile is again uniformly random.

    So, based on the conditions you are checking, this is entirely correct behavior, consistent with what we'd expect. You get even paging performance that doesn't degrade with other factors (but, conversely, isn't improved by them), which is suitable for high-load, efficient operation. Not bad, just not what you might intuitively expect.

So, in a nutshell, that's the breakdown as your project is currently implemented.
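
To make this breakdown concrete, here is a minimal, self-contained re-implementation of the three policies as fault counters. This is an illustrative sketch, not the poster's code; num_frames and the function names are assumptions:

    import random
    from collections import OrderedDict, deque

    def count_faults_fifo(rs, num_frames):
        frames, order, faults = set(), deque(), 0
        for page in rs:
            if page not in frames:
                faults += 1
                if len(frames) >= num_frames:
                    frames.discard(order.popleft())  # evict the oldest resident page
                frames.add(page)
                order.append(page)
        return faults

    def count_faults_lru(rs, num_frames):
        frames, faults = OrderedDict(), 0  # insertion order tracks recency of use
        for page in rs:
            if page in frames:
                frames.move_to_end(page)  # refresh recency on a hit
            else:
                faults += 1
                if len(frames) >= num_frames:
                    frames.popitem(last=False)  # evict the least recently used page
                frames[page] = True
        return faults

    def count_faults_random(rs, num_frames):
        frames, faults = set(), 0
        for page in rs:
            if page not in frames:
                faults += 1
                if len(frames) >= num_frames:
                    frames.discard(random.choice(list(frames)))  # evict a random resident page
                frames.add(page)
        return faults

Running all three over the same reference string while sweeping the working set size and stability should make the pattern described above easy to see: FIFO and LRU track each other closely when the string contains little recurrence, while random replacement stays roughly flat.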

As an exercise in further exploring the properties of these algorithms in the context of different dispositions and distributions of input data, I highly recommend digging into scipy.stats to see what, for example, a Gaussian or logistic distribution might do to each graph. Then, I would come back to the documented expectations of each algorithm and draft cases where each is uniquely most and least appropriate.
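
As a starting point for that exercise, one possible way to drive page selection from a non-uniform distribution is sketched below. This is purely illustrative; the location and scale parameters are arbitrary assumptions:

    import numpy as np
    from scipy import stats

    P = 1000                 # virtual memory size, matching the parameters above
    num_references = 200000

    # The scale controls how concentrated (Gaussian) or heavy-tailed (logistic)
    # the locality of reference is around the centre of the address space.
    gaussian_pages = stats.norm(loc=P / 2, scale=50).rvs(num_references)
    logistic_pages = stats.logistic(loc=P / 2, scale=50).rvs(num_references)

    # Clip to the valid page range and convert to integer page numbers.
    gaussian_rs = np.clip(gaussian_pages, 0, P - 1).astype(int)
    logistic_rs = np.clip(logistic_pages, 0, P - 1).astype(int)

Feeding these strings through the same fault-counting routines as the uniform case would show how much recurrence, and therefore how much LRU advantage, each distribution introduces.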

All in all, I think your teacher will be proud. :)
