AWS Lambda and inaccurate memory allocation
Problem description
I've realized that I need to allocate much more memory to my AWS Lambda functions than they actually use, otherwise I get:
{
"errorMessage": "Metaspace",
"errorType": "java.lang.OutOfMemoryError"
}
For instance, a Lambda function with 128MB allocated crashes every time with that error, even though the console reports "Max memory used: 56 MB". When I allocate 256MB it no longer crashes, but the reported "Max memory used" is always between 75 and 85MB.
Why? Thanks.
Recommended answer
The amount of memory you allocate to a Java Lambda function is shared between the heap, the metaspace, and the reserved code cache.
The java command the container executes for a function allocated 256M is something like:
java -XX:MaxHeapSize=222823k -XX:MaxMetaspaceSize=26214k -XX:ReservedCodeCacheSize=13107k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar
222823k + 26214k + 13107k = 256M
The java command the container executes for a function allocated 384M is something like:
java -XX:MaxHeapSize=334233k -XX:MaxMetaspaceSize=39322k -XX:ReservedCodeCacheSize=19661k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar
334233k + 39322k + 19661k = 384M
So the formula appears to be:

85% heap + 10% metaspace + 5% reserved code cache = 100% of the configured function memory
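That split can be sketched in code. The following is a minimal illustration of the apparent 85/10/5 rule, inferred from the observed flags above; it is not a documented AWS contract, and the class and method names are my own:

```java
// Sketch of the apparent memory split the Lambda Java runtime applies.
// Assumption: metaspace and code cache get ~10% and ~5% (rounded to the
// nearest kB), and the heap receives whatever remains.
public class LambdaJvmMemorySplit {

    /** Returns {heapKb, metaspaceKb, codeCacheKb} for a function memory size in MB. */
    static long[] split(long functionMemoryMb) {
        long totalKb = functionMemoryMb * 1024;
        long metaKb = Math.round(totalKb * 0.10); // -XX:MaxMetaspaceSize
        long codeKb = Math.round(totalKb * 0.05); // -XX:ReservedCodeCacheSize
        long heapKb = totalKb - metaKb - codeKb;  // -XX:MaxHeapSize gets the rest
        return new long[] { heapKb, metaKb, codeKb };
    }

    public static void main(String[] args) {
        for (long mb : new long[] { 256, 384 }) {
            long[] s = split(mb);
            System.out.printf(
                "%dM -> -XX:MaxHeapSize=%dk -XX:MaxMetaspaceSize=%dk -XX:ReservedCodeCacheSize=%dk%n",
                mb, s[0], s[1], s[2]);
        }
    }
}
```

Running it reproduces the flag values shown above for both 256M and 384M, which is why a 128MB function can hit `java.lang.OutOfMemoryError: Metaspace` long before the reported "Max memory used" approaches the configured limit: only about 10% of the configured memory is available to the metaspace.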
Honestly, I don't know how the "Max Memory Used" value reported in the CloudWatch logs is calculated; it doesn't line up with anything I'm seeing.