Neo4j in Docker - Max Heap Size Causes Hard crash 137


Problem description


    I'm trying to spin up a Neo4j 3.1 instance in a Docker container (through Docker-Compose), running on OSX (El Capitan). All is well, unless I try to increase the max heap space available to Neo above the default of 512MB.

    According to the docs, this can be achieved by adding the environment variable NEO4J_dbms_memory_heap_maxSize, which then causes the server wrapper script to update the neo4j.conf file accordingly. I've checked and it is being updated as one would expect.
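
    One quick way to double-check that the wrapper script really did rewrite the file (a sketch, using the container name and the bind-mounted conf directory from the compose file further down):

    # on the host, via the ./docker/neo4j/conf bind mount declared in docker-compose.yml
    $ grep -i heap ./docker/neo4j/conf/neo4j.conf

    # or inside the container, while it is running
    $ docker exec neo4j31 grep -i heap /var/lib/neo4j/conf/neo4j.conf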

    The problem is, when I run docker-compose up to spin up the container, the Neo4j instance crashes out with a 137 status code. A little research tells me this is a Linux hard kill (exit code 137 = 128 + SIGKILL), typically triggered when the process runs up against a memory limit.

    $ docker-compose up
    Starting elasticsearch
    Recreating neo4j31
    Attaching to elasticsearch, neo4j31
    neo4j31          | Starting Neo4j.
    neo4j31 exited with code 137
    
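    To confirm that the 137 exit really is a memory kill, and to see how much RAM the Docker for Mac VM actually has, the following checks can help (a sketch; note that OOMKilled is only set when the container hit its own memory limit, so it can read false even when the whole Docker VM ran out of memory):

    # was the container killed by the kernel OOM killer, and with what exit code?
    $ docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' neo4j31

    # total RAM given to the Docker (for Mac) VM, shared by all containers
    $ docker info | grep -i 'total memory'
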

    My questions:

    1. Is this due to a Docker or an OSX limitation?
    2. Is there a way I can modify these limits? If I drop the requested limit to 1GB, it spins up, but still crashes once I run my heavy query (which is what caused the need for increased heap space in the first place).
    3. The query that I'm running is a large-scale update across a lot of nodes (>150k) containing full-text attributes, so that they can be synchronised to ElasticSearch using the plug-in. Is there a way I can get Neo to step through doing, say, 500 nodes at a time, using only Cypher (I'd rather avoid writing a script if I can; it feels a little dirty for this)?

    My docker-compose.yml is as follows:

    ---
    version: '2'
    services:
     # ---<SNIP>
    
      neo4j:
        image: neo4j:3.1
        container_name: neo4j31
        volumes:
          - ./docker/neo4j/conf:/var/lib/neo4j/conf
          - ./docker/neo4j/mnt:/var/lib/neo4j/import
          - ./docker/neo4j/plugins:/plugins 
          - ./docker/neo4j/data:/data
          - ./docker/neo4j/logs:/var/lib/neo4j/logs
        ports:
            - "7474:7474"
            - "7687:7687"
        environment:
            - NEO4J_dbms_memory_heap_maxSize=4G
    
     # ---<SNIP>
    

Solution

    1. Is this due to a Docker or an OSX limitation?

      NO. Increase the amount of RAM available to Docker to resolve this issue (on OSX this is the memory allocation in the Docker for Mac preferences).

    2. Is there a way I can modify these limits? If I drop the requested limit to 1GB, it spins up, but still crashes once I run my heavy query (which is what caused the need for increased heap space in the first place).

    3. The query that I'm running is a large-scale update across a lot of nodes (>150k) containing full-text attributes, so that they can be synchronised to ElasticSearch using the plug-in. Is there a way I can get Neo to step through doing, say, 500 nodes at a time, using only Cypher (I'd rather avoid writing a script if I can; it feels a little dirty for this)?

      N/A. This is a Neo4j-specific question; it might be better to separate it from the Docker questions listed above.
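
      That said, as a minimal sketch of batching with Cypher alone: if the APOC plugin happens to be available in the mounted /plugins directory (an assumption; the plugin mentioned in the question is the ElasticSearch one), apoc.periodic.iterate can run the update in chunks of 500 nodes per transaction. The label Article and the properties syncedText / fullText below are placeholders for whatever the real update touches, and the exact procedure signature can vary between APOC releases.

      CALL apoc.periodic.iterate(
        "MATCH (n:Article) RETURN n",      // placeholder match for the ~150k full-text nodes
        "SET n.syncedText = n.fullText",   // placeholder for the real per-node update
        {batchSize: 500, parallel: false})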
