Neo4j in Docker - Max Heap Size Causes Hard Crash 137


Problem description




    I'm trying to spin up a Neo4j 3.1 instance in a Docker container (through Docker-Compose), running on OSX (El Capitan). All is well, unless I try to increase the max heap space available to Neo above the default of 512MB.

    According to the docs, this can be achieved by adding the environment variable NEO4J_dbms_memory_heap_maxSize, which then causes the server wrapper script to update the neo4j.conf file accordingly. I've checked and it is being updated as one would expect.
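
    A quick way to sanity-check that mapping (a minimal sketch; it reads the conf directory that the compose file below bind-mounts to ./docker/neo4j/conf, and the exact key name shown is an assumption for Neo4j 3.1):

    # Look at the generated config on the host side of the bind mount
    $ grep -i heap ./docker/neo4j/conf/neo4j.conf
    # expect something like: dbms.memory.heap.max_size=4G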

    The problem is, when I run docker-compose up to spin up the container, the Neo4j instance crashes out with a 137 status code. A little research tells me this is a Linux hard kill (exit code 137 = 128 + SIGKILL), typically handed out when a process hits a hard memory limit.

    $ docker-compose up
    Starting elasticsearch
    Recreating neo4j31
    Attaching to elasticsearch, neo4j31
    neo4j31          | Starting Neo4j.
    neo4j31 exited with code 137
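
    One way to confirm that the 137 really is a memory kill rather than some other failure (a small diagnostic sketch, using the container name from the compose file below):

    # 137 = 128 + 9, i.e. the process received SIGKILL; Docker records
    # whether that kill came from the out-of-memory handler.
    $ docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' neo4j31
    # e.g. 137 OOMKilled=true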
    

    My questions:

    1. Is this due to a Docker or an OSX limitation?
    2. Is there a way I can modify these limits? If I drop the requested limit to 1GB, it will spin up, but still crashes once I run my heavy query (which is what caused the need for increased Heap space anyway).
    3. The query that I'm running is a large-scale update across a lot of nodes (>150k) containing full-text attributes, so that they can be synchronised to ElasticSearch using the plug-in. Is there a way I can get Neo to step through doing, say, 500 nodes at a time, using only Cypher? (I'd rather avoid writing a script if I can; it feels a little dirty for this.)

    My docker-compose.yml is as follows:

    ---
    version: '2'
    services:
     # ---<SNIP>
    
      neo4j:
        image: neo4j:3.1
        container_name: neo4j31
        volumes:
          - ./docker/neo4j/conf:/var/lib/neo4j/conf
          - ./docker/neo4j/mnt:/var/lib/neo4j/import
          - ./docker/neo4j/plugins:/plugins 
          - ./docker/neo4j/data:/data
          - ./docker/neo4j/logs:/var/lib/neo4j/logs
        ports:
            - "7474:7474"
            - "7687:7687"
        environment:
            - NEO4J_dbms_memory_heap_maxSize=4G
    
     # ---<SNIP>
    

    Solution

    1. Is this due to a Docker or an OSX limitation?

      NO. Increase the amount of RAM available to Docker to resolve this issue.
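
      To see how much RAM the Docker VM actually has before and after raising it (a minimal check; on Docker for Mac the allocation itself is raised in the Docker preferences), something like:

      # Total memory visible to the Docker daemon; the 4G heap plus
      # Neo4j's page cache has to fit inside this figure.
      $ docker info --format '{{.MemTotal}}'      # raw bytes
      $ docker info | grep -i 'total memory'      # e.g. "Total Memory: 1.952GiB"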

    2. Is there a way I can modify these limits? If I drop the requested limit to 1GB, it will spin up, but still crashes once I run my heavy query (which is what caused the need for increased Heap space anyway).

    3. The query that I'm running is a large-scale update across a lot of nodes (>150k) containing full-text attributes, so that they can be synchronised to ElasticSearch using the plug-in. Is there a way I can get Neo to step through doing, say, 500 nodes at a time, using only Cypher? (I'd rather avoid writing a script if I can; it feels a little dirty for this.)

      N/A. This is a Neo4j-specific question. It might be better to separate it from the Docker questions listed above.
