Solr over NFS problems

Problem description

Our application uses an embedded Solr instance for search. The data directory is located on NFS and I cannot change that. The usage of Solr is very simple: a single thread periodically updates the index, and several reader threads read from it - all inside one Java process. No other Solr interaction takes place.

With the default solrconfig.xml I sometimes run into java.nio.channels.OverlappingFileLockException. As far as I understand, the cause is SimpleFSLockFactory not working correctly over NFS.

Questions:


  1. Given the application scenario described above (no concurrent index modifications), shouldn't NoLockFactory be enough? Are there any drawbacks to using NoLockFactory? If I set up NoLockFactory, I get a number of entries in the error log saying "CONFIGURATION WARNING: locks are disabled". Why does that message go into the error log? Is that really considered an error case, and why?

Maybe there's a better solution than using "NoLockFactory"?
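
For reference, the lock factory is selected via the lockType setting in solrconfig.xml. A sketch of the relevant fragment; note this is an assumption about the Solr version in use - in older releases the element lives under indexDefaults (or mainIndex), in newer ones under indexConfig:

```xml
<!-- solrconfig.xml (fragment); element placement varies by Solr version:
     older releases use <indexDefaults>/<mainIndex>, newer ones <indexConfig> -->
<indexDefaults>
  <!-- "none" selects NoLockFactory: no index locking at all.
       Only safe when exactly one process ever writes the index. -->
  <lockType>none</lockType>
</indexDefaults>
```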

Not sure whether this is related to NFS, but sometimes (quite rarely) my index gets corrupted and I get lots of "java.io.FileNotFoundException: _i.fdx" while trying to update the index. There's no way out of this other than manually deleting the whole index directory and starting from scratch. Why can this happen, and is there any graceful way to automatically detect a broken index and recover?


Answer

Storing your indexes on NFS is prone to problems, but if it has to run over NFS, I suspect this problem arises from not using NFSv4, or from not using it correctly. NFSv4 is the first version to support locking byte ranges; NFSv2 and v3 (poorly) support locking entire files, and without portmap, rpc.lockd and rpc.statd running, the locks are probably only advisory (as opposed to mandatory) and certainly do not cover byte-range locking.

java.nio.channels.OverlappingFileLockException means:

Unchecked exception thrown when an attempt is made to acquire a lock on a region of a file 
that overlaps a region already locked by the same Java virtual machine, or when another 
thread is already waiting to lock an overlapping region of the same file.
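
In other words, the exception is raised entirely within one JVM, before NFS is even consulted. A minimal, stdlib-only sketch that reproduces it by taking two overlapping locks on the same file from the same process (the temp-file name and class name are illustrative, not from the original post):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockDemo {

    // Acquire two overlapping exclusive locks on the same file from the
    // same JVM; the second attempt throws OverlappingFileLockException.
    static String tryOverlappingLock() throws IOException {
        Path tmp = Files.createTempFile("lockdemo", ".lock");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            FileLock first = ch.lock(0L, Long.MAX_VALUE, false);
            try {
                ch.lock(0L, Long.MAX_VALUE, false); // overlaps the first lock
                return "no exception";
            } catch (OverlappingFileLockException e) {
                return e.getClass().getSimpleName();
            } finally {
                first.release();
            }
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tryOverlappingLock()); // prints "OverlappingFileLockException"
    }
}
```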

A cursory search of the Lucene mailing list returns many results that seem to indicate that using Lucene (and, by extension, Solr) over NFS is a bad idea.

Locking issues aside, the performance will probably be pretty bad as well.

I know this isn't the answer you were hoping for, but it's the answer you need.
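
On the corruption question: Lucene ships a CheckIndex tool that can diagnose an index and, destructively, drop unreadable segments. A hedged sketch of the command-line invocation - the jar name, the index path, and the repair flag (-fix in older Lucene releases, -exorcise in newer ones) all depend on your installation:

```shell
# diagnose only (read-only); jar name and index path are placeholders
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index

# attempt repair: rewrites the index, LOSING all documents in the
# broken segments - back up the index directory first
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index -fix
```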
