Hibernate Search: persist sometimes causes OverlappingFileLockException


Question


I have a web application running with JPA (Hibernate 5.0.11.Final) and Hibernate Search (5.5.5.Final) in which the user tries to save a new entity. Therefore, there will be several calls like:

protected Object saveNewEntity(Object toSave) {
   if (factory == null) {
       factory = Persistence
          .createEntityManagerFactory(DBBase.PERSISTENCE_UNIT);
   }
   EntityManager em = initEntityManager();
   try {
       em.setFlushMode(FlushModeType.COMMIT);
       EntityTransaction transaction = em.getTransaction();
       transaction.begin();
       em.persist(toSave);
       transaction.commit();
   } catch (Exception e) {
       throw e;
   } finally {
       finalizeEntityManager(em);
   }
   return toSave;
}

protected void finalizeEntityManager(EntityManager em) {
   if (em != null && em.isOpen()) {
      em.close();
   }
}

The entity which is indexed is also saved this way. It has no cascading and is completely flat (no other tables involved).

Most of the time this runs fine and the index is updated.

But sometimes, I don't know why, the following exception occurs and therefore the index is not updated:

2017-04-04 10:30:48,552 ERROR [LuceneBackendQueueTask:run:54] HSEARCH000073: Error in backend
java.nio.channels.OverlappingFileLockException
    at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255) ~[?:1.8.0_121]
    at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152) ~[?:1.8.0_121]
    at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1108) ~[?:1.8.0_121]
    at java.nio.channels.FileChannel.tryLock(FileChannel.java:1155) ~[?:1.8.0_121]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:114) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:775) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:123) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:89) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:112) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:198) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:80) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:162) [hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:148) [hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
2017-04-04 10:30:48,555 ERROR [LogErrorHandler:handleException:67] HSEARCH000058: Exception occurred java.nio.channels.OverlappingFileLockException
Primary Failure:
    Entity com.rhenus.de.cm.essentials.entities.ContractSearchEntity  Id 96926  Work Type  org.hibernate.search.backend.AddLuceneWork

java.nio.channels.OverlappingFileLockException
    at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255) ~[?:1.8.0_121]
    at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152) ~[?:1.8.0_121]
    at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1108) ~[?:1.8.0_121]
    at java.nio.channels.FileChannel.tryLock(FileChannel.java:1155) ~[?:1.8.0_121]
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:114) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:775) ~[lucene-core-5.3.1.jar:5.3.1 1703449 - noble - 2015-09-17 01:38:09]
    at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:123) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:89) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:112) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:198) ~[hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:80) [hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:162) [hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:148) [hibernate-search-engine-5.5.5.Final.jar:5.5.5.Final]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]

I do not manually lock or update the index, nor is the index used by different threads, applications or anything else. I've read that this can occur if the entity already has an id, and the id is stated in the stack trace. But I definitely don't persist an entity that already has an id. So maybe there is a concurrency effect?

Any tips and help are appreciated. If you need more information, just ask and I'll provide it. Thank you.

Solution

The problem is actually not in Hibernate Search but in the first three lines of code.

You stated that the index is not "used by different threads, applications or whatever", yet you also say this is a web application, so it most likely needs to react to events outside your control, possibly concurrently, even if you didn't expect that.

What guarantees that those initial lines of code are not invoked concurrently? The initialization of the EntityManagerFactory might be triggered multiple times, yet there is no code shutting down the copies which have already been started.

In practice you have multiple copies of Hibernate Search running and fighting to obtain the exclusive index lock.
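One way to close that race is to guard the lazy initialization so the factory can only ever be created once. Here is a minimal sketch of safe double-checked locking; `ExpensiveFactory` is a hypothetical stand-in for the EntityManagerFactory (the creation counter exists only so the behavior can be verified without a persistence provider on the classpath):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for an expensive process-wide resource such as an
// EntityManagerFactory; the counter lets us check it is built exactly once.
class ExpensiveFactory {
    static final AtomicInteger CREATED = new AtomicInteger();
    ExpensiveFactory() { CREATED.incrementAndGet(); }
}

class FactoryHolder {
    // volatile is required for double-checked locking to be safe
    private static volatile ExpensiveFactory factory;

    static ExpensiveFactory get() {
        ExpensiveFactory local = factory;
        if (local == null) {
            synchronized (FactoryHolder.class) {
                local = factory;          // re-check under the lock
                if (local == null) {
                    local = new ExpensiveFactory();
                    factory = local;
                }
            }
        }
        return local;
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        // Hammer the accessor from many threads, as concurrent web
        // requests would; only one instance must ever be created.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100; i++) {
            pool.submit(FactoryHolder::get);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(ExpensiveFactory.CREATED.get());
    }
}
```

Applied to the original code, the same guard around `Persistence.createEntityManagerFactory(...)` would ensure only one EntityManagerFactory, and therefore only one Hibernate Search backend holding the index lock, ever exists per deployment.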

Let me recommend never disabling the locking mechanism: it's designed to protect you from exactly this kind of mistake. It's enabled by default for very good reasons.

I would also suggest using some standard approach to initialise Hibernate and/or JPA; any popular Java EE container (like WildFly) or framework is likely to "do it right", and some of the most advanced ones can automatically enable some insane optimisations.
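With container-managed JPA the factory lifecycle problem disappears entirely, because the container creates, shares, and closes the EntityManagerFactory itself. A rough, non-runnable sketch of what that wiring could look like (class name and persistence-unit name are hypothetical; this fragment only deploys inside a Java EE container such as WildFly):

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical stateless EJB: the container injects a container-managed
// EntityManager and wraps each business method in a JTA transaction,
// so no manual factory creation, begin/commit, or close is needed.
@Stateless
public class EntityRepository {

    @PersistenceContext(unitName = "myPersistenceUnit") // hypothetical unit name
    private EntityManager em;

    public <T> T saveNewEntity(T toSave) {
        em.persist(toSave); // flushed and committed by the container
        return toSave;
    }
}
```

Because the container owns the single EntityManagerFactory for the persistence unit, only one Hibernate Search backend is ever started, and the exclusive Lucene index lock is never contended within the application.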
