Lucene / Hibernate Search Lock Exception
Question
I use Hibernate Search to index and full-text search items in a web application, without any problems.
From my pom.xml:
<hibernate.search.version>3.4.2.Final</hibernate.search.version>
<apache.lucene.version>3.6.2</apache.lucene.version>
<apache.solr.version>3.6.2</apache.solr.version>
<hibernate.version>3.6.9.Final</hibernate.version>
Now, before going to production, I tried to stress test the search feature of my web application using Apache JMeter. When testing with more than one thread, I receive tons of the following exception:
17:11:57,670 ERROR LogErrorHandler:82 - Exception occurred org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/opt/myapp/item_index/myapp.item.domain.Item/write.lock
Primary Failure:
Entity myapp.item.domain.Item Id 4 Work Type org.hibernate.search.backend.DeleteLuceneWork
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/opt/myapp/item_index/myapp.item.domain.Item/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1098)
at org.hibernate.search.backend.Workspace.createNewIndexWriter(Workspace.java:202)
at org.hibernate.search.backend.Workspace.getIndexWriter(Workspace.java:180)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:103)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
17:11:57,670 ERROR PerDPQueueProcessor:118 - Unexpected error in Lucene Backend:
org.hibernate.search.SearchException: Unable to remove class myapp.item.domain.Item#4 from index.
at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:77)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NullPointerException
at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:72)
... 7 more
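SimpleFSLock is purely file-based: the writer acquires the lock by atomically creating the `write.lock` file and reports "Lock obtain timed out" when another writer still holds it. A minimal stdlib sketch of that mechanic (hypothetical illustration, not Lucene's actual code):

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of SimpleFSLock's mechanic: the lock is a marker file
// created atomically; a second writer that cannot create it polls until a
// timeout, which surfaces as LockObtainFailedException in Lucene.
public class SimpleFsLockDemo {

    static boolean obtain(File lockFile, long timeoutMs)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!lockFile.createNewFile()) {            // atomic create == acquire
            if (System.currentTimeMillis() >= deadline) {
                return false;                          // "Lock obtain timed out"
            }
            Thread.sleep(50);                          // poll until deadline
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        File lock = new File(System.getProperty("java.io.tmpdir"), "write.lock");
        lock.delete();                                 // start clean
        System.out.println("first:  " + obtain(lock, 200)); // acquires the lock
        System.out.println("second: " + obtain(lock, 200)); // still held: times out
        lock.delete();                                 // release (delete the marker)
        System.out.println("third:  " + obtain(lock, 200)); // acquires again
        lock.delete();
    }
}
```

This is why two IndexWriters opened concurrently on the same directory, as happens here under load, can only ever time out against each other.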
My index creation is done using the following function (called from a @Transactional method):
@Override
public void createInitialIndexFromDB() {
    // get session and set flush mode to manual to control the commit
    FullTextSession fullTextSession = getFullTextSession();
    fullTextSession.setFlushMode(FlushMode.MANUAL);
    // do not add any data to the object context
    fullTextSession.setCacheMode(CacheMode.IGNORE);
    addResultsToIndex(fullTextSession, FETCH_ITEMS_TO_INDEX);
    addResultsToIndex(fullTextSession, FETCH_DRAFTS_TO_INDEX);
    addResultsToIndex(fullTextSession, FETCH_RESERVATIONS_TO_INDEX);
    addResultsToIndex(fullTextSession, FETCH_SALES_TO_INDEX);
    fullTextSession.flushToIndexes();
    fullTextSession.clear();
}

private void addResultsToIndex(FullTextSession fullTextSession, String query) {
    ScrollableResults results = fullTextSession.createQuery(query)
            .scroll(ScrollMode.FORWARD_ONLY);
    for (int index = 1; results.next(); index++) {
        fullTextSession.index(results.get(0));
        if (index % BATCH_SIZE == 0 || results.isLast()) {
            fullTextSession.flushToIndexes();
            fullTextSession.clear();
        }
    }
}

private FullTextSession getFullTextSession() {
    Session session = this.sessionFactory.getCurrentSession();
    return Search.getFullTextSession(session);
}
After index creation, all changes to my indexes go through a custom FullTextIndexEventListener:
public final class HibernateItemEventListener extends FullTextIndexEventListener {

    private static final Logger log = LoggerFactory
            .getLogger(HibernateItemEventListener.class);

    public HibernateItemEventListener() {
        super(Installation.SINGLE_INSTANCE);
    }

    @Override
    public void onPostInsert(PostInsertEvent event) {
        log.debug("onPostInsert");
        if (!isIndexed(event.getEntity()))
            return;
        // Without these checks the elements are added twice to the index!
        if (event.getEntity() instanceof ItemReservation)
            return;
        if (event.getEntity() instanceof ItemSale)
            return;
        super.onPostInsert(event);
    }

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        log.debug("onPostUpdate - Start");
        if (!isIndexed(event.getEntity()))
            return;
        Serializable id = event.getId();
        log.debug("onPostUpdate - Need update for id " + id);
        if (used) {
            boolean identifierRollbackEnabled = event.getSession().getFactory()
                    .getSettings().isIdentifierRollbackEnabled();
            final Object entity = event.getEntity();
            if (searchFactoryImplementor.getDocumentBuilderIndexedEntity(entity.getClass()) != null
                    || searchFactoryImplementor.getDocumentBuilderContainedEntity(entity.getClass()) != null) {
                // Remove item
                if (entity instanceof Item) {
                    Item item = (Item) entity;
                    if (item.getQuantity() < 1) {
                        processWork(entity, id, WorkType.PURGE, event,
                                identifierRollbackEnabled);
                        return;
                    }
                }
                // Remove reservation
                if (entity instanceof ItemReservation) {
                    ItemReservation ir = (ItemReservation) entity;
                    if (ir.getActive() < 1) {
                        processWork(entity, id, WorkType.PURGE, event,
                                identifierRollbackEnabled);
                        return;
                    }
                }
                // Update entity
                processWork(entity, id, WorkType.UPDATE, event,
                        identifierRollbackEnabled);
            } else {
                // Add entity
                processWork(entity, id, WorkType.ADD, event,
                        identifierRollbackEnabled);
            }
        }
    }

    @Override
    public void onPostDelete(PostDeleteEvent event) {
        log.debug("onPostDelete - Start");
        if (!isIndexed(event.getEntity()))
            return;
        log.debug("onPostDelete - Need delete for id " + event.getId());
        super.onPostDelete(event);
    }

    private boolean isIndexed(Object entity) {
        return entity instanceof Item || entity instanceof Draft
                || entity instanceof ItemReservation
                || entity instanceof ItemSale;
    }
}
The exception above doesn't affect the application itself (search does work), but it sometimes leads to another, more critical exception (which I suppose is related to the lock problem):
17:11:58,866 ERROR LogErrorHandler:82 - Exception occurred java.io.FileNotFoundException: _iz.fdx
java.io.FileNotFoundException: _iz.fdx
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:284)
at org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:303)
at org.apache.lucene.index.LogMergePolicy.sizeBytes(LogMergePolicy.java:193)
at org.apache.lucene.index.LogByteSizeMergePolicy.size(LogByteSizeMergePolicy.java:45)
at org.apache.lucene.index.LogMergePolicy.useCompoundFile(LogMergePolicy.java:147)
at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:593)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3376)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3485)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3467)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3451)
at org.hibernate.search.backend.Workspace.commitIndexWriter(Workspace.java:220)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:109)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
17:11:59,991 ERROR LogErrorHandler:82 - Exception occurred java.io.FileNotFoundException: /opt/myapp/item_index/myapp.item.domain.Item/_iz.cfs (No such file or directory)
Any hints?
EDIT: After the library adjustments and with increased JMeter aggressiveness (more threads), the exception and index corruption still appear:
12:08:11,163 ERROR LogErrorHandler:82 - Exception occurred java.io.FileNotFoundException: /opt/myapp/item_index/myapp.item.domain.Item/_8gy.cfs (No such file or directory)
Primary Failure:
Entity myapp.item.domain.Item Id 596 Work Type org.hibernate.search.backend.DeleteLuceneWork
Subsequent failures:
Entity myapp.item.domain.Item Id 596 Work Type org.hibernate.search.backend.AddLuceneWork
Entity myapp.item.domain.Item Id 734 Work Type org.hibernate.search.backend.DeleteLuceneWork
Entity myapp.item.domain.Item Id 734 Work Type org.hibernate.search.backend.AddLuceneWork
Entity myapp.item.domain.Item Id 599 Work Type org.hibernate.search.backend.DeleteLuceneWork
Entity myapp.item.domain.Item Id 599 Work Type org.hibernate.search.backend.AddLuceneWork
Entity myapp.item.domain.Item Id 735 Work Type org.hibernate.search.backend.DeleteLuceneWork
Entity myapp.item.domain.Item Id 735 Work Type org.hibernate.search.backend.AddLuceneWork
Entity myapp.item.domain.Item Id 598 Work Type org.hibernate.search.backend.DeleteLuceneWork
Entity myapp.item.domain.Item Id 598 Work Type org.hibernate.search.backend.AddLuceneWork
Entity myapp.item.domain.Item Id 720 Work Type org.hibernate.search.backend.DeleteLuceneWork
Entity myapp.item.domain.Item Id 720 Work Type org.hibernate.search.backend.AddLuceneWork
java.io.FileNotFoundException: /opt/myapp/item_index/myapp.item.domain.Item/_8gy.cfs (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:69)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:90)
at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:91)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:78)
at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:66)
at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:55)
at org.apache.lucene.index.IndexWriter.getFieldInfos(IndexWriter.java:1193)
at org.apache.lucene.index.IndexWriter.getCurrentFieldInfos(IndexWriter.java:1213)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1149)
at org.hibernate.search.backend.Workspace.createNewIndexWriter(Workspace.java:202)
at org.hibernate.search.backend.Workspace.getIndexWriter(Workspace.java:180)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:103)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
12:08:11,163 ERROR PerDPQueueProcessor:118 - Unexpected error in Lucene Backend:
org.hibernate.search.SearchException: Unable to remove class myapp.item.domain.Item#596 from index.
at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:77)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NullPointerException
EDIT after debugging: The FileNotFoundException (and therefore the IndexWriter NPE) is generated at
IndexWriter writer = new IndexWriter( directoryProvider.getDirectory(), writerConfig );
in the Workspace.createNewIndexWriter() method.
writerConfig:
matchVersion=LUCENE_31
analyzer=org.apache.lucene.analysis.SimpleAnalyzer
delPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy
commit=null
openMode=CREATE_OR_APPEND
similarity=org.apache.lucene.search.DefaultSimilarity
termIndexInterval=128
mergeScheduler=org.hibernate.search.backend.impl.lucene.overrides.ConcurrentMergeScheduler
default WRITE_LOCK_TIMEOUT=1000
writeLockTimeout=1000
maxBufferedDeleteTerms=-1
ramBufferSizeMB=16.0
maxBufferedDocs=-1
mergedSegmentWarmer=null
mergePolicy=[LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=10, maxMergeSize=2147483648, maxMergeSizeForOptimize=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, useCompoundFile=true]
maxThreadStates=8
readerPooling=false
readerTermsIndexDivisor=1
Accepted answer
I think I solved it.
The problem was not Hibernate Search itself; it was Atomikos, which killed threads after the concurrent transaction threshold was exceeded (probably leaving file descriptors open). I increased the concurrent transaction limit (it was 50) and the problem was gone.
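That limit corresponds to Atomikos's `com.atomikos.icatch.max_actives` setting in `jta.properties` (its default is 50 in the 3.x line, matching the value above). A hedged sketch; the raised value is illustrative, not a recommendation:

```properties
# Sketch of jta.properties: max_actives caps concurrent active transactions.
# Default is 50; -1 disables the cap entirely. 200 here is an arbitrary example.
com.atomikos.icatch.max_actives = 200
```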
Despite this, I still don't understand why Hibernate Search opens an IndexWriter each time I perform a Lucene query.