Use Hibernate Search with Hibernate ORM and Elasticsearch/OpenSearch
You have a Hibernate ORM-based application? You want to provide a full-featured full-text search to your users? You’re at the right place.
With this guide, you’ll learn how to synchronize your entities to an Elasticsearch or OpenSearch cluster in a heartbeat with Hibernate Search. We will also explore how you can query your Elasticsearch or OpenSearch cluster using the Hibernate Search API.
If you want to index entities that are not Hibernate ORM entities, see this dedicated guide instead.
Prerequisites
To complete this guide, you need:
- Roughly 20 minutes
- An IDE
- JDK 17+ installed with JAVA_HOME configured appropriately
- Apache Maven 3.9.9
- A working container runtime (Docker or Podman)
- Optionally the Quarkus CLI if you want to use it
- Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build)
Application structure
The application described in this guide allows you to manage a (simple) library: you manage authors and their books.
The entities are stored in a PostgreSQL database and indexed in an Elasticsearch cluster.
Solution
We recommend that you follow the instructions in the next sections and create the application step by step. However, you can go right to the completed example.
Clone the Git repository: git clone https://github.com/quarkusio/quarkus-quickstarts.git, or download an archive.
The solution is located in the hibernate-search-orm-elasticsearch-quickstart directory.
The provided solution contains a few additional elements such as tests and testing infrastructure.
Creating the Maven project
First, we need a new project. Create a new project with the following command:
For Windows users:
- If using cmd, don’t use backslash \ and put everything on the same line.
- If using PowerShell, wrap -D parameters in double quotes, e.g. "-DprojectArtifactId=hibernate-search-orm-elasticsearch-quickstart".
This command generates a Maven structure importing the following extensions:
- Hibernate ORM with Panache,
- the PostgreSQL JDBC driver,
- Hibernate Search + Elasticsearch,
- Quarkus REST (formerly RESTEasy Reactive) and Jackson.
If you already have your Quarkus project configured, you can add the hibernate-search-orm-elasticsearch
extension
to your project by running the following command in your project base directory:
quarkus extension add hibernate-search-orm-elasticsearch
./mvnw quarkus:add-extension -Dextensions='hibernate-search-orm-elasticsearch'
./gradlew addExtension --extensions='hibernate-search-orm-elasticsearch'
This will add the following to your pom.xml:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-hibernate-search-orm-elasticsearch</artifactId>
</dependency>
implementation("io.quarkus:quarkus-hibernate-search-orm-elasticsearch")
Creating the bare entities
First, let’s create our Hibernate ORM entities Book
and Author
in the model
subpackage.
package org.acme.hibernate.search.elasticsearch.model;
import java.util.List;
import java.util.Objects;
import jakarta.persistence.CascadeType;
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.OneToMany;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
@Entity
public class Author extends PanacheEntity { (1)
public String firstName;
public String lastName;
@OneToMany(mappedBy = "author", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.EAGER) (2)
public List<Book> books;
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (!(o instanceof Author)) {
return false;
}
Author other = (Author) o;
return Objects.equals(id, other.id);
}
@Override
public int hashCode() {
return 31;
}
}
1 | We are using Hibernate ORM with Panache; it is not mandatory. |
2 | We are loading these elements eagerly so that they are present in the JSON output. In a real-world application, you should probably use a DTO approach. |
package org.acme.hibernate.search.elasticsearch.model;
import java.util.Objects;
import jakarta.persistence.Entity;
import jakarta.persistence.ManyToOne;
import com.fasterxml.jackson.annotation.JsonIgnore;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
@Entity
public class Book extends PanacheEntity {
public String title;
@ManyToOne
@JsonIgnore (1)
public Author author;
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (!(o instanceof Book)) {
return false;
}
Book other = (Book) o;
return Objects.equals(id, other.id);
}
@Override
public int hashCode() {
return 31;
}
}
1 | We mark this property with @JsonIgnore to avoid infinite loops when serializing with Jackson. |
Initializing the REST service
While not everything is set up for our REST service yet, we can initialize it with the standard CRUD operations we will need.
Create the org.acme.hibernate.search.elasticsearch.LibraryResource
class:
package org.acme.hibernate.search.elasticsearch;
import java.util.List;
import java.util.Optional;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.DELETE;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.MediaType;
import org.acme.hibernate.search.elasticsearch.model.Author;
import org.acme.hibernate.search.elasticsearch.model.Book;
import org.hibernate.search.mapper.orm.session.SearchSession;
import org.jboss.resteasy.reactive.RestForm;
import org.jboss.resteasy.reactive.RestQuery;
import io.quarkus.runtime.StartupEvent;
@Path("/library")
public class LibraryResource {
@PUT
@Path("book")
@Transactional
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public void addBook(@RestForm String title, @RestForm Long authorId) {
Author author = Author.findById(authorId);
if (author == null) {
return;
}
Book book = new Book();
book.title = title;
book.author = author;
book.persist();
author.books.add(book);
author.persist();
}
@DELETE
@Path("book/{id}")
@Transactional
public void deleteBook(Long id) {
Book book = Book.findById(id);
if (book != null) {
book.author.books.remove(book);
book.delete();
}
}
@PUT
@Path("author")
@Transactional
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public void addAuthor(@RestForm String firstName, @RestForm String lastName) {
Author author = new Author();
author.firstName = firstName;
author.lastName = lastName;
author.persist();
}
@POST
@Path("author/{id}")
@Transactional
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public void updateAuthor(Long id, @RestForm String firstName, @RestForm String lastName) {
Author author = Author.findById(id);
if (author == null) {
return;
}
author.firstName = firstName;
author.lastName = lastName;
author.persist();
}
@DELETE
@Path("author/{id}")
@Transactional
public void deleteAuthor(Long id) {
Author author = Author.findById(id);
if (author != null) {
author.delete();
}
}
}
Nothing out of the ordinary here: it is just good old Hibernate ORM with Panache operations in a REST service.
In fact, the interesting part is that we will need to add very few elements to make our full text search application work.
Using Hibernate Search annotations
Let’s go back to our entities.
Enabling full text search capabilities for them is as simple as adding a few annotations.
Let’s edit the Book
entity again to include this content:
package org.acme.hibernate.search.elasticsearch.model;
import java.util.Objects;
import jakarta.persistence.Entity;
import jakarta.persistence.ManyToOne;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
import com.fasterxml.jackson.annotation.JsonIgnore;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
@Entity
@Indexed (1)
public class Book extends PanacheEntity {
@FullTextField(analyzer = "english") (2)
public String title;
@ManyToOne
@JsonIgnore
public Author author;
// Preexisting equals()/hashCode() methods
}
1 | First, let’s use the @Indexed annotation to register our Book entity as part of the full text index. |
2 | The @FullTextField annotation declares a field in the index specifically tailored for full text search.
In particular, we have to define an analyzer to split and analyze the tokens (~ words) - more on this later. |
Now that our books are indexed, we can do the same for the authors.
Open the Author
class and include the content below.
Things are quite similar here: we use the @Indexed
, @FullTextField
and @KeywordField
annotations.
There are a few differences/additions though. Let’s check them out.
package org.acme.hibernate.search.elasticsearch.model;
import java.util.List;
import java.util.Objects;
import jakarta.persistence.CascadeType;
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.OneToMany;
import org.hibernate.search.engine.backend.types.Sortable;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.IndexedEmbedded;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.KeywordField;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
@Entity
@Indexed
public class Author extends PanacheEntity {
@FullTextField(analyzer = "name") (1)
@KeywordField(name = "firstName_sort", sortable = Sortable.YES, normalizer = "sort") (2)
public String firstName;
@FullTextField(analyzer = "name")
@KeywordField(name = "lastName_sort", sortable = Sortable.YES, normalizer = "sort")
public String lastName;
@OneToMany(mappedBy = "author", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.EAGER)
@IndexedEmbedded (3)
public List<Book> books;
// Preexisting equals()/hashCode() methods
}
1 | We use a @FullTextField similar to what we did for Book but you’ll notice that the analyzer is different - more on this later. |
2 | As you can see, we can define several fields for the same property.
Here, we define a @KeywordField with a specific name.
The main difference is that a keyword field is not tokenized (the string is kept as one single token) but can be normalized (i.e. filtered) - more on this later.
This field is marked as sortable as our intention is to use it for sorting our authors. |
3 | The purpose of @IndexedEmbedded is to include the Book fields into the Author index.
In this case, we just use the default configuration: all the fields of the associated Book entities are included in the index (i.e. the title field).
The nice thing with @IndexedEmbedded is that it is able to automatically reindex an Author if one of its Book entities has been updated, thanks to the bidirectional relation.
@IndexedEmbedded also supports nested documents (using the structure = NESTED attribute), but we don’t need it here.
You can also specify the fields you want to embed in your parent index using the includePaths /excludePaths attributes if you don’t want them all. |
Analyzers and normalizers
Introduction
Analysis is a big part of full text search: it defines how text will be processed when indexing or building search queries.
The role of analyzers is to split the text into tokens (~ words) and filter them (making it all lowercase and removing accents for instance).
Normalizers are a special type of analyzer that keeps the input as a single token. They are especially useful for sorting or indexing keywords.
There are a lot of bundled analyzers, but you can also develop your own for your own specific purposes.
You can learn more about the Elasticsearch analysis framework in the Analysis section of the Elasticsearch documentation.
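To build some intuition, here is a loose plain-JDK approximation of the distinction above (this is not Hibernate Search or Elasticsearch code, and real analyzers do much more): an analyzer-like process splits the text into tokens and filters each one, while a normalizer-like process filters the whole input but keeps it as a single token.

```java
import java.util.Arrays;
import java.util.List;

public class AnalysisSketch {
    // Analyzer-like behavior: split into tokens on whitespace, lowercase each token.
    static List<String> analyze(String text) {
        return Arrays.stream(text.split("\\s+"))
                .map(String::toLowerCase)
                .toList();
    }

    // Normalizer-like behavior: keep the input as one token, only filter it.
    static String normalize(String text) {
        return text.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(analyze("The Cider House Rules"));   // [the, cider, house, rules]
        System.out.println(normalize("The Cider House Rules")); // the cider house rules
    }
}
```

This is why normalized keyword fields are a good fit for sorting: the whole value stays comparable as one unit.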
Defining the analyzers used
When we added the Hibernate Search annotations to our entities, we defined the analyzers and normalizers used. Typically:
@FullTextField(analyzer = "english")
@FullTextField(analyzer = "name")
@KeywordField(name = "lastName_sort", sortable = Sortable.YES, normalizer = "sort")
We use:
- an analyzer called name for person names,
- an analyzer called english for book titles,
- a normalizer called sort for our sort fields,
but we haven’t set them up yet.
Let’s see how you can do it with Hibernate Search.
Setting up the analyzers
This is an easy task: we just need to create an implementation of ElasticsearchAnalysisConfigurer
(and configure Quarkus to use it, more on that later).
To fulfill our requirements, let’s create the following implementation:
package org.acme.hibernate.search.elasticsearch.config;
import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurationContext;
import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer;
import io.quarkus.hibernate.search.orm.elasticsearch.SearchExtension;
@SearchExtension (1)
public class AnalysisConfigurer implements ElasticsearchAnalysisConfigurer {
@Override
public void configure(ElasticsearchAnalysisConfigurationContext context) {
context.analyzer("name").custom() (2)
.tokenizer("standard")
.tokenFilters("asciifolding", "lowercase");
context.analyzer("english").custom() (3)
.tokenizer("standard")
.tokenFilters("asciifolding", "lowercase", "porter_stem");
context.normalizer("sort").custom() (4)
.tokenFilters("asciifolding", "lowercase");
}
}
1 | Annotate the configurer implementation with the @SearchExtension qualifier
to tell Quarkus it should be used in the default persistence unit, for all Elasticsearch indexes (by default).
The annotation can also target a specific persistence unit (@SearchExtension(persistenceUnit = "nameOfYourPU")), backend, or index; see Plugging in custom components. |
2 | This is a simple analyzer separating the words on spaces, replacing any non-ASCII character with its ASCII counterpart (and thus removing accents) and putting everything in lowercase. It is used in our examples for the author’s names. |
3 | We are a bit more aggressive with this one and we include some stemming: we will be able to search for mystery and get a result even if the indexed input contains mysteries .
It is definitely too aggressive for person names, but it is perfect for the book titles. |
4 | Here is the normalizer used for sorting. Very similar to our first analyzer, except we don’t tokenize the words as we want one and only one token. |
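As a rough plain-JDK illustration (not how Elasticsearch actually implements its token filters), the asciifolding + lowercase combination used above can be approximated with java.text.Normalizer: decompose accented characters, strip the combining marks, then lowercase.

```java
import java.text.Normalizer;

public class AsciiFoldingSketch {
    // Approximates the asciifolding + lowercase filter chain on a single token:
    // NFD decomposition turns "é" into "e" + a combining accent, \p{M} strips
    // the combining marks, and the result is lowercased.
    static String fold(String token) {
        return Normalizer.normalize(token, Normalizer.Form.NFD)
                .replaceAll("\\p{M}", "")
                .toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(fold("Frédéric")); // frederic
    }
}
```

This is why a search for "Frederic" can match an indexed "Frédéric": both sides of the comparison go through the same analysis.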
Alternatively, if for some reason you can’t or don’t want to annotate your analysis configurer with @SearchExtension, you can point Hibernate Search to it through configuration properties instead; see the reference documentation. |
For more information about configuring analyzers, see this section of the reference documentation.
Adding full text capabilities to our REST service
In our existing LibraryResource
, we just need to inject the SearchSession
:
@Inject
SearchSession searchSession; (1)
1 | Inject a Hibernate Search session, which relies on the EntityManager under the hood.
Applications with multiple persistence units can use the CDI qualifier @io.quarkus.hibernate.orm.PersistenceUnit
to select the right one:
see CDI integration. |
And then the magic begins.
When we added annotations to our entities, we made them available for full text search;
we can now query the index using the Hibernate Search DSL,
simply by adding the following method (and a few import
s):
@GET
@Path("author/search")
@Transactional (1)
public List<Author> searchAuthors(@RestQuery String pattern, (2)
@RestQuery Optional<Integer> size) {
return searchSession.search(Author.class) (3)
.where(f ->
pattern == null || pattern.trim().isEmpty() ?
f.matchAll() : (4)
f.simpleQueryString()
.fields("firstName", "lastName", "books.title").matching(pattern) (5)
)
.sort(f -> f.field("lastName_sort").then().field("firstName_sort")) (6)
.fetchHits(size.orElse(20)); (7)
}
1 | Important point: we need a transactional context for this method. |
2 | Use the org.jboss.resteasy.reactive.RestQuery annotation type to avoid repeating the parameter name. |
3 | We indicate that we are searching for Author s. |
4 | We create a predicate: if the pattern is empty, we use a matchAll() predicate. |
5 | If we have a valid pattern, we create a simpleQueryString() predicate on the firstName , lastName and books.title fields matching our pattern. |
6 | We define the sort order of our results. Here we sort by last name, then by first name. Note that we use the specific fields we created for sorting. |
7 | Fetch the size top hits, 20 by default. Obviously, paging is also supported. |
The Hibernate Search DSL supports a significant subset of the Elasticsearch predicates (match, range, nested, phrase, spatial…). Feel free to explore the DSL using autocompletion. When that’s not enough, you can always fall back to defining a predicate using JSON directly. |
Automatic data initialization
For the purpose of this demonstration, let’s import an initial dataset.
Let’s create a src/main/resources/import.sql
file with the following content
(we’ll reference it in configuration later):
INSERT INTO author(id, firstname, lastname) VALUES (1, 'John', 'Irving');
INSERT INTO author(id, firstname, lastname) VALUES (2, 'Paul', 'Auster');
ALTER SEQUENCE author_seq RESTART WITH 3;
INSERT INTO book(id, title, author_id) VALUES (1, 'The World According to Garp', 1);
INSERT INTO book(id, title, author_id) VALUES (2, 'The Hotel New Hampshire', 1);
INSERT INTO book(id, title, author_id) VALUES (3, 'The Cider House Rules', 1);
INSERT INTO book(id, title, author_id) VALUES (4, 'A Prayer for Owen Meany', 1);
INSERT INTO book(id, title, author_id) VALUES (5, 'Last Night in Twisted River', 1);
INSERT INTO book(id, title, author_id) VALUES (6, 'In One Person', 1);
INSERT INTO book(id, title, author_id) VALUES (7, 'Avenue of Mysteries', 1);
INSERT INTO book(id, title, author_id) VALUES (8, 'The New York Trilogy', 2);
INSERT INTO book(id, title, author_id) VALUES (9, 'Mr. Vertigo', 2);
INSERT INTO book(id, title, author_id) VALUES (10, 'The Brooklyn Follies', 2);
INSERT INTO book(id, title, author_id) VALUES (11, 'Invisible', 2);
INSERT INTO book(id, title, author_id) VALUES (12, 'Sunset Park', 2);
INSERT INTO book(id, title, author_id) VALUES (13, '4 3 2 1', 2);
ALTER SEQUENCE book_seq RESTART WITH 14;
Because the data above will be inserted into the database without Hibernate Search’s knowledge, it won’t be indexed, unlike upcoming updates coming through Hibernate ORM operations, which will be synchronized automatically to the full text index.
In our existing LibraryResource
, let’s add the following content (and a few import
s)
to index that initial data:
If you don’t import data manually in the database, you don’t need this: the mass indexer should then only be used when you change your indexing configuration (adding a new field, changing an analyzer’s configuration…) and you want the new configuration to be applied to your existing data. |
@Inject
SearchMapping searchMapping; (1)
void onStart(@Observes StartupEvent ev) throws InterruptedException { (2)
// only reindex if we imported some content
if (Book.count() > 0) {
searchMapping.scope(Object.class) (3)
.massIndexer() (4)
.startAndWait(); (5)
}
}
1 | Inject a Hibernate Search SearchMapping ,
which relies on the EntityManagerFactory under the hood.
Applications with multiple persistence units can use the CDI qualifier @io.quarkus.hibernate.orm.PersistenceUnit
to select the right one:
see CDI integration. |
2 | Add a method that will get executed on application startup. |
3 | Create a "search scope" targeting all indexed entity types that extend Object, that is, every single indexed entity type (Author and Book). |
4 | Create an instance of Hibernate Search’s mass indexer, which allows indexing a lot of data efficiently (you can fine tune it for better performance). |
5 | Start the mass indexer and wait for it to finish. |
Configuring the application
As usual, we can configure everything in the Quarkus configuration file, application.properties.
Edit src/main/resources/application.properties and add the following configuration:
quarkus.ssl.native=false (1)
quarkus.datasource.db-kind=postgresql (2)
quarkus.hibernate-orm.sql-load-script=import.sql (3)
quarkus.hibernate-search-orm.elasticsearch.version=8 (4)
quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy=sync (5)
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://localhost/quarkus_test (6)
%prod.quarkus.datasource.username=quarkus_test
%prod.quarkus.datasource.password=quarkus_test
%prod.quarkus.hibernate-orm.database.generation=create
%prod.quarkus.hibernate-search-orm.elasticsearch.hosts=localhost:9200 (6)
1 | We won’t use SSL, so we disable it to have a more compact native executable. |
2 | Let’s create a PostgreSQL datasource. |
3 | We load some initial data on startup (see Automatic data initialization). |
4 | We need to tell Hibernate Search about the version of Elasticsearch we will use.
It is important because there are significant differences between Elasticsearch mapping syntax depending on the version.
Since the mapping is created at build time to reduce startup time, Hibernate Search cannot connect to the cluster to automatically detect the version.
Note that, for OpenSearch, you need to prefix the version with opensearch: ; see OpenSearch compatibility. |
5 | This means that we wait for the entities to be searchable before considering a write complete.
On a production setup, the write-sync default will provide better performance.
Using sync is especially important when testing as you need the entities to be searchable immediately. |
6 | For development and tests, we rely on Dev Services,
which means Quarkus will start a PostgreSQL database and Elasticsearch cluster automatically.
In production mode, however,
we will want to start a PostgreSQL database and Elasticsearch cluster manually,
which is why we provide Quarkus with this connection info in the prod profile (%prod. prefix). |
Because we rely on Dev Services, the database and Elasticsearch schema
will automatically be dropped and re-created on each application startup
in tests and dev mode
(unless the schema management strategy is set explicitly).
If for some reason you cannot use Dev Services, you will have to set a few properties explicitly to get similar behavior; see the Configuration Reference. |
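If you cannot use Dev Services, a configuration along these lines should reproduce the drop-and-create behavior in dev and test mode. The property names appear elsewhere in this guide, but the drop-and-create values are an assumption to verify against the Configuration Reference for your Quarkus version:

```properties
# Re-create the database schema on each startup (dev/test only, destroys data).
quarkus.hibernate-orm.database.generation=drop-and-create
# Re-create the Elasticsearch indexes on each startup (dev/test only).
quarkus.hibernate-search-orm.schema-management.strategy=drop-and-create
```

Do not enable these in the prod profile, where you want the schema to persist across restarts.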
For more information about configuration of the Hibernate Search ORM extension, refer to the Configuration Reference. |
Creating a web page
Now let’s add a simple web page to interact with our LibraryResource
.
Quarkus automatically serves static resources located under the META-INF/resources
directory.
In the src/main/resources/META-INF/resources
directory, overwrite the existing index.html
file with the content from this
index.html file.
Time to play with your application
You can now interact with your REST service:
- start your Quarkus application with:
CLI: quarkus dev
Maven: ./mvnw quarkus:dev
Gradle: ./gradlew --console=plain quarkusDev
- open a browser to http://localhost:8080/
- search for authors or book titles (we initialized some data for you)
- create new authors and books and search for them too
As you can see, all your updates are automatically synchronized to the Elasticsearch cluster.
Building a native executable
You can build a native executable with the usual command:
quarkus build --native
./mvnw install -Dnative
./gradlew build -Dquarkus.native.enabled=true
As usual with native executable compilation, this operation consumes a lot of memory. It might be safer to stop the two containers while you are building the native executable and start them again once you are done. |
Running the native executable is as simple as executing ./target/hibernate-search-orm-elasticsearch-quickstart-1.0.0-SNAPSHOT-runner
.
You can then point your browser to http://localhost:8080/
and use your application.
The startup is a bit slower than usual: it is mostly due to us dropping and recreating the database schema and the Elasticsearch mapping every time at startup. We also inject some data and execute the mass indexer. In a real life application, it is obviously something you won’t do on every startup. |
Dev Services (Configuration Free Datastores)
Quarkus supports a feature called Dev Services that allows you to start various containers without any config.
In the case of Elasticsearch this support extends to the default Elasticsearch connection.
What that means practically, is that if you have not configured quarkus.hibernate-search-orm.elasticsearch.hosts
,
Quarkus will automatically start an Elasticsearch container when running tests or in dev mode,
and automatically configure the connection.
When running the production version of the application, the Elasticsearch connection needs to be configured as normal,
so if you want to include a production database config in your application.properties
and continue to use Dev Services
we recommend that you use the %prod.
profile to define your Elasticsearch settings.
Dev Services for Elasticsearch is currently unable to start multiple clusters concurrently, so it only works with the default backend of the default persistence unit: named persistence units or named backends won’t be able to take advantage of Dev Services for Elasticsearch. |
For more information you can read the Dev Services for Elasticsearch guide.
Programmatic mapping
If, for some reason, adding Hibernate Search annotations to entities is not possible,
mapping can be applied programmatically instead.
Programmatic mapping is configured through the ProgrammaticMappingConfigurationContext
that is exposed via a mapping configurer (HibernateOrmSearchMappingConfigurer
).
A mapping configurer (HibernateOrmSearchMappingConfigurer) is a component used to configure the Hibernate Search mapping; you can register one or more per persistence unit. |
Below is an example of a mapping configurer that applies programmatic mapping:
package org.acme.hibernate.search.elasticsearch.config;
import org.hibernate.search.mapper.orm.mapping.HibernateOrmMappingConfigurationContext;
import org.hibernate.search.mapper.orm.mapping.HibernateOrmSearchMappingConfigurer;
import org.hibernate.search.mapper.pojo.mapping.definition.programmatic.TypeMappingStep;
import io.quarkus.hibernate.search.orm.elasticsearch.SearchExtension;
@SearchExtension (1)
public class CustomMappingConfigurer implements HibernateOrmSearchMappingConfigurer {
@Override
public void configure(HibernateOrmMappingConfigurationContext context) {
TypeMappingStep type = context.programmaticMapping() (2)
.type(SomeIndexedEntity.class); (3)
type.indexed() (4)
.index(SomeIndexedEntity.INDEX_NAME); (5)
type.property("id").documentId(); (6)
type.property("text").fullTextField(); (7)
}
}
1 | Annotate the configurer implementation with the @SearchExtension qualifier
to tell Quarkus it should be used by Hibernate Search in the default persistence unit.
The annotation can also target a specific persistence unit (@SearchExtension(persistenceUnit = "nameOfYourPU")). |
2 | Access the programmatic mapping context. |
3 | Create mapping step for the SomeIndexedEntity entity. |
4 | Define the SomeIndexedEntity entity as indexed. |
5 | Provide an index name to be used for the SomeIndexedEntity entity. |
6 | Define the document id property. |
7 | Define a full-text search field for the text property. |
Alternatively, if for some reason you can’t or don’t want to annotate your mapping configurer with @SearchExtension, you can point Hibernate Search to it through configuration properties instead; see the reference documentation. |
OpenSearch compatibility
Hibernate Search is compatible with both Elasticsearch and OpenSearch, but it assumes it is working with an Elasticsearch cluster by default.
To have Hibernate Search work with an OpenSearch cluster instead,
prefix the configured version with opensearch:
,
as shown below.
quarkus.hibernate-search-orm.elasticsearch.version=opensearch:2.16
All other configuration options and APIs are exactly the same as with Elasticsearch.
You can find more information about compatible distributions and versions of Elasticsearch in this section of Hibernate Search’s reference documentation.
Multiple persistence units
Configuring multiple persistence units
With the Hibernate ORM extension, you can set up multiple persistence units, each with its own datasource and configuration.
If you declare multiple persistence units, you will also need to configure Hibernate Search separately for each persistence unit.
The properties at the root of the quarkus.hibernate-search-orm.
namespace define the default persistence unit.
For instance, the following snippet defines a default datasource and a default persistence unit,
and sets the Elasticsearch host for that persistence unit to es1.mycompany.com:9200
.
quarkus.datasource.db-kind=h2
quarkus.datasource.jdbc.url=jdbc:h2:mem:default;DB_CLOSE_DELAY=-1
quarkus.hibernate-search-orm.elasticsearch.hosts=es1.mycompany.com:9200
quarkus.hibernate-search-orm.elasticsearch.version=8
Using a map based approach, it is also possible to configure named persistence units:
quarkus.datasource."users".db-kind=h2 (1)
quarkus.datasource."users".jdbc.url=jdbc:h2:mem:users;DB_CLOSE_DELAY=-1
quarkus.datasource."inventory".db-kind=h2 (2)
quarkus.datasource."inventory".jdbc.url=jdbc:h2:mem:inventory;DB_CLOSE_DELAY=-1
quarkus.hibernate-orm."users".datasource=users (3)
quarkus.hibernate-orm."users".packages=org.acme.model.user
quarkus.hibernate-orm."inventory".datasource=inventory (4)
quarkus.hibernate-orm."inventory".packages=org.acme.model.inventory
quarkus.hibernate-search-orm."users".elasticsearch.hosts=es1.mycompany.com:9200 (5)
quarkus.hibernate-search-orm."users".elasticsearch.version=8
quarkus.hibernate-search-orm."inventory".elasticsearch.hosts=es2.mycompany.com:9200 (6)
quarkus.hibernate-search-orm."inventory".elasticsearch.version=8
1 | Define a datasource named users . |
2 | Define a datasource named inventory . |
3 | Define a persistence unit called users pointing to the users datasource. |
4 | Define a persistence unit called inventory pointing to the inventory datasource. |
5 | Configure Hibernate Search for the users persistence unit,
setting the Elasticsearch host for that persistence unit to es1.mycompany.com:9200 . |
6 | Configure Hibernate Search for the inventory persistence unit,
setting the Elasticsearch host for that persistence unit to es2.mycompany.com:9200 . |
Attaching model classes to persistence units
For each persistence unit, Hibernate Search will only consider indexed entities that are attached to that persistence unit. Entities are attached to a persistence unit by configuring the Hibernate ORM extension.
CDI integration
Injecting entry points
You can inject Hibernate Search’s main entry points, SearchSession
and SearchMapping
, using CDI:
@Inject
SearchSession searchSession;
This will inject the SearchSession
of the default persistence unit.
To inject the SearchSession
of a named persistence unit (users
in our example),
just add a qualifier:
@Inject
@PersistenceUnit("users") (1)
SearchSession searchSession;
1 | This is the @io.quarkus.hibernate.orm.PersistenceUnit annotation. |
You can inject the SearchMapping
of a named persistence unit using the exact same mechanism:
@Inject
@PersistenceUnit("users")
SearchMapping searchMapping;
Plugging in custom components
The Quarkus extension for Hibernate Search with Hibernate ORM will automatically
inject components annotated with @SearchExtension
into Hibernate Search.
The annotation can optionally target a specific persistence unit (@SearchExtension(persistenceUnit = "nameOfYourPU")
),
backend (@SearchExtension(backend = "nameOfYourBackend")
), index (@SearchExtension(index = "nameOfYourIndex")
),
or a combination of those
(@SearchExtension(persistenceUnit = "nameOfYourPU", backend = "nameOfYourBackend", index = "nameOfYourIndex")
),
when it makes sense for the type of the component being injected.
This feature is available for the following component types:
org.hibernate.search.engine.reporting.FailureHandler
-
A component that should be notified of any failure occurring in a background process (mainly index operations).
Scope: one per persistence unit.
See this section of the reference documentation for more information.
org.hibernate.search.mapper.orm.mapping.HibernateOrmSearchMappingConfigurer
-
A component used to configure the Hibernate Search mapping, in particular programmatically.
Scope: one or more per persistence unit.
See this section of this guide for more information.
org.hibernate.search.mapper.pojo.work.IndexingPlanSynchronizationStrategy
-
A component used to configure how to synchronize between application threads and indexing.
Scope: one per persistence unit.
Can also be set to built-in implementations through
quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy
.See this section of the reference documentation for more information.
org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer
-
A component used to configure full text analysis (e.g. analyzers, normalizers).
Scope: one or more per backend.
See this section of this guide for more information.
org.hibernate.search.backend.elasticsearch.index.layout.IndexLayoutStrategy
-
A component used to configure the Elasticsearch layout: index names, index aliases, …
Scope: one per backend.
Can also be set to built-in implementations through
quarkus.hibernate-search-orm.elasticsearch.layout.strategy.
See this section of the reference documentation for more information.
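Note that for the component types that have built-in implementations, no custom bean is required: the built-in implementations can be selected purely through the two configuration properties mentioned above. For example (the values async and simple name built-in implementations; check the reference documentation for the full list):

```properties
quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy=async
quarkus.hibernate-search-orm.elasticsearch.layout.strategy=simple
```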
Offline startup
By default, Hibernate Search sends a few requests to the Elasticsearch cluster on startup. If the Elasticsearch cluster is not necessarily up and running when Hibernate Search starts, this could cause a startup failure.
To address this, you can configure Hibernate Search to not send any request on startup:
-
Disable Elasticsearch version checks on startup by setting the configuration property quarkus.hibernate-search-orm.elasticsearch.version-check.enabled to false.
-
Disable schema management on startup by setting the configuration property quarkus.hibernate-search-orm.schema-management.strategy to none.
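Put together, the two settings above look like this in application.properties:

```properties
# Don't contact the Elasticsearch cluster on startup:
quarkus.hibernate-search-orm.elasticsearch.version-check.enabled=false
quarkus.hibernate-search-orm.schema-management.strategy=none
```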
Of course, even with this configuration, Hibernate Search still won’t be able to index anything or run search queries until the Elasticsearch cluster becomes accessible.
If you disable automatic schema creation by setting quarkus.hibernate-search-orm.schema-management.strategy to none, you will have to create the indexes and their schema manually before your application starts indexing or searching. See this section of the reference documentation for more information. |
Coordination through outbox polling
Coordination through outbox polling is considered preview. In preview, backward compatibility and presence in the ecosystem is not guaranteed. Specific improvements might require changing configuration or APIs, or even storage formats, and plans to become stable are under way. Feedback is welcome on our mailing list or as issues in our GitHub issue tracker. |
While it’s technically possible to use Hibernate Search and Elasticsearch in distributed applications, by default they suffer from a few limitations.
These limitations are the result of Hibernate Search not coordinating between threads or application nodes by default.
In order to get rid of these limitations, you can
use the outbox-polling
coordination strategy.
This strategy creates an outbox table in the database to push entity change events to,
and relies on a background processor to consume these events and perform indexing.
To enable the outbox-polling
coordination strategy, an additional extension is required:
CLI: quarkus extension add hibernate-search-orm-outbox-polling
Maven: ./mvnw quarkus:add-extension -Dextensions='hibernate-search-orm-outbox-polling'
Gradle: ./gradlew addExtension --extensions='hibernate-search-orm-outbox-polling'
Once the extension is there, you will need to explicitly select the outbox-polling
strategy
by setting quarkus.hibernate-search-orm.coordination.strategy
to outbox-polling
.
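In application.properties, that is:

```properties
quarkus.hibernate-search-orm.coordination.strategy=outbox-polling
```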
Finally, you will need to make sure that the Hibernate ORM entities added by Hibernate Search (to represent the outbox and agents) have corresponding tables/sequences in your database:
-
If you are just starting with your application and intend to let Hibernate ORM generate your database schema, then no worries: the entities required by Hibernate Search will be included in the generated schema.
-
Otherwise, you must manually alter your schema to add the necessary tables/sequences.
Once you are done with the above, you’re ready to use Hibernate Search with an outbox. Don’t change any code, and just start your application: it will automatically detect when multiple applications are connected to the same database, and coordinate the index updates accordingly.
Hibernate Search mostly behaves the same when using the outbox-polling coordination strategy as when coordination is disabled. However, there is one key difference: index updates are necessarily asynchronous; they are guaranteed to happen eventually, but not immediately. This means in particular that the configuration property quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy cannot be used when outbox-polling coordination is enabled.
This behavior is consistent with Elasticsearch’s near-real-time search and the recommended way of using Hibernate Search even when coordination is disabled. |
For more information about coordination in Hibernate Search, see this section of the reference documentation.
For more information about configuration options related to coordination, see Configuration of coordination with outbox polling.
AWS request signing
If you need to use Amazon’s managed Elasticsearch service, you will find it requires a proprietary authentication method involving request signing.
You can enable AWS request signing in Hibernate Search by adding a dedicated extension to your project and configuring it.
See the documentation for the Hibernate Search ORM + Elasticsearch AWS extension for more information.
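As a rough sketch — the property names below come from the Hibernate Search ORM + Elasticsearch AWS extension and should be double-checked against its documentation, and the region and credential values are placeholders — the configuration typically looks like this:

```properties
# Sign requests to the Elasticsearch cluster with AWS credentials:
quarkus.hibernate-search-orm.elasticsearch.aws.signing.enabled=true
quarkus.hibernate-search-orm.elasticsearch.aws.region=us-east-1
quarkus.hibernate-search-orm.elasticsearch.aws.credentials.type=default
```

With the default credentials type, the AWS SDK resolves credentials from the usual sources (environment variables, instance profiles, …).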
Management endpoint
Hibernate Search’s management endpoint is considered preview. In preview, backward compatibility and presence in the ecosystem is not guaranteed. Specific improvements might require changing configuration or APIs, or even storage formats, and plans to become stable are under way. Feedback is welcome on our mailing list or as issues in our GitHub issue tracker. |
The Hibernate Search extension provides an HTTP endpoint to reindex your data through the management interface. By default, this endpoint is not available. It can be enabled through configuration properties as shown below.
quarkus.management.enabled=true (1)
quarkus.hibernate-search-orm.management.enabled=true (2)
1 | Enable the management interface. |
2 | Enable Hibernate Search specific management endpoints. |
Once the management endpoints are enabled, data can be re-indexed via /q/hibernate-search/reindex
, where /q
is the default management root path
and /hibernate-search
is the default Hibernate Search root management path.
It (/hibernate-search
) can be changed via configuration property as shown below.
quarkus.hibernate-search-orm.management.root-path=custom-root-path (1)
1 | Use custom-root-path as the root path for Hibernate Search’s management endpoint.
If the default management root path is used, the reindex path then becomes /q/custom-root-path/reindex . |
This endpoint accepts POST
requests with application/json
content type only.
All indexed entities will be re-indexed if an empty request body is submitted.
If only a subset of entities must be re-indexed or
if there is a need to have a custom configuration of the underlying mass indexer
then this information can be passed through the request body as shown below.
{
"filter": {
"types": ["EntityName1", "EntityName2", "EntityName3", ...] (1)
},
"massIndexer":{
"typesToIndexInParallel": 1 (2)
}
}
1 | An array of entity names that should be re-indexed. If unspecified or empty, all entity types will be re-indexed. |
2 | Sets the number of entity types to be indexed in parallel. |
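For example, assuming the management interface listens on its default port 9000 and using the hypothetical entity name Book, a reindexing request for a subset of entities could be submitted like this:

```
curl -X POST -H 'Content-Type: application/json' \
  -d '{"filter": {"types": ["Book"]}}' \
  http://localhost:9000/q/hibernate-search/reindex
```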
The full list of possible filters and available mass indexer configurations is presented in the example below.
{
"filter": { (1)
"types": ["EntityName1", "EntityName2", "EntityName3", ...], (2)
"tenants": ["tenant1", "tenant2", ...] (3)
},
"massIndexer":{ (4)
"typesToIndexInParallel": 1, (5)
"threadsToLoadObjects": 6, (6)
"batchSizeToLoadObjects": 10, (7)
"cacheMode": "IGNORE", (8)
"mergeSegmentsOnFinish": false, (9)
"mergeSegmentsAfterPurge": true, (10)
"dropAndCreateSchemaOnStart": false, (11)
"purgeAllOnStart": true, (12)
"idFetchSize": 100, (13)
"transactionTimeout": 100000, (14)
}
}
1 | Filter object that allows limiting the scope of reindexing. |
2 | An array of entity names that should be re-indexed. If unspecified or empty, all entity types will be re-indexed. |
3 | An array of tenant ids, in case of multi-tenancy. If unspecified or empty, all tenants will be re-indexed. |
4 | Mass indexer configuration object. |
5 | Sets the number of entity types to be indexed in parallel. |
6 | Sets the number of threads to be used to load the root entities. |
7 | Sets the batch size used to load the root entities. |
8 | Sets the cache interaction mode for the data loading tasks. |
9 | Whether each index is merged into a single segment after indexing. |
10 | Whether each index is merged into a single segment after the initial index purge, just before indexing. |
11 | Whether the indexes and their schema (if they exist) should be dropped and re-created before indexing. |
12 | Whether all entities are removed from the indexes before indexing. |
13 | Specifies the fetch size to be used when loading primary keys of objects to be indexed. |
14 | Specifies the timeout of transactions for loading ids and entities to be re-indexed. |
Note that all the properties in the JSON are optional; use only those that are needed. |
For more detailed information on mass indexer configuration see the corresponding section of the Hibernate Search reference documentation.
Submitting the reindexing request will trigger indexing in the background. Mass indexing progress will appear in the application logs.
For testing purposes, it might be useful to know when the indexing finished. Adding the wait_for=finished
query parameter to the URL
will result in the management endpoint returning a chunked response that reports when the indexing starts and then when it finishes.
When working with multiple persistence units, the name of the persistence unit to reindex can be supplied through the
persistence_unit
query parameter: /q/hibernate-search/reindex?persistence_unit=non-default-persistence-unit
.
Further reading
If you are interested in learning more about Hibernate Search, the Hibernate team publishes an extensive reference documentation, as well as a page listing other relevant resources.
FAQ
Why Elasticsearch only?
Hibernate Search supports both a Lucene backend and an Elasticsearch backend.
In the context of Quarkus and to build scalable applications, we thought the latter would make more sense. Thus, we focused our efforts on it.
We don’t have plans to support the Lucene backend in Quarkus for now, though there is an issue tracking progress on such an implementation in the Quarkiverse: quarkiverse/quarkus-hibernate-search-extras#179.
Configuration Reference for Hibernate Search with Hibernate ORM
Main Configuration
Configuration property fixed at build time - All other configuration properties are overridable at runtime
Configuration property |
Type |
Default |
||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Whether Hibernate Search is enabled during the build. If Hibernate Search is disabled during the build, all processing related to Hibernate Search will be skipped,
but it will not be possible to activate Hibernate Search at runtime.
Environment variable: Show more |
boolean |
|
||||||||||||||||||||||||||||||||
A bean reference to a component that should be notified of any failure occurring in a background process (mainly index operations). The referenced bean must implement org.hibernate.search.engine.reporting.FailureHandler. See this section of the reference documentation for more information.
Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
The strategy to use for coordinating between threads or even separate instances of the application, in particular in automatic indexing. See coordination for more information. Environment variable: Show more |
string |
|
||||||||||||||||||||||||||||||||
One or more bean references to the component(s) used to configure the Hibernate Search mapping, in particular programmatically. The referenced beans must implement org.hibernate.search.mapper.orm.mapping.HibernateOrmSearchMappingConfigurer. See Programmatic mapping for an example on how mapping configurers can be used to apply programmatic mappings.
Environment variable: Show more |
list of string |
|||||||||||||||||||||||||||||||||
Whether Hibernate Search should be active for this persistence unit at runtime. If Hibernate Search is not active, it won’t index Hibernate ORM entities, and accessing the SearchMapping/SearchSession of the relevant persistence unit for search or other operations will not be possible. Note that if Hibernate Search is disabled at build time, it cannot be activated at runtime through this property.
boolean |
|
||||||||||||||||||||||||||||||||
The schema management strategy, controlling how indexes and their schema are created, updated, validated or dropped on startup and shutdown. Available values:
See this section of the reference documentation for more information. Environment variable: Show more |
|
|
||||||||||||||||||||||||||||||||
The strategy to use when loading entities during the execution of a search query. Environment variable: Show more |
|
|
||||||||||||||||||||||||||||||||
The fetch size to use when loading entities during the execution of a search query. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
How to synchronize between application threads and indexing,
in particular when relying on (implicit) listener-triggered indexing on entity change,
but also when using a SearchIndexingPlan explicitly. Defines how complete indexing should be before resuming the application thread after a database transaction is committed.
Available values:
This property also accepts a bean reference to a custom implementation of IndexingPlanSynchronizationStrategy. See this section of the reference documentation for more information.
Environment variable: Show more |
string |
|
||||||||||||||||||||||||||||||||
An exhaustive list of all tenant identifiers that may be used by the application when multi-tenancy is enabled. Mainly useful when using the outbox-polling coordination strategy, since it involves setting up one background processor per tenant.
list of string |
|||||||||||||||||||||||||||||||||
Type |
Default |
|||||||||||||||||||||||||||||||||
The version of Elasticsearch used in the cluster. As the schema is generated without a connection to the server, this item is mandatory. It doesn’t have to be the exact version; a major version such as 8, or a major.minor version such as 8.9, is usually enough. There’s no rule of thumb here as it depends on the schema incompatibilities introduced by Elasticsearch versions. In any case, if there is a problem, you will have an error when Hibernate Search tries to connect to the cluster.
ElasticsearchVersion |
|||||||||||||||||||||||||||||||||
Path to a file in the classpath holding custom index settings to be included in the index definition when creating an Elasticsearch index. The provided settings will be merged with those generated by Hibernate Search, including analyzer definitions. When analysis is configured both through an analysis configurer and these custom settings, the behavior is undefined; it should not be relied upon. See this section of the reference documentation for more information. Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
Path to a file in the classpath holding a custom index mapping to be included in the index definition when creating an Elasticsearch index. The file does not need to (and generally shouldn’t) contain the full mapping: Hibernate Search will automatically inject missing properties (index fields) in the given mapping. See this section of the reference documentation for more information. Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
One or more bean references to the component(s) used to configure full text analysis (e.g. analyzers, normalizers). The referenced beans must implement org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer. See Setting up the analyzers for more information.
Environment variable: Show more |
list of string |
|||||||||||||||||||||||||||||||||
The list of hosts of the Elasticsearch servers. Environment variable: Show more |
list of string |
|
||||||||||||||||||||||||||||||||
The protocol to use when contacting Elasticsearch servers. Set to "https" to enable SSL/TLS. Environment variable: Show more |
|
|
||||||||||||||||||||||||||||||||
The username used for authentication. Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
The password used for authentication. Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
The timeout when establishing a connection to an Elasticsearch server. Environment variable: Show more |
|
|||||||||||||||||||||||||||||||||
The timeout when reading responses from an Elasticsearch server. Environment variable: Show more |
|
|||||||||||||||||||||||||||||||||
The timeout when executing a request to an Elasticsearch server. This includes the time needed to wait for a connection to be available, send the request and read the response. Environment variable: Show more |
||||||||||||||||||||||||||||||||||
The maximum number of connections to all the Elasticsearch servers. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
The maximum number of connections per Elasticsearch server. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
Defines if automatic discovery is enabled. Environment variable: Show more |
boolean |
|
||||||||||||||||||||||||||||||||
Refresh interval of the node list. Environment variable: Show more |
|
|||||||||||||||||||||||||||||||||
The size of the thread pool assigned to the backend. Note that number is per backend, not per index. Adding more indexes will not add more threads. As all operations happening in this thread-pool are non-blocking, raising its size above the number of processor cores available to the JVM will not bring noticeable performance benefit. The only reason to alter this setting would be to reduce the number of threads; for example, in an application with a single index with a single indexing queue, running on a machine with 64 processor cores, you might want to bring down the number of threads. Defaults to the number of processor cores available to the JVM on startup. Environment variable: Show more |
int |
|||||||||||||||||||||||||||||||||
Whether partial shard failures are ignored ( Environment variable: Show more |
boolean |
|
||||||||||||||||||||||||||||||||
Whether Hibernate Search should check the version of the Elasticsearch cluster on startup. Set to false if the Elasticsearch cluster may not be available on startup.
boolean |
|
||||||||||||||||||||||||||||||||
The minimal Elasticsearch cluster status required on startup. Environment variable: Show more |
|
|
||||||||||||||||||||||||||||||||
How long we should wait for the status before failing the bootstrap. Environment variable: Show more |
|
|||||||||||||||||||||||||||||||||
The number of indexing queues assigned to each index. Higher values will lead to more connections being used in parallel, which may lead to higher indexing throughput, but incurs a risk of overloading Elasticsearch, i.e. of overflowing its HTTP request buffers and tripping circuit breakers, leading to Elasticsearch giving up on some request and resulting in indexing failures. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
The size of indexing queues. Lower values may lead to lower memory usage, especially if there are many queues, but values that are too low will reduce the likeliness of reaching the max bulk size and increase the likeliness of application threads blocking because the queue is full, which may lead to lower indexing throughput. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
The maximum size of bulk requests created when processing indexing queues. Higher values will lead to more documents being sent in each HTTP request sent to Elasticsearch, which may lead to higher indexing throughput, but incurs a risk of overloading Elasticsearch, i.e. of overflowing its HTTP request buffers and tripping circuit breakers, leading to Elasticsearch giving up on some request and resulting in indexing failures. Note that raising this number above the queue size has no effect, as bulks cannot include more requests than are contained in the queue. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
A bean reference to the component used to configure the Elasticsearch layout: index names, index aliases, … The referenced bean must implement org.hibernate.search.backend.elasticsearch.index.layout.IndexLayoutStrategy. Available built-in implementations:
See this section of the reference documentation for more information.
Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
Type |
Default |
|||||||||||||||||||||||||||||||||
Path to a file in the classpath holding custom index settings to be included in the index definition when creating an Elasticsearch index. The provided settings will be merged with those generated by Hibernate Search, including analyzer definitions. When analysis is configured both through an analysis configurer and these custom settings, the behavior is undefined; it should not be relied upon. See this section of the reference documentation for more information. Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
Path to a file in the classpath holding a custom index mapping to be included in the index definition when creating an Elasticsearch index. The file does not need to (and generally shouldn’t) contain the full mapping: Hibernate Search will automatically inject missing properties (index fields) in the given mapping. See this section of the reference documentation for more information. Environment variable: Show more |
string |
|||||||||||||||||||||||||||||||||
One or more bean references to the component(s) used to configure full text analysis (e.g. analyzers, normalizers). The referenced beans must implement org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer. See Setting up the analyzers for more information.
Environment variable: Show more |
list of string |
|||||||||||||||||||||||||||||||||
The minimal Elasticsearch cluster status required on startup. Environment variable: Show more |
|
|
||||||||||||||||||||||||||||||||
How long we should wait for the status before failing the bootstrap. Environment variable: Show more |
|
|||||||||||||||||||||||||||||||||
The number of indexing queues assigned to each index. Higher values will lead to more connections being used in parallel, which may lead to higher indexing throughput, but incurs a risk of overloading Elasticsearch, i.e. of overflowing its HTTP request buffers and tripping circuit breakers, leading to Elasticsearch giving up on some request and resulting in indexing failures. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
The size of indexing queues. Lower values may lead to lower memory usage, especially if there are many queues, but values that are too low will reduce the likeliness of reaching the max bulk size and increase the likeliness of application threads blocking because the queue is full, which may lead to lower indexing throughput. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
The maximum size of bulk requests created when processing indexing queues. Higher values will lead to more documents being sent in each HTTP request sent to Elasticsearch, which may lead to higher indexing throughput, but incurs a risk of overloading Elasticsearch, i.e. of overflowing its HTTP request buffers and tripping circuit breakers, leading to Elasticsearch giving up on some request and resulting in indexing failures. Note that raising this number above the queue size has no effect, as bulks cannot include more requests than are contained in the queue. Environment variable: Show more |
int |
|
||||||||||||||||||||||||||||||||
Type |
Default |
|||||||||||||||||||||||||||||||||
Root path for reindexing endpoints.
This value will be resolved as a path relative to Environment variable: Show more |
string |
|
||||||||||||||||||||||||||||||||
If the management interface is turned on, the reindexing endpoints will be published under the management interface. This property enables that functionality when set to true.
boolean |
|
About the Duration format
To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information. You can also use a simplified format, starting with a number:
If the value is only a number, it represents time in seconds. If the value is a number followed by ms, it represents time in milliseconds.
In other cases, the simplified format is translated to the java.time.Duration format: a number followed by h, m, or s is prefixed with PT; a number followed by d is prefixed with P.
|
About bean references
First, be aware that referencing beans in configuration properties is optional and, in fact, discouraged:
you can achieve the same results by annotating your beans with @SearchExtension. If you really do want to reference beans using a string value in configuration properties, know that the string is parsed; here are the most common formats:
Other formats are also accepted, but are only useful for advanced use cases. See this section of Hibernate Search’s reference documentation for more information. |
Configuration of coordination with outbox polling
These configuration properties require an additional extension. See Coordination through outbox polling. |
Configuration property fixed at build time - All other configuration properties are overridable at runtime
Configuration property |
Type |
Default |
---|---|---|
Whether the event processor is enabled, i.e. whether events will be processed to perform automatic reindexing on this instance of the application. This can be set to false on application instances that must not process events. See this section of the reference documentation for more information.
boolean |
|
The total number of shards that will form a partition of the entity change events to process. By default, sharding is dynamic and setting this property is not necessary. If you want to control explicitly the number and assignment of shards,
you must configure static sharding and then set this property as well as the assigned shards (see below). See this section of the reference documentation for more information about event processor sharding.
int |
|
Among shards that will form a partition of the entity change events, the shards that will be processed by this application instance. By default, sharding is dynamic and setting this property is not necessary. If you want to control explicitly the number and assignment of shards, you must configure static sharding and then setting this property as well as the total shard count is necessary. Shards are referred to by an index in the range See this section of the reference documentation for more information about event processor sharding. Environment variable: Show more |
list of int |
|
How long to wait for another query to the outbox events table after a query didn’t return any event. Lower values will reduce the time it takes for a change to be reflected in the index, but will increase the stress on the database when there are no new events. See this section of the reference documentation for more information. Environment variable: Show more |
|
|
How long the event processor can poll for events before it must perform a "pulse", updating and checking registrations in the agents table. The pulse interval must be set to a value between the polling interval and one third (1/3) of the expiration interval. Low values (closer to the polling interval) mean less time wasted not processing events when a node joins or leaves the cluster, and reduced risk of wasting time not processing events because an event processor is incorrectly considered disconnected, but more stress on the database because of more frequent checks of the list of agents. High values (closer to the expiration interval) mean more time wasted not processing events when a node joins or leaves the cluster, and increased risk of wasting time not processing events because an event processor is incorrectly considered disconnected, but less stress on the database because of less frequent checks of the list of agents. See this section of the reference documentation for more information. Environment variable: Show more |
|
|
How long an event processor "pulse" remains valid before considering the processor disconnected and forcibly removing it from the cluster. The expiration interval must be set to a value at least 3 times larger than the pulse interval. Low values (closer to the pulse interval) mean less time wasted not processing events when a node abruptly leaves the cluster due to a crash or network failure, but increased risk of wasting time not processing events because an event processor is incorrectly considered disconnected. High values (much larger than the pulse interval) mean more time wasted not processing events when a node abruptly leaves the cluster due to a crash or network failure, but reduced risk of wasting time not processing events because an event processor is incorrectly considered disconnected. See this section of the reference documentation for more information. Environment variable: Show more |
|
|
How many outbox events, at most, are processed in a single transaction. Higher values will reduce the number of transactions opened by the background process
and may increase performance thanks to the first-level cache (persistence context),
but will increase memory usage and in extreme cases may lead to OutOfMemoryError. See this section of the reference documentation for more information.
int |
|
The timeout for transactions processing outbox events. When this property is not set, Hibernate Search will use whatever default transaction timeout is configured in the JTA transaction manager, which may be too low for batch processing and lead to transaction timeouts when processing batches of events. If this happens, set a higher transaction timeout for event processing using this property. See this section of the reference documentation for more information. Environment variable: Show more |
||
How long the event processor must wait before re-processing an event after its previous processing failed. Use the value 0S to re-process failed events as soon as possible, with no delay. See this section of the reference documentation for more information.
|
|
How long to wait for another query to the agent table when actively waiting for event processors to suspend themselves. Low values will reduce the time it takes for the mass indexer agent to detect that event processors finally suspended themselves, but will increase the stress on the database while the mass indexer agent is actively waiting. High values will increase the time it takes for the mass indexer agent to detect that event processors finally suspended themselves, but will reduce the stress on the database while the mass indexer agent is actively waiting. See this section of the reference documentation for more information. Environment variable: Show more |
|
|
How long the mass indexer can wait before it must perform a "pulse", updating and checking registrations in the agent table. The pulse interval must be set to a value between the polling interval and one third (1/3) of the expiration interval. Low values (closer to the polling interval) mean reduced risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected, but more stress on the database because of more frequent updates of the mass indexer agent’s entry in the agent table. High values (closer to the expiration interval) mean increased risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected, but less stress on the database because of less frequent updates of the mass indexer agent’s entry in the agent table. See this section of the reference documentation for more information. Environment variable: Show more |
|
|
How long an event processor "pulse" remains valid before considering the processor disconnected and forcibly removing it from the cluster. The expiration interval must be set to a value at least 3 times larger than the pulse interval. Low values (closer to the pulse interval) mean less time wasted with event processors not processing events when a mass indexer agent terminates due to a crash, but increased risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected. High values (much larger than the pulse interval) mean more time wasted with event processors not processing events when a mass indexer agent terminates due to a crash, but reduced risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected. See this section of the reference documentation for more information. Environment variable: Show more |
|
|
Type |
Default |
|
Configuration for the mapping of entities used for outbox-polling coordination |
Type |
Default |
The database catalog to use for the agent table. |
string |
|
The database schema to use for the agent table. |
string |
|
The name of the agent table. |
string |
|
The UUID generator strategy used for the agent table. Available strategies:
|
|
|
The name of the Hibernate ORM basic type used for representing a UUID in the agent table. Refer to this section of the Hibernate ORM documentation to see the possible UUID representations. Defaults to a special value described in the reference documentation. |
string |
|
The database catalog to use for the outbox event table. |
string |
|
The database schema to use for the outbox event table. |
string |
|
The name of the outbox event table. |
string |
|
The UUID generator strategy used for the outbox event table. Available strategies:
|
|
|
The name of the Hibernate ORM basic type used for representing a UUID in the outbox event table. Refer to this section of the Hibernate ORM documentation to see the possible UUID representations. Defaults to a special value described in the reference documentation. |
string |
|
Type |
Default |
|
Whether the event processor is enabled, i.e. whether events will be processed to perform automatic reindexing on this instance of the application. This can be set to `false` to disable event processing on some application instances, for example to dedicate those instances to serving requests while others handle indexing. See this section of the reference documentation for more information. |
boolean |
|
The total number of shards that will form a partition of the entity change events to process. By default, sharding is dynamic and setting this property is not necessary. If you want to control explicitly the number and assignment of shards, you must configure static sharding and set this property as well as the assigned shards. See this section of the reference documentation for more information about event processor sharding. |
int |
|
Among shards that will form a partition of the entity change events, the shards that will be processed by this application instance. By default, sharding is dynamic and setting this property is not necessary. If you want to control explicitly the number and assignment of shards, you must configure static sharding and set this property as well as the total shard count. Shards are referred to by an index in the range from 0 (inclusive) to the total shard count (exclusive). See this section of the reference documentation for more information about event processor sharding. |
list of int |
|
How long to wait for another query to the outbox events table after a query didn’t return any event. Lower values will reduce the time it takes for a change to be reflected in the index, but will increase the stress on the database when there are no new events. See this section of the reference documentation for more information. |
|
|
How long the event processor can poll for events before it must perform a "pulse", updating and checking registrations in the agents table. The pulse interval must be set to a value between the polling interval and one third (1/3) of the expiration interval. Low values (closer to the polling interval) mean less time wasted not processing events when a node joins or leaves the cluster, and reduced risk of wasting time not processing events because an event processor is incorrectly considered disconnected, but more stress on the database because of more frequent checks of the list of agents. High values (closer to the expiration interval) mean more time wasted not processing events when a node joins or leaves the cluster, and increased risk of wasting time not processing events because an event processor is incorrectly considered disconnected, but less stress on the database because of less frequent checks of the list of agents. See this section of the reference documentation for more information. |
|
|
How long an event processor "pulse" remains valid before considering the processor disconnected and forcibly removing it from the cluster. The expiration interval must be set to a value at least 3 times larger than the pulse interval. Low values (closer to the pulse interval) mean less time wasted not processing events when a node abruptly leaves the cluster due to a crash or network failure, but increased risk of wasting time not processing events because an event processor is incorrectly considered disconnected. High values (much larger than the pulse interval) mean more time wasted not processing events when a node abruptly leaves the cluster due to a crash or network failure, but reduced risk of wasting time not processing events because an event processor is incorrectly considered disconnected. See this section of the reference documentation for more information. |
|
|
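The three intervals above are constrained relative to each other: the pulse interval must lie between the polling interval and one third of the expiration interval. A consistent, purely illustrative combination, assuming the property keys described in this reference:

```properties
# Constraint: polling-interval < pulse-interval <= pulse-expiration / 3
quarkus.hibernate-search-orm.coordination.event-processor.polling-interval=0.100S
quarkus.hibernate-search-orm.coordination.event-processor.pulse-interval=2S
quarkus.hibernate-search-orm.coordination.event-processor.pulse-expiration=30S
```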
How many outbox events, at most, are processed in a single transaction. Higher values will reduce the number of transactions opened by the background process and may increase performance thanks to the first-level cache (persistence context), but will increase memory usage and in extreme cases may lead to out-of-memory errors. See this section of the reference documentation for more information. |
int |
|
The timeout for transactions processing outbox events. When this property is not set, Hibernate Search will use whatever default transaction timeout is configured in the JTA transaction manager, which may be too low for batch processing and lead to transaction timeouts when processing batches of events. If this happens, set a higher transaction timeout for event processing using this property. See this section of the reference documentation for more information. |
||
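Batch size and transaction timeout are typically tuned together: larger batches need a timeout comfortably above the time a batch takes to process. An illustrative sketch, assuming the property keys described in this reference:

```properties
# Larger batches amortize transaction overhead; raise the timeout to match
quarkus.hibernate-search-orm.coordination.event-processor.batch-size=50
quarkus.hibernate-search-orm.coordination.event-processor.transaction-timeout=30S
```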
How long the event processor must wait before re-processing an event after its previous processing failed. See this section of the reference documentation for more information. |
|
|
How long to wait for another query to the agent table when actively waiting for event processors to suspend themselves. Low values will reduce the time it takes for the mass indexer agent to detect that event processors finally suspended themselves, but will increase the stress on the database while the mass indexer agent is actively waiting. High values will increase the time it takes for the mass indexer agent to detect that event processors finally suspended themselves, but will reduce the stress on the database while the mass indexer agent is actively waiting. See this section of the reference documentation for more information. |
|
|
How long the mass indexer can wait before it must perform a "pulse", updating and checking registrations in the agent table. The pulse interval must be set to a value between the polling interval and one third (1/3) of the expiration interval. Low values (closer to the polling interval) mean reduced risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected, but more stress on the database because of more frequent updates of the mass indexer agent’s entry in the agent table. High values (closer to the expiration interval) mean increased risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected, but less stress on the database because of less frequent updates of the mass indexer agent’s entry in the agent table. See this section of the reference documentation for more information. |
|
|
How long an event processor "pulse" remains valid before considering the processor disconnected and forcibly removing it from the cluster. The expiration interval must be set to a value at least 3 times larger than the pulse interval. Low values (closer to the pulse interval) mean less time wasted with event processors not processing events when a mass indexer agent terminates due to a crash, but increased risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected. High values (much larger than the pulse interval) mean more time wasted with event processors not processing events when a mass indexer agent terminates due to a crash, but reduced risk of event processors starting to process events again during mass indexing because a mass indexer agent is incorrectly considered disconnected. See this section of the reference documentation for more information. |
|
About the Duration format
To write duration values, use the standard `java.time.Duration` format. See the `Duration#parse()` Java API documentation for details.

You can also use a simplified format, starting with a number:

-   If the value is only a number, it represents time in seconds.

-   If the value is a number followed by `ms`, it represents time in milliseconds.

In other cases, the simplified format is translated to the `java.time.Duration` format for parsing:

-   If the value is a number followed by `h`, `m`, or `s`, it is prefixed with `PT`.

-   If the value is a number followed by `d`, it is prefixed with `P`.
|
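The simplified forms map onto the standard `java.time.Duration` (ISO-8601) format, which the following self-contained Java snippet demonstrates; the comments show the translation described above:

```java
import java.time.Duration;

public class DurationFormatDemo {
    public static void main(String[] args) {
        // "2h" in the simplified format is prefixed with "PT" -> "PT2H"
        Duration twoHours = Duration.parse("PT2H");
        // "150ms" in the simplified format -> "PT0.150S"
        Duration hundredFiftyMs = Duration.parse("PT0.150S");
        // "1d" in the simplified format is prefixed with "P" -> "P1D"
        Duration oneDay = Duration.parse("P1D");

        System.out.println(twoHours.toMinutes());      // 120
        System.out.println(hundredFiftyMs.toMillis()); // 150
        System.out.println(oneDay.toHours());          // 24
    }
}
```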