Implement a rolling index strategy with Spring Data Elasticsearch 4.1

With the release of version 4.1, Spring Data Elasticsearch now supports Elasticsearch index templates. Index templates allow the user to define settings, mappings and aliases for indices that Elasticsearch creates automatically when documents are saved to an index that does not yet exist.

In this blog post I will show how index templates can be used in combination with Spring Data Repository customizations to implement a rolling index strategy where new indices will be created automatically based on the date.

You should be familiar with the basic concepts of Spring Data Repositories and the use of Spring Data Elasticsearch.

As the most popular use case for rolling indices is storing log entries in Elasticsearch, we will do something similar. Our application will offer an HTTP endpoint where a client can POST a message; this message will be stored in an index named msg-HH-MM, where the index name contains the hour and minute when the message was received. Normally the name would contain the date, but to be able to see this working without waiting a whole day, we need a different naming scheme.

When the user issues a GET request with a search word, the application will search across all indices by using the alias name msg which we will set up as an alias for all the msg-* indices.

Basic setup

The program

The source code for this example is available on GitHub. This project was set up using start.spring.io, selecting a Spring Boot 2.4.0 application with web and spring-data-elasticsearch support and Java version 15.

Note: I make use of newer Java features like local variable type inference (var); this is not necessary for Spring Data Elasticsearch, you can still use Java 8 if you need to.

Elasticsearch

In order to run this example we need an Elasticsearch cluster. I use version 7.9.3 because that is the version that Spring Data Elasticsearch 4.1, the version Spring Boot pulls in, is built with. I have downloaded Elasticsearch and have it running on my machine, accessible at http://localhost:9200. Please adjust the setup in the application configuration at src/main/resources/application.yml accordingly.
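For reference, the configuration might look like the following sketch, using the Spring Boot 2.4 property for the REST client (adjust host and port to your own setup):

```yaml
spring:
  elasticsearch:
    rest:
      uris: http://localhost:9200
```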

Command line client

In order to access our program and to check what is stored in Elasticsearch I use httpie. An alternative would be curl.

The different parts of the application

The entity

The entity we use in this example looks like this:

@Document(indexName = "msg", createIndex = false)
public class Message {
    @Id private String id;

    @Field(type = FieldType.Text)
    private String message;

    @Field(type = FieldType.Date, format = DateFormat.date_hour_minute_second)
    private LocalDateTime timestamp = LocalDateTime.now();

    // getter/setter omitted here for brevity
}

Please note the following points:

  • the index name is set to msg; this will be the alias name that points to all the different indices that will be created. Spring Data Repository methods will use this name unless adapted, which is fine for reading. We will set up the writing part later.
  • the createIndex argument of the @Document annotation is set to false. We don’t want the application to create an index named msg on startup, as Elasticsearch will create the indices automatically when documents are stored.
  • the properties are explicitly annotated with their types, so that the correct index mapping can be stored in the index template and later be applied automatically to a newly created index.

The index template

To initialize the index template, we use a Spring Component:

@Component
public class TemplateInitializer {

    private static final String TEMPLATE_NAME = "msg-template";
    private static final String TEMPLATE_PATTERN = "msg-*";
    
    private final ElasticsearchOperations operations;

    public TemplateInitializer(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    @Autowired
    public void setup() {

        var indexOps = operations.indexOps(Message.class);

        if (!indexOps.existsTemplate(TEMPLATE_NAME)) {

            var mapping = indexOps.createMapping();

            var aliasActions = new AliasActions().add(
                    new AliasAction.Add(AliasActionParameters.builderForTemplate()
                            .withAliases(indexOps.getIndexCoordinates().getIndexNames())
                            .build())
            );

            var request = PutTemplateRequest.builder(TEMPLATE_NAME, TEMPLATE_PATTERN)
                    .withMappings(mapping)
                    .withAliasActions(aliasActions)
                    .build();

            indexOps.putTemplate(request);
        }
    }
}

This bean class has a method setup() that is annotated with @Autowired. A method with this annotation is executed once all the beans in the Spring ApplicationContext are set up, so in the setup() method we can be sure that the injected ElasticsearchOperations instance is available.

To work with the index templates we need an implementation of the IndexOperations interface which we get from the operations object. We then check if the index template already exists, as this initialization should only be done once.

If the index template does not exist, we first create the index mapping with indexOps.createMapping(). As the indexOps was bound to the Message class when we created it, the annotations from the Message class are used to create the mapping.

The next step is to create an AliasAction that will add an alias to an index when it is created. The name for the alias is retrieved from the Message class with indexOps.getIndexCoordinates().getIndexNames().

We then put the mapping and the alias action into a PutTemplateRequest, together with a name for the template and the pattern that determines which indices this template is applied to, and send it off to Elasticsearch.

The repository

The Spring Data Repository we use is pretty simple:

public interface MessageRepository extends ElasticsearchRepository<Message, String> {

    SearchHits<Message> searchAllBy();

    SearchHits<Message> searchAllByMessage(String text);
}

It extends ElasticsearchRepository and defines one method to retrieve all messages and a second one to search for text in a message.

The repository customization

We now need to customize the repository, as we want our own methods to be used when saving Message objects to the index; in these methods we will set the correct index name. We do this by defining a new interface CustomMessageRepository. As we want to redefine methods that are already defined in the CrudRepository interface (which our MessageRepository already extends), it is important that our methods have exactly the same signature as the methods from CrudRepository. This is the reason we need to make this interface generic:

public interface CustomMessageRepository<T> {

    <S extends T> S save(S entity);

    <S extends T> Iterable<S> saveAll(Iterable<S> entities);
}

We provide an implementation of this interface in the class CustomMessageRepositoryImpl. This must have the same name as the interface with the suffix Impl, so that Spring Data can pick up this implementation:

public class CustomMessageRepositoryImpl implements CustomMessageRepository<Message> {

    final private ElasticsearchOperations operations;

    public CustomMessageRepositoryImpl(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    @Override
    public <S extends Message> S save(S entity) {
        return operations.save(entity, indexName());
    }

    @Override
    public <S extends Message> Iterable<S> saveAll(Iterable<S> entities) {
        return operations.save(entities, indexName());
    }

    public IndexCoordinates indexName() {
        var indexName = "msg-" +
                LocalTime.now().truncatedTo(ChronoUnit.MINUTES).toString().replace(':', '-');
        return IndexCoordinates.of(indexName);
    }
}

We have an ElasticsearchOperations instance injected (no need to annotate this class with @Component, Spring Data detects it by the class name and does the injection). The index name is provided by the indexName() method, which uses the hour and minute of the current time to build an index name of the pattern msg-HH-MM. A real-life scenario would probably use the date instead of the time, but as we want to test this with different entities and not wait a whole day between inserting them, this will do for now.
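To illustrate the naming scheme, here is a small standalone sketch of the same truncate-and-replace logic (the helper indexNameFor is just for demonstration and not part of the example project):

```java
import java.time.LocalTime;
import java.time.temporal.ChronoUnit;

public class IndexNameDemo {

    // mirrors the logic of indexName(): truncate to the minute, replace ':' with '-'
    static String indexNameFor(LocalTime time) {
        return "msg-" + time.truncatedTo(ChronoUnit.MINUTES).toString().replace(':', '-');
    }

    public static void main(String[] args) {
        // a message received at 22:10:58 goes to the index msg-22-10
        System.out.println(indexNameFor(LocalTime.of(22, 10, 58)));
    }
}
```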

In the implementations of our save methods, we call the save method of ElasticsearchOperations but provide our own index name, so that the one from the @Document annotation is not used.

A last step we need to do is to have our MessageRepository implement this new repository as well:

public interface MessageRepository extends ElasticsearchRepository<Message, String>, CustomMessageRepository<Message> {

    SearchHits<Message> searchAllBy();

    SearchHits<Message> searchAllByMessage(String text);
}

The controller

And of course we need something to test all this, so here we have a simple controller to store and retrieve messages:

@RestController
@RequestMapping("/messages")
public class MessageController {

    private final MessageRepository repository;

    public MessageController(MessageRepository repository) {
        this.repository = repository;
    }

    @PostMapping
    public Message add(@RequestBody Message message) {
        return repository.save(message);
    }

    @GetMapping
    public SearchHits<Message> messages() {
        return repository.searchAllBy();
    }

    @GetMapping("/{text}")
    public SearchHits<Message> messages(@PathVariable("text") String text) {
        return repository.searchAllByMessage(text);
    }
}

This is just a plain old Spring REST controller with nothing special.

Let’s see it in action

Now let’s start up the program and check what we have (remember, I use httpie as a client).

In the beginning there are no indices:

$ http :9200/_cat/indices
HTTP/1.1 200 OK
content-length: 0
content-type: text/plain; charset=UTF-8

We check out the templates:

$ http :9200/_template/msg-template
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 165
content-type: application/json; charset=UTF-8

{
    "msg-template": {
        "aliases": {
            "msg": {}
        },
        "index_patterns": [
            "msg-*"
        ],
        "mappings": {
            "properties": {
                "message": {
                    "type": "text"
                },
                "timestamp": {
                    "format": "date_hour_minute_second",
                    "type": "date"
                }
            }
        },
        "order": 0,
        "settings": {}
    }
}

The template definition with the mapping and alias definition is there. Now let’s add an entry:

$ http post :8080/messages message="this is the first message"
HTTP/1.1 200
Connection: keep-alive
Content-Type: application/json
Date: Tue, 17 Nov 2020 21:10:59 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked

{
    "id": "TwYL2HUBIlu2470f4r6Y",
    "message": "this is the first message",
    "timestamp": "2020-11-17T22:10:58.541117"
}

We see that this message was persisted at 22:10, what about the indices?

$ http :9200/_cat/indices
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 83
content-type: text/plain; charset=UTF-8

yellow open msg-22-10 bFfnss5wR8CuLOmSfJPDDw 1 1 1 0 4.5kb 4.5kb

We have a new index named msg-22-10, let’s check its setup:

$ http :9200/msg-22-10
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 326
content-type: application/json; charset=UTF-8

{
    "msg-22-10": {
        "aliases": {
            "msg": {}
        },
        "mappings": {
            "properties": {
                "_class": {
                    "fields": {
                        "keyword": {
                            "ignore_above": 256,
                            "type": "keyword"
                        }
                    },
                    "type": "text"
                },
                "message": {
                    "type": "text"
                },
                "timestamp": {
                    "format": "date_hour_minute_second",
                    "type": "date"
                }
            }
        },
        "settings": {
            "index": {
                "creation_date": "1605647458601",
                "number_of_replicas": "1",
                "number_of_shards": "1",
                "provided_name": "msg-22-10",
                "routing": {
                    "allocation": {
                        "include": {
                            "_tier_preference": "data_content"
                        }
                    }
                },
                "uuid": "bFfnss5wR8CuLOmSfJPDDw",
                "version": {
                    "created": "7100099"
                }
            }
        }
    }
}

Let’s add another one:

$ http post :8080/messages message="this is the second message"                                           
HTTP/1.1 200
Connection: keep-alive
Content-Type: application/json
Date: Tue, 17 Nov 2020 21:13:52 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked

{
    "id": "UAYO2HUBIlu2470fiL7G",
    "message": "this is the second message",
    "timestamp": "2020-11-17T22:13:52.336695"
}


$ http :9200/_cat/indices
HTTP/1.1 200 OK
content-encoding: gzip
content-length: 112
content-type: text/plain; charset=UTF-8

yellow open msg-22-13 gvs12CQvTOmdvqsQz7k6yw 1 1 1 0 4.5kb 4.5kb
yellow open msg-22-10 bFfnss5wR8CuLOmSfJPDDw 1 1 1 0 4.5kb 4.5kb

So we have two indices now. Now let’s get all the entries from our application:

$ http :8080/messages
HTTP/1.1 200
Connection: keep-alive
Content-Type: application/json
Date: Tue, 17 Nov 2020 21:15:57 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked

{
    "aggregations": null,
    "empty": false,
    "maxScore": 1.0,
    "scrollId": null,
    "searchHits": [
        {
            "content": {
                "id": "TwYL2HUBIlu2470f4r6Y",
                "message": "this is the first message",
                "timestamp": "2020-11-17T22:10:58"
            },
            "highlightFields": {},
            "id": "TwYL2HUBIlu2470f4r6Y",
            "index": "msg-22-10",
            "innerHits": {},
            "nestedMetaData": null,
            "score": 1.0,
            "sortValues": []
        },
        {
            "content": {
                "id": "UAYO2HUBIlu2470fiL7G",
                "message": "this is the second message",
                "timestamp": "2020-11-17T22:13:52"
            },
            "highlightFields": {},
            "id": "UAYO2HUBIlu2470fiL7G",
            "index": "msg-22-13",
            "innerHits": {},
            "nestedMetaData": null,
            "score": 1.0,
            "sortValues": []
        }
    ],
    "totalHits": 2,
    "totalHitsRelation": "EQUAL_TO"
}

We get both entries. As we are returning SearchHits<Message>, we also get the information in which index each result was found; this is important if you want to edit one of these entries and store it again in its original index.

Let’s sum it up

We have defined and stored an index template that allows us to specify mappings and aliases for automatically created indices. We have set up our application to read from the alias and to write to a dynamically created index name, and so have implemented a rolling index pattern for our Elasticsearch storage, all from within Spring Data Elasticsearch.

I hope you enjoyed this example.

How to use Elasticsearch’s range types with Spring Data Elasticsearch

Elasticsearch allows the data that is stored in a document to be not only of elementary types, but also of range types, see the documentation. With a short example I will explain these range types and how to use them in Spring Data Elasticsearch (the current version being 4.0.3).

For this example we want to be able to answer the question: “Who was president of the United States of America in the year X?”. We will store in Elasticsearch a document describing a president with the name and his term, defined by a range of years with a from and a to value. We will then query the index for documents where this term range contains a given year.

The first thing we need to define is our entity. I named it President:

@Document(indexName = "presidents")
public class President {
    @Id
    private String id;

    @Field(type = FieldType.Text)
    private String name;

    @Field(type = FieldType.Integer_Range)
    private Term term;

    static President of(String name, Integer from, Integer to) {
        return new President(name, new Term(from, to));
    }

    public President() {
    }

    public President(String name, Term term) {
        this(UUID.randomUUID().toString(), name, term);
    }

    public President(String id, String name, Term term) {
        this.id = id;
        this.name = name;
        this.term = term;
    }

    // getter/setter

    static class Term {
        @Field(name = "gte")
        private Integer from;
        @Field(name = "lte")
        private Integer to;

        public Term() {
        }

        public Term(Integer from, Integer to) {
            this.from = from;
            this.to = to;
        }

        // getter/setter
    }
}

There are the standard annotations for a Spring Data Elasticsearch entity like @Document and @Id, but in addition there is the property term that is annotated with @Field(type = FieldType.Integer_Range). This marks it as an integer range property. The Term class is defined as an inner class (not to be confused with the Elasticsearch term concept); it defines the term of a president with the two values from and to. Elasticsearch requires the fields of a range to be named gte and lte; we achieve this by setting these names in the @Field annotations on the two properties.
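With these field names in place, the JSON source of a stored president carries the range in the form Elasticsearch expects; it looks roughly like this (a sketch, the id and the _class entry that Spring Data adds are omitted):

```json
{
  "name": "Barack Obama",
  "term": {
    "gte": 2009,
    "lte": 2017
  }
}
```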

The rest is just a basic repository:

public interface PresidentRepository extends ElasticsearchRepository<President, String> {
    SearchHits<President> searchByTerm(Integer year);
}

Here we use a single Integer as value because Elasticsearch does the magic by finding the corresponding entries where the searched value is in the range of the stored documents.

And of course we have a controller using it. This controller has one endpoint that loads the presidents since World War II into Elasticsearch, and a second one that returns the desired results:

@RestController
@RequestMapping("presidents")
public class PresidentController {

    private final PresidentRepository repository;

    public PresidentController(PresidentRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/load")
    public void load() {
        repository.saveAll(Arrays.asList(
                President.of("Dwight D Eisenhower", 1953, 1961),
                President.of("Lyndon B Johnson", 1963, 1969),
                President.of("Richard Nixon", 1969, 1974),
                President.of("Gerald Ford", 1974, 1977),
                President.of("Jimmy Carter", 1977, 1981),
                President.of("Ronald Reagan", 1981, 1989),
                President.of("George Bush", 1989, 1993),
                President.of("Bill Clinton", 1993, 2001),
                President.of("George W Bush", 2001, 2009),
                President.of("Barack Obama", 2009, 2017),
                President.of("Donald Trump", 2017, 2021)));
    }

    @GetMapping("/term/{year}")
    public SearchHits<President> searchByTerm(@PathVariable Integer year) {
        return repository.searchByTerm(year);
    }
}

See it in action (I am using HTTPie); my application is listening on port 9090:

$ http -b :9090/presidents/term/2009
{
    "aggregations": null,
    "empty": false,
    "maxScore": 1.0,
    "scrollId": null,
    "searchHits": [
        {
            "content": {
                "id": "c3a3a0d0-d835-4a02-a2e8-20cc1c0e9285",
                "name": "George W Bush",
                "term": {
                    "from": 2001,
                    "to": 2009
                }
            },
            "highlightFields": {},
            "id": "c3a3a0d0-d835-4a02-a2e8-20cc1c0e9285",
            "score": 1.0,
            "sortValues": []
        },
        {
            "content": {
                "id": "36416746-ff11-4243-a4f3-a6bb0cff9a93",
                "name": "Barack Obama",
                "term": {
                    "from": 2009,
                    "to": 2017
                }
            },
            "highlightFields": {},
            "id": "36416746-ff11-4243-a4f3-a6bb0cff9a93",
            "score": 1.0,
            "sortValues": []
        }
    ],
    "totalHits": 2,
    "totalHitsRelation": "EQUAL_TO"
}

$ http -b :9090/presidents/term/2000
{
    "aggregations": null,
    "empty": false,
    "maxScore": 1.0,
    "scrollId": null,
    "searchHits": [
        {
            "content": {
                "id": "984fdf87-a7d8-4dc2-b2e8-5dd948065147",
                "name": "Bill Clinton",
                "term": {
                    "from": 1993,
                    "to": 2001
                }
            },
            "highlightFields": {},
            "id": "984fdf87-a7d8-4dc2-b2e8-5dd948065147",
            "score": 1.0,
            "sortValues": []
        }
    ],
    "totalHits": 1,
    "totalHitsRelation": "EQUAL_TO"
}

So just with putting the right types and names into our @Field annotations we are able to use the range types of Elasticsearch in our Spring Data Elasticsearch application.

Search entities within a geographic distance with Spring Data Elasticsearch 4

A couple of months ago I published the post Using geo-distance sort in Spring Data Elasticsearch 4. In the comments there came up the question “What about searching within a distance?”

Well, this is not supported by query derivation from the method name, but it can easily be done with a custom repository implementation (see the documentation for more information about that).

I updated the example – which is available on GitHub – and will explain what is needed for this implementation. I won’t describe the entity and setup, please check the original post for that.

The custom repository interface

First we need to define a new repository interface that defines the method we want to provide:

public interface FoodPOIRepositoryCustom {

    /**
     * search all {@link FoodPOI} that are within a given distance of a point
     *
     * @param geoPoint
     *     the center point
     * @param distance
     *     the distance
     * @param unit
     *     the distance unit
     * @return the found entities
     */
    List<SearchHit<FoodPOI>> searchWithin(GeoPoint geoPoint, Double distance, String unit);
}

The custom repository implementation

Next we need to provide an implementation; it is important that this class is named like the interface with the suffix “Impl”:

public class FoodPOIRepositoryCustomImpl implements FoodPOIRepositoryCustom {

    private final ElasticsearchOperations operations;

    public FoodPOIRepositoryCustomImpl(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    @Override
    public List<SearchHit<FoodPOI>> searchWithin(GeoPoint geoPoint, Double distance, String unit) {

        Query query = new CriteriaQuery(
          new Criteria("location").within(geoPoint, distance.toString() + unit)
        );

        // add a sort to get the actual distance back in the sort value
        Sort sort = Sort.by(new GeoDistanceOrder("location", geoPoint).withUnit(unit));
        query.addSort(sort);

        return operations.search(query, FoodPOI.class).getSearchHits();
    }
}

In this implementation we have an ElasticsearchOperations instance injected by Spring. In the method implementation we build a CriteriaQuery that specifies the distance query we want. In addition to that we add a sort with a GeoDistanceOrder to have the actual distance of the found entities in the output. We pass this query to the ElasticsearchOperations instance and return the search hits.
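The distance passed to Criteria.within is a string combining value and unit; a tiny standalone sketch of how that parameter is built from the method arguments (purely illustrative, the helper is not part of the project):

```java
public class DistanceParamDemo {

    // mirrors the concatenation used in searchWithin: 10.0 and "km" -> "10.0km"
    static String distanceParam(Double distance, String unit) {
        return distance.toString() + unit;
    }

    public static void main(String[] args) {
        System.out.println(distanceParam(10.0, "km"));
    }
}
```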

Adapt the repository

We need to add the new interface to our FoodPOIRepository definition, which otherwise is unchanged:

public interface FoodPOIRepository extends ElasticsearchRepository<FoodPOI, String>, FoodPOIRepositoryCustom {

    List<SearchHit<FoodPOI>> searchTop3By(Sort sort);

    List<SearchHit<FoodPOI>> searchTop3ByName(String name, Sort sort);
}

Use it in the controller

In the rest controller, there is a new method that uses the distance search:

@PostMapping("/within")
List<ResultData> withinDistance(@RequestBody RequestData requestData) {

    GeoPoint location = new GeoPoint(requestData.getLat(), requestData.getLon());

    List<SearchHit<FoodPOI>> searchHits
        = repository.searchWithin(location, requestData.distance, requestData.unit);

    return toResultData(searchHits);
}

private List<ResultData> toResultData(List<SearchHit<FoodPOI>> searchHits) {
    return searchHits.stream()
        .map(searchHit -> {
            Double distance = (Double) searchHit.getSortValues().get(0);
            FoodPOI foodPOI = searchHit.getContent();
            return new ResultData(foodPOI.getName(), foodPOI.getLocation(), distance);
        }).collect(Collectors.toList());
}

We extract the needed parameters from the requestData that came in, call our repository method and convert the results to our output format.

And that’s it

So with a small custom repository implementation we were able to add the desired functionality to our repository.

mapjfx display problem on Windows 10 seems solved

For some time now there was a bug where the map was not displayed properly on some Windows systems.

It seems this was caused by a bug in the WebView of JavaFX: https://bugs.openjdk.java.net/browse/JDK-8234471. Thanks to https://github.com/vewert and https://github.com/Abu-Abdullah for investigating this.

This issue was fixed with JavaFX 15; I tried this on a virtual machine with Windows 10 and could not reproduce the error anymore.

There is no need to update mapjfx to JavaFX 15 (as macOS and *nix are not hit by this bug). If you are on Windows 10, you need to add the following dependency to your application:

<dependency>
    <groupId>org.openjfx</groupId>
    <artifactId>javafx-web</artifactId>
    <version>16-ea+1</version>
</dependency>

I tried 16-ea+1 and 15-ea+8; the version should be the same as the one used for the whole application.

 

Use an index name defined by the entity to store data in Spring Data Elasticsearch 4.0

When using Spring Data Elasticsearch (I am referencing the current version 4.0.2), the name of the index where the documents are stored is normally taken from the @Document annotation of the entity class – here it’s books:

@Document(indexName="books")
public class Book {
  // ...
}

Recently, in a discussion on a pull request for Spring Data Elasticsearch, someone mentioned that she needed a possibility to derive the index name from the entity itself, as entities might go to different indices.

In this post I will show how this can be done by using Spring Data Repository customization by providing a custom implementation for the save method. A complete solution would need to customize saveAll and other methods as well, but I will restrict this here to just one method.

The Hotel entity

For this post I will use an entity describing a hotel, with the idea that hotels from different countries should be stored in different Elasticsearch indices. The index name in the annotation is a wildcard name so that when searching for hotels all indices are considered.

Hotel.java

package com.sothawo.springdataelastictest;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.lang.Nullable;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
@Document(indexName = "hotel-*", createIndex = false)
public class Hotel {
    @Id
    @Nullable
    private String id;

    @Field(type = FieldType.Text)
    @Nullable
    private String name;

    @Field(type = FieldType.Keyword)
    @Nullable
    private String countryCode;

    // getter/setter ...
}

The custom repository

We need to define a custom repository interface that defines the methods we want to implement. Since we want to customize the save method that ElasticsearchRepository inherits from CrudRepository, we need to use the very same method signature including the generics:

CustomHotelRepository.java

package com.sothawo.springdataelastictest;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
public interface CustomHotelRepository<T> {
    <S extends T> S save(S entity);
}

The next class to provide is an implementation of this interface. It is important that the implementation class is named like the interface with a Impl suffix:

CustomHotelRepositoryImpl.java

package com.sothawo.springdataelastictest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.IndexOperations;
import org.springframework.data.elasticsearch.core.document.Document;
import org.springframework.data.elasticsearch.core.mapping.IndexCoordinates;
import org.springframework.lang.NonNull;
import org.springframework.lang.Nullable;

import java.util.concurrent.ConcurrentHashMap;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
@SuppressWarnings("unused")
public class CustomHotelRepositoryImpl implements CustomHotelRepository<Hotel> {

    private static final Logger LOG = LoggerFactory.getLogger(CustomHotelRepositoryImpl.class);

    private final ElasticsearchOperations operations;

    private final ConcurrentHashMap<String, IndexCoordinates> knownIndexCoordinates = new ConcurrentHashMap<>();
    @Nullable
    private Document mapping;

    @SuppressWarnings("unused")
    public CustomHotelRepositoryImpl(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    @Override
    public <S extends Hotel> S save(S hotel) {

        IndexCoordinates indexCoordinates = getIndexCoordinates(hotel);
        LOG.info("saving {} to {}", hotel, indexCoordinates);

        S saved = operations.save(hotel, indexCoordinates);

        operations.indexOps(indexCoordinates).refresh();

        return saved;
    }

    @NonNull
    private <S extends Hotel> IndexCoordinates getIndexCoordinates(S hotel) {

        String indexName = "hotel-" + hotel.getCountryCode();
        return knownIndexCoordinates.computeIfAbsent(indexName, i -> {

                IndexCoordinates indexCoordinates = IndexCoordinates.of(i);
                IndexOperations indexOps = operations.indexOps(indexCoordinates);

                if (!indexOps.exists()) {
                    indexOps.create();

                    if (mapping == null) {
                        mapping = indexOps.createMapping(Hotel.class);
                    }

                    indexOps.putMapping(mapping);
                }
                return indexCoordinates;
            }
        );
    }
}

This implementation is a Spring bean (no need to add @Component; Spring Data picks it up by its class name) and so can use dependency injection. Let me explain the code.

Line 22: the ElasticsearchOperations object we will use to store the entity in the desired index, this is autowired by constructor injection in lines 29-31

Line 24-26: As we want to make sure that the index we write to exists and has the correct mapping, we keep track of which indices we already know. This is used in the getIndexCoordinates method explained later.

Line 34 to 44: This is the actual implementation of the save operation. First we call getIndexCoordinates which will make sure the index exists. We pass the indexCoordinates into the save method of the ElasticsearchOperations instance. If we would use ElasticsearchOperations.save(hotel), the name from the @Document annotation would be used. But when passing an IndexCoordinates as second parameter, the index name from this is used to store the entity. In line 41 there is a call to refresh, this is the behaviour of the original ElasticsearchRepository.save() method, so we do the same here. If you do not need the immediate refresh, omit this line.

Lines 47 to 76: as Spring Data Elasticsearch does not yet support index templates (these will come with version 4.1), this method ensures that the first time an entity is saved to an index, the index is created if necessary and the mapping is written to the newly created index.
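The caching idea in getIndexCoordinates can be illustrated in isolation. The following plain-Java sketch (no Elasticsearch involved; createIndex is a stand-in for the indexOps.create() and putMapping() calls, and the names are chosen for this example) shows that computeIfAbsent runs the index creation at most once per index name:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the caching in getIndexCoordinates: the expensive index
// creation runs at most once per index name. createIndex() stands in
// for the indexOps.create() / putMapping() calls in the real code.
public class IndexNameCache {

    private final Map<String, String> knownIndices = new ConcurrentHashMap<>();
    final AtomicInteger creations = new AtomicInteger();

    public String indexFor(String countryCode) {
        String indexName = "hotel-" + countryCode;
        return knownIndices.computeIfAbsent(indexName, name -> {
            createIndex(name); // only reached when the name was not yet known
            return name;
        });
    }

    private void createIndex(String name) {
        creations.incrementAndGet(); // real code would create index + mapping here
    }

    public static void main(String[] args) {
        IndexNameCache cache = new IndexNameCache();
        cache.indexFor("de");
        cache.indexFor("de"); // cached, no second creation
        cache.indexFor("fr");
        System.out.println(cache.creations.get()); // 2
    }
}
```

Because ConcurrentHashMap.computeIfAbsent is atomic, two threads saving to the same new index concurrently will not both trigger the index creation.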

The HotelRepository to use in the application

We now need to combine our custom repository with the ElasticsearchRepository from Spring Data Elasticsearch:

HotelRepository.java

package com.sothawo.springdataelastictest;

import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
public interface HotelRepository extends ElasticsearchRepository<Hotel, String>, CustomHotelRepository<Hotel> {
    SearchHits<Hotel> searchAllBy();
}

Here we combine the two interfaces and define an additional method that returns all hotels in a SearchHits object.

Use the repository in the code

The only thing that’s left is to use this repository, for example in a REST controller:

HotelController.java

package com.sothawo.springdataelastictest;

import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
@RestController
@RequestMapping("/hotels")
public class HotelController {

    private final HotelRepository repository;

    public HotelController(HotelRepository repository) {
        this.repository = repository;
    }

    @GetMapping()
    public SearchHits<Hotel> all() {
        return repository.searchAllBy();
    }

    @PostMapping()
    public Hotel save(@RequestBody Hotel hotel) {
        return repository.save(hotel);
    }
}

This is a standard controller which has a HotelRepository instance injected (which Spring Data Elasticsearch will create for us). This looks exactly like it would without our customization. The difference is that the call to save() ends up in our custom implementation.

Conclusion

This post shows how easy it is to provide custom implementations for the methods that are normally provided by Spring Data Repositories (not just in Spring Data Elasticsearch) if custom logic is needed.

mapjfx 2.15.0 and 1.33.0 released adding circles and OpenLayers 6.4.2

I just released mapjfx versions 1.33.0 and 2.15.0; they will be available in Maven Central:

  <dependency>
    <groupId>com.sothawo</groupId>
    <artifactId>mapjfx</artifactId>
    <version>1.33.0</version>
  </dependency>
  <dependency>
    <groupId>com.sothawo</groupId>
    <artifactId>mapjfx</artifactId>
    <version>2.15.0</version>
  </dependency>

1.33.0 is built using Java 8 and 2.15.0 uses Java 11.

Circles can now be added to a map by giving the center coordinates and the radius in meters, with custom coloring and transparency. Thanks to Hanwoo Kim for this contribution!

The OpenLayers version now is 6.4.2.

How to provide a dynamic index name in Spring Data Elasticsearch using SpEL

In Spring Data Elasticsearch – at the time of writing, version 4.0 is the current version – the name of an index is normally defined by the @Document annotation on the entity class. For the following examples let’s assume we want to write some log entries to Elasticsearch with our application. We use the following entity:

@Document(indexName = "log")
public class LogEntity {
    @Id
    private String id = UUID.randomUUID().toString();

    @Field(type = FieldType.Text)
    private String text;

    @Field(name = "log-time", type = FieldType.Date, format = DateFormat.basic_date_time)
    private ZonedDateTime logTime = ZonedDateTime.now();

    public String getId() {
        return id;
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    public ZonedDateTime getLogTime() {
        return logTime;
    }

    public void setLogTime(ZonedDateTime logTime) {
        this.logTime = logTime;
    }
}

Here the index name is the fixed name log.

It is possible to use a dynamically defined name for an index by using the Spring Expression Language (SpEL). Important: we need to use a SpEL template expression, that is, an expression enclosed in #{}. This allows for the following setups:

Use a value from the application configuration

Let’s assume we have the following entry in the application.properties file:

index.prefix=test

We then use this code

@Document(indexName = "#{@environment.getProperty('index.prefix')}-log")

and the index name to use changes to test-log.

Use a value provided by a static method of some class

The second example shows how to call a static function to get a dynamic value. We use the following definition to add the current date to the index name:

@Document(indexName = "log-#{T(java.time.LocalDate).now().toString()}")

Currently this would provide an index name of log-2020-07-28.

Use a value provided by a Spring bean

For the third case we provide a bean that will give us a dynamically created string to be used as part of the index name.

@Component
public class LogIndexNameProvider {

    public String timeSuffix() {
        return LocalTime.now().truncatedTo(ChronoUnit.MINUTES).toString().replace(':', '-');
    }
}

This bean, named logIndexNameProvider, can return a String that contains the current time as hh-mm (I would not use this for naming indices, but this is just an example).
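The suffix logic itself is plain Java and can be checked in isolation. Here is a small sketch mirroring the bean above (timeSuffix takes the time as a parameter here so the result is reproducible; the bean uses LocalTime.now()):

```java
import java.time.LocalTime;
import java.time.temporal.ChronoUnit;

// Mirrors logIndexNameProvider.timeSuffix(): truncate to minutes, then
// replace ':' so the value is safe to use inside an index name.
public class TimeSuffixDemo {

    static String timeSuffix(LocalTime time) {
        return time.truncatedTo(ChronoUnit.MINUTES).toString().replace(':', '-');
    }

    public static void main(String[] args) {
        System.out.println(timeSuffix(LocalTime.of(8, 25)));  // 08-25
        System.out.println(timeSuffix(LocalTime.of(22, 7)));  // 22-07
    }
}
```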

Changing the definition to

@Document(indexName = "log-#{@logIndexNameProvider.timeSuffix()}")

will now create index names like log-08-25 or log-22-07.

Of course we can mix all of these together: add a prefix from the configuration, append the current date.
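A combined expression could look like this (a sketch using the index.prefix property from above; with index.prefix=test it would yield names like test-log-2020-07-28):

```java
@Document(indexName = "#{@environment.getProperty('index.prefix')}-log-#{T(java.time.LocalDate).now().toString()}")
public class LogEntity {
    // fields as above
}
```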

Important notice:

The evaluation of SpEL for index names is only done for index names defined in the @Document annotation. It is not done for index names that are passed as an IndexCoordinates parameter to the different methods of the ElasticsearchOperations or IndexOperations interfaces. If it were allowed there, it would be easy to set up a scenario where an expression is read from some outside source. Someone might then send something like "log-#{T(java.lang.Runtime).getRuntime().exec(new String[]{'/bin/rm', '/tmp/somefile'})}", which would not provide an index name, but delete files on your computer.

Using geo-distance sort in Spring Data Elasticsearch 4

The release of Spring Data Elasticsearch in version 4.0 (see the documentation) brings two new features that now enable users to use geo-distance sorts in repository queries: The first is a new class GeoDistanceOrder and the second is a new return type for repository methods SearchHit<T>. In this post I will show how easy it is to use these classes to answer questions like “Which pubs are the nearest to a given location?”.

The source code

The complete runnable code used for this post is available on GitHub. In order to run the application you will need Java 8 or higher and a running instance of Elasticsearch. If this is not accessible at localhost:9200 you need to set the correct value in the src/main/resources/application.yaml file.

Update 12.09.2020

The original code was a little extended for the follow-up post Search entities within a geographic distance with Spring Data Elasticsearch 4

The sample data

For this sample application I use a CSV file with POI data from OpenStreetMap that contains POIs in Germany which are categorized as food-related, like restaurants, pubs, fast food and more. Altogether there are 826,843 records.

When the application is started, the index in Elasticsearch is created and loaded with the data if it does not yet exist. So the first startup takes a little longer, the progress can be seen on the console. Within the application, these POIs are modelled by the following entity:

@Document(indexName = "foodpois")
public class FoodPOI {
    @Id
    private String id;
    @Field(type = FieldType.Text)
    private String name;
    @Field(type = FieldType.Integer)
    private Integer category;
    private GeoPoint location;
    // constructors, getter/setter left out for brevity
}

The interesting properties for this blog post are the location and the name.

The Repository

In order to search the data we need a Repository Definition:

public interface FoodPOIRepository extends ElasticsearchRepository<FoodPOI, String> {
    List<SearchHit<FoodPOI>> searchTop3By(Sort sort);
    List<SearchHit<FoodPOI>> searchTop3ByName(String name, Sort sort);
}

We have two methods defined: the first we will use to search for any POI near a given point, with the second one we can search for POIs by name. Defining these methods in the interface is all we need, as Spring Data Elasticsearch will create the implementation for them under the hood by analyzing the method names and parameters.

In Spring Data Elasticsearch before version 4 we could only get a List<FoodPOI> from a repository method. But now there is the SearchHit<T> class, which not only contains the entity, but also other values like a score, highlights or – what we need here – the sort value. When doing a geo-distance sort, the sort value contains the actual distance of the POI to the value we passed into the search.

The Controller

We define a REST controller, so we can call our application to get the data. The request parameters will come in a POST body that will be mapped to the following class:

public class RequestData {
    private String name;
    private double lat;
    private double lon;
    // constructors, getter/setter ...
}

The result data that will be sent to the client looks like this:

public class ResultData {
    private String name;
    private GeoPoint location;
    private Double distance;

  // constructor, getter/setter ...
}

The controller has just one method:

@RestController
@RequestMapping("/foodpois")
public class FoodPOIController {

    private final FoodPOIRepository repository;

    public FoodPOIController(FoodPOIRepository repository) {
        this.repository = repository;
    }

    @PostMapping("/nearest3")
    List<ResultData> nearest3(@RequestBody RequestData requestData) {

        GeoPoint location = new GeoPoint(requestData.getLat(), requestData.getLon());
        Sort sort = Sort.by(new GeoDistanceOrder("location", location).withUnit("km"));

        List<SearchHit<FoodPOI>> searchHits;

        if (StringUtils.hasText(requestData.getName())) {
            searchHits = repository.searchTop3ByName(requestData.getName(), sort);
        } else {
            searchHits = repository.searchTop3By(sort);
        }

        return searchHits.stream()
            .map(searchHit -> {
                Double distance = (Double) searchHit.getSortValues().get(0);
                FoodPOI foodPOI = searchHit.getContent();
                return new ResultData(foodPOI.getName(), foodPOI.getLocation(), distance);
            }).collect(Collectors.toList());
    }
}

First we create a Sort object that tells Elasticsearch to return the data ordered by the geographical distance to the given location, which we take from the request data. Then, depending on whether we have a name, we call the corresponding repository method and get back a List<SearchHit<FoodPOI>>.

In the final stream we extract the information we need from the returned SearchHit objects, the distance being the first sort value, and build our result data objects.

Check the result

After starting the application we can hit the endpoint. I use curl here and pipe the output through jq to have it formatted:

$ curl -X "POST" "http://localhost:8080/foodpois/nearest3" \
     -H 'Content-Type: application/json; charset=utf-8' \
     -d $'{
  "lat": 49.02,
  "lon": 8.4
}'|jq

[
  {
    "name": "Cantina Majolika",
    "location": {
      "lat": 49.0190808,
      "lon": 8.4014792
    },
    "distance": 0.14860088197123017
  },
  {
    "name": "Waldgaststätte FSSV",
    "location": {
      "lat": 49.023578,
      "lon": 8.3954656
    },
    "distance": 0.5173117164589114
  },
  {
    "name": "Hatz",
    "location": {
      "lat": 49.0155358,
      "lon": 8.3975457
    },
    "distance": 0.5276800664204232
  }
]

And the Pubs?

curl -X "POST" "http://localhost:8080/foodpois/nearest3" \
     -H 'Content-Type: application/json; charset=utf-8' \
     -d $'{
  "lat": 49.02,
  "lon": 8.4,
  "name": "pub"
}'|jq
[
  {
    "name": "Scruffy's Irish Pub",
    "location": {
      "lat": 49.0116335,
      "lon": 8.3950194
    },
    "distance": 0.998711100164643
  },
  {
    "name": "Irish Pub “Sean O'Casey's”",
    "location": {
      "lat": 49.0090639,
      "lon": 8.4028365
    },
    "distance": 1.2335132790824628
  },
  {
    "name": "Oxford Pub",
    "location": {
      "lat": 49.0086149,
      "lon": 8.4129781
    },
    "distance": 1.5806674447458173
  }
]

And that’s it

Without even needing to know how these requests are sent to Elasticsearch and what Elasticsearch sends back, we can easily use these features in our Spring application. Hope you enjoyed it!

mapjfx display problems update

For the last two years a problem has occasionally come up for some users where only the top left area of the map is displayed and the rest does not load:

Thanks to the analysis of Martin Stiel in this comment and Victor Ewert in issue 81 it seems that this can be traced to a problem when running the application on a high resolution display.

Alas I cannot reproduce this as I have no hardware with a resolution that might be high enough. So if you have this problem you might try the solution that Victor mentions in the issue linked above: start the application with -Dprism.allowhidpi=false.

So currently I cannot fix this, as for me the problem is not reproducible, but I’d be glad for feedback or any new information.

 

Update:

https://www.sothawo.com/2020/09/mapjfx-display-problem-on-windows-10-seems-solved/

mapjfx 1.32.0 and 2.14.0 adds the ability to rotate markers and labels

I just released mapjfx versions 1.32.0 and 2.14.0; they will be available in Maven Central:

  <dependency>
    <groupId>com.sothawo</groupId>
    <artifactId>mapjfx</artifactId>
    <version>1.32.0</version>
  </dependency>
  <dependency>
    <groupId>com.sothawo</groupId>
    <artifactId>mapjfx</artifactId>
    <version>2.14.0</version>
  </dependency>

1.32.0 is built using Java 8 and 2.14.0 uses Java 11.

Markers and labels on a map now have a rotation property which will rotate the corresponding HTML element. The value goes from 0 to 360 and defines the rotation angle in degrees, clockwise.