How to use Elasticsearch’s range types with Spring Data Elasticsearch

Elasticsearch allows the data stored in a document to be not only of elementary types, but also of range types, see the documentation. With a short example I will explain these range types and how to use them in Spring Data Elasticsearch (the current version being 4.0.3).

For this example we want to be able to answer the question: “Who was president of the United States of America in the year X?”. We will store in Elasticsearch a document describing a president with the name and his term, defined by a range of years with a from and a to value. We will then query the index for documents where this term range contains a given value.

The first thing we need to define is our entity. I named it President:

@Document(indexName = "presidents")
public class President {
    @Id
    private String id;

    @Field(type = FieldType.Text)
    private String name;

    @Field(type = FieldType.Integer_Range)
    private Term term;

    static President of(String name, Integer from, Integer to) {
        return new President(name, new Term(from, to));
    }

    public President() {
    }

    public President(String name, Term term) {
        this(UUID.randomUUID().toString(), name, term);
    }

    public President(String id, String name, Term term) {
        this.id = id;
        this.name = name;
        this.term = term;
    }

    // getter/setter

    static class Term {
        @Field(name = "gte")
        private Integer from;
        @Field(name = "lte")
        private Integer to;

        public Term() {
        }

        public Term(Integer from, Integer to) {
            this.from = from;
            this.to = to;
        }

        // getter/setter
    }
}

There are the standard annotations for a Spring Data Elasticsearch entity like @Document and @Id, but in addition there is the property term that is annotated with @Field(type = FieldType.Integer_Range). This marks it as an integer range property. The Term class (not to be confused with the Elasticsearch term) is defined as an inner class; it describes the term of a president with the two values from and to. For a range, Elasticsearch needs the fields to be named gte and lte; we achieve this by setting these names with the @Field annotations on the two properties.
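With this mapping, a stored president should look something like this in the index (a sketch, fields abbreviated):

{
    "_index": "presidents",
    "_source": {
        "name": "Dwight D Eisenhower",
        "term": {
            "gte": 1953,
            "lte": 1961
        }
    }
}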

The rest is just a basic repository:

public interface PresidentRepository extends ElasticsearchRepository<President, String> {
    SearchHits<President> searchByTerm(Integer year);
}

Here we use a single Integer as search value; Elasticsearch does the magic of finding the documents whose stored range contains the searched value.
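By the way, query derivation builds a CriteriaQuery under the hood; if you prefer to build the query yourself, something like the following should be equivalent (a sketch, PresidentSearchService is a hypothetical class and not part of this example):

import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.core.query.Criteria;
import org.springframework.data.elasticsearch.core.query.CriteriaQuery;
import org.springframework.data.elasticsearch.core.query.Query;

public class PresidentSearchService {

    private final ElasticsearchOperations operations;

    public PresidentSearchService(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    public SearchHits<President> searchByTerm(Integer year) {
        // a query on the integer_range field matches all documents
        // whose stored range contains the given year
        Query query = new CriteriaQuery(new Criteria("term").is(year));
        return operations.search(query, President.class);
    }
}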

And of course we have some Controller using it. This Controller has one endpoint that loads a list of US presidents into Elasticsearch, and a second one that returns the desired results:

@RestController
@RequestMapping("presidents")
public class PresidentController {

    private final PresidentRepository repository;

    public PresidentController(PresidentRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/load")
    public void load() {
        repository.saveAll(Arrays.asList(
                President.of("Dwight D Eisenhower", 1953, 1961),
                President.of("Lyndon B Johnson", 1963, 1969),
                President.of("Richard Nixon", 1969, 1974),
                President.of("Gerald Ford", 1974, 1977),
                President.of("Jimmy Carter", 1977, 1981),
                President.of("Ronald Reagen", 1981, 1989),
                President.of("George Bush", 1989, 1993),
                President.of("Bill Clinton", 1993, 2001),
                President.of("George W Bush", 2001, 2009),
                President.of("Barack Obama", 2009, 2017),
                President.of("Donald Trump", 2017, 2021)));
    }

    @GetMapping("/term/{year}")
    public SearchHits<President> searchByTerm(@PathVariable Integer year) {
        return repository.searchByTerm(year);
    }
}

See it in action (I am using HTTPie); my application is listening on port 9090:

$ http -b :9090/presidents/term/2009
{
    "aggregations": null,
    "empty": false,
    "maxScore": 1.0,
    "scrollId": null,
    "searchHits": [
        {
            "content": {
                "id": "c3a3a0d0-d835-4a02-a2e8-20cc1c0e9285",
                "name": "George W Bush",
                "term": {
                    "from": 2001,
                    "to": 2009
                }
            },
            "highlightFields": {},
            "id": "c3a3a0d0-d835-4a02-a2e8-20cc1c0e9285",
            "score": 1.0,
            "sortValues": []
        },
        {
            "content": {
                "id": "36416746-ff11-4243-a4f3-a6bb0cff9a93",
                "name": "Barack Obama",
                "term": {
                    "from": 2009,
                    "to": 2017
                }
            },
            "highlightFields": {},
            "id": "36416746-ff11-4243-a4f3-a6bb0cff9a93",
            "score": 1.0,
            "sortValues": []
        }
    ],
    "totalHits": 2,
    "totalHitsRelation": "EQUAL_TO"
}

$ http -b :9090/presidents/term/2000
{
    "aggregations": null,
    "empty": false,
    "maxScore": 1.0,
    "scrollId": null,
    "searchHits": [
        {
            "content": {
                "id": "984fdf87-a7d8-4dc2-b2e8-5dd948065147",
                "name": "Bill Clinton",
                "term": {
                    "from": 1993,
                    "to": 2001
                }
            },
            "highlightFields": {},
            "id": "984fdf87-a7d8-4dc2-b2e8-5dd948065147",
            "score": 1.0,
            "sortValues": []
        }
    ],
    "totalHits": 1,
    "totalHitsRelation": "EQUAL_TO"
}

So just by putting the right types and names into our @Field annotations we are able to use the range types of Elasticsearch in our Spring Data Elasticsearch application.

Search entities within a geographic distance with Spring Data Elasticsearch 4

A couple of months ago I published the post Using geo-distance sort in Spring Data Elasticsearch 4. In the comments the question came up: “What about searching within a distance?”

Well, this is not supported by query derivation from the method name, but it can easily be done with a custom repository implementation (see the documentation for more information about that).

I updated the example – which is available on GitHub – and will explain what is needed for this implementation. I won’t describe the entity and setup; please check the original post for that.

The custom repository interface

First we need to define a new repository interface that defines the method we want to provide:

public interface FoodPOIRepositoryCustom {

    /**
     * search all {@link FoodPOI} that are within a given distance of a point
     *
     * @param geoPoint
     *     the center point
     * @param distance
     *     the distance
     * @param unit
     *     the distance unit
     * @return the found entities
     */
    List<SearchHit<FoodPOI>> searchWithin(GeoPoint geoPoint, Double distance, String unit);
}

The custom repository implementation

Next we need to provide an implementation. It is important that this class is named like the interface with the suffix “Impl”:

public class FoodPOIRepositoryCustomImpl implements FoodPOIRepositoryCustom {

    private final ElasticsearchOperations operations;

    public FoodPOIRepositoryCustomImpl(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    @Override
    public List<SearchHit<FoodPOI>> searchWithin(GeoPoint geoPoint, Double distance, String unit) {

        Query query = new CriteriaQuery(
          new Criteria("location").within(geoPoint, distance.toString() + unit)
        );

        // add a sort to get the actual distance back in the sort value
        Sort sort = Sort.by(new GeoDistanceOrder("location", geoPoint).withUnit(unit));
        query.addSort(sort);

        return operations.search(query, FoodPOI.class).getSearchHits();
    }
}

In this implementation we have an ElasticsearchOperations instance injected by Spring. In the method implementation we build a CriteriaQuery that specifies the distance restriction we want. In addition we add a sort with a GeoDistanceOrder to have the actual distance of the found entities in the output. We pass this query to the ElasticsearchOperations instance and return the search hits.
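If you prefer the Elasticsearch query builders over the Criteria API, the same restriction can also be expressed as a NativeSearchQuery. This is a sketch of an alternative method body, assuming the QueryBuilders class from the Elasticsearch 7 client that Spring Data Elasticsearch 4.0 builds on:

// additional imports:
// import org.elasticsearch.index.query.QueryBuilders;
// import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;

@Override
public List<SearchHit<FoodPOI>> searchWithin(GeoPoint geoPoint, Double distance, String unit) {

    Query query = new NativeSearchQueryBuilder()
        .withQuery(QueryBuilders.geoDistanceQuery("location")
            .point(geoPoint.getLat(), geoPoint.getLon())
            .distance(distance + unit)) // e.g. "10.0km"
        .build();

    // the sort is added as before to get the actual distance in the sort values
    query.addSort(Sort.by(new GeoDistanceOrder("location", geoPoint).withUnit(unit)));

    return operations.search(query, FoodPOI.class).getSearchHits();
}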

Adapt the repository

We need to add the new interface to our FoodPOIRepository definition, which otherwise is unchanged:

public interface FoodPOIRepository extends ElasticsearchRepository<FoodPOI, String>, FoodPOIRepositoryCustom {

    List<SearchHit<FoodPOI>> searchTop3By(Sort sort);

    List<SearchHit<FoodPOI>> searchTop3ByName(String name, Sort sort);
}

Use it in the controller

In the rest controller, there is a new method that uses the distance search:

@PostMapping("/within")
List<ResultData> withinDistance(@RequestBody RequestData requestData) {

    GeoPoint location = new GeoPoint(requestData.getLat(), requestData.getLon());

    List<SearchHit<FoodPOI>> searchHits
        = repository.searchWithin(location, requestData.getDistance(), requestData.getUnit());

    return toResultData(searchHits);
}

private List<ResultData> toResultData(List<SearchHit<FoodPOI>> searchHits) {
    return searchHits.stream()
        .map(searchHit -> {
            Double distance = (Double) searchHit.getSortValues().get(0);
            FoodPOI foodPOI = searchHit.getContent();
            return new ResultData(foodPOI.getName(), foodPOI.getLocation(), distance);
        }).collect(Collectors.toList());
}

We extract the needed parameters from the requestData that came in, call our repository method and convert the results to our output format.

And that’s it

So with a small custom repository implementation we were able to add the desired functionality to our repository.

Use an index name defined by the entity to store data in Spring Data Elasticsearch 4.0

When using Spring Data Elasticsearch (I am referencing the current version 4.0.2), normally the name of the index where the documents are stored is taken from the @Document annotation of the entity class – here it’s books:

@Document(indexName="books")
public class Book {
  // ...
}

Recently, in a discussion on a pull request in Spring Data Elasticsearch, someone mentioned that she needed a way to derive the index name from the entity itself, as entities might go to different indices.

In this post I will show how this can be done with Spring Data repository customization, by providing a custom implementation for the save method. A complete solution would need to customize saveAll and other methods as well, but I will restrict this here to just one method.

The Hotel entity

For this post I will use an entity describing a hotel, with the idea that hotels from different countries should be stored in different Elasticsearch indices. The index name in the annotation is a wildcard name so that when searching for hotels all indices are considered.

Hotel.java

package com.sothawo.springdataelastictest;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.lang.Nullable;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
@Document(indexName = "hotel-*", createIndex = false)
public class Hotel {
    @Id
    @Nullable
    private String id;

    @Field(type = FieldType.Text)
    @Nullable
    private String name;

    @Field(type = FieldType.Keyword)
    @Nullable
    private String countryCode;

    // getter/setter ...
}

The custom repository

We need to define a custom repository interface that declares the methods we want to implement. Since we want to customize the save method that ElasticsearchRepository inherits from CrudRepository, we need to use the very same method signature including the generics:

CustomHotelRepository.java

package com.sothawo.springdataelastictest;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
public interface CustomHotelRepository<T> {
    <S extends T> S save(S entity);
}

The next class to provide is an implementation of this interface. It is important that the implementation class is named like the interface with an Impl suffix:

CustomHotelRepositoryImpl.java

package com.sothawo.springdataelastictest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.IndexOperations;
import org.springframework.data.elasticsearch.core.document.Document;
import org.springframework.data.elasticsearch.core.mapping.IndexCoordinates;
import org.springframework.lang.NonNull;
import org.springframework.lang.Nullable;

import java.util.concurrent.ConcurrentHashMap;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
@SuppressWarnings("unused")
public class CustomHotelRepositoryImpl implements CustomHotelRepository<Hotel> {

    private static final Logger LOG = LoggerFactory.getLogger(CustomHotelRepositoryImpl.class);

    private final ElasticsearchOperations operations;

    private final ConcurrentHashMap<String, IndexCoordinates> knownIndexCoordinates = new ConcurrentHashMap<>();
    @Nullable
    private Document mapping;

    @SuppressWarnings("unused")
    public CustomHotelRepositoryImpl(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    @Override
    public <S extends Hotel> S save(S hotel) {

        IndexCoordinates indexCoordinates = getIndexCoordinates(hotel);
        LOG.info("saving {} to {}", hotel, indexCoordinates);

        S saved = operations.save(hotel, indexCoordinates);

        operations.indexOps(indexCoordinates).refresh();

        return saved;
    }

    @NonNull
    private <S extends Hotel> IndexCoordinates getIndexCoordinates(S hotel) {

        String indexName = "hotel-" + hotel.getCountryCode();
        return knownIndexCoordinates.computeIfAbsent(indexName, i -> {

                IndexCoordinates indexCoordinates = IndexCoordinates.of(i);
                IndexOperations indexOps = operations.indexOps(indexCoordinates);

                if (!indexOps.exists()) {
                    indexOps.create();

                    if (mapping == null) {
                        mapping = indexOps.createMapping(Hotel.class);
                    }

                    indexOps.putMapping(mapping);
                }
                return indexCoordinates;
            }
        );
    }
}

This implementation is a Spring Bean (no need for adding @Component) and so can use dependency injection. Let me explain the code.

The constructor has the ElasticsearchOperations object injected; we will use it to store the entity in the desired index.

As we want to make sure that the index we write to exists and has the correct mapping, we keep track of the indices we already know in the knownIndexCoordinates map. It is used in the getIndexCoordinates method explained below.

The save method is the actual implementation of the save operation. First we call getIndexCoordinates, which will make sure the index exists, and then pass the indexCoordinates into the save method of the ElasticsearchOperations instance. If we used ElasticsearchOperations.save(hotel), the name from the @Document annotation would be used. But when passing an IndexCoordinates as second parameter, the index name from there is used to store the entity. After saving there is a call to refresh; this is the behaviour of the original ElasticsearchRepository.save() method, so we do the same here. If you do not need the immediate refresh, omit this call.

As Spring Data Elasticsearch does not yet support index templates (this will come with version 4.1), the getIndexCoordinates method ensures that the first time an entity is saved to an index, this index is created if necessary and the mapping is written to the newly created index.

The HotelRepository to use in the application

We now need to combine our custom repository with the ElasticsearchRepository from Spring Data Elasticsearch:

HotelRepository.java

package com.sothawo.springdataelastictest;

import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
public interface HotelRepository extends ElasticsearchRepository<Hotel, String>, CustomHotelRepository<Hotel> {
    SearchHits<Hotel> searchAllBy();
}

Here we combine the two interfaces and define an additional method that returns all hotels in a SearchHits object.

Use the repository in the code

The only thing that’s left is to use this repository, for example in a REST controller:

HotelController.java

package com.sothawo.springdataelastictest;

import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @author P.J. Meisch (pj.meisch@sothawo.com)
 */
@RestController
@RequestMapping("/hotels")
public class HotelController {

    private final HotelRepository repository;

    public HotelController(HotelRepository repository) {
        this.repository = repository;
    }

    @GetMapping()
    public SearchHits<Hotel> all() {
        return repository.searchAllBy();
    }

    @PostMapping()
    public Hotel save(@RequestBody Hotel hotel) {
        return repository.save(hotel);
    }
}

This is a standard controller which has a HotelRepository instance injected (which Spring Data Elasticsearch will create for us). It looks exactly as it would without our customization; the difference is that the call to save() ends up in our custom implementation.
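To illustrate the effect (the hotel data here is made up, using the setters from the entity shown above): entities with different country codes end up in different indices, while searching goes through the wildcard name:

Hotel hotel = new Hotel();
hotel.setName("Hotel Adlon");
hotel.setCountryCode("de");
repository.save(hotel); // stored in the index "hotel-de"

Hotel autreHotel = new Hotel();
autreHotel.setName("Hôtel Ritz");
autreHotel.setCountryCode("fr");
repository.save(autreHotel); // stored in the index "hotel-fr"

// searches use "hotel-*" from the @Document annotation and find both
SearchHits<Hotel> all = repository.searchAllBy();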

Conclusion

This post shows how easy it is to provide custom implementations for the methods that are normally provided by Spring Data Repositories (not just in Spring Data Elasticsearch) if custom logic is needed.

How to provide a dynamic index name in Spring Data Elasticsearch using SpEL

In Spring Data Elasticsearch – at the time of writing, version 4.0 is the current version – the name of an index is normally defined by the @Document annotation on the entity class. For the following examples let’s assume we want to write some log entries to Elasticsearch with our application. We use the following entity:

@Document(indexName = "log")
public class LogEntity {
    @Id
    private String id = UUID.randomUUID().toString();

    @Field(type = FieldType.Text)
    private String text;

    @Field(name = "log-time", type = FieldType.Date, format = DateFormat.basic_date_time)
    private ZonedDateTime logTime = ZonedDateTime.now();

    public String getId() {
        return id;
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    public ZonedDateTime getLogTime() {
        return logTime;
    }

    public void setLogTime(ZonedDateTime logTime) {
        this.logTime = logTime;
    }
}

Here the index name is the fixed name log.

It is possible to use a dynamically defined name for an index by using the Spring Expression Language (SpEL). Important: we need to use a SpEL template expression, that is, an expression enclosed in #{}. This allows for the following setups:

Use a value from the application configuration

Let’s assume we have the following entry in the application.properties file:

index.prefix=test

We then use this code

@Document(indexName = "#{@environment.getProperty('index.prefix')}-log")

and the index name to use changes to test-log.

Use a value provided by a static method of some class

The second example shows how to call a static function to get a dynamic value. We use the following definition to add the current date to the index name:

@Document(indexName = "log-#{T(java.time.LocalDate).now().toString()}")

Currently this would provide an index name of log-2020-07-28.

Use a value provided by a Spring bean

For the third case we provide a bean that will give us a dynamically created string to be used as part of the index name.

@Component
public class LogIndexNameProvider {

    public String timeSuffix() {
        return LocalTime.now().truncatedTo(ChronoUnit.MINUTES).toString().replace(':', '-');
    }
}

This bean, named logIndexNameProvider, can return a String that contains the current time as hh-mm (I would not use this for naming indices, but this is just an example).

Changing the definition to

@Document(indexName = "log-#{@logIndexNameProvider.timeSuffix()}")

will now create index names like log-08-25 or log-22-07.

Of course we can mix all of these together: add a prefix from the configuration and append the current date.
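Such a combined definition could look like this (I have not run this exact expression, but it only chains the template expressions shown above):

@Document(indexName = "#{@environment.getProperty('index.prefix')}-log-#{T(java.time.LocalDate).now().toString()}")

With the configuration from the first example this would produce index names like test-log-2020-07-28.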

Important notice:

The evaluation of SpEL for index names is only done for the index names defined in the @Document annotation. It is not done for index names that are passed as an IndexCoordinates parameter to the different methods of the ElasticsearchOperations or IndexOperations interfaces. If it were allowed there, it would be easy to set up a scenario where an expression is read from some outside source. Then someone might send something like "log-#{T(java.lang.Runtime).getRuntime().exec(new String[]{'/bin/rm', '/tmp/somefile'})}", which would not provide an index name, but delete files on your computer.

Using geo-distance sort in Spring Data Elasticsearch 4

The release of Spring Data Elasticsearch in version 4.0 (see the documentation) brings two new features that enable users to use geo-distance sorts in repository queries: the first is the new class GeoDistanceOrder, the second is the new return type for repository methods, SearchHit<T>. In this post I will show how easy it is to use these classes to answer questions like “Which pubs are the nearest to a given location?”.

The source code

The complete runnable code used for this post is available on GitHub. In order to run the application you will need Java 8 or higher and a running instance of Elasticsearch. If this is not accessible at localhost:9200 you need to set the correct value in the src/main/resources/application.yaml file.

Update 12.09.2020

The original code was a little extended for the follow-up post Search entities within a geographic distance with Spring Data Elasticsearch 4

The sample data

For this sample application I use a CSV file with POI data from OpenStreetMap that contains POIs in Germany categorized as food-related, like restaurants, pubs, fast food and more. Altogether there are 826,843 records.

When the application is started, the index in Elasticsearch is created and loaded with the data if it does not yet exist. So the first startup takes a little longer; the progress can be seen on the console. Within the application, these POIs are modelled by the following entity:

@Document(indexName = "foodpois")
public class FoodPOI {
    @Id
    private String id;
    @Field(type = FieldType.Text)
    private String name;
    @Field(type = FieldType.Integer)
    private Integer category;
    private GeoPoint location;
    // constructors, getter/setter left out for brevity
}

The interesting properties for this blog post are the location and the name.

The Repository

In order to search the data we need a Repository Definition:

public interface FoodPOIRepository extends ElasticsearchRepository<FoodPOI, String> {
    List<SearchHit<FoodPOI>> searchTop3By(Sort sort);
    List<SearchHit<FoodPOI>> searchTop3ByName(String name, Sort sort);
}

We have two methods defined: the first we will use to search for any POI near a given point; with the second one we can search for POIs with a given name. Defining these methods in the interface is all we need, as Spring Data Elasticsearch will under the hood create the implementations by analyzing the method names and parameters.

In Spring Data Elasticsearch before version 4 we could only get a List<FoodPOI> from a repository method. But now there is the SearchHit<T> class, which not only contains the entity, but also other values like the score, highlights or – what we need here – the sort values. When doing a geo-distance sort, the sort value contains the actual distance of the POI to the point we passed into the search.
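As a short sketch of what such a hit contains (the accessor names are those of Spring Data Elasticsearch 4.0; searchHits stands for a list returned by one of the repository methods above):

SearchHit<FoodPOI> searchHit = searchHits.get(0);

FoodPOI foodPOI = searchHit.getContent();            // the entity itself
float score = searchHit.getScore();                  // the relevance score
List<Object> sortValues = searchHit.getSortValues(); // here a single entry: the distance
Double distance = (Double) sortValues.get(0);        // in the unit set on the GeoDistanceOrder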

The Controller

We define a REST controller, so we can call our application to get the data. The request parameters will come in a POST body that will be mapped to the following class:

public class RequestData {
    private String name;
    private double lat;
    private double lon;
    // constructors, getter/setter ...
}

The result data that will be sent to the client looks like this:

public class ResultData {
    private String name;
    private GeoPoint location;
    private Double distance;

  // constructor, getter/setter ...
}

The controller has just one method:

@RestController
@RequestMapping("/foodpois")
public class FoodPOIController {

    private final FoodPOIRepository repository;

    public FoodPOIController(FoodPOIRepository repository) {
        this.repository = repository;
    }

    @PostMapping("/nearest3")
    List<ResultData> nearest3(@RequestBody RequestData requestData) {

        GeoPoint location = new GeoPoint(requestData.getLat(), requestData.getLon());
        Sort sort = Sort.by(new GeoDistanceOrder("location", location).withUnit("km"));

        List<SearchHit<FoodPOI>> searchHits;

        if (StringUtils.hasText(requestData.getName())) {
            searchHits = repository.searchTop3ByName(requestData.getName(), sort);
        } else {
            searchHits = repository.searchTop3By(sort);
        }

        return searchHits.stream()
            .map(searchHit -> {
                Double distance = (Double) searchHit.getSortValues().get(0);
                FoodPOI foodPOI = searchHit.getContent();
                return new ResultData(foodPOI.getName(), foodPOI.getLocation(), distance);
            }).collect(Collectors.toList());
    }
}

In the nearest3 method we first create a Sort object that specifies that Elasticsearch should return the data ordered by the geographical distance to the given point, which we take from the request data. Then, depending on whether we have a name, we call the corresponding repository method and get back a List<SearchHit<FoodPOI>>.

We then extract the information we need from the returned objects and build our result data objects.

Check the result

After starting the application we can hit the endpoint. I use curl here and pipe the output through jq to have it formatted:

$curl -X "POST" "http://localhost:8080/foodpois/nearest3" \
     -H 'Content-Type: application/json; charset=utf-8' \
     -d $'{
  "lat": 49.02,
  "lon": 8.4
}'|jq

[
  {
    "name": "Cantina Majolika",
    "location": {
      "lat": 49.0190808,
      "lon": 8.4014792
    },
    "distance": 0.14860088197123017
  },
  {
    "name": "Waldgaststätte FSSV",
    "location": {
      "lat": 49.023578,
      "lon": 8.3954656
    },
    "distance": 0.5173117164589114
  },
  {
    "name": "Hatz",
    "location": {
      "lat": 49.0155358,
      "lon": 8.3975457
    },
    "distance": 0.5276800664204232
  }
]

And the Pubs?

curl -X "POST" "http://localhost:8080/foodpois/nearest3" \
     -H 'Content-Type: application/json; charset=utf-8' \
     -d $'{
  "lat": 49.02,
  "lon": 8.4,
  "name": "pub"
}'|jq
[
  {
    "name": "Scruffy's Irish Pub",
    "location": {
      "lat": 49.0116335,
      "lon": 8.3950194
    },
    "distance": 0.998711100164643
  },
  {
    "name": "Irish Pub “Sean O'Casey's”",
    "location": {
      "lat": 49.0090639,
      "lon": 8.4028365
    },
    "distance": 1.2335132790824628
  },
  {
    "name": "Oxford Pub",
    "location": {
      "lat": 49.0086149,
      "lon": 8.4129781
    },
    "distance": 1.5806674447458173
  }
]

And that’s it

Without even needing to know how these requests are sent to Elasticsearch and what Elasticsearch sends back, we can easily use these features in our Spring application. Hope you enjoyed it!

A simple web-based chat application built with Kotlin, Vaadin, Spring Boot and Apache Kafka

Intro

In this post I show how to combine some language / frameworks and libraries / tools to build a web-based scalable chat application. I chose the following combination:

  • Kotlin as the programming language
  • Vaadin for the web frontend
  • Spring Boot as the application framework
  • Apache Kafka as the messaging backend

As I am bad at creating cool names for projects, I just put together the first letters of the used tools and named this whole thing kovasbak. The complete source code and project is available on GitHub.

What it will look like

The following screenshot shows four browser windows with four users chatting:

Running the backend

The first thing I have to do is get Apache Kafka running. I downloaded the current version (0.11.0.0) from the Apache Kafka website and unpacked the download into a local directory. Following the Kafka documentation, I first started ZooKeeper and then one Kafka broker:

./bin/zookeeper-server-start.sh config/zookeeper.properties &
./bin/kafka-server-start.sh config/server.properties &

I am just using the default values, which gets Kafka running on port 9092.

Setting up the project

I am using Java 1.8.0_131 and IntelliJ IDEA, but the project is totally Maven based, so you can use the IDE / editor of your choice. To create the project, I used the Spring Initializr integration in IntelliJ, but of course you can create the project by using the Spring Initializr website.

I just selected Kotlin as language, Java version 1.8, Spring Boot 1.5.4 and additionally selected web/vaadin and io/kafka.

After creating the project you end up with the following pom.xml; I only added the vaadin-push dependency to be able to have server-push (more on that later):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.sothawo</groupId>
  <artifactId>kovasbak</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>kovasbak</name>
  <description>a simple chat system built with Kotlin, Vaadin, spring Boot and Apache Kafka</description>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.4.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
  </parent>

  <properties>
    <kotlin.compiler.incremental>true</kotlin.compiler.incremental>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>

    <kotlin.version>1.1.3</kotlin.version>
    <vaadin.version>8.0.6</vaadin.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework.kafka</groupId>
      <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
      <groupId>com.vaadin</groupId>
      <artifactId>vaadin-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
      <groupId>com.vaadin</groupId>
      <artifactId>vaadin-push</artifactId>
    </dependency>
    <dependency>
      <groupId>org.jetbrains.kotlin</groupId>
      <artifactId>kotlin-stdlib-jre8</artifactId>
      <version>${kotlin.version}</version>
    </dependency>
    <dependency>
      <groupId>org.jetbrains.kotlin</groupId>
      <artifactId>kotlin-reflect</artifactId>
      <version>${kotlin.version}</version>
    </dependency>

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.vaadin</groupId>
        <artifactId>vaadin-bom</artifactId>
        <version>${vaadin.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <build>
    <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
    <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
      <plugin>
        <artifactId>kotlin-maven-plugin</artifactId>
        <groupId>org.jetbrains.kotlin</groupId>
        <version>${kotlin.version}</version>
        <configuration>
          <compilerPlugins>
            <plugin>spring</plugin>
          </compilerPlugins>
          <jvmTarget>1.8</jvmTarget>
        </configuration>
        <executions>
          <execution>
            <id>compile</id>
            <phase>compile</phase>
            <goals>
              <goal>compile</goal>
            </goals>
          </execution>
          <execution>
            <id>test-compile</id>
            <phase>test-compile</phase>
            <goals>
              <goal>test-compile</goal>
            </goals>
          </execution>
        </executions>
        <dependencies>
          <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-maven-allopen</artifactId>
            <version>${kotlin.version}</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>


</project>

The code

In this post I will only show the relevant parts of the code and skip the package and import statements; the full code is available on GitHub.

The application class

The application class created by the initializr just gets one additional line:

@SpringBootApplication
@EnableKafka
class KovasbakApplication

fun main(args: Array<String>) {
    SpringApplication.run(KovasbakApplication::class.java, *args)
}

The @EnableKafka annotation tells Spring Boot to pull in the Kafka-related classes and libraries.

The UI classes

ChatDisplay

The ChatDisplay is the panel displaying the chat messages. I first used a TextArea, but had problems with programmatically scrolling to the bottom, so I created this small class that uses a Label to display the data:

class ChatDisplay : Panel() {
    val text: Label

    init {
        setSizeFull()
        text = Label().apply { contentMode = ContentMode.HTML }
        content = VerticalLayout().apply { addComponent(text) }
    }

    fun addMessage(user: String, message: String) {
        text.value = when {
            text.value.isNullOrEmpty() -> "<em>$user:</em> $message"
            else -> text.value + "<br/><em>$user:</em> $message"
        }
        scrollTop = Int.MAX_VALUE
    }
}

ChatUI

This is the main UI class:

@SpringUI
@PreserveOnRefresh
@Push
class ChatUI : UI(), KafkaConnectorListener {

    lateinit var user: String
    val chatDisplay = ChatDisplay()
    val userLabel = Label()

    @Autowired
    lateinit var kafkaConnector: KafkaConnector

    // skipping content here....

    companion object {
        val log: Logger = LoggerFactory.getLogger(ChatUI::class.java)
    }
}

It is marked as a Vaadin UI with @SpringUI, @PreserveOnRefresh keeps the session when the browser is reloaded, and @Push marks this UI for server-push when new messages arrive from Kafka. The class implements the interface KafkaConnectorListener, which is described together with the KafkaConnector class.

The ChatUI has the following fields:

  • user: the name of the user that is chatting
  • chatDisplay: the display panel for the messages
  • userLabel: sits at the bottom left to show the name of the user
  • kafkaConnector: used for sending our own messages and for registering to receive the messages from Kafka

It further has a companion object containing the Logger. I now show the methods of the class:

override fun init(vaadinRequest: VaadinRequest?) {
    kafkaConnector.addListener(this)
    content = VerticalLayout().apply {
        setSizeFull()
        addComponents(chatDisplay, createInputs())
        setExpandRatio(chatDisplay, 1F)
    }
    askForUserName()
}

private fun createInputs(): Component {
    return HorizontalLayout().apply {
        setWidth(100F, Sizeable.Unit.PERCENTAGE)
        val messageField = TextField().apply { setWidth(100F, Sizeable.Unit.PERCENTAGE) }
        val button = Button("Send").apply {
            setClickShortcut(ShortcutAction.KeyCode.ENTER)
            addClickListener {
                kafkaConnector.send(user, messageField.value)
                messageField.apply { clear(); focus() }
            }
        }
        addComponents(userLabel, messageField, button)
        setComponentAlignment(userLabel, Alignment.MIDDLE_LEFT)
        setExpandRatio(messageField, 1F)
    }
}

This sets up the basic layout with the ChatDisplay and the other UI elements and registers the ChatUI with the KafkaConnector. The click handler for the send button sends the user name and the content of the message TextField to the KafkaConnector.

After setting up the layout, the user is asked for her name with the following method:

private fun askForUserName() {
    addWindow(Window("your user:").apply {
        isModal = true
        isClosable = false
        isResizable = false
        content = VerticalLayout().apply {
            val nameField = TextField().apply { focus() }
            addComponent(nameField)
            addComponent(Button("OK").apply {
                setClickShortcut(ShortcutAction.KeyCode.ENTER)
                addClickListener {
                    user = nameField.value
                    if (!user.isNullOrEmpty()) {
                        close()
                        userLabel.value = user
                        log.info("user entered: $user")
                    }
                }
            })
        }
        center()
    })
}

This shows a modal window where the user’s name must be entered.

There is a method that is called when the UI is disposed:

override fun detach() {
    kafkaConnector.removeListener(this)
    super.detach()
    log.info("session ended for user $user")
}

The code used to send the actual message to the KafkaConnector was already shown; the last thing in this class is the code that is called from the KafkaConnector when new messages arrive:

override fun chatMessage(user: String, message: String) {
    access { chatDisplay.addMessage(user, message) }
}

The received data is added to the chatDisplay, but this is wrapped as a Runnable in the UI.access() method for two reasons:

  1. the code is called asynchronously from a different thread and must be wrapped to be run on the UI thread.
  2. Executing the code in access() in combination with the @Push annotation on the class results in a server push to the client, which is necessary so that new messages are shown immediately.

The Kafka connector class

All communication with Kafka is wrapped in a Spring Component (thus being a singleton) which just has the following code:

interface KafkaConnectorListener {
    fun chatMessage(user: String, message: String)
}

@Component
class KafkaConnector {

    val listeners = mutableListOf<KafkaConnectorListener>()

    fun addListener(listener: KafkaConnectorListener) {
        listeners += listener
    }

    fun removeListener(listener: KafkaConnectorListener) {
        listeners -= listener
    }

    @Autowired
    lateinit var kafka: KafkaTemplate<String, String>

    fun send(user: String, message: String) {
        log.info("$user sending message \"$message\"")
        kafka.send("kovasbak-chat", user, message)
    }

    @KafkaListener(topics = arrayOf("kovasbak-chat"))
    fun receive(consumerRecord: ConsumerRecord<String?, String?>) {
        val key: String = consumerRecord.key() ?: "???"
        val value: String = consumerRecord.value() ?: "???"
        log.info("got kafka record with key \"$key\" and value \"$value\"")
        listeners.forEach { listener -> listener.chatMessage(key, value) }
    }

    companion object {
        val log: Logger = LoggerFactory.getLogger(KafkaConnector::class.java)
    }
}

First I defined the KafkaConnectorListener interface which the ChatUI class implements so it can be registered for new messages.

The KafkaConnector has a list of listeners and the methods to add and remove listeners. Nothing special here.

For sending a new message to Kafka, the send method uses the injected KafkaTemplate (which comes from the spring-kafka library), using the username as key and the message text as payload. The topic used is kovasbak-chat.

By marking the receive method with @KafkaListener, the method is called every time a message arrives in Kafka from any client. The record is taken apart into username and message body, and then these are sent to all registered listeners. And finally there is a companion object with a Logger.

The configuration

spring.kafka.consumer.group-id=${random.uuid}
spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.bootstrap-servers=localhost:9092

I use a random Kafka consumer group id so that each instance of my webapp gets all messages, set the offset reset to latest because I am not interested in old messages, and define the host and port of the Kafka broker.

Fire it up

You can either run the program from within the IDE or go to the command line and:

mvn package
java -jar target/kovasbak-0.0.1-SNAPSHOT.jar

You can then also start a second instance on a different port and access the servers on both localhost:8080 and localhost:8081:

java -jar target/kovasbak-0.0.1-SNAPSHOT.jar --server.port=8081

Conclusion

To sum it up: with just a handful of lines of code we have a scalable web-based chat service which uses a scalable backend for message processing.

Run a Spring-Boot application on OpenShift behind HTTPS only

When a Spring-Boot application is deployed on OpenShift, it can be reached both with an HTTP URL and an HTTPS URL. This is because OpenShift runs a proxy in front of the application which, in the case of HTTP, just routes the request to the application. If a request comes in via HTTPS, the proxy does all the encryption handling with the client and then passes the decrypted request on to the application – on the HTTP channel – and encrypts the response before sending it to the client.

The advantage for an application developer is that you do not need to bother about the details of encryption, you just write your application and leave the rest to OpenShift.

This post shows how to set up and modify your application so that it can only be reached by HTTPS, enforcing the use of a secure channel. To learn how to set up a Spring-Boot application on OpenShift you might read this post.

Add security to your project and set it to SSL only

When you set up your project with the Spring Initializr, include core/security. If you have an existing project, add the following dependency to your pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-security</artifactId>
</dependency>

This enables Spring Security and secures your application with the user named user and a password that is displayed on the console during startup. To force the use of HTTPS you normally only need to add the following entry to the application.properties file:

security.require-ssl=true

Now you would have an application that automatically redirects to HTTPS when called on port 8080 (the default HTTPS port in Spring Boot is 8443 when running on the unsecure port 8080). Besides not having a certificate and the configuration to run on HTTPS, we don’t need Basic Authentication for our purpose of running HTTPS only. So we can disable it by adding the following entry to the application.properties file:

security.basic.enabled=false

When restarting the application, the need for Basic Authentication is gone, but this also disables the require-ssl setting, so we can access our application as before on the plain HTTP port.

The solution to this problem is to provide a custom configuration class:

/**
 * Copyright (c) 2015 sothawo
 *
 * http://www.sothawo.com
 */
package com.sothawo.sayservice;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.annotation.web.servlet.configuration.EnableWebMvcSecurity;

/**
 * Security configuration.
 *
 * @author P.J. Meisch (pj.meisch@sothawo.com).
 */
@Configuration
@EnableWebMvcSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requiresChannel().anyRequest().requiresSecure();
        http.csrf().disable();
    }
}

After adding this class to our project (as I don’t need CSRF for a pure REST service behind HTTPS, I disable it here), all requests to the HTTP port are redirected to HTTPS and Basic Authentication is still disabled (you can remove the security.require-ssl entry from the config file). So far so good.

Fix eternal redirection on OpenShift

After deploying the application in this stage to OpenShift you will notice that both requests, HTTP on port 80 and HTTPS on port 443, result in an endless redirect to the HTTPS URL. This happens because when the application is accessed via HTTPS, the OpenShift proxy does the HTTPS handling and then contacts the application on the normal internal HTTP port. The application checks the channel and sends a redirect to the secure channel to the client, which in turn requests the application from the HTTPS proxy, which again strips the HTTPS part, and so on.

To fix this you need to make your application honour two special HTTP headers. Add the following lines to the application.properties file:

server.tomcat.remote_ip_header=x-forwarded-for
server.tomcat.protocol_header=x-forwarded-proto

The x-forwarded-* headers are set by the proxy, and by putting these settings in your configuration the embedded Tomcat checks these headers when deciding whether a redirect is needed. So even when the application is called by the proxy over HTTP, a redirect will only be requested if the proxy itself was not accessed over a secure channel.

That’s all that’s needed to run your application HTTPS only.

Spring-Boot, Spring profiles and configuration files

Note to self: When using Spring-Boot, use application.properties as the base configuration for the needed values. Configuration values for a specific profile go into the application-<profile>.properties file.

Profiles are activated by using either the -Dspring.profiles.active=<profile> VM parameter or the --spring.profiles.active=<profile> command line argument.
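A minimal example (property names and values made up): a value from the base file is overridden when the dev profile is active:

# application.properties
service.url=https://service.example.com

# application-dev.properties
service.url=http://localhost:8081

java -jar myapp.jar --spring.profiles.active=dev

With the dev profile active, service.url resolves to the localhost value; without a profile, the base value is used.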

Deploying a Spring-Boot application running with Java 8 on OpenShift 2

This post describes how to create and deploy a Spring-Boot application to RedHat OpenShift (version 2) when the application is using Java 8.

Edit 2015-10-04: In this newer post I show how to avoid installing a custom JDK. So you should first read this post and then the linked one for additional information.

Normally deploying a Spring-Boot application on OpenShift is not too much pain and is explained in the Spring-Boot documentation. But some extra work is needed when the application is built and run with Java 8, because at the time of writing the DIY cartridge of OpenShift only supports Java 7. And to make things worse, the mvn command available in the DIY cartridge is rewritten by RedHat, so it will pick up Java 7 no matter what you set JAVA_HOME to.

This post will show how to overcome these deficiencies by walking through the necessary steps to create a Spring-Boot based REST service which is deployed on OpenShift. To follow along you need:

  • Java 8 installed
  • an OpenShift account
  • the rhc command line tool, set up as described in the OpenShift documentation
  • knowledge of how to create and set up a Spring Boot project (I use a Maven project)

My sample is created on Mac OS X using the terminal and IntelliJ. I will create a REST service named SayService which will just return its string input prepended by “you said: ”. Not very interesting, but enough for this example.

Create the OpenShift application

As a first step I create the application on OpenShift. To do that, I change into the local directory where I want the app to be created and, assuming I am logged in to OpenShift with rhc, issue the following command:

rhc app-create sayservice diy

This creates the OpenShift application and clones its Git repository into your local sayservice directory. The structure is shown below:

sayservice
├── .git
├── .openshift
│   ├── README.md
│   ├── action_hooks
│   │   ├── README.md
│   │   ├── start
│   │   └── stop
│   ├── cron
│   │   ├── README.cron
│   │   ├── daily
│   │   │   └── .gitignore
│   │   ├── hourly
│   │   │   └── .gitignore
│   │   ├── minutely
│   │   │   └── .gitignore
│   │   ├── monthly
│   │   │   └── .gitignore
│   │   └── weekly
│   │       ├── README
│   │       ├── chrono.dat
│   │       ├── chronograph
│   │       ├── jobs.allow
│   │       └── jobs.deny
│   └── markers
│       └── README.md
├── README.md
├── diy
│   ├── index.html
│   └── testrubyserver.rb
└── misc
    └── .gitkeep

The diy subdirectory contains the sample application; we ignore that. What we need to adjust later are the scripts in the .openshift/action_hooks directory. And of course we need to add the source code for our service.

Create the Spring-Boot REST service

With the help of the Spring Boot Initializr (which I use from within IntelliJ, but a project created on the website is quite the same) I create a project that just has the Web/Web component added and where the Java version is set to 1.8. The important thing here is that the project is created in the sayservice directory so that the project files are added to the existing directory. After adding my standard .gitignore file, the directory contains the following data (not showing the contents of the .openshift directory again and skipping the IntelliJ files):

sayservice
├── .git
├── .gitignore
├── .openshift
├── README.md
├── diy
├── misc
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── sothawo
    │   │           └── sayservice
    │   │               └── SayserviceApplication.java
    │   └── resources
    │       ├── application.properties
    │       ├── static
    │       └── templates
    └── test
        └── java
            └── com
                └── sothawo
                    └── sayservice
                        └── SayserviceApplicationTests.java

The following listing shows the pom.xml; notice the explicit setting of the Java version to 1.8:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.sothawo</groupId>
  <artifactId>sayservice</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>sayservice</name>
  <description>Demo project for Spring Boot REST service on OpenShift</description>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.2.5.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
  </parent>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <java.version>1.8</java.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>
  
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

Add the Service implementation

At the moment we have an application that does not yet have a service defined, so we add the following Sayservice class:

/**
 * Copyright (c) 2015 sothawo
 *
 * http://www.sothawo.com
 */
package com.sothawo.sayservice;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

/**
 * Sample Service.
 *
 * @author P.J. Meisch (pj.meisch@sothawo.com).
 */
@RestController
@RequestMapping("/")
public class Sayservice {
    @RequestMapping(value = "/say/{in}", method = RequestMethod.GET)
    public String echo(@PathVariable(value = "in") final String in) {
        return "you said: " + in;
    }
}

After building and running the application with

mvn package && java -jar target/*.jar

you can access and test it:

curl http://localhost:8080/say/hello
you said: hello

Create an OpenShift build script to install Java 8 and build the application

The following script named build must be put in the .openshift/action_hooks directory (it must be executable):

#!/bin/bash

# define some variables for JDK 8
JDK_TGZ=jdk-8u60-linux-i586.tar.gz
JDK_URL=http://download.oracle.com/otn-pub/java/jdk/8u60-b27/$JDK_TGZ
JDK_DIR=jdk1.8.0_60
JDK_LINK=jdk1.8

# download JDK1.8 to the data directory if it does not yet exist, extract it and create a symlink
cd ${OPENSHIFT_DATA_DIR}

if [[ ! -d $JDK_DIR ]]
then
  wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" $JDK_URL
  tar -zxf $JDK_TGZ
  rm -fr $JDK_TGZ
  rm $JDK_LINK
  ln -s $JDK_DIR $JDK_LINK
fi

# export environment variables
export JAVA_HOME="$OPENSHIFT_DATA_DIR/$JDK_LINK"
export PATH=$JAVA_HOME/bin:$PATH

# call our own mvn script with the right settings
cd $OPENSHIFT_REPO_DIR
./.openshift/mvn package -s .openshift/settings.xml -DskipTests=true

The script downloads the Oracle JDK if it is not yet available and extracts it to the OPENSHIFT_DATA_DIR directory.

The next thing to adjust is the mvn script. The one available in the DIY cartridge resets JAVA_HOME, so I put the following mvn script into the .openshift directory:

#!/bin/sh
prog=$(basename $0)
export JAVACMD=$JAVA_HOME/bin/java
export M2_HOME=/usr/share/java/apache-maven-3.0.4
exec $M2_HOME/bin/$prog "$@"

As an alternative you might add a download of maven to the build script.

The last file needed for the build is settings.xml, which I also put into the .openshift directory (Edit 2015-10-04: fixed the variable with {} and added /.m2/repository):

<settings>
 <localRepository>${OPENSHIFT_DATA_DIR}/.m2/repository</localRepository>
</settings>

Set up start and stop scripts

Replace the files start and stop in the .openshift/action_hooks directory with the following ones (they must be executable):

start

#!/bin/bash
# The logic to start up your application should be put in this
# script. The application will work only if it binds to
# $OPENSHIFT_DIY_IP:8080

JDK_LINK=jdk1.8

export JAVA_HOME="$OPENSHIFT_DATA_DIR/$JDK_LINK"
export PATH=$JAVA_HOME/bin:$PATH

cd $OPENSHIFT_REPO_DIR
nohup java -jar target/*.jar --server.port=${OPENSHIFT_DIY_PORT} --server.address=${OPENSHIFT_DIY_IP} &

stop

#!/bin/bash
source $OPENSHIFT_CARTRIDGE_SDK_BASH

# The logic to stop your application should be put in this script.
PID=$(ps -ef | grep java.*\.jar | grep -v grep | awk '{ print $2 }')
if [ -z "$PID" ]
then
    client_result "Application is already stopped"
else
    kill $PID
fi

Deploy and run the application

After committing your files to Git, a final

git push

will upload all your changes to OpenShift, build and start the application. Check it out by calling

curl http://sayservice-yourdomain.rhcloud.com/say/it-works

and getting the answer

you said: it-works

So much for my first post concerning OpenShift.