What can OData bring to ElasticSearch?

The key idea is that such an integration introduces a level of indirection between the logical schema, defined with OData, and the physical one, defined within ElasticSearch.

This makes it possible to transparently apply processing strategies according to the mapping between these two schemas.

Throughout this post, we will describe in detail the concepts behind an integration between OData and ElasticSearch. Note that the general concepts apply to most NoSQL databases.

Bridging logical and physical schemas

OData Entity Data Model

The central concepts in the EDM are entities and the relations between them. Entities are instances of Entity Types (for example, Customer or Employee), which are structured records consisting of named and typed properties and identified by a key. Complex Types are structured types that also consist of a list of properties but have no key, and thus can only exist as a property of a containing entity. An Entity Key is formed from a subset of the properties of the Entity Type; it uniquely identifies instances of the Entity Type and allows them to participate in relationships through navigation properties. Entities are grouped in Entity Sets. Finally, all instance containers such as Entity Sets are grouped in an Entity Container.
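
To make these concepts concrete, here is a minimal sketch of how such an entity type could be declared with the Apache Olingo metadata API, which we will use later in this series (assuming an Olingo v4 version where the provider classes are prefixed with Csdl; the type and property names are illustrative):

// A Product entity type: named, typed properties plus an Entity Key
CsdlEntityType productType = new CsdlEntityType()
    .setName("Product")
    .setProperties(Arrays.asList(
        new CsdlProperty().setName("id")
            .setType(EdmPrimitiveTypeKind.Int32.getFullQualifiedName()),
        new CsdlProperty().setName("name")
            .setType(EdmPrimitiveTypeKind.String.getFullQualifiedName())))
    // the Entity Key is formed from a subset of the properties
    .setKey(Collections.singletonList(new CsdlPropertyRef().setName("id")));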

ElasticSearch mapping

ElasticSearch defines metadata for the document types it manages within indices. This metadata defines the types of properties and, where relevant, their formats, as well as how documents will be handled during the indexing phase (stored or not, indexed or not, analyzers to apply, and so on).

The following snippet describes a sample:

{
    "product": {
        "properties": {
            "name": {
                "type": "string",
                "index": "analyzed",
                "store": true,
                "index_name": "msg",
                "analyzer": "standard"
            },
            "description": { "type": "string" },
            "releaseDate": { "type": "date" },
            "discontinuedDate": { "type": "date" },
            "rating": { "type": "integer" },
            "price": { "type": "double" },
            "available": { "type": "boolean" },
            "hint": { "type": "string" }
        }
    }
}

Such hints are indexing-oriented: they define neither relations between elements nor constraints.

Need for an intermediate schema

As we saw, the ElasticSearch mapping focuses on indexing and doesn't contain all the necessary hints to build an EDM. For this reason, an intermediate schema needs to be introduced.

It will contain additional hints about types (cardinalities, relations, denormalization, and so on) and will be used to deduce the corresponding EDM. Some hints won't be exposed through this model but will be useful when handling OData requests.

The following content describes the structure of this intermediate schema:

name (string)
pk (true | false)
minOccurs (0 | 1)
maxOccurs (integer or -1)
denormalizedFieldName (string)
notNull (boolean)
regexp (regexp)
uniqueBy (true | false)
autoGenerated (true | false)
indexed (true | false)
stored (true | false)
relationKind (parentChild | denormalized | reference)

In the case of ElasticSearch, this can be stored in the _meta field of the type mapping, as described below:

{
    "properties": {
        "age": { "type":"integer" },
        "gender":{ "type":"boolean" },
        "phone":{ "type":"string" },
        "address": {
            "type": "nested",
            "properties": {
                "street": { "type":"string" },
                "city": { "type":"string" },
                "state": { "type":"string" },
                "zipCode": { "type":"string" },
                "country": { "type":"string" }
            }
        }
    },
    "_meta":{
        "constraints":{
            "personId":{ "pk":true, "type":"integer" }
        }
    }
}

Another approach consists in defining it outside ElasticSearch, within the OData ElasticSearch support itself, either programmatically or in a configuration file. Below is a possible solution:

MetadataBuilder builder = new MetadataBuilder();

TargetEntityType personDetailsAddressType
              = builder.addTargetComplexType(
                                  "odata", "personDetailsAddress");
personDetailsAddressType.addField("street", "Edm.String");
personDetailsAddressType.addField("city", "Edm.String");
personDetailsAddressType.addField("state", "Edm.String");
personDetailsAddressType.addField("zipCode", "Edm.String");
personDetailsAddressType.addField("country", "Edm.String");

TargetEntityType personDetailsType
               = builder.addTargetEntityType(
                                  "odata", "personDetails");
personDetailsType.addPkField("personId", "Edm.Int32");
personDetailsType.addField("age", "Edm.Int32");
personDetailsType.addField("gender", "Edm.Boolean");
personDetailsType.addField("phone", "Edm.String");
personDetailsType.addField("address", "odata.personDetailsAddress");

Data management

For data management, this level of indirection pays off because the OData implementation for ElasticSearch can apply strategies according to the kind of data. We won't dive into details here, but we can distinguish the following use cases:

  • Handling primary keys. ElasticSearch manages the primary key of the entity by itself. The key isn't stored as a field in the document itself but in a special metadata field called _id, of type string. ElasticSearch gives you the choice to provide the primary key value or to let the database generate a unique string identifier for you. Note that only single primary keys are supported. The abstraction can encapsulate the best way to handle the primary key and add support for primary keys with other types.
  • OData supports partial updates and single property updates out of the box. ElasticSearch also provides this feature, through partial document updates and scripts. This approach can be hidden within the OData implementation for ElasticSearch, as shown in the sketch after this list.
  • OData provides the concept of navigation properties to manage links between different entities. While this isn't natively supported in ElasticSearch (as in most NoSQL databases), it can be simulated using parent / child support or denormalization. Based on the metadata collected in the intermediate schema, the OData implementation for ElasticSearch can adapt its processing to transparently support such approaches.
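
To illustrate the second point, here is a hedged sketch of how an OData single-property update could be mapped to an ElasticSearch partial update through the Java client (ElasticSearch 1.x API assumed; the index, type, and field names are illustrative):

public void updateProductName(Client client, String productId, String newName)
        throws IOException {
    // partial update: only the "name" field is sent; ElasticSearch
    // merges it into the existing document
    client.prepareUpdate("odata", "product", productId)
          .setDoc(XContentFactory.jsonBuilder()
              .startObject()
                  .field("name", newName)
              .endObject())
          .execute()
          .actionGet();
}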

Queries

For queries, the OData abstraction makes it possible to adapt the underlying ElasticSearch queries according to the context and the element they apply to.

Simple queries

The simplest queries involve the operators eq (equals) and ne (not equals). With ElasticSearch, we need to take care to avoid a classical pitfall. Such queries naturally map to term queries, but for fields of type string this only works reliably on non-analyzed fields; other types are natively supported. Since indexed string fields are analyzed by default, we will rather use the function contains, which issues a match query under the hood, because a term query generally won't return the expected results on analyzed fields.

Here is how such queries are handled:

  • Operator eq (equals): name eq 'my name', quantity eq 12

The following ElasticSearch query will be executed:

{
    "term" : {
        "name" : "my name"
    }
}

  • Operator eq with null value: name eq null

The following ElasticSearch query will be executed:

{
    "filtered" : {
        "query" : {
            "match_all" : { }
        },
        "filter" : {
            "missing" : {
                "field" : "name"
            }
        }
    }
}

  • Operator ne (not equals): name ne 'my name', quantity ne 12

The following ElasticSearch query will be executed:

{
    "filtered" : {
        "query" : {
            "match_all" : { }
        },
        "filter" : {
            "not" : {
                "filter" : {
                    "query" : {
                        "term" : {
                            "name" : "my name"
                        }
                    }
                }
            }
        }
    }
}

  • Operator ne with null value: name ne null

The following ElasticSearch query will be executed:

{
    "filtered" : {
        "query" : {
            "match_all" : { }
        },
        "filter" : {
            "exists" : {
                "field" : "name"
            }
        }
    }
}

Canonical functions in queries

The function contains issues a match query and is therefore well suited to analyzed fields.

  • Function contains: contains(name, 'my name')

The following ElasticSearch query will be executed:

{
    "match" : {
        "name" : {
            "query" : "my name",
            "type" : "boolean"
        }
    }
}

  • Function startswith: startswith(name, 'bre')

The following ElasticSearch query will be executed:

{
    "prefix" : {
        "name" : {
            "prefix" : "bre"
        }
    }
}

Handling nested fields

OData queries provide the ability to define paths with several levels; for example, expressions like address/city/name are supported. There are several use cases depending on the relations between fields.

For example, if the field city is contained within a nested field, we can transparently wrap the ElasticSearch query in a nested one. This applies to all the queries previously described.

  • Operator eq (equals): address/city/name eq 'my name'

The following ElasticSearch query will be executed:

{
    "nested" : {
        "query" : {
            "term" : {
                "city.name" : "my name"
            }
        },
        "path" : "address"
    }
}

We don't go further here, but we can similarly handle the cases where parent / child relations or denormalization come into play when deducing the ElasticSearch queries to execute.

Compounded queries

OData queries also support operators like and, or, and not to combine all the queries described previously.

  • Operator or: contains(name, 'my name') or contains(description, 'my description')

The following ElasticSearch query will be executed:

{
    "filtered" : {
        "query" : {
            "match_all" : { }
        },
        "filter" : {
            "or" : {
                "filters" : [ {
                    "query" : {
                        "match" : {
                            "name" : "my name"
                        }
                    }
                }, {
                    "query" : {
                        "match" : {
                            "description" : "my description"
                        }
                    }
                } ]
            }
        }
    }
}

Handling relations

We saw previously that we can easily and transparently handle nested fields. It's the same for parent / child relations. If a navigation property is implemented in ElasticSearch with this feature, we can easily adapt the corresponding query and use a has_child query.

  • Operator eq (equals): address/street eq 'my street'

The following ElasticSearch query will be executed:

{
    "has_child": {
        "type": "address",
        "query": {
            "term": {
                "street": "my street"
            }
        }
    }
}

Updating the denormalized data

Denormalized data is duplicated across several ElasticSearch types, in a single index or across several ones. This makes it possible to simulate data joins and return a data graph within query results while executing a single query.

However, there is always a master copy whose modification triggers the updates of the duplicated ones. This copy corresponds to the data that is present in a single place in the logical schema. Denormalized data doesn't appear in this schema, since it corresponds to a design choice of the physical schema.

When updating this data, the OData service builds a batch update request to update all the dependent copies. As we saw previously, the hints about such denormalization links are available in the intermediate schema. With this approach, handling updates of denormalized data is completely transparent.
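
As a hedged sketch (ElasticSearch 1.x Java API assumed; the type and field names are illustrative), such a batch update could rely on the bulk API:

public void propagateCustomerRename(Client client,
                                    List<String> orderIds, String newName) {
    // one bulk request updates every document holding a denormalized
    // copy of the customer name
    BulkRequestBuilder bulk = client.prepareBulk();
    for (String orderId : orderIds) {
        bulk.add(client.prepareUpdate("odata", "order", orderId)
                       .setDoc(Collections.singletonMap(
                                   "customerName", (Object) newName)));
    }
    BulkResponse response = bulk.execute().actionGet();
    if (response.hasFailures()) {
        // handle partial failures here (retry, report, ...)
    }
}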


Handling OData queries with ElasticSearch

Olingo provides a Java implementation of OData for both the client and server sides. On the server side, it provides a framework to handle OData requests, especially the queries expressed in the $filter query parameter.

We don't provide a getting-started guide for implementing an OData service with Olingo here (it will be the subject of another post) but focus on the way queries are handled. We first deal with the basic Olingo machinery for implementing queries, then with how to translate them to ElasticSearch ones. Finally, we also tackle the other query parameters that control the entity fields returned ($select) and the data set returned and its pagination ($top and $skip).

Handling OData queries in Olingo

Olingo is based on the concept of a processor to handle OData requests. The library lets us register a processor class that implements a set of interfaces describing what it can handle. In the following snippet, we create a processor that can handle entity collection, entity collection count, and entity requests.

public class ODataProviderEntityProcessor
                       implements EntityCollectionProcessor,
                                  CountEntityCollectionProcessor,
                                  EntityProcessor {
    @Override
    public void readEntityCollection(final ODataRequest request,
                                 ODataResponse response, final UriInfo uriInfo,
                                 final ContentType requestedContentType)
                  throws ODataApplicationException, SerializerException {
        (...)
    }

    @Override
    public void countEntityCollection(ODataRequest request,
                                 ODataResponse response, UriInfo uriInfo)
                    throws ODataApplicationException, SerializerException {
        (...)
    }

    @Override
    public void readEntity(final ODataRequest request, ODataResponse response,
                  final UriInfo uriInfo, final ContentType requestedContentType)
                    throws ODataApplicationException, SerializerException {
        (...)
    }
}

Imagine that we have an entity set called products of type Product. When we access the OData service with the URL http://myservice.org/odata.svc/products, Olingo routes the request to the method readEntityCollection of our processor. The objects provided as parameters contain all the hints regarding the request and allow us to set the elements to return within the response.
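
For completeness, here is a hedged sketch of how such a processor gets wired in, assuming the Olingo v4 server API (edmProvider stands for our EDM provider implementation):

// typically done in the service method of a servlet, where the
// request and response objects are available
OData odata = OData.newInstance();
ServiceMetadata metadata = odata.createServiceMetadata(
        edmProvider, new ArrayList<EdmxReference>());
ODataHttpHandler handler = odata.createHandler(metadata);
handler.register(new ODataProviderEntityProcessor());
// Olingo parses the URL and routes to the matching processor method
handler.process(request, response);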

If we want to use queries, we simply need to leverage the query parameter $filter. So if we want to get all the products with name MyProductName, we can simply use this URL: http://myservice.org/odata.svc/products?$filter=name eq 'MyProductName'. Within the processor, the query expression can be reached through the parameter uriInfo, as described below:

@Override
public void readEntityCollection(final ODataRequest request,
                             ODataResponse response, final UriInfo uriInfo,
                             final ContentType requestedContentType)
              throws ODataApplicationException, SerializerException {
    FilterOption filterOption = uriInfo.getFilterOption();
    (...)
}

The query support of Olingo doesn't stop here: it parses the query string for us and lets us process the resulting expression based on the classical Visitor pattern. To implement such processing, we simply need to create a class that implements the interface ExpressionVisitor and use it on the parsed expression, as described below:

Expression expression = filterOption.getExpression();
QueryBuilder queryBuilder = (QueryBuilder) expression
                .accept(new ElasticSearchExpressionVisitor());

The visitor class contains the methods that will be called when an element of the parsed expression is encountered. A sample empty implementation with the main methods is shown below:

public class ElasticSearchExpressionVisitor implements ExpressionVisitor {
    @Override
    public Object visitBinaryOperator(BinaryOperatorKind operator,
                   Object left, Object right)
                     throws ExpressionVisitException,
                            ODataApplicationException {
        (...)
    }

    @Override
    public Object visitUnaryOperator(UnaryOperatorKind operator, Object operand)
                    throws ExpressionVisitException, ODataApplicationException {
        (...)
    }

    @Override
    public Object visitMethodCall(MethodKind methodCall, List parameters)
                    throws ExpressionVisitException, ODataApplicationException {
        (...)
    }

    @Override
    public Object visitLiteral(String literal)
                    throws ExpressionVisitException, ODataApplicationException {
        (...)
    }

    @Override
    public Object visitMember(UriInfoResource member)
                    throws ExpressionVisitException, ODataApplicationException {
        (...)
    }
}

This approach allows several levels to be handled within queries. The value returned by each method corresponds to an element that will be passed as a parameter to subsequent method calls. Let's take a simple example based on the expression name eq 'MyProductName'. Here are the different method calls:

  • method visitMember. The variable member of type UriInfoResource potentially contains several parts to support paths like field1/subField2. Here we can simply extract the string name and return it.
  • method visitLiteral. The variable literal contains the value 'MyProductName'. Since we are in the case of a string literal, we need to extract the string value MyProductName and return it. If it were an integer, we would convert it to an integer and return it.
  • method visitBinaryOperator. The variable operator contains the kind of operator, BinaryOperatorKind.EQ in our case. The other parameters correspond to the values returned by the previous methods.

Here is a sample implementation of the methods visitLiteral and visitMember:

@Override
public Object visitLiteral(String literal)
         throws ExpressionVisitException, ODataApplicationException {
    return ODataQueryUtils.getRawValue(literal);
}

@Override
public Object visitMember(UriInfoResource member)
         throws ExpressionVisitException, ODataApplicationException {
    if (member.getUriResourceParts().size() == 1) {
        UriResourcePrimitiveProperty property
                                 = (UriResourcePrimitiveProperty)
                                              member.getUriResourceParts().get(0);
        return property.getProperty().getName();
    } else {
        List<String> propertyNames = new ArrayList<String>();
        for (UriResource property : member.getUriResourceParts()) {
            UriResourceProperty primitiveProperty
                                  = (UriResourceProperty) property;
            propertyNames.add(primitiveProperty.getProperty().getName());
        }
        return propertyNames;
    }
}

Now that we have described the general principles for handling OData queries within Olingo, we can focus on how to convert these queries to ElasticSearch ones.

Implementing the interaction with ElasticSearch

Now that we have tackled the generic concepts and had a look at the Olingo classes used to implement queries, we will focus on the ElasticSearch-specific parts. We will use the official Java client to execute such queries from Olingo processors. We leverage the class SearchRequestBuilder, created with the method prepareSearch of the client; the query can be configured on this request. The corresponding result data will then be converted to OData entities and sent back to the client.

The following code shows a sample implementation of such processing within the processor previously described:

@Override
public EntitySet readEntitySet(EdmEntitySet edmEntitySet,
                  FilterOption filterOption, SelectOption selectOption,
                  ExpandOption expandOption, OrderByOption orderByOption,
                  SkipOption skipOption, TopOption topOption) {
    EdmEntityType type = edmEntitySet.getEntityType();
    FullQualifiedName fqName = type.getFullQualifiedName();

    QueryBuilder queryBuilder = createQueryBuilder(
                                  filterOption, expandOption);

    SearchRequestBuilder requestBuilder = client
                          .prepareSearch(fqName.getNamespace())
                          .setTypes(fqName.getName())
                          .setQuery(queryBuilder);
    configureSearchQuery(requestBuilder, selectOption,
                          orderByOption, skipOption, topOption);

    SearchResponse response = requestBuilder.execute().actionGet();

    EntitySet entitySet = new EntitySetImpl();
    SearchHits hits = response.getHits();
    for (SearchHit searchHit : hits) {
        Entity entity = convertHitToEntity(
                            searchHit, type, edmProvider);
        entity.setType(fqName.getName());
        entitySet.getEntities().add(entity);
    }

    return entitySet;
}

We will now describe how to actually create ElasticSearch queries.

Creating ElasticSearch queries from OData requests

With OData, we can get all the data for a particular type, but we can also filter it with a query. If we want to get all the data, we can use the match_all query. In the other cases, building the ElasticSearch query is a bit more tricky: it will be created within an Olingo query expression visitor and can have several levels.

The following code describes the entry point method to create the ElasticSearch query:

public QueryBuilder createQueryBuilder(FilterOption filterOption) {
    if (filterOption != null) {
        Expression expression = filterOption.getExpression();
        return (QueryBuilder) expression.accept(
             new ElasticSearchExpressionVisitor());
    } else {
        return QueryBuilders.matchAllQuery();
    }
}

We don't describe all the possible cases here but focus on two different operators. The first one is the equality operator. Its implementation is pretty straightforward using a match query within the method visitBinaryOperator of our expression visitor. We do, however, need to be careful to handle the case where the value is null.

@Override
public Object visitBinaryOperator(
                  BinaryOperatorKind operator, Object left, Object right)
                     throws ExpressionVisitException, ODataApplicationException {
    if (BinaryOperatorKind.EQ.equals(operator)) {
        String fieldName = (String) left;
        Object value = right;
        if (value != null) {
            return QueryBuilders.matchQuery(fieldName, value);
        } else {
            return QueryBuilders.filteredQuery(QueryBuilders
                .matchAllQuery(), FilterBuilders.missingFilter(fieldName));
        }
    }
    (...)
}

Note that when the field isn't analyzed, a term query would be more relevant.
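
A hedged sketch of this choice, assuming the intermediate schema exposes the indexing hint through a hypothetical metadata.isAnalyzed lookup:

private QueryBuilder createEqualsQuery(String fieldName, Object value) {
    // metadata is a hypothetical accessor over the intermediate schema
    if (metadata.isAnalyzed(fieldName)) {
        // analyzed string field: the match query goes through the analyzer
        return QueryBuilders.matchQuery(fieldName, value);
    }
    // non-analyzed or non-string field: exact term matching applies
    return QueryBuilders.termQuery(fieldName, value);
}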

With a single operator like eq, the ElasticSearch query has only one level. The Olingo approach based on an expression visitor also lets us compose more complex queries. Let's take the example of an operator that combines two sub-queries, such as the OData query name eq 'MyProductName' and price eq 15. In this case, the following visitor methods will be called successively:

  • method visitMember with member name.
  • method visitLiteral with value 'MyProductName'.
  • method visitBinaryOperator with operator eq, which creates the first sub-query (query #1).
  • method visitMember with member price.
  • method visitLiteral with value 15.
  • method visitBinaryOperator with operator eq, which creates the second sub-query (query #2).
  • method visitBinaryOperator with operator and. The first parameter corresponds to query #1 and the second to query #2.

Having understood this, we can leverage an ElasticSearch filter to create our composite query within the method visitBinaryOperator, as described below:

@Override
public Object visitBinaryOperator(
                  BinaryOperatorKind operator, Object left, Object right)
                     throws ExpressionVisitException, ODataApplicationException {
    (...)
    if (BinaryOperatorKind.AND.equals(operator)) {
        return QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(),
                                  FilterBuilders.andFilter(
                                      FilterBuilders.queryFilter((QueryBuilder) left),
                                      FilterBuilders.queryFilter((QueryBuilder) right)));
    }
    (...)
}

We described here how to translate OData queries to ElasticSearch ones by leveraging the expression visitor of Olingo, taking the concrete examples of an equality query and a composite one.

In the next section, we will describe how to take nested fields into account within queries.

Handling queries on nested fields

In our support of the equals operator, we didn't take into account the fact that OData supports sub-fields. As a matter of fact, we can have something like this: details/fullName eq 'My product details'. The field details would be an OData complex field and an ElasticSearch nested field. For this use case, we need to extend our support of the operator to handle both cases:

  • normal fields with match or term queries
  • complex fields with nested queries.

The following code describes an adapted version of our method visitBinaryOperator to support this case:

@Override
public Object visitBinaryOperator(BinaryOperatorKind operator,
                  Object left, Object right)
                     throws ExpressionVisitException,
                            ODataApplicationException {
    if (BinaryOperatorKind.EQ.equals(operator)) {
        List<String> fieldNames = getFieldNamesAsList(left);
        if (fieldNames.size() == 1) {
            String fieldName = fieldNames.get(0);
            Object value = right;
            if (value!=null) {
                return QueryBuilders.matchQuery(fieldName, value);
            } else {
                return QueryBuilders.filteredQuery(QueryBuilders
                  .matchAllQuery(), FilterBuilders.missingFilter(fieldName));
            }
        } else if (fieldNames.size() > 1) {
            Object value = right;
            if (value!=null) {
                return QueryBuilders.nestedQuery(getRootFieldName(fieldNames),
                    QueryBuilders.matchQuery(
                              getTargetNestedFieldNames(fieldNames), value));
            } else {
                return QueryBuilders.nestedQuery(getRootFieldName(fieldNames),
                    QueryBuilders.filteredQuery(QueryBuilders
                  .matchAllQuery(), FilterBuilders.missingFilter(
                           getTargetNestedFieldNames(fieldNames))));
            }
        }
        (...)
    }
    (...)
}

The last point we will cover here is the ability to parameterize the subset of data returned.

Parameterizing the returned data

OData makes it possible to specify a subset of the data to return. This also applies to queries, based on the following query parameters:

  • $select to specify which fields will be included in returned entities
  • $top to specify the maximum number of returned entities
  • $skip to specify the index of the first entity of the returned subset

The last two parameters are particularly convenient for implementing data pagination with OData. For example, a hypothetical request like http://myservice.org/odata.svc/products?$top=10&$skip=20 would return the third page of ten products.

Such parameters can be used to parameterize the ElasticSearch search request, as described below:

public void configureSearchQuery(
                       SearchRequestBuilder requestBuilder,
                       SelectOption selectOption, OrderByOption orderByOption,
                       SkipOption skipOption, TopOption topOption) {
    if (selectOption!=null) {
        for (SelectItem selectItem : selectOption.getSelectItems()) {
            requestBuilder.addField(selectItem.getResourcePath()
                                      .getUriResourceParts().get(0).toString());
        }
    }

    if (topOption!=null) {
        requestBuilder.setSize(topOption.getValue());
    } else {
        requestBuilder.setSize(DEFAULT_QUERY_DATA_SIZE);
    }

    if (skipOption!=null) {
        requestBuilder.setFrom(skipOption.getValue());
    }
}


Handling multiple actions for a POST method

When we go beyond the CRUD scope of REST, we often need to support several actions for a same resource. This is typically handled with a POST method, and we need to implement processing that routes the request to the right method of the resource class, since the payloads of such requests and responses can differ.

In addition, we want to leverage the conversion support provided by REST frameworks to directly work on beans for the payloads.

In this context, we generally use a dedicated header to specify the action to execute. In the following, we will use a custom header named x-action.

In this post, we will describe how to implement such use cases with REST frameworks like Restlet and JAX-RS compliant ones.

Use case

In general, the POST method of a list resource is used to create a corresponding element. But we can imagine needing several actions for a POST method:

  • an action to add a set of elements. In this case, the input content corresponds to an array.
  • an action against the list itself, like reorder, clear, and so on.

The use of method POST for actions can also be used for other kinds of resources.

With REST, a wrong approach consists in putting the action names in the resource path itself. With the previous examples, we would have /elements/reorder or /elements/clear. A better approach is to use a specific header to specify which action must be executed.
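
For illustration, here is a hedged sketch of a Restlet client call selecting the list action through this header (setting the org.restlet.http.headers request attribute is the standard Restlet way to send custom headers, as we will also see on the server side):

ClientResource resource = new ClientResource("http://myservice.org/elements");
Series<Header> headers = new Series<Header>(Header.class);
headers.add("x-action", "list");
resource.getRequestAttributes().put("org.restlet.http.headers", headers);
List<TestBean> beans = (...);  // the payload to send
// the converter service serializes the beans into the request payload
resource.post(beans);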

Moreover, when implementing such an approach with Java REST frameworks, we can generally work with low-level elements but also with beans describing structured request and response contents. So we need to find a way to select the right method to invoke for the action processing and to pass it the right parameters.

With Restlet

Restlet provides no out-of-the-box support for this. At the moment, only query parameters can be used declaratively within a Post annotation to select a request. The following code shows how to use this feature:

@Post("?action=single")
public void handleSingleAdd(TestBean contact) {
    (...)
}

With Restlet, we need to implement an annotated method that routes the request to the right handling method. This routing method must work with the low-level Restlet API to access the custom header, and directly use the converter service to create the right object instances from the requests.

The following code describes how to implement such processing. We introduce a method getInputObject to convert the input content into beans and handle errors if the right content isn't provided.

private <T> T getInputObject(Representation representation, Class<T> clazz) {
    try {
        return getConverterService().toObject(representation, clazz, this);
    } catch (Exception ex) {
        throw new ResourceException(
              Status.CLIENT_ERROR_UNPROCESSABLE_ENTITY);
    }
}

@SuppressWarnings("unchecked")
@Post
public Representation handleAction(Representation representation)
                                                             throws IOException {
    Series<Header> headers = (Series<Header>)
       getRequestAttributes().get("org.restlet.http.headers");

    String actionHeader = headers.getFirstValue("x-action", "single");
    if ("single".equals(actionHeader)) {
        TestBean bean = getInputObject(representation, TestBean.class);
        TestBean returnedBean = handleSingleAction(bean);
        return getConverterService().toRepresentation(returnedBean);
    } else if ("list".equals(actionHeader)) {
        List<TestBean> beans = getInputObject(representation, List.class);
        List<TestBean> returnedBeans = handleMultipleAction(beans);
        return getConverterService().toRepresentation(returnedBeans);
    } else {
        throw new ResourceException(Status.CLIENT_ERROR_BAD_REQUEST);
    }
}

With JAX-RS

With JAX-RS, there are two possible ways to implement such a feature. The first one is based on a filter, while the other one is implemented directly within the resource. Let's start with the first one.

Approach #1

JAX-RS makes it possible to define pre-matching filters that are called before the resource invocation, and even before the framework chooses which resource class and method will handle the request. With this approach, we are able to update the requested URI to tell the resource which method to use.

The following code describes the implementation of such a filter:

@PreMatching
@Provider
public class PreMatchingFilter implements ContainerRequestFilter {
    @Context
    private ResourceInfo resourceInfo;

    @Context
    private UriInfo uriInfo;

    @Override
    public void filter(ContainerRequestContext requestContext)
                                                                       throws IOException {
        String xActionValue = requestContext.getHeaderString("x-action");
        if ("list".equals(xActionValue)) {
            requestContext.setRequestUri(
                   URI.create(uriInfo.getRequestUri() + "/list"));
        } else {
            requestContext.setRequestUri(
                   URI.create(uriInfo.getRequestUri() + "/single"));
        }
    }
}

The corresponding resource implementation provides a sub-path for each of its methods to select which one will be called in each case:

@Path("/beans")
public class BeansResource {
    @POST
    @Path("/single")
    public void testContent(TestBean content) {
        (...)
    }

    @POST
    @Path("/list")
    public void testContent(List<TestBean> content) {
        (...)
    }
}

The main drawback of this approach is that the sub-paths are publicly defined and can potentially be called directly by the client.

Approach #2

The second approach handles the routing of the request directly within the resource class. This feature is part of the JAX-RS specification and is called sub-resource locators. It gives us some control over the resource chosen to handle the request.

We need to define an abstract class that will be returned by a JAX-RS annotated locator method for the path supporting several actions. According to the value of the custom header, we return a subclass that provides the processing for the corresponding case.

The following code describes an implementation of this approach within a resource class:

@Path("/beans")
public class BeansResource {
    public static abstract class AbstractHeaderResource {
    }

    @Path("/")
    public AbstractHeaderResource doSomething(
                  @HeaderParam("x-action") String action) {
        if ("list".equals(action)) {
            return new ListResource();
        } else {
            return new SingleResource();
        }
    }

    public static class SingleResource extends AbstractHeaderResource {
        @POST
        public Response doSomething(TestBean bean) {
            (...)
            return Response.ok("single action").build();
        }
    }

    public static class ListResource extends AbstractHeaderResource {
        @POST
        public Response doSomething(List<TestBean> beans) {
            (...)
            return Response.ok("list action").build();
        }
    }
}

Note that the class AbstractHeaderResource can define an abstract method if the actions all manage the same content format:

public static abstract class AbstractHeaderResource {
    @POST
    public abstract Response doSomething(TestBean bean);
}


Getting started with RestEasy

RestEasy is the REST framework from JBoss and can be reached from the JBoss web site. This framework implements the JAX-RS specification and allows RESTful services to be implemented. The latter can be deployed within any Java EE Web container, not only the JBoss application server.

In this post, we will describe how to initialize a simple RESTful application with RestEasy. While it doesn't seem too complicated, there are some details not to forget in order to make it work.

Configuring the project

The simplest way to configure a RestEasy application is to use Maven and define the required libraries as dependencies in the file pom.xml, as described below:

<project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                        http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>resteasy.test</groupId>
    <artifactId>JAXRS-RESTEasy</artifactId>
    <packaging>war</packaging>
    <version>0.0.1-SNAPSHOT</version>

    <properties>
        <java-version>1.7</java-version>
        <resteasy-version>3.0.10.Final</resteasy-version>
        <jackson-version>2.2.2</jackson-version>
        <wtp-version>2.0</wtp-version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-jaxrs</artifactId>
            <version>${resteasy-version}</version>
        </dependency>

        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-servlet-initializer</artifactId>
            <version>${resteasy-version}</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.jaxrs</groupId>
            <artifactId>jackson-jaxrs-json-provider</artifactId>
            <version>${jackson-version}</version>
        </dependency>
    </dependencies>

    (...)
</project>

We then need to add the JBoss Maven repository to get the RestEasy dependencies, as described below:

<project>
    (...)
    <repositories>
        <repository>
            <id>JBoss repository</id>
            <url>https://repository.jboss.org/nexus/content/groups/public-jboss/</url>
        </repository>
    </repositories>
    (...)
</project>

Since we want to integrate our project within Eclipse and WTP, we need to add the following build configuration:

<project>
    (...)
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>${java-version}</source>
                    <target>${java-version}</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>install</id>
                        <phase>install</phase>
                        <goals>
                            <goal>sources</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-eclipse-plugin</artifactId>
                <configuration>
                    <wtpapplicationxml>true</wtpapplicationxml>
                    <wtpversion>${wtp-version}</wtpversion>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

We are now ready to configure our application.

Configuring the JAX-RS application

The first thing to do is to add a file called javax.servlet.ServletContainerInitializer under the folder WEB-INF/services, with the following content:

org.jboss.resteasy.plugins.servlet.ResteasyServletInitializer

With this approach, the file web.xml can remain empty, as described below, but it must be present within your application.

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns="http://java.sun.com/xml/ns/javaee"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
                 id="JAXRS-RESTEasy" version="3.0">
    <display-name>JAXRS-RESTEasy</display-name>
</web-app>

Note that the following link gives interesting hints for configuring RestEasy 3: http://docs.jboss.org/resteasy/docs/3.0.4.Final/userguide/html/Installation_Configuration.html#d4e111.

The last thing to do before implementing the application classes themselves is to implement the application class. The latter must extend the JAX-RS class Application and define the path of the application, / in our case.

@ApplicationPath("/")
public class RESTEasyApplication extends Application {

}

Implementing a resource

A resource in JAX-RS simply consists of a class annotated with the annotations of the specification. We first need to specify the path to reach the resource using the annotation Path, as described below:

@Path("/ping")
public class PingResource {
    (...)
}

Now that this is done, we can add methods to serve the different HTTP methods. We can leverage annotations to specify the corresponding HTTP method, extend the root path of the resource, and inject path and query parameters into the method parameters. Below is the content of a simple resource method:

@GET
@Path("/{pathParameter}")
public Response pong(
              @PathParam("pathParameter") String pathParameter,
             @DefaultValue("1000") @QueryParam("queryParameter")
             int queryParameter) {

    String response = "Pong - pathParameter : " + pathParameter
                                 + ", queryParameter : " + queryParameter;

    return Response.status(200).entity(response).build();
}

In the real world, we don't have such simple content: RESTful services use structured content with formats like JSON, XML, or YAML. JAX-RS, like most REST frameworks, provides a way to automatically convert beans to content using tools like Jackson. As we saw when defining the dependencies in the file pom.xml, we included the Jackson provider for JAX-RS, so we can rely on such automatic conversion, as described below:

@Path("/contacts")
public class ContactListResource {

    @GET
    public List<ContactBean> getContacts() {
        // Get contacts from database
        List<ContactBean> contacts = (...)

        return contacts;
    }

    @POST
    public void addContact(ContactBean contact) {
        // Add contact into database
        (...)
    }
}

At this point, we have a RESTful application that we can deploy into Tomcat 7 and test.

Deploying into Tomcat 7

We don't describe here how to install a Tomcat 7 server within WTP in the preferences (Server > Runtime Environments) and create an instance of such a server in the Servers view. However, we must not forget to check the right configuration property in the server instance configuration (reachable from the server context menu).

After having added the project to the WTP Tomcat 7 server, we can start it.

We can see the following traces during the Tomcat server startup, showing that our application RESTEasyApplication was deployed:

(...)
Mar 06, 2015 12:01:43 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor (...)/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/conf/Catalina/localhost/JAXRS-RESTEasy.xml
Mar 06, 2015 12:01:43 PM org.apache.catalina.startup.SetContextPropertiesRule begin
WARNING: [SetContextPropertiesRule]{Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:TestResteasy' did not find a matching property.
Mar 06, 2015 12:01:43 PM org.jboss.resteasy.spi.ResteasyDeployment
INFO: Deploying javax.ws.rs.core.Application: class resteasy.test.RESTEasyApplication
Mar 06, 2015 12:01:43 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deployment of configuration descriptor (...)/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/conf/Catalina/localhost/JAXRS-RESTEasy.xml has finished in 804 ms
Mar 06, 2015 12:01:43 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-bio-8080"]
Mar 06, 2015 12:01:43 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["ajp-bio-8009"]
Mar 06, 2015 12:01:43 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 1304 ms

Testing your RESTful application

There are several HTTP clients that we can use to test a RESTful application. The most famous are curl on the command line and Postman in Chrome if we want a friendly user interface.

Here are the curl commands to interact with our service:

curl -X GET "http://localhost:8080/JAXRS-RESTEasy/ping/path?queryParameter=12"
Pong - pathParameter : path, queryParameter : 12

curl -X POST -d '{"firstName":"first name", "lastName":"last name"}' -H "Content-Type: application/json" http://localhost:8080/JAXRS-RESTEasy/contacts


Exception handling with Restlet

Restlet provides several approaches to handle exceptions on both the client and server sides. You can choose to stay close to the Restlet API itself, or use a higher-level approach based on custom exceptions and / or annotated exceptions.

On the server side

Restlet provides several approaches to handle exceptions on the server side. They correspond to different needs and add flexibility when generating the corresponding response.

Basic approach for exceptions

Restlet lets us throw two kinds of exceptions within the annotated methods of server resources:

  • The exception ResourceException itself, which corresponds to any exception that can occur within a server resource.
  • User-defined exceptions (both checked and unchecked). In the case of checked exceptions, we need to add a throws clause to the method signatures. The Restlet engine will then catch them and wrap them within a ResourceException.

That said, Restlet provides a level of indirection to deduce the response status code and content when an exception is thrown within a server resource. As a matter of fact, the exception ResourceException is generic and only lets us specify a status code when creating it. A good approach is to implement user-defined exceptions describing the potential errors.
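
For instance, here is a minimal sketch of this basic approach, throwing a ResourceException with an explicit status from an annotated method (the validation logic is illustrative):

@Put
public void updateContact(Contact contact) {
    if (contact.getName() == null) {
        // mapped by Restlet to a 422 response
        throw new ResourceException(
                Status.CLIENT_ERROR_UNPROCESSABLE_ENTITY);
    }
    // (...) actual update processing
}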

Then comes the status service of Restlet. This service is responsible for creating the response content when an exception is thrown. The default implementation returns, as text, the description that comes along with a particular status code.

Restlet allows service developers to override it and provide their own implementation.

Registering a custom status service can be simply done within the Restlet application class using the method setStatusService, as described below:

public class MyApplication extends Application {
    (...)

    public MyApplication() {
        (...)
        setStatusService(new MyCustomStatusService());
    }
}

A custom status service needs to extend the class StatusService and override the following methods:

  • Method toStatus to determine the response status code from the thrown exception
  • Method toRepresentation to build the corresponding representation content

The following code describes the skeleton of a custom status service:

public class MyCustomStatusService extends StatusService {
    @Override
    public Representation toRepresentation(Status status, Request request,
                               Response response) {
         Throwable t = status.getThrowable();
         (...)
         Representation result = (...)
         return result;
    }

    @Override
    public Status toStatus(Throwable t, Request request,
                               Response response) {
        Status status = (...)
        return status;
    }
}

Note that since version 2.3 of Restlet, overriding the method toStatus isn't required anymore, since the status code associated with a user-defined exception can be defined in the exception class itself with an annotation.

Within the method toStatus, we simply need to check which exception was thrown and return the corresponding status code. The following code describes an implementation:

@Override
public Status toStatus(Throwable t, Request request,
                           Response response) {
    if (t instanceof MyValidationException) {
        return Status.CLIENT_ERROR_UNPROCESSABLE_ENTITY;
    } else if ((...)) {
    }
    return Status.CLIENT_ERROR_BAD_REQUEST;
}

Implementing the method toRepresentation can be a bit more complex. We generally need to check whether the client wants to receive a structured representation or a user-friendly message (in HTML, for example).

So we first need to implement a utility method that checks this:

private boolean isHtmlContentRequested(Request request) {
    // Get accept media types for the client
    ClientInfo clientInfo = request.getClientInfo();
    List<Preference<MediaType>> mediaTypes
                        = clientInfo.getAcceptedMediaTypes();
    for (Preference<MediaType> mediaType : mediaTypes) {
        // Check if the media type is HTML
        if (MediaType.TEXT_HTML.equals(mediaType.getMetadata())) {
            return true;
        }
    }
    return false;
}

Based on this method, the skeleton of the method toRepresentation will be the following:

@Override
public Representation toRepresentation(
                 Status status, Request request, Response response) {
    // According to the preferred media type
    if (isHtmlContentRequested(request)) {
        // return HTML representation
        return toHtmlRepresentation(status, request, response);
    } else {
        // return structured representation of the error
        return toStructuredRepresentation(status, request, response);
    }
}

We don't provide a full implementation of the method toHtmlRepresentation since there are different ways to implement it (based on the class FileRepresentation, template engines, and so on); a minimal sketch follows. We will then focus on the implementation of the method toStructuredRepresentation, which aims to build a data representation for errors.
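
Here is such a minimal sketch, assuming we settle for an inline HTML page:

private Representation toHtmlRepresentation(
                 Status status, Request request, Response response) {
    // a bare-bones, human-readable error page
    String html = "<html><body><h1>Error " + status.getCode() + "</h1>"
            + "<p>" + status.getDescription() + "</p></body></html>";
    return new StringRepresentation(html, MediaType.TEXT_HTML);
}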

The method toStructuredRepresentation creates a bean from the exception content and then leverages the converter service to convert this bean into a representation. The following snippet describes a sample implementation:

private Representation toStructuredRepresentation(
                   Status status, Request request, Response response) {
    Representation result = null;

    Throwable ex = status.getThrowable();
    Object o = getBeanFromException(ex);

    if (o != null) {
        List<org.restlet.engine.resource.VariantInfo> variants
                    = org.restlet.engine.converter.ConverterUtils
                                                   .getVariants(o.getClass(), null);
        if (!variants.contains(VARIANT_HTML)) {
            variants.add(VARIANT_HTML);
        }
        Variant variant = getConnegService().getPreferredVariant(
                                     variants, request, getMetadataService());
        try {
            result = getConverterService().toRepresentation(o, variant);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return result;
    }

    return super.toRepresentation(status, request, response);
}

Since version 2.3, Restlet also provides a more flexible way to manage error representations without having to provide a custom status service.

Using annotated exceptions

Since version 2.3, Restlet introduces the concept of annotated exceptions, along with the classical annotated interfaces of Restlet. This feature allows such user-defined exceptions to be serialized as beans within the returned response, associated with a particular status code.

For example, say you want to map a request that adds a contact onto a bean Contact. You can use something like this:

public interface MyService {
    @Post
    Contact addContact(Contact contact);
}

You can use custom exceptions in this context and throw them, as described below:

public interface ContactResource {
    @Post
    Contact addContact(Contact contact) throws MyValidationException;
}

This exception can use the annotation Status, as described below:

@Status(value = 400, serialize = true)
public class MyValidationException extends RuntimeException {
    public MyValidationException(String message, Exception e) {
        super(message, e);
    }
}

This exception can be thrown on the server side and serialized in the response as a bean (using its fields) thanks to the converter feature of Restlet. Note that we can add custom fields to the exception.

For example, here, we could have a JSON content like this:

{
    "message": "my validation message"
}

Let's now tackle how to handle exceptions on the client side with Restlet.

On the client side

As on the server side, two approaches are available to handle exceptions and errors on the client side with Restlet.

Basic approach

By default, Restlet throws a ResourceException when a status code other than 2xx is received in a response. We need to catch this exception to detect that an error occurred.

While we don't have direct access to the representation returned by the call in this case, it's nevertheless present in the response data and accessible using the method getResponseEntity of the class ClientResource.

The following snippet describes this:

ClientResource clientResource = (...)
JSONObject jsonObj = (...)
try {
    Representation representation = clientResource.post(
                                        new JsonRepresentation(jsonObj));
    (...)
} catch (ResourceException ex) {
    Representation responseRepresentation
                           = clientResource.getResponseEntity();
    JsonRepresentation jsonRepr
                   = new JsonRepresentation(responseRepresentation);
    JSONObject errors = jsonRepr.getJsonObject();
    (...)
}

Note that this approach can also be used along with annotated interfaces.

Catching annotated exceptions

On the client side, when the response is received, this custom exception will be thrown instead of a plain ResourceException, and the content of the response (our JSON content, for example) will be deserialized into the fields of the exception.

The following code describes how to use this approach on the client side along with the annotated interfaces one:

ClientResource cr = new ClientResource("http://...");
ContactResource contactResource = cr.wrap(ContactResource.class);
try {
    Contact newContact = new Contact();
    newContact.setName("my name");
    Contact addedContact = contactResource.addContact(newContact);
} catch(MyValidationException ex) {
    String errorMessage = ex.getMessage();
    (...)
}

In this context, the code of the exception MyValidationException described above is reused.


Optimizing Restlet server applications

There are several ways to optimize Restlet applications. We can leverage features of the HTTP protocol itself, like caching or compression, but also third-party tools such as Restlet converters and template engines. This results in an application that provides quicker responses and optimized contents.

Using caching

One possible optimization is to avoid serving related resources (like images or CSS) every time a particular resource with HTML content is loaded. One approach is to use the cache support provided by HTTP.

We describe here how to apply browser caching for static elements. For elements loaded from a path containing a subfolder called nocache, no-cache headers will be automatically added; for the others, an expiration date (one day in the code below) will be specified in the headers.

This feature can simply be added with Restlet using filters within the method createInboundRoot of your application class. A filter containing the caching logic has to be added in front of the Restlet Directory that serves the static content, as described below:

router.attach("/static", new Filter(getContext(),
                        new Directory(getContext(), (...))) {
    protected void afterHandle(Request request, Response response) {
        super.afterHandle(request, response);
            [adding caching stuff here]
        }
});

Once the filter is added to the processing chain, we have to set the caching headers based on the Representation and Response objects. Adding the CacheDirective.noCache directive to the response cache directives triggers the related no-cache headers. For the expiration date, the method setExpirationDate of the class Representation defines how long the element content remains valid before being reloaded. The complete code follows:

router.attach("/static", new Filter(getContext(),
                         new Directory(getContext(), (...))) {
    protected void afterHandle(Request request, Response response) {
        super.afterHandle(request, response);
        if (response.getEntity() != null) {
            if (request.getResourceRef().toString(false, false)
                                                        .contains("nocache")) {
                response.getEntity().setModificationDate(null);
                response.getEntity().setExpirationDate(null);
                response.getEntity().setTag(null);
                response.getCacheDirectives().add(
                                                  CacheDirective.noCache());
            } else {
                response.setStatus(Status.SUCCESS_OK);
                Calendar c = new GregorianCalendar();
                c.setTime(new Date());
                c.add(Calendar.DAY_OF_MONTH, 1);
                response.getEntity().setExpirationDate(c.getTime());
                response.getEntity().setModificationDate(null);
            }
        }
    }
});

Compressing content

Modern browsers support compression for received content. This reduces the payload of exchanged data. Restlet supports this feature for server-side applications with the class Encoder. The latter can take its place within the processing chain, like routers, authenticators, and filters. You simply need to configure it within the method createInboundRoot of your application class, as described below:

// The two boolean parameters respectively disable encoding of incoming
// (request) entities and enable encoding of outgoing (response) entities
Encoder encoder = new Encoder(
             getContext(), false, true, getEncoderService());
encoder.setNext(router);
return encoder;

Configuring specific converters

The converter feature of Restlet automatically converts beans to / from representation content, transparently, so that beans can be used directly at the level of the annotated methods of server resources.

This feature commonly relies on third-party tools. The most commonly used converter is the one based on Jackson. It supports several formats like XML, JSON, and YAML.

Putting the corresponding Restlet extensions on the classpath is enough to use converters. As a matter of fact, they are automatically registered against the Restlet engine.
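
As an illustration, here is a minimal sketch of a server resource relying on this feature (the Contact bean and the resource itself are hypothetical): the bean returned by the annotated method is transparently converted, to JSON in this case.

import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

// Hypothetical server resource: with the Jackson extension on the
// classpath, the returned bean is automatically serialized to JSON.
public class ContactServerResource extends ServerResource {

    @Get("json")
    public Contact represent() {
        Contact contact = new Contact();
        contact.setName("my name");
        return contact;
    }
}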

To configure such a converter, we first need to get the instance registered against the engine. The following snippet describes how to get an instance of the Jackson converter:


private JacksonConverter getRegisteredJacksonConverter() {
    JacksonConverter jacksonConverter = null;
    List<ConverterHelper> converters
             = Engine.getInstance().getRegisteredConverters();
    for (ConverterHelper converterHelper : converters) {
        if (converterHelper instanceof JacksonConverter) {
            jacksonConverter = (JacksonConverter) converterHelper;
            break;
        }
    }
    return jacksonConverter;
}

Now that we have an instance of the converter, we can get its associated ObjectMapper, which allows configuring the serialization and deserialization performed by the tool. In the following code, we describe how to set a serialization property so that null values are not included within the generated content:

private void configureJacksonConverter() {
    JacksonConverter jacksonConverter = getRegisteredJacksonConverter();

    if (jacksonConverter != null) {
        ObjectMapper objectMapper = jacksonConverter.getObjectMapper();
        // Don't include null values within the generated content
        objectMapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
    }
}
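
This configuration needs to be applied only once, before the application serves requests. A minimal sketch, assuming the two previous methods belong to your Application subclass, is to trigger it when the inbound root is created:

@Override
public Restlet createInboundRoot() {
    // Configure the registered Jackson converter once at startup
    configureJacksonConverter();

    Router router = new Router(getContext());
    // (...) attach server resources here
    return router;
}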

Configuring template engines

If we need to create representation contents ourselves on the server side, with formats like HTML, or even XML or JSON, template engines can be used within server resources. Restlet provides dedicated sub-classes of the class Representation for some of them. It's the case for Freemarker.

Such engines can be configured to make content generation more efficient. For example, we can configure Freemarker to load templates once and share them across all content generations. For this, we need to set the property cacheStorage with an instance of the class StrongCacheStorage. This can be done within the Restlet application class, so it's then accessible to all server resources, as described below:


public class SampleApplication extends Application {

    (...)
    private Configuration configuration;

    public static Configuration configureFreeMarker(Context context) {
        Configuration configuration = new Configuration();
        ClassTemplateLoader loader = new ClassTemplateLoader(
                                          SampleApplication.class,
                                          "/org/myapp/sample/server/templates/");
        configuration.setTemplateLoader(loader);
        // Load templates once and share them across all content generations
        configuration.setCacheStorage(new StrongCacheStorage());
        return configuration;
    }

    public Configuration getConfiguration() {
        return configuration;
    }
}
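
The (...) placeholder above hides how the configuration field gets initialized. A minimal sketch of this wiring, assuming it happens when the inbound root is created and that the /users path is purely illustrative, could be:

@Override
public Restlet createInboundRoot() {
    // Build the shared Freemarker configuration once for the whole application
    this.configuration = configureFreeMarker(getContext());

    Router router = new Router(getContext());
    router.attach("/users", MyServerResource.class);
    return router;
}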

The class TemplateRepresentation of the Freemarker extension of Restlet can then be used within the server resources, as described below:


public class MyServerResource extends ServerResource {

    private SampleApplication getSampleApplication() {
        return (SampleApplication) getApplication();
    }

    private Representation toRepresentation(Map<String, Object> map,
                                           String templateName, MediaType mediaType) {
        return new TemplateRepresentation(templateName,
                getSampleApplication().getConfiguration(), map, mediaType);
    }

    @Get("html")
    public Representation getHtml() {
        Map<String, Object> model = new HashMap<String, Object>();

        model.put("titre", "my title");
        model.put("users", getUsers());

        return toRepresentation(model,
            "myTemplate", MediaType.TEXT_HTML);
    }
}


Implementing a Spring custom namespace for Olingo

Spring is a lightweight container that implements dependency injection. It provides a convenient and extensible way to implement DSLs within its XML configuration, to make the configuration of particular concerns easier.

Olingo is a Java library that implements the Open Data Protocol (OData). Its version 4 provides support for versions 3 and 4 of the OData specification, on both the client and server sides.

We will describe here how to implement such a custom namespace, on a real-world use case, to easily configure the Olingo server side. This post is based on a contribution to Olingo. The following JIRA issue contains the source code as an attachment: OLINGO-562.

Let's begin by designing our custom namespace.

Designing the custom namespace

We first need to have a look at the Olingo code used to create an HTTP handler.

The first part consists in the creation of the service metadata, as described below:

OData odata = OData.newInstance();
EdmProvider edmProvider = getEdmProvider();

EdmxReference reference = new EdmxReference(
        URI.create("../v4.0/cs02/vocabularies/Org.OData.Core.V1.xml"));
reference.addInclude(new EdmxReferenceInclude(
        "Org.OData.Core.V1", "Core"));
List<EdmxReference> references = Arrays.asList(reference);
ServiceMetadata serviceMetadata = odata.createServiceMetadata(
                                                 edmProvider, references);

Then we can create the handler and register all processors for requests. The following code describes this:

List<Processor> odataProcessors = (...)
ODataHttpHandler dataHandler = odata.createHandler(serviceMetadata);
if (odataProcessors != null) {
    for (Processor odataProcessor : odataProcessors) {
        dataHandler.register(odataProcessor);
    }
}

We can notice that the previous code uses static methods to create instances. Whereas it's possible to use the factory method support of Spring, we'll rather use its powerful concept of FactoryBean. The latter provides an indirection level to get instances of beans that can't be created simply using a new. We will deal with this aspect in the section Implementing Spring FactoryBeans below.

The first thing to do, before diving into the design and the implementation of the namespace, is to try to configure what we want to obtain using the default XML configuration of Spring. This is very useful since implementing a Spring custom namespace actually consists in configuring the container programmatically (bean definitions, properties, and so on) based on the hints we get from the custom XML elements.

Configuring an HTTP handler for Olingo within a Spring context is a bit verbose with the default XML configuration. It would look something like this:

<bean id="httpHandler"
        class="org.apache.olingo.spring.factory.ODataHttpHandlerFactoryBean">
    <property name="odata" ref="odata" />
    <property name="serviceMetadata">
        <bean class="org.apache.olingo.spring.factory.ServiceMetadataFactoryBean">
            <property name="odata" ref="odata" />
            <property name="edmProvider" ref="edmProvider" />
            <property name="references">
                <list>
                    <bean class="org.apache.olingo.spring.factory.EdmxReferenceFactoryBean">
                        <property name="uri"
                             value="../v4.0/cs02/vocabularies/Org.OData.Core.V1.xml" />
                        <property name="includes">
                            <map>
                                <entry key="Org.OData.Core.V1" value="Core" />
                            </map>
                        </property>
                    </bean>
                </list>
            </property>
        </bean>
    </property>
    <property name="processors">
        <list>
            <ref bean="testProcessor" />
        </list>
    </property>
</bean>

Our custom namespace, taking into account all the necessary hints present in the previous XML configuration, could look something like this:

<olingo:http-handler id="httpHandler" edm-provider="edmProvider">
    <olingo:reference uri="../v4.0/cs02/vocabularies/Org.OData.Core.V1.xml">
        <olingo:include key="Org.OData.Core.V1" value="Core"/>
    </olingo:reference>
    <olingo:processor ref="testProcessor"/>
</olingo:http-handler>

We can notice that we could also provide support to make the declarative configuration of an EDM provider easier within a Spring configuration. We will come back to this at the end of the post.

Implementing Spring FactoryBeans

Our factory beans implement the two following interfaces:

  • FactoryBean. This interface defines the type of object to create (method getObjectType), whether it's a singleton (method isSingleton), and how to create it (method getObject).
  • InitializingBean. This interface allows checking that everything required to create the object has been injected into the factory.

The following code describes how to implement a factory bean that creates an HTTP handler:

public class ODataHttpHandlerFactoryBean implements
                FactoryBean<ODataHttpHandler>, InitializingBean {
    private OData odata;
    private ServiceMetadata serviceMetadata;
    private List<Processor> processors;

    @Override
    public ODataHttpHandler getObject() throws Exception {
        ODataHttpHandler handler = odata.createHandler(serviceMetadata);
        if (processors != null) {
            for (Processor processor : processors) {
                handler.register(processor);
            }
        }
        return handler;
    }

    @Override
    public Class<?> getObjectType() {
        return ODataHttpHandler.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        if (odata == null) {
            throw new IllegalArgumentException(
                               "The property odata is required.");
        }

        if (serviceMetadata == null) {
            throw new IllegalArgumentException(
                       "The property serviceMetadata is required.");
        }
    }

    // Getters and setters
    (...)
}

We now have everything we need to start implementing our custom namespace for Olingo.

Implementing the namespace handler

The first step consists in defining an XML schema for our grammar. This file will be used by Spring to validate that the configuration specified within a Spring XML file is correct. We won't dive into more details here. We provide below the XML schema for the definition of the configuration of an HTTP handler.

<xsd:schema
            xmlns="http://olingo.apache.org/schema/olingo/spring-olingo"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://olingo.apache.org/schema/olingo/spring-olingo"
            elementFormDefault="qualified" attributeFormDefault="unqualified">

    <xsd:import namespace="http://www.springframework.org/schema/beans"
             schemaLocation="http://www.springframework.org/schema/beans
                          http://www.springframework.org/schema/beans/spring-beans.xsd" />

    <xsd:element name="http-handler" type="httpHandlerBeanType">
     </xsd:element>

    <xsd:complexType name="httpHandlerBeanType">
        <xsd:sequence>
            <xsd:element name="reference"
                          minOccurs="1"
                          maxOccurs="unbounded"
                          type="referenceBeanType"/>
            <xsd:element name="processor"
                          maxOccurs="unbounded"
                          type="processorBeanType"/>
        </xsd:sequence>
        <xsd:attribute name="id" type="xsd:string" use="required"/>
        <xsd:attribute name="edm-provider" type="xsd:string" use="required"/>
    </xsd:complexType>

    <xsd:complexType name="referenceBeanType">
        <xsd:sequence>
            <xsd:element name="include"
                          maxOccurs="unbounded"
                          type="referenceIncludeBeanType"/>
        </xsd:sequence>
        <xsd:attribute name="uri" type="xsd:string" use="required"/>
    </xsd:complexType>

    <xsd:complexType name="referenceIncludeBeanType">
        <xsd:attribute name="key" type="xsd:string" use="required"/>
        <xsd:attribute name="value" type="xsd:string" use="required"/>
    </xsd:complexType>

    <xsd:complexType name="processorBeanType">
        <xsd:attribute name="ref" type="xsd:string" use="required"/>
    </xsd:complexType>
</xsd:schema>

Spring requires two files to be put within the folder META-INF/spring. They map the namespace URL used in the XML to the entities that handle it. This actually corresponds to the two following things:

  • the XML schema file to validate the XML configuration
  • the namespace handler class

First, we need to configure the namespace handler class for the namespace in the file spring.handlers:

http\://olingo.apache.org/schema/olingo/spring-olingo=
       org.apache.olingo.spring.config.OlingoNamespaceHandler

Then we do a similar thing for the XML Schema file in the file spring.schemas:

http\://olingo.apache.org/schema/olingo/spring-olingo.xsd=
                        org/apache/olingo/spring/config/spring-olingo.xsd

Now that the configuration is done, let's implement the class OlingoNamespaceHandler. The latter defines which elements are handled at the root level. We associate a dedicated parser with each of them in order to parse the custom grammar and use its hints to configure the corresponding beans in Spring. The following code describes the content of this class:

public class OlingoNamespaceHandler
                extends NamespaceHandlerSupport {
    public static final String HTTP_HANDLER_ELEMENT = "http-handler";

    public void init() {
        registerBeanDefinitionParser(HTTP_HANDLER_ELEMENT,
                                    new OlingoHttpHandlerBeanDefinitionParser());
    }
}

The parser delegates the parsing to a dedicated helper class called OlingoHandlerBeanDefinitionHelper. The latter is responsible for parsing the XML elements and creating the corresponding bean definition(s).

public class OlingoHttpHandlerBeanDefinitionParser
                             extends AbstractBeanDefinitionParser {
    @Override
    protected AbstractBeanDefinition parseInternal(Element element,
                                             ParserContext parserContext) {
        BeanDefinition configuration
               = OlingoHandlerBeanDefinitionHelper.parseHttpHandler(
                                             element, parserContext);
        return (AbstractBeanDefinition) configuration;
    }
}

Note: when the method parseInternal returns a bean definition, AbstractBeanDefinitionParser automatically registers it within the container under the resolved id. Alternatively, the parser can perform the registration itself, using the registry available from the ParserContext, and return null.
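
The following sketch illustrates this alternative with a hypothetical variant of the previous parser that performs the registration manually:

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.xml.AbstractBeanDefinitionParser;
import org.springframework.beans.factory.xml.ParserContext;
import org.w3c.dom.Element;

public class ManualRegistrationHttpHandlerParser
                             extends AbstractBeanDefinitionParser {
    @Override
    protected AbstractBeanDefinition parseInternal(Element element,
                                             ParserContext parserContext) {
        BeanDefinition definition
               = OlingoHandlerBeanDefinitionHelper.parseHttpHandler(
                                             element, parserContext);
        // Register the definition ourselves under the configured id...
        parserContext.getRegistry().registerBeanDefinition(
                            element.getAttribute("id"), definition);
        // ...and return null so that no automatic registration happens
        return null;
    }
}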

Spring provides an API for all the concepts supported when configuring beans within XML files:

  • BeanDefinitionBuilder. This provides a builder to make the programmatic creation of bean definitions easier.
  • BeanDefinition. This corresponds to a bean definition.
  • RuntimeBeanReference. This allows defining a property that references a bean.
  • ManagedList. This allows defining list properties. They can contain bean definitions, bean references, and plain values.
  • ManagedMap. This allows defining map properties. They can contain bean definitions, bean references, and plain values.

The following methods of the class BeanDefinitionBuilder allow linking bean definitions together, and actually define the dependency injection:

  • addPropertyValue. This method allows defining the value of a property. Its second parameter can be a primitive value, a bean definition, a bean reference, a list, or a map.
  • addConstructorArg. This method allows defining a constructor parameter. It accepts the same kinds of values as the previous method, as shown in the sketch below.
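
As a quick illustration of this API, here is a minimal sketch with a hypothetical MyBean class and hypothetical property names (note that recent Spring versions name the constructor methods addConstructorArgValue and addConstructorArgReference):

private static BeanDefinition buildSampleDefinition() {
    BeanDefinitionBuilder builder
            = BeanDefinitionBuilder.rootBeanDefinition(MyBean.class);
    // A plain value for a property
    builder.addPropertyValue("name", "a value");
    // A reference to another bean defined in the container
    builder.addPropertyValue("delegate", new RuntimeBeanReference("otherBean"));
    // A constructor argument
    builder.addConstructorArgValue("a constructor value");
    return builder.getBeanDefinition();
}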

We define some utility methods to get an element attribute value (method elementAttribute) and to create a bean definition builder for a specific class (method createBeanDefinitionBuilder).

private static String elementAttribute(
                       Element element, String name) {
    String value = element.getAttribute(name);
    return value.length() == 0 ? null : value;
}

private static BeanDefinitionBuilder
               createBeanDefinitionBuilder(Class<?> beanClass) {
    return BeanDefinitionBuilder.rootBeanDefinition(beanClass);
}

The code below corresponds to the complete processing that parses the XML element and generates the corresponding beans. It's a bit long, but it provides a concrete, real-life example of use.

public static BeanDefinition parseHttpHandler(
                         Element element, ParserContext parserContext) {
    BeanDefinitionBuilder httpHandler = createBeanDefinitionBuilder(
                                        HTTP_HANDLER_FACTORY_BEAN_CLASS);

    // OData
    BeanDefinitionBuilder odataBuilder = createBeanDefinitionBuilder(
                                        ODATA_FACTORY_BEAN_CLASS);
    BeanDefinition odata = odataBuilder.getBeanDefinition();

    httpHandler.addPropertyValue(ODATA_PROPERTY, odata);

    // ServiceMetadata
    BeanDefinitionBuilder serviceMetadata = createBeanDefinitionBuilder(
                                     SERVICE_METADATA_FACTORY_BEAN_CLASS);
    serviceMetadata.addPropertyValue(ODATA_PROPERTY, odata);

    String edmProviderRef = elementAttribute(
                                       element, EDM_PROVIDER_ATTR);
    serviceMetadata.addPropertyValue(EDM_PROVIDER_PROPERTY,
                                    new RuntimeBeanReference(edmProviderRef));

    // References
    List<Element> referenceElements
                                 = DomUtils.getChildElementsByTagName(
                                               element, REFERENCE_ELEMENT);
    if (referenceElements.size() > 0) {
        ManagedList<BeanDefinition> referenceList
                          = new ManagedList<BeanDefinition>(
                                                     referenceElements.size());
        for (Element referenceElement : referenceElements) {
            BeanDefinition reference = parseReference(
                                        referenceElement, parserContext);
            referenceList.add(reference);
        }
        serviceMetadata.addPropertyValue(
                 REFERENCES_LIST_PROPERTY, referenceList);
    }

    httpHandler.addPropertyValue(
                              SERVICE_METADATA_PROPERTY,
                              serviceMetadata.getBeanDefinition());

    // Processors
    List<Element> processorElements
                     = DomUtils.getChildElementsByTagName(
                                          element, PROCESSOR_ELEMENT);
    if (processorElements.size() > 0) {
        ManagedList<RuntimeBeanReference> processorList
                        = new ManagedList<RuntimeBeanReference>(
                                                      processorElements.size());
        for (Element processorElement : processorElements) {
            RuntimeBeanReference processorRef = parseProcessor(
                                                  processorElement, parserContext);
            processorList.add(processorRef);
        }
        httpHandler.addPropertyValue(
                        PROCESSORS_LIST_PROPERTY,
                        processorList);
    }

    AbstractBeanDefinition configurationDef = httpHandler.getBeanDefinition();
    return configurationDef;
}

The method parseReference parses the reference definition:

private static BeanDefinition parseReference(
           Element referenceElement, ParserContext parserContext) {
    BeanDefinitionBuilder reference = createBeanDefinitionBuilder(
                                          EDMX_REFERENCE_FACTORY_BEAN);

    String uri = elementAttribute(referenceElement, URI_ATTR);
    reference.addPropertyValue(URI_PROPERTY, uri);

    // Includes
    List<Element> includeElements = DomUtils.getChildElementsByTagName(
                                           referenceElement, INCLUDE_ELEMENT);
    if (includeElements.size() > 0) {
        ManagedMap<String, String> includeMap
                     = new ManagedMap<String, String>(
                                                includeElements.size());
        for (Element includeElement : includeElements) {
            String key = elementAttribute(includeElement, KEY_ATTR);
            String value = elementAttribute(includeElement, VALUE_ATTR);
            includeMap.put(key, value);
        }
        reference.addPropertyValue(INCLUDES_PROPERTY, includeMap);
    }

    return reference.getBeanDefinition();
}

The method parseProcessor parses the processor definition:

private static RuntimeBeanReference parseProcessor(
        Element processorElement, ParserContext parserContext) {
    String ref = elementAttribute(processorElement, REF_ATTR);
    return new RuntimeBeanReference(ref);
}

Implementing a Spring-aware servlet

Now that we have implemented a way to configure an Olingo HTTP handler, we need to implement a servlet that can get an instance of it from the Spring Web application context and use it. This servlet needs to do the following:

  • Reference the Spring Web application context
  • Get an instance of ODataHttpHandler from this context
  • Use this instance to serve OData requests

The following code describes the implementation of such a servlet:

public class OlingoSpringServlet extends HttpServlet {
    private WebApplicationContext context;
    private ODataHttpHandler httpHandler;

    private ODataHttpHandler getHttpHandler() throws ServletException {
        Map<String, ODataHttpHandler> odatas = context
                            .getBeansOfType(ODataHttpHandler.class);
        if (odatas.size() == 1) {
            return odatas.values().iterator().next();
        }

        throw new ServletException(
            "Exactly one OData HTTP handler must be configured"
                    + " in the Spring container.");
    }

    private void initializeApplicationContext(ServletConfig config)
                                               throws ServletException {
        context = WebApplicationContextUtils.getWebApplicationContext(
                                               config.getServletContext());

        if (context == null) {
            throw new ServletException(
                "No Spring container is configured within the Web application.");
        }
    }

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        initializeApplicationContext(config);
        httpHandler = getHttpHandler();
    }

    @Override
    protected void service(HttpServletRequest req,
           HttpServletResponse resp) throws ServletException,
                       IOException {
        try {
            httpHandler.process(req, resp);
        } catch (RuntimeException e) {
            throw new ServletException(e);
        }
    }
}

At this point, we have all the Spring support for Olingo implemented. Before finishing this post, we will describe how to use it within a Web application.

Using the namespace

To use the namespace, we first need to configure the Spring XML context within the file web.xml. The Spring ContextLoaderListener can be used for this. Our Spring-based servlet for Olingo can then be configured.

Following code describes the complete configuration within this file:

<web-app (...) id="OlingoWebApp" version="2.5">
    <display-name>Apache Olingo Spring</display-name>
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/applicationContext-namespace.xml</param-value>
    </context-param>
    <listener>
        <listener-class>
            org.springframework.web.context.ContextLoaderListener
        </listener-class>
    </listener>
    <servlet>
        <servlet-name>OlingoSpringServlet</servlet-name>
        <servlet-class>
            org.apache.olingo.providers.spring.OlingoSpringServlet
        </servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>OlingoSpringServlet</servlet-name>
        <url-pattern>/odata.svc/*</url-pattern>
    </servlet-mapping>
</web-app>

As you can see, the Spring configuration will be contained in the file applicationContext-namespace.xml, located in the folder WEB-INF. The first thing to do in this file is to define our namespace within the root XML element beans, as described below:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns:olingo="http://olingo.apache.org/schema/olingo/spring-olingo"
         xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd
           http://olingo.apache.org/schema/olingo/spring-olingo
               http://olingo.apache.org/schema/olingo/spring-olingo.xsd">
    (...)
</beans>

Now that this is done, we can use the XML namespace olingo to configure the Olingo HTTP handler.

<beans (...)>
    <olingo:http-handler id="httpHandler" edm-provider="edmProvider">
        <olingo:reference uri="../v4.0/cs02/vocabularies/Org.OData.Core.V1.xml">
            <olingo:include key="Org.OData.Core.V1" value="Core"/>
        </olingo:reference>
        <olingo:processor ref="testProcessor"/>
    </olingo:http-handler>

    <bean id="edmProvider" class="org.apache.olingo.spring.edm.GenericEdmProvider" />

    <bean id="testProcessor" class="org.apache.olingo.spring.config.TestProcessor" />
</beans>

We don't detail its implementation here, but the Spring Olingo support also provides a convenient way to configure an EDM provider using the Spring XML configuration:

<beans (...)>
    <olingo:edm-provider id="edmProvider">
        <olingo:schema namespace="mynamespace" alias="myalias">
            <olingo:entityContainer>
                <olingo:entitySet name="books" type="books"/>
            </olingo:entityContainer>
            <olingo:entityType name="books">
                <olingo:key property-name="id"/>
                <olingo:property name="id" type="Edm.Int32"/>
                <olingo:property name="name" type="Edm.String"/>
                <olingo:property name="genre" type="Edm.String"/>
                <olingo:property name="publisher" type="Edm.String"/>
            </olingo:entityType>
        </olingo:schema>
    </olingo:edm-provider>
</beans>
