Jetslide uses ElasticSearch as a Database


This post explains how one can use the search server ElasticSearch as a database. I’m using ElasticSearch as my only data storage system, because for Jetslide I want to avoid the maintenance and development time overhead that a separate system would require, be it a NoSQL, object or pure SQL database.

ElasticSearch is a really powerful search server based on Apache Lucene. So why can you use ElasticSearch as a single point of truth (SPOT)? Let us go through all – or at least my – requirements of a data storage system! Did I forget something? Add a comment :) !

CRUD & Search

You can create, read (see also realtime get), update and delete documents of different types. And of course you can perform full text search!
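
A minimal sketch of these operations via the REST API (the index, type and id names here are just examples):

# create/index a document
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{ "user" : "karussell", "text" : "hello world" }'
# read it back by id
curl -XGET 'http://localhost:9200/twitter/tweet/1'
# an update is simply indexing again under the same id; deleting works via DELETE
curl -XDELETE 'http://localhost:9200/twitter/tweet/1'
# full text search
curl -XGET 'http://localhost:9200/twitter/_search?q=text:hello'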

Multi tenancy

Multiple indices are very easy to create and to delete. This can be used to support several clients, or simply to put different types into different indices, just as one would create multiple tables for every type/class.
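
For example, creating and deleting one index per client is a one-liner each (index names made up):

curl -XPUT 'http://localhost:9200/client-a/'
curl -XPUT 'http://localhost:9200/client-b/'
curl -XDELETE 'http://localhost:9200/client-a/'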

Sharding and Replication

Sharding and replication is just a matter of numbers when creating the index:

curl -XPUT 'http://localhost:9200/twitter/' -d '
index :
    number_of_shards : 3
    number_of_replicas : 2'

You can even update the number of replicas afterwards ‘on the fly’. To update the number of shards of one index you have to reindex (see the reindexing section below).
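
For example, increasing the replica count of the index above on a live system could look like this:

curl -XPUT 'http://localhost:9200/twitter/_settings' -d '
{ "index" : { "number_of_replicas" : 3 } }'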

Distributed & Cloud

ElasticSearch can be distributed over a lot of machines. You can dynamically add and remove nodes (video). Additionally read this blog post for information about using ElasticSearch in ‘the cloud’.

Fault Tolerance & Reliability

ElasticSearch will recover from the last snapshot of its gateway if something ‘bad’ happens, like an index corruption or even a total cluster fallout – think time machine for search. Watch this video from Berlin Buzzwords (minute 26) to understand how the ‘reliable and asynchronous nature’ are combined in ElasticSearch.

Nevertheless I still recommend doing a backup from time to time to a different system (or at least a different hard disc), e.g. in case you hit ElasticSearch or Lucene bugs, or simply to be really safe :)

Realtime Get

When using Lucene you have a realtime latency, which basically means that if you store a document into the index, you’ll have to wait a bit until it appears in a subsequent search. Although this latency is quite small – only a few milliseconds – it is there and grows as the index grows. But ElasticSearch implements a realtime get feature in its latest version, which makes it possible to retrieve a document by its id even if it is not yet searchable!
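
A plain GET by id is such a realtime get – it returns the document even before a refresh has made it searchable:

curl -XGET 'http://localhost:9200/twitter/tweet/1'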

Refresh, Commit and Versioning

As I said, you have a realtime latency when creating or updating (aka indexing) a document. To update a document you can use the realtime get, merge the old and new state, and put the document back into the index. Another approach, which avoids further hits on ElasticSearch, would be to call refresh (or commit in Solr) on the index. But this is very problematic (e.g. slow) when the index is not tiny.

The good news is that you can again solve this problem with a feature of ElasticSearch – it is called versioning. This is identical to ‘application side’ optimistic locking in the database world: put the document into the index, and if this fails, e.g. merge the old state with the new one and try again. To be honest this requires a bit more thinking, using a failure queue or similar, but now I have a really well working system secured with unit tests.
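
A sketch of such an optimistic write: pass along the version you read, and ElasticSearch rejects the write with a conflict if the document was changed in the meantime, so you can merge and retry:

curl -XPUT 'http://localhost:9200/twitter/tweet/1?version=2' -d '
{ "user" : "karussell", "text" : "merged update" }'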

If you think about it, this is a really huge benefit over e.g. Solr. Even if Solr’s raw indexing is faster (no one has really done a good job comparing the indexing performance of Solr vs. ES), it requires a call of commit to make the documents searchable, which slows down the whole indexing process a lot compared to ElasticSearch, where you never really need to call the expensive refresh.


Reindexing

This is not necessary for a normal database, but it is crucial for a search server, e.g. to change an analyzer or the number of shards of an index. Reindexing sounds hard but can be implemented easily in ElasticSearch, even without a separate data storage. For Jetslide I’m not storing single fields only; I’m storing the entire document as JSON in the _source. This is necessary to fetch the documents from the old index and put them into the newly created one (with different settings).

But wait. How can I fetch all documents from the old index? Wouldn’t this be bad in terms of performance or memory for big indices? No, you can use the scan search type, which e.g. avoids scoring.
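
A sketch of such a scan query – the first request opens the scroll, then you page through the results with the returned _scroll_id until no more hits come back:

curl -XGET 'http://localhost:9200/userindex6/_search?search_type=scan&scroll=1m&size=100' -d '
{ "query" : { "match_all" : {} } }'
curl -XGET 'http://localhost:9200/_search/scroll?scroll=1m' -d '<the _scroll_id from the previous response>'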

Ok, but how can I replace my old index with the new one? Can this be done ‘on the fly’? Yes, you can simply switch the alias of the index:

curl -XPOST 'http://localhost:9200/_aliases' -d '{
"actions" : [
   { "remove" : { "index" : "userindex6", "alias" : "userindex" } },
   { "add" : { "index" : "userindex7", "alias" : "userindex" } }
]}'


Speed

Well, ElasticSearch is fast. But you’ll have to determine for yourself if it is fast enough for your use case, and compare it to your existing data storage system.

Feature Rich

ElasticSearch has a lot of features which you do not find in a normal database, e.g. faceting or the powerful percolator, to name only a few.
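
For example, the percolator works like a reversed search – you register a query and then ask which registered queries match a given document (the names here are made up):

# register a percolator query under the name 'java-alert'
curl -XPUT 'http://localhost:9200/_percolator/twitter/java-alert' -d '
{ "query" : { "term" : { "text" : "java" } } }'
# ask which registered queries match this document
curl -XGET 'http://localhost:9200/twitter/tweet/_percolate' -d '
{ "doc" : { "text" : "a java routing engine" } }'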


Conclusion

In this post I explained if and how ElasticSearch can be used as a database replacement. ElasticSearch is very powerful, but e.g. the versioning feature requires a bit of handwork. So working with ElasticSearch is more comparable to the JDBC or SQL world than to the ORM one. But I’m sure some ORM tools for ElasticSearch will pop up, although I prefer to avoid system complexity and will always use the ‘raw’ ElasticSearch, I guess.

Introducing Jetslide News Reader

Update: Jetslide is no longer online. Check out the projects snacktory and jetwick, which were used in Jetslide.


We are proud to announce the release of our Jetslide News Reader today! We know that there are a lot of services aggregating articles from your twitter timeline, some of them really nice. But as a hacker you’ll need a more powerful tool. You’ll need Jetslide. Read on to see why Jetslide is different, and read this feature overview. By the way: yesterday we open sourced our content extractor called snacktory.

Jetslide is different …

… because it divides your ‘newspaper’ into easily navigable topics, and Jetslide prints articles from your timeline first! So you are following topics and not (only) people. See the first article, which was referenced by a twitter friend and others; but Jetslide also prints articles from the public. See the second article, where the highest share count (187) comes from digg. Click to view the reality of today or browse older content with the links under the articles.

Jetslide is smart …

… enough to skip duplicate articles and to enhance your topics with related material. The relevance of every article is determined by an advanced algorithm (number of shares, quality, tweets, your browser language, …) with the help of my database ElasticSearch – more on this in a later blog post.

And you can use a lot of geeky search queries to get what you want.

Jetslides are social

As pointed out under ‘Jetslide is different’, you’ll see articles posted in your twitter timeline first. But there are other features which make Jetslide more ‘social’. First, you get suggestions of users who have the same or similar interests stored in their Jetslide. And second, Jetslide enables you to see others’ personal Jetslide by adding e.g. the parameter owner=timetabling to the URL.

Jetslide means RSS 3.0

You can even use the boring RSS feed.

But this is less powerful. The recommended way to ‘consume’ your topics is via RSS 3.0 ;)

Log in to Jetslide and select “Read Mode: Auto”. Then every time you hit the ‘next’ arrow (or CTRL+right), the currently viewed articles will be marked as read, and only newer articles will pop up the next time you slide through. This way you can slide through your topics and come back whenever you want: after 2 hours or after 2 days (at the moment up to 7 days). In Auto-Read-Mode you’ll always see only what you have missed and what is relevant!

This is the most important reason why we do not call Jetslide a search engine but a news service.

Jetslides are easily shareable

… because a Jetslide is just a URL – viewable on desktops, smartphones and even WAP browsers.


Snacktory – Yet another Readability clone. This time in Java.

For Jetslide I needed a Readability clone in Java. There are already some tools, but I wanted some more and different features, so I adapted the existing goose and jreadability and added some stuff. Check out the detection quality at Jetslide and fork it to improve it – as of today snacktory is free software :) !

Copied from the README:

This is a small helper utility for people who don’t want to write yet another Java clone of Readability. In most cases, this is applied to articles, although it should work for any website to find its major area and extract its text and its important picture. Have a look at Jetslide, where Snacktory is used. Jetslide is a new way to consume news: it does not only display the website’s title, it displays a small preview of the site (‘a snack’) and the important image, if available.
The software is licensed under the Apache 2 License and comes with NO WARRANTY
Snacktory borrows some ideas from jReadability and goose (ideas + a lot test cases)
The advantages over jReadability are
  • better article text detection than jReadability
  • only Java deps
  • more tests
The advantages over Goose are
  • similar article text detection, although better detection for non-English sites (German, Japanese, …)
  • snacktory does not depend on the word count in its text detection to support CJK languages
  • no external Services required to run the core tests => faster tests
  • better charset detection
  • with caching support
  • skipping some known filetypes
The disadvantages compared to Goose are
  • only the detection of the top image and the top text is supported at the moment
  • some tests which passed in goose do not pass in snacktory. But I added a bunch of other useful test sites (stackoverflow, facebook, other languages, …)
HtmlFetcher fetcher = new HtmlFetcher();
// optionally set a cache, e.g. using the map implementation from google collections:
// fetcher.setCache(new MapMaker().concurrencyLevel(20)
//         .maximumSize(count).expireAfterWrite(minutes, TimeUnit.MINUTES).makeMap());
JResult res = fetcher.fetchAndExtract(url, resolveTimeout, true);
res.getText();
res.getTitle();
res.getImageUrl();