A Bird’s Eye View of the ElasticSearch Query DSL

I’ve copied the whole post into a gist so that you can simply clone it, copy and paste the important parts and even contribute easily.

Several times per month questions regarding the query structure pop up on the ElasticSearch user group.

Although there are good docs explaining this in depth, a bird’s eye view of the Query DSL is probably necessary to understand what is written there. There is already some good external documentation available, and there have been attempts to define a schema, but nevertheless I’ll add my 2 cents here. I assume you have set up your ElasticSearch instance correctly and filled it on your local machine with exactly those 3 articles.

Keep in mind to use the keyword analyzer for the tags field, otherwise the terms facet below will operate on single tokens instead of whole tags!
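Such a mapping could look roughly like the following sketch (I’m assuming the index is called articles and the type article, as in the examples below; not_analyzed would work as well):

curl -X PUT "http://localhost:9200/articles/article/_mapping" -d '
{"article" : {
"properties" : {
"tags" : { "type" : "string", "analyzer" : "keyword" }
}}}'

Now we can query ElasticSearch as it is done there: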

curl -X POST "http://localhost:9200/articles/_search?pretty=true" -d '
{"query" : { "query_string" : {"query" : "T*"} },
"facets" : {
"tags" : { "terms" : {"field" : "tags"} }
}}'

But when you look into the query DSL docs you’ll only find the query part:

{"query_string" : {
"default_field" : "content",
"query" : "this AND that OR thus"
}}

This query part can be replaced by your favourite query, be it a filtered, a term, a boolean or whatever query.

So what is the main structure of a query? Roughly it is:

curl -X POST "http://localhost:9200/articles/_search?pretty=true" -d '
{"from": 0,
"size": 10,
"query" : QUERY_JSON,
FILTER_JSON,
FACET_JSON,
SORT_JSON
}'

Keep in mind that the FILTER_JSON only applies to the query, not to the facets – read on to see how to do that. And now a short example of how this nicely maps to the Java API:

SearchRequestBuilder srb = client.prepareSearch("your_index");
srb.setQuery(QueryBuilders.queryString("title:test"));
srb.addSort("tags", SortOrder.ASC);
srb.addFacet(FacetBuilders.termsFacet("tags").field("tags"));
// etc -> use your IDE autocompletion function 😉

If you install my hack for ElasticSearch Head you can formulate the above query separation directly in your browser in JavaScript, e.g.:

q ={ match_all:{} };
req = { query:q }

A more detailed query structure is as follows – you could easily obtain it via the Java API, from the navigational elements of the official docs or directly from the source:

curl -X POST "http://localhost:9200/articles/_search?pretty=true" -d '
{"query" : QUERY_JSON,
"filter" : FILTER_JSON,
"from": 0,
"size": 10,
"sort" : SORT_ARRAY,
"highlight" : HIGHLIGHT_JSON,
"fields" : ["tags", "title"],
"script_fields": SCRIPT_FIELDS_JSON,
"preference": "_local",
"facets" : FACET_JSON,
"search_type": "query_then_fetch",
"timeout": -1,
"version": true,
"explain": true,
"min_score": 0.5,
"partial_fields": PARTIAL_FIELDS_JSON,
"stats" : ["group1", "group2"]
}'

Let us dig into a simple query with some filters and facets:
curl -XGET 'http://localhost:9200/articles/_search?pretty=true' -d '
{"query": {
"filtered" : {
"query" : { "match_all" : {} },
"filter" : {"term" : { "tags" : "bar" }}
}},
"facets" : {
"tags" : { "terms" : {"field" : "tags"} }
}}'

You should get 2 out of the 3 articles, and the filter directly applies to the facets as well. If you don’t want that, move the filter out of the query to the top level:

curl -XGET 'http://localhost:9200/articles/_search?pretty=true' -d '
{"query" : { "match_all" : {} },
"filter" : {"term" : { "tags" : "bar" }},
"facets" : {
"tags" : { "terms" : {"field" : "tags"} }
}}'

And how can you filter only the facets? You’ll need a facet_filter:

curl -XGET 'http://localhost:9200/articles/_search?pretty=true' -d '
{"query" : { "match_all" : {} },
"facets" : {
"mytags" : {
"terms" : {"field" : "tags"},
"facet_filter" : {"term" : { "tags" : "bar"}}
}
}}'

You’ll get 3 documents with filtered facets.

I hope this post clarifies things a bit and reduces your trouble. I’ll update the post according to your comments/suggestions. Let me know if you want something explained which is Query-DSL specific – for all the other questions there is the user group.


EC2 = Easy Cloud 2.0? Getting started with the Amazon Cloud

If you are a command line centric guy like me and you are on Ubuntu, this post is for you. Getting started with Amazon was a pain for me, although once you understand the basics it is relatively easy. BTW: there are of course also other cloud systems like Rackspace or Azure.

If you want the official Ubuntu LTS Server (currently 10.04) running in the Amazon Cloud you can do:

ec2-run-instances ami-c00e3cb4 --region eu-west-1 --instance-type m1.small --key amazon-key

or go to this page and pick a different AMI. Hmmh, you are already sick of all the wording like AMI, EC2 and instances? Ok, let’s dig into the Amazon world.

Let me know if I have something missing or incorrect:

  • AMI: Amazon Machine Image. This is a highly tuned linux distribution in our case and we can choose from a lot of different types – e.g. on this page.
  • EC2: Elastic Compute Cloud – a highly scalable hosting solution where you have root access to the server. You can choose the power and RAM of an instance (‘a server’) and start and stop instances as you like. In Germany Amazon is relatively expensive compared to existing hosting solutions (not the case in the US). And since those services can also scale easily, there is nearly no advantage of using Amazon or Rackspace.
  • EBS: Elastic block storage – This is where we store our data. An EBS volume can be attached to any instance, but in my case I don’t need a separate volume – I can just use the default EBS mounted at /mnt with ~150 GB or even the system partition / with ~8 GB. From wikipedia:
    EBS volumes provide persistent storage independent of the lifetime of the EC2 instance, and act much like hard drives on a real server.
    Also if you choose storage of type ‘ebs’ your instance can be stopped. If it is of type instance-store you could only clone the AMI and terminate. If you try to stop it you’ll get “The instance does not have an ‘ebs’ root device type and cannot be stopped.”
  • A running instance is always attached to one key (a named public key). Once started you cannot change it.
  • S3: Simple Storage Service. Can be used for e.g. backup purposes, has an own API (REST or SOAP). Not covered in this mini post.
  • Availability zone: The datacenter location, e.g. eu-west-1 is Ireland and us-west-2 is Oregon. The advantage of having different regions/zones is that if one datacenter crashes you have a fallback in a different one. But the big disadvantage of different zones is that e.g. transferring your customized AMIs to a different region is a bit complex and you’ll need to import your keys again etc.

But even now, after ‘understanding’ the wording, it is not that easy to get started, and e.g. the above command will not work out of the box.

To make the above command work you’ll need:

  1. An Amazon account and a lot of money 😉 – or use the micro instance, which is free for one year for a fresh account IMO
  2. The ec2 tools installed locally: sudo apt-get install ec2-api-tools
  3. The Amazon credentials stored and exported as environment variables:
    export EC2_PRIVATE_KEY=/home/user/.ssh/certificate-privatekey.pem
    export EC2_CERT=/home/user/.ssh/certificate.pem
  4. Test the functionality via
    ec2-describe-instances --region eu-west-1
  5. Now you need to create a key pair and import the public one into your account (choose the right availability zone!)
    Aws Console -> Ec2 -> Network & Security -> Key Pairs -> Import Key Pair and choose amazon-key as name
  6. Then feed your local ssh-agent with the private key:
    ssh-add /home/user/.ssh/amazon-key
  7. Now you should be able to run the above command. To view the instance from the web UI you’ll have to refresh the site.
  8. Open port 22 for the default security group (a command line alternative is sketched after this list):
    Aws Console -> Ec2 -> Network & Security -> Security Groups -> Click on the default one and then on the ‘inbound’ Tab -> type ’22’ in port range -> Add Rule -> delete the other configurations -> Apply Rule Changes
  9. Now try to login
    ssh ubuntu@ec2-your-machine.amazonaws.com
    For the official amazon AMIs you’ll have to use ec2-user as login
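By the way, step 8 can probably also be done with the ec2 tools instead of clicking through the console – a rough sketch (double-check with ec2-authorize --help):

ec2-authorize default -p 22 --region eu-west-1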

That was easy 🙂 No?

Ok, now you’ll have to configure and install software as you like e.g.
sudo apt-get update && sudo apt-get upgrade -y

To proceed further you could

  • Attach a static IP to the instance so that external applications do not need to be changed after you move the instance – or use that IP for your load balancer – or use the Amazon load balancer etc (a command line sketch follows after this list):
    Aws Console -> Ec2 -> Network & Security -> Elastic IPs -> Allocate New Address
  • Open some more ports like port 80
  • Or you could create an AMI of your already configured system. You can even publish this custom AMI.
  • Run ElasticSearch as search server in the cloud e.g. even via a debian package which makes it very easy.
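For the static IP point, the elastic IP handling should also work from the command line – a sketch with a placeholder IP and instance id:

# allocate a new elastic IP
ec2-allocate-address --region eu-west-1
# and attach it to a running instance
ec2-associate-address 1.2.3.4 -i i-12345678 --region eu-west-1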

Now, if you have several instances and you want to update software on all machines – how would you do that? Here is one possibility:

ips=`ec2-describe-instances --region eu-west-1 | grep running | cut -f17 | tr '\n' ' '`

for IP in $ips
do
 echo UPDATING $IP;
 ssh -A ubuntu@$IP "cd /somewhere; bash ./scripts/update.sh";
done

Jetslide uses ElasticSearch as Database


This post explains how one could use the search server ElasticSearch as a database. I’m using ElasticSearch as my only data storage system, because for Jetslide I want to avoid maintenance and development time overhead, which would be required when using a separate system. Be it NoSQL, object or pure SQL DBs.

ElasticSearch is a really powerful search server based on Apache Lucene. So why can you use ElasticSearch as a single point of truth (SPOT)? Let us begin and go through all – or at least my – requirements of a data storage system! Did I forget something? Add a comment 🙂 !

CRUD & Search

You can create, read (see also realtime get), update and delete documents of different types. And of course you can perform full text search!
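A minimal sketch of these operations via the REST API (index, type and ids are just examples):

# create (or overwrite) a document with id 1
curl -XPUT 'http://localhost:9200/articles/article/1' -d '{"title" : "foo", "tags" : ["foo", "bar"]}'
# read it back (see the realtime get section below)
curl -XGET 'http://localhost:9200/articles/article/1'
# update = get the document, merge your changes and index it again (see versioning below)
# delete it
curl -XDELETE 'http://localhost:9200/articles/article/1'
# and of course search
curl -XGET 'http://localhost:9200/articles/_search?q=title:foo'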

Multi tenancy

Multiple indices are very easy to create and to delete. This can be used to support several clients or simply to put different types into different indices like one would do when creating multiple tables for every type/class.
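Creating and deleting an index is a one-liner each – a sketch with made-up index names:

curl -XPUT 'http://localhost:9200/client_a/'
curl -XPUT 'http://localhost:9200/client_b/'
# and getting rid of a tenant is just as easy
curl -XDELETE 'http://localhost:9200/client_a/'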

Sharding and Replication

Sharding and replication is just a matter of numbers when creating the index:

curl -XPUT 'http://localhost:9200/twitter/' -d '
index :
    number_of_shards : 3
    number_of_replicas : 2'

You can even update the number of replicas afterwards ‘on the fly’. To update the number of shards of one index you have to reindex (see the reindexing section below).
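Updating the replica count on the fly is a simple settings update, roughly like this:

curl -XPUT 'http://localhost:9200/twitter/_settings' -d '
{ "index" : { "number_of_replicas" : 3 } }'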

Distributed & Cloud

ElasticSearch can be distributed over a lot of machines. You can dynamically add and remove nodes (video). Additionally read this blog post for information about using ElasticSearch in ‘the cloud’.

Fault tolerant & Reliability

ElasticSearch will recover from the last snapshot of its gateway if something ‘bad’ happens like an index corruption or even a total cluster fallout – think time machine for search. Watch this video from Berlin Buzz Words (minute 26) to understand how the ‘reliable and asynchronous nature’ are combined in ElasticSearch.

Nevertheless I still recommend doing a backup from time to time to a different system (or at least a different hard disk), e.g. in case you hit ElasticSearch or Lucene bugs, or at least to make it really secure 🙂

Realtime Get

When using Lucene you have a real time latency, which basically means that if you store a document into the index you’ll have to wait a bit until it appears when you search afterwards. Although this latency is quite small – only a few milliseconds – it is there and grows as the index grows. But ElasticSearch implements a realtime get feature in its latest version, which makes it possible to retrieve a document by its id even if it is not yet searchable!
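A realtime get is a plain GET on index, type and id, roughly like this (names are just examples):

curl -XGET 'http://localhost:9200/articles/article/1'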

Refresh, Commit and Versioning

As I said, you have a realtime latency when creating or updating (aka indexing) a document. To update a document you can use the realtime get, merge it and put it back into the index. Another approach, which avoids further hits on ElasticSearch, would be to call refresh (or commit in Solr) on the index. But this is very problematic (e.g. slow) when the index is not tiny.

The good news is that you can again solve this problem with a feature of ElasticSearch – it is called versioning. This is identical to ‘application side’ optimistic locking in the database world. Put the document into the index, and if that fails, e.g. merge the old state with the new and try again. To be honest this requires a bit more thinking, using a failure-queue or similar, but now I have a really well working system secured with unit tests.
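A sketch of that optimistic locking flow via the REST API (index, type and field names are just examples): get the document, which returns its _version, modify it and index it again with exactly that version – if someone else updated it in the meantime ElasticSearch answers with a version conflict and you merge and retry.

curl -XGET 'http://localhost:9200/articles/article/1'
# assume the response contained "_version" : 2, then write back with that version
curl -XPUT 'http://localhost:9200/articles/article/1?version=2' -d '{"title" : "foo", "tags" : ["foo", "bar", "new-tag"]}'
# a version conflict (HTTP 409) means: re-read, merge and try again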

If you think about it, this is a really huge benefit over e.g. Solr. Even if Solr’s raw indexing is faster (no one really did a good job in comparing indexing performance of Solr vs. ES), it requires a call to commit to make the documents searchable, which slows down the whole indexing process a lot compared to ElasticSearch, where you never really need to call the expensive refresh.

Reindexing

This is not necessary for a normal database, but it is crucial for a search server, e.g. to change an analyzer or the number of shards of an index. Reindexing sounds hard but can be implemented easily, even without a separate data storage, in ElasticSearch. For Jetslide I’m not storing single fields only – I’m storing the entire document as JSON in the _source. This is necessary to fetch the documents from the old index and put them into the newly created one (with different settings).

But wait. How can I fetch all documents from the old index? Wouldn’t this be bad in terms of performance or memory for big indices? No, you can use the scan search type, which avoids e.g. scoring.
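A rough sketch of such a reindexing loop with the scan search type (index names are just examples; the scroll id comes back in every response):

# open a scan search over the old index
curl -XGET 'localhost:9200/userindex6/_search?search_type=scan&scroll=5m&size=100' -d '
{ "query" : { "match_all" : {} } }'
# then repeatedly fetch the next batch with the _scroll_id from the previous response
# and index those documents into the new index
curl -XGET 'localhost:9200/_search/scroll?scroll=5m' -d 'THE_SCROLL_ID'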

Ok, but how can I replace my old index with the new one? Can this be done ‘on the fly’? Yes, you can simply switch the alias of the index:

curl -XPOST 'http://localhost:9200/_aliases' -d '{
"actions" : [
   { "remove" : { "index" : "userindex6", "alias" : "userindex" } },
   { "add" : { "index" : "userindex7", "alias" : "uindex" } }]
}'

Performance

Well, ElasticSearch is fast. But you’ll have to determine for yourself if it is fast enough for your use case and compare it to your existing data storage system.

Feature Rich

ElasticSearch has a lot of features, which you do not find in a normal database. E.g. faceting or the powerful percolator to name only a few.

Conclusion

In this post I explained if and how ElasticSearch can be used as a database replacement. ElasticSearch is very powerful, but e.g. the versioning feature requires a bit of handwork. So working with ElasticSearch is more comparable to the JDBC or SQL world than to the ORM one. But I’m sure some ORM tools for ElasticSearch will pop up, although I prefer to avoid system complexity and will probably always use the ‘raw’ ElasticSearch.

Introducing Jetslide News Reader

Update: Jetsli.de is no longer online. Check out the projects snacktory and jetwick which were used in jetslide.


We are proud to announce the release of our Jetslide News Reader today! We know that there are a lot of services aggregating articles from your twitter timeline, such as the really nice tweetedtimes.com or paper.li. But as a hacker you’ll need a more powerful tool. You’ll need Jetslide. Read on to see why Jetslide is different and read this feature overview. By the way: yesterday we open sourced the content extractor called snacktory.

Jetslide is different …

… because it divides your ‘newspaper’ into easily navigable topics, and Jetslide prints articles from your timeline first! So you are following topics and not (only) people. See the first article, which was referenced by a twitter friend and others, but Jetslide also prints articles from the public. See the second article, where the highest share count (187) comes from digg. Click to view the reality of today or browse older content with the links under the articles:

Jetslide is smart …

… enough to skip duplicate articles and enhance your topics with related material. The relevance of every article is determined by an advanced algorithm (number of shares, quality, tweets, your browser language …) with the help of my database ElasticSearch – more on this in a later blog post.

And you can use a lot of geeky search queries to get what you want.

Jetslides are social

As pointed out under ‘Jetslide is different’ you’ll see articles posted in your twitter timeline first. But there are other features which make Jetslide more ‘social’. First, you get suggestions of users if they have the same or similar interests stored in their Jetslide. And second, Jetslide enables you to see others’ personal jetslide by adding e.g. the parameter owner=timetabling to the url.

Jetslides means RSS 3.0

You can even use the boring RSS feed:

http://jetsli.de/rss/owner/timetabling/time/today

But this is less powerful. The recommended way to ‘consume’ your topics is via RSS 3.0 😉

Log in to Jetslide and select “Read Mode:Auto”. Then every time you hit the ‘next’ arrow (or CTRL+right) the currently viewed articles will be marked as read and only newer articles will pop up the next time you slide through. This way you can slide through your topics and come back whenever you want: after 2 hours or after 2 days (at the moment up to 7 days). In Auto-Read-Mode you’ll always see only what you have missed and what is relevant!

This is the most important point why we do not call Jetslide a search engine but a news service.

Jetslides are easily shareable

… because a Jetslide is just a URL – viewable on desktops, smartphones and even WAP browsers (left):

 

How to backup ElasticSearch with rsync

Although there is a gateway feature implemented in ElasticSearch, which basically recovers your index on start if it is corrupted or similar, it is wise to create backups in case there are bugs in Lucene or ElasticSearch (assuming you have set the fs gateway). The backup script looks as follows and uses the possibility to disable flushing for a short time:

# TO_FOLDER=/something
# FROM=/your-es-installation

DATE=`date +%Y-%m-%d_%H-%M`
TO=$TO_FOLDER/$DATE/
echo "rsync from $FROM to $TO"
# the first rsync can take a while - do not disable flushing yet
rsync -a $FROM $TO

# now disable flushing and do one manual flushing
$SCRIPTS/es-flush-disable.sh true
$SCRIPTS/es-flush.sh
# ... and sync again
rsync -a $FROM $TO

$SCRIPTS/es-flush-disable.sh false

# now remove too old backups
rm -rf `find $TO_FOLDER -maxdepth 1 -mtime +7` &> /dev/null
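The two flush helper scripts are not shown here (the gist linked at the end of this post contains the complete versions); roughly they could boil down to something like the following sketch – the index name is a placeholder and the setting name should be double-checked against your ES version:

# es-flush-disable.sh - pass true or false as the first argument
curl -XPUT "localhost:9200/your_index/_settings" -d "{\"index\" : {\"translog.disable_flush\" : $1}}"
# es-flush.sh
curl -XPOST "localhost:9200/your_index/_flush"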

E.g. you could call the backup script regularly (even hourly) from cron and it will create new backups. By the way – if you want to take a look at the settings of all indices (e.g. to check the disable flushing stuff) this might be handy:

curl -XGET 'localhost:9200/_settings?pretty=true'
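A possible crontab entry for the hourly backup could look like this (the script name and paths are of course placeholders):

0 * * * * /home/user/scripts/es-backup.sh >> /tmp/es-backup.log 2>&1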

Here are the complete scripts as gist which I’m using for my jetslide project.

ElasticSearch vs. Solr #lucene


I prepared a small presentation of ‘Why one should use ElasticSearch over Solr’ **

There is also a German article available in the iX magazine which introduces you to ElasticSearch and takes several aspects to compare Apache Solr and ElasticSearch.

**

This slide is based on my personal opinion and experience with my twitter search jetwick and my news reader jetslide. It should not be used to show that Solr or ElasticSearch is ‘bad’.

Why Jetwick moved from Solr to ElasticSearch

I like both technologies, Solr and ElasticSearch, and a lot of work is going into both. So, let me explain why I chose to migrate from Solr to ElasticSearch (ES).

What is elastic?

  • ES lets you add and remove nodes [Video] and the requests will be handled by the correct node. Nodes will even do ‘zero config’ discovery.
    To scale if the load increases you can use replicas. ElasticSearch will automatically play the loadbalancer and choose the appropriate node.
  • ES lets you scale if the amount of data increases, because then you can easily use sharding: it’s just a number in ES (either via API or via configuration).

With these features ES is well prepared for the century of the cloud [Blog]!

What’s the difference to Solr?

Solr wasn’t designed from the ground up with the ‘cloud’ in mind, but of course you can do sharding, use replication and use multiple cores with Solr. It’s just a bit more complicated.

When using Solr Cloud and ZooKeeper this gets better. You’ll also need to invest some time to make Solr near real time to be comparable with ES. This all seemed to be a bit too tricky to me (in Dec 2010) and I don’t have any time for administration work in my free time e.g. to set up backups/replicas, add shards/indices, …

Other Options?

What are my other options? There is Xapian, Sphinx etc. But only the following two projects fulfilled my requirements:

  1. Using Solandra or
  2. Moving from Solr to ElasticSearch

I wanted a Lucene based solution, and a solution where sharding and creating indices works out of the box. I simply wanted more data from Twitter available in Jetwick.

The first option is very nice, no changes to your code are required – only a minor change in your solrconfig.xml and you will get a distributed and real time Solr! So, I tried Solandra and after a lot of support from Jake (Thanks!) I got it running with Jetwick! But in the end I still had performance issues with my indexing strategy, so I tried – in parallel – the second step.

What are the advantages of ElasticSearch?

To be honest Jetwick doesn’t really need to be elastic – I’m only using the sharding feature at the moment as I don’t own capacity on a cloud. BUT ElasticSearch is also elastic in a different area: ES lets you manage indices very, very easily! A clever thing in ES is that you don’t define the document structure in an index like you do in Solr – no, you define types and then create documents of a specific type in a specific index. And documents in ES don’t need to be flat – they can be nested as they are pure JSON.

That and the ‘elasticity’ could make ES suitable as a hip NoSql storage 😉

Another advantage over Solr is the near real time behaviour, which you’ll get at no cost when switching to ES.

The Move!

Moving Jetwick to ElasticSearch wasn’t as easy as I hoped, although I’m sure one could make a normal migration in one day with my experience now ;). It took a lot of time to understand the new technology and, more importantly, to migrate my UI code where I made too much direct use of the SolrQuery object. In the end I created a custom Solr2ElasticHelper utility to avoid this clumsy work at the beginning – some day I will fully migrate even this code. This is now migrated to my own query object, which makes it easy for me to add and remove filters etc.

When moving to ElasticSearch, be sure that it supports all the Solr features you need. Although Shay works really hard to integrate new features into ES, he cannot do all the work alone! E.g. I had to integrate Solr’s WordDelimiterFilter, but this wasn’t that difficult – just copy & paste plus some configuration.

ES uses netty under the hood – no other webserver is necessary. Just start the node either via API or directly via bin/elasticsearch and then query the node via curl or the browser. For example you can use the nice ElasticSearch Head project:

or ElasticSearch-JS – both are equivalents to the Solr admin page. To add a node simply start another ES instance and they will automagically discover each other. You can also use curl on the command line to query and feed the index as documented in the REST API documentation.

No technology is perfect so keep in mind the following disadvantages which will disappear over time in my opinion:

  • Solr has more Analyzers, Filters, etc., but it is relatively easy to use them in ES as well.
  • Solr has a larger community, a larger user base and more companies offering professional support
  • Solr has better documentation and more books. Regarding the docs of ES: they are moving now to the github wiki and the docs will now improve IMO.
  • Solr has more tooling e.g. solrmonitor, LucidGaze and newrelic, but you still have yourkit and jvisualvm 😉

But keep in mind also the following unmentioned notes:

  • Shay fixes bugs very quickly!
  • ElasticSearch has a more recent Lucene version and releases more frequently
  • It is very easy to contribute via github (just a pull request away ;))

To get introduced into ElasticSearch you can read this article.

Get Started with ElasticSearch and Wicket


This article will show you the most basic steps required to make ElasticSearch working for the simplest scenario with the help of the Java API – it shows installing, indexing and querying.

1. Installation

Either get the sources from github and compile it or grab the zip file of the latest release and start a node in foreground via:

bin/elasticsearch -f

To make things easy for you I have prepared a small example with sources derived from jetwick, where you can start ElasticSearch directly from your IDE – e.g. just click ‘open projects’ in NetBeans and then start the ElasticNode class. The example should show you how to do indexing via the bulk API, querying, faceting, filtering, sorting and probably some more:

To get started on your own see the sources of the example where I’m actually using ElasticSearch or take a look at the shortest ES example (with Java API) in the last section of this post.

Info: If you want ES to start automatically when your Debian system boots, read this documentation.

2. Indexing and Querying

First of all you should define all fields of your document which shouldn’t get the default analyzer (e.g. strings get analyzed, etc) and specify that in the tweet.json under the folder es/config/mappings/_default

For example in the elasticsearch example the userName shouldn’t be analyzed:

{ "tweet" : {
   "properties" : {
     "userName": { "type" : "string", "index" : "not_analyzed" }
}}}

Then start the node:

import static org.elasticsearch.node.NodeBuilder.*;
...
Builder settings = ImmutableSettings.settingsBuilder();
// here you can set the node and index settings via API
settings.build();
NodeBuilder nBuilder = nodeBuilder().settings(settings);
if (testing)
 nBuilder.local(true);

// start it!
node = nBuilder.build().start();

You can get the client directly from the node:

Client client = node.client();

or if you need the client in another JVM you can use the TransportClient:

Settings s = ImmutableSettings.settingsBuilder().put("cluster.name", cluster).build();
TransportClient tmp = new TransportClient(s);
tmp.addTransportAddress(new InetSocketTransportAddress("127.0.0.1", 9300)); // the transport port, not the HTTP port 9200
client = tmp;

Now create your index:

try {
  client.admin().indices().create(new CreateIndexRequest(indexName)).actionGet();
} catch(Exception ex) {
   logger.warn("already exists", ex);
}

When indexing your documents you’ll need to know where to store (indexName) and what to store (indexType and id):

IndexRequestBuilder irb = client.prepareIndex(getIndexName(), getIndexType(), id).
setSource(b);
irb.execute().actionGet();

where the source b is the jsonBuilder created from your domain object:

import static org.elasticsearch.common.xcontent.XContentFactory.*;
...
XContentBuilder b = jsonBuilder().startObject();
b.field("tweetText", u.getText());
b.field("fromUserId", u.getFromUserId());
if (u.getCreatedAt() != null) // the 'if' is not necessary in >= 0.15
  b.field("createdAt", u.getCreatedAt());
b.field("userName", u.getUserName());
b.endObject();

To get a document via its id you do:

GetResponse rsp = client.prepareGet(getIndexName(), getIndexType(), "" + id).
execute().actionGet();
MyTweet tweet = readDoc(rsp.getSource(), rsp.getId());

Getting multiple documents at once is currently not supported via ‘prepareGet’, but you can create a terms query on the special field ‘_id’ to achieve this bulk-retrieving. When updating a lot of documents there is already a bulk API.
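Such a bulk-retrieve could look roughly like this via REST (the index name is just an example; in the Java API the same terms query can be built via QueryBuilders):

curl -XGET 'http://localhost:9200/yourindex/_search' -d '
{ "query" : { "terms" : { "_id" : ["1", "2", "5"] } } }'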

In test cases, after indexing you’ll have to make sure that the documents are actually ‘committed’ before searching (don’t do this in production):

RefreshResponse rsp = client.admin().indices().refresh(new RefreshRequest(indices)).actionGet();

To write tests which use ES you can take a look into the source code to see how I’m doing this (starting ES in beforeClass etc).

Now let us search:

SearchRequestBuilder builder = client.prepareSearch(getIndexName());
XContentQueryBuilder qb = QueryBuilders.queryString(queryString).defaultOperator(Operator.AND).
   field("tweetText").field("userName", 0).
   allowLeadingWildcard(false).useDisMax(true);
builder.addSort("createdAt", SortOrder.DESC);
builder.setFrom(page * hitsPerPage).setSize(hitsPerPage);
builder.setQuery(qb);

SearchResponse rsp = builder.execute().actionGet();
SearchHit[] docs = rsp.getHits().getHits();
for (SearchHit sd : docs) {
  //to get explanation you'll need to enable this when querying:
  //System.out.println(sd.getExplanation().toString());

  // if we use in mapping: "_source" : {"enabled" : false}
  // we need to include all necessary fields in query and then to use doc.getFields()
  // instead of doc.getSource()
  MyTweet tw = readDoc(sd.getSource(), sd.getId());
  tweets.add(tw);
}

The helper method readDoc is simple:

public MyTweet readDoc(Map source, String idAsStr) {
  String name = (String) source.get("userName");
  long id = -1;
  try {
     id = Long.parseLong(idAsStr);
  } catch (Exception ex) {
     logger.error("Couldn't parse id:" + idAsStr);
  }

  MyTweet tweet = new MyTweet(id, name);
  tweet.setText((String) source.get("tweetText"));
  tweet.setCreatedAt(Helper.toDateNoNPE((String) source.get("createdAt")));
  tweet.setFromUserId((Integer) source.get("fromUserId"));
  return tweet;
}

If you want the facets to be returned along with the search results, you’ll have to ‘enable’ them when querying:

facetName = "userName";
facetField = "userName";
builder.addFacet(FacetBuilders.termsFacet(facetName)
   .field(facetField));

Then you can retrieve all terms facets via:

SearchResponse rsp = ...
if (rsp != null) {
 Facets facets = rsp.facets();
 if (facets != null)
   for (Facet facet : facets.facets()) {
     if (facet instanceof TermsFacet) {
         TermsFacet ff = (TermsFacet) facet;
         // => ff.getEntries() => count per unique value
...

This is done in the FacetPanel.

I hope you now have a basic understanding of ElasticSearch. Please let me know if you found a bug in the example or if something is not clearly explained!

In my (too?) small Solr vs. ElasticSearch comparison I listed also some useful tools for ES. Also have a look at this!

3. Some hints

  • Use ‘none’ gateway for tests. Gateway is used for long term persistence.
  • The Java API is not well documented at the moment, but now there are several Java API usages in Jetwick code
  • Use scripting for boosting and use JavaScript as the language – most performant as of Dec 2010! (see the sketch after this list)
  • Restart the node to try a new scripting language
  • When using the snowball stemmer in 0.15, use language: English (otherwise you’ll get a ClassNotFoundException)
  • See how your terms get analyzed:
    http://localhost:9200/twindexreal/_analyze?analyzer=index_analyzer "this is a #java test => #java + test"
  • Or include the analyzer as a plugin: put the jar under lib/ E.g. see the icu plugin. Be sure you are using the right guice annotation
  • You set port 9200 (-9300) for http communication and 9300 (-9400) for transport client.
  • if you have problems with ports: make sure at least a simple put + get is working via curl
  • Scaling-ElasticSearch
    This solution is my preferred solution for handling long term persistency of a cluster since it means
    that node storage is completely temporal. This in turn means that you can store the index in memory for example,
    and get the performance benefits that come with it, without sacrificing long term persistency.
  • Too many open files: edit /etc/security/limits.conf
    user soft nofile 15000
    user hard nofile 15000
    ! then login + logout !
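Regarding the scripting-for-boosting hint above: one way to do this is a custom_score query, sketched here as the JSON search body. The field name is made up, and the lang parameter assumes the JavaScript language plugin is installed – without it, drop lang to fall back to the default mvel:

{"query" : {
"custom_score" : {
"query" : { "query_string" : { "query" : "java" } },
"script" : "_score * doc['retweets'].value",
"lang" : "js"
}}}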

4. Simplest Java Example

import static org.elasticsearch.node.NodeBuilder.*;
import static org.elasticsearch.common.xcontent.XContentFactory.*;
...
Node node = nodeBuilder().local(true).
settings(ImmutableSettings.settingsBuilder().
put("index.number_of_shards", 4).
put("index.number_of_replicas", 1).
build()).build().start();

String indexName = "tweetindex";
String indexType = "tweet";
String fileAsString = "{"
+ "\"tweet\" : {"
+ "    \"properties\" : {"
+ "         \"longval\" : { \"type\" : \"long\", \"null_value\" : -1}"
+ "}}}";

Client client = node.client();
// create index
client.admin().indices().
create(new CreateIndexRequest(indexName).mapping(indexType, fileAsString)).
actionGet();

client.admin().cluster().health(new ClusterHealthRequest(indexName).waitForYellowStatus()).actionGet();

XContentBuilder docBuilder = XContentFactory.jsonBuilder().startObject();
docBuilder.field("longval", 124L);
docBuilder.endObject();

// feed previously created doc
IndexRequestBuilder irb = client.prepareIndex(indexName, indexType, "1").
setConsistencyLevel(WriteConsistencyLevel.DEFAULT).
setSource(docBuilder);
irb.execute().actionGet();

// there is also a bulk API if you have many documents
// make doc available for sure – you shouldn't need this in production, because
// the documents gets available automatically in (near) real time
client.admin().indices().refresh(new RefreshRequest(indexName)).actionGet();

// create a query to get this document
XContentQueryBuilder qb = QueryBuilders.matchAllQuery();
TermFilterBuilder fb = FilterBuilders.termFilter("longval", 124L);
SearchRequestBuilder srb = client.prepareSearch(indexName).
setQuery(QueryBuilders.filteredQuery(qb, fb));

SearchResponse response = srb.execute().actionGet();

System.out.println("failed shards:" + response.getFailedShards());
Object num = response.getHits().hits()[0].getSource().get("longval");
System.out.println("longval:" + num);

Get more Friends on Twitter with Jetwick

Obviously you won’t need a tool to get more friends aka ‘following’ on twitter, but you’ll add more friends once you have tried our new feature called ‘friend search’. But let me start from the beginning of our recent, major technology shift for jetwick – our open source twitter search.

We have now moved the search server forward to ElasticSearch (from Solr) – more on that in a later post. This move will hopefully solve some data actuality problems but also make more tweets available in jetwick. All features should work as before.

To make it even more pleasant for you, my fellow user, I additionally implemented the friend search with all the jetwicked features: sort by retweets, filter by language, filter away spam and duplicates …

Update: you’ll need to install jetwick

Try it on your own

  1. First log in to jetwick. You need to wait ~2 minutes until all your friends are detected …
  2. Then type your query or leave it empty to get all tweets
  3. Finally select ‘Only Friends’ as the user filter.
  4. Now you are able to search all the tweets of the people you follow.
  5. Be sure you clicked on without duplicates (on the left) etc. as appropriate

Here are the friend search results when querying ‘java’:

And now the normal search where the users rubenlirio and tech_it_jobs are not from my ‘friends’:

You don’t need to stay on-line all the time – jetwick automagically grabs the tweets of your friends for you. And if you use the relaxed saved searches, then you’ll also be notified of all changes in your homeline – even after days!

That was the missing puzzle piece for me to be able to stay away from twitter and PC – no need to check every hour for the latest tweets in my homeline or my “twitter – saved searches”.

Jetwick is free, so you’ll only need to log in and try it! As a side effect of being logged in, your own tweets will be archived and you can search them through the established user search. With that user search you can use twitter as a bookmark application, as I’m already doing … BTW: you’ll notice the orange tweet which is a free ad: just create a tweet containing #jetwick and it’ll be shown on top of matching searches.

Another improvement is (hopefully) the user interface for jetwick – the search form should be clearer:

before it was:

and the blue logo should now look a bit better, before it was:

What do you think?