Releasing GraphHopper 0.2 – Further & Faster Road Routing!

Today we’re releasing version 0.2 of our Open Source road routing engine GraphHopper written in 100% Java.


  • All algorithms are faster due to bug fixes and fine tuning
  • The preparation necessary for our optional speed-up technique, Contraction Hierarchies, is also faster.


  • We finally fixed GPS-exact routing, so you don’t have to work around junction-to-junction results

More exciting news will follow …

Have fun and try GraphHopper Maps with worldwide coverage for pedestrians, cars and bicycles! Need support? Have a look at our enterprise options!


Free the Java Duke now and use it in Blender!

Duke, the Java mascot, is a well-known ambassador for Java. It is so important that even Oracle still maintains a prominent site for it. The only problem is that the original source files can only be opened with a commercial application called LightWave 3D.

For GraphHopper I took those files and created an OpenStreetMap variant with LightWave 3D:


But I wondered why there was no way to use the files in Blender. There actually is a Blender plugin from Remigiusz to import the DXF files, but at the beginning it did not work for those non-standard files. So I contacted Remigiusz and the story began. He not only improved his importer to make it work, he also invested hours to create a really nice Blender version of the Java Duke files!

Have a look:


The files are on GitHub! Thanks a lot again to Remigiusz!

Setup Mapnik From Scratch

This document is a work in progress.

There are several options, but mainly three web map servers (WMS): Mapnik, GeoServer and MapServer. A simple visualization of the stack:

 A) browser/client (leaflet, openlayers)
 B) tile server (mod_tile, tile cache, tile stache, mapproxy, geowebcache)
 C) map web service = WMS (MapServer, GeoServer, Mapnik)
 D) Data storage (PostgreSQL, vector tiles)
 E) OSM data (xml, pbf)


  • for C => Mapnik can use TileMill to style the map
  • leaflet can do tile layers (B) but also WMS (C)
  • Nearly always you need PostgreSQL, but in rare cases you can avoid it via vector tiles.
  • A common approach is to use apache2 with mod_tile, serving the tiles from disk or creating the images via Mapnik through renderd. But nginx is getting more popular, too. Tiledrawer has some old scripts that also use Mapnik and nginx.
  • You can also use GeoServer with a cache in front. Often it only serves some feature layers.
  • WFS = web feature service
  • MWS = map web service

Installation of PostgreSQL, Mapnik, mod_tile, renderd


First of all you need the OSM data in a PostgreSQL database via an osm2pgsql import – an easy description is here. For a faster import you can try imposm instead of the more common osm2pgsql.

Install the Rest

Although nginx would be preferred, I did not find an out-of-the-box working solution for it. So you need to use apache2 and mod_tile: a simple installation. There is also an older article which I did not try, and a good presentation with some useful information.

 sudo apt-get install python-software-properties
 sudo add-apt-repository ppa:kakrueger/openstreetmap
 sudo apt-get update
 # now install mod_tile and renderd with the default style
 sudo apt-get install libapache2-mod-tile
 sudo -u postgres osm2pgsql --slim -C 1500 --number-processes 4 /tmp/europe_germany_berlin.pbf
 sudo touch /var/lib/mod_tile/planet-import-complete
 sudo /etc/init.d/renderd restart
 # optionally pre-generate some tiles for faster rendering - see "How do I pre-render tiles ?"
 go to http://localhost/osm/slippymap.html
 # to install mapnik-config do:
 sudo apt-get install libmapnik2-dev

Custom Styles

Newer styles are constructed via CartoCSS. Install carto either by just using TileMill or via npm:

sudo apt-get install npm
sudo npm install -g carto

TODO: Get Mapbox osm-bright working. This new style requires a specific version XY of Mapnik which wasn’t found in my Ubuntu version!?

edit config name and path in
# backup /etc/mapnik-osm-data/osm.xml before doing:
carto osm-bright/YourProject/project.mml > /etc/mapnik-osm-data/osm.xml
# the osm.xml is used from mod_tile which uses renderd /etc/renderd.conf
# mod_tile config is at: /etc/apache2/sites-available/tileserver_site
# The tiles are here: /var/lib/mod_tile/default/

TODO To update the style you can try

 sudo -u www-data touch /var/lib/mod_tile/planet-import-complete
 # important: do not delete 'default' itself
 sudo rm -R /var/lib/mod_tile/default/
 # how to restart renderd properly?
 sudo service renderd restart



For the Mapnik installation you’ll also need a web server like nginx or apache2. For nginx you can have a look into the script available here (clone the git repo):

Additionally you’ll need mod_tile and renderd or TileStache if you use nginx. To style the maps you can use TileMill.


Installation is a lot easier compared to Mapnik or MapServer. The only dependency is Java. Just grab the full zip of GeoServer, copy the war from the web cache into webapps, then run ./bin/

If you need Mapbox TileMill styles for GeoServer have a look into:


Problems with nvidia driver 173 and kernel 3.2.0-52. Two monitors and card Quadro NVS 160M

After the recent kernel upgrade my old nvidia driver failed to load, and X started with a very low resolution and without detecting my second monitor. For my graphics card the nouveau driver is not an option, but I managed to fix the problems by removing the current nvidia and nvidia-173 packages. You can probably just skip those commands and use jockey-gtk as explained below.

sudo apt-get purge nvidia-current
sudo apt-get remove nvidia-173
sudo apt-get install --reinstall nvidia-173-updates
# now REBOOT!

Now it boots as normal with my second monitor, but the nice configuration tool called ‘nvidia-settings’ did not work and said ‘The NVIDIA X driver on is not new enough to support the nvidia-settings Display Configuration page.’

Also, my Firefox 23 had performance problems with canvas rendering. I solved this by opening “about:config” and setting layers.acceleration.force-enabled to true. I found this in the comments of this article.

Update: I was able to fix all the issues just by switching to ‘(additional updates) (version 319-updates)’ when starting the tool called ‘jockey-gtk’! Afterwards the tool xfce4-display-settings also showed two monitors.

Java on iPhone or iPad

What options do I have to make my Java application work on the iPhone? It should not be necessary to jailbreak the phone. Also, a UI is not necessary for now, although e.g. codenameone seems to support it. My application is not a complicated one, but it uses Java 1.5 (generics etc.) and e.g. memory mapped files. It would be good if the JUnit tests could be transformed with the tool too, to check functionality. What I found is

Or somehow use the embedded JVM for ARM directly? Do you have experiences with this or more suggestions?

GraphHopper Maps 0.1 – High Performance and Customizable Routing in Java

Today we’re proud to announce the first stable release of GraphHopper! After over a year of busy development we finally reached version 0.1!

GraphHopper is a fast and Open Source road routing engine written in Java based on OpenStreetMap data. It handles the full planet on a 15GB server but also scales down and can be embedded into your application! This means you’re able to run Germany-wide queries on Android with only 32MB in a few seconds. You can download the Android offline routing demo or have a look at our web instance, which has worldwide coverage for car, bike and pedestrian:

GraphHopper Java Routing

The trip to the current state of GraphHopper was rather stony, as we had to start from scratch: there was no fast Java-based routing engine. What we’ve built is quite interesting as it shows that a Java application can be as fast as Bing or Google Maps (in 2011) and beats YOURS, MapQuest and Cloudmade, according to the results outlined in a blog post from Pascal and tests against GraphHopper – although OSRM is still ahead. But how can a Java application be so fast? One important aspect is the algorithm used: Contraction Hierarchies – a ‘simple’ shortcutting technique to speed up especially lengthy queries. But even without this algorithm GraphHopper is fast, which is the result of weeks of tuning for less memory consumption (yes, memory has something to do with speed), profiling and tweaking. Not only the routing is fast and memory efficient, but also the import process. And it should be easy to get started and modify GraphHopper to your needs.

Why would you use GraphHopper?

GraphHopper could be especially useful for more complicated or custom shortest/best path projects. E.g. if you need

  • to embed GraphHopper or only parts of it directly within your Java application, which is easily possible due to the Apache 2 license.
  • offline queries for your Android application
  • highly customized routing (like horse routing – see below) where Google/Bing API calls aren’t sufficient or even possible
  • many to many queries
  • the shortest path tree(s) directly

… you should tell us on the mailing list what you need!


GraphHopper is a young project, but it is making great strides and is already used by GPSies and in more places (we cannot disclose all of them yet).

Last but not least I would like to thank NopMap for his work on OSM import and tuning, elevation data and much more! You can try out his horse routing prototype based on GraphHopper at the German (“trail riding map”) site!

See the description on how you can contribute.

Have fun!

Make Your Dijkstra Faster

Today I did a bit of research for GraphHopper and stumbled over yet another minor trick which could speed up the execution of the Dijkstra algorithm. Let me briefly introduce this shortest path algorithm:


If you need the path (and not only the shortest path tree) you give the method an additional toNode parameter and compare it to distEntry.node to break the loop. Once it is found, you recursively extract the path from the last distEntry.parent reference.

So, what should we improve?

Regarding performance, I’ve already included a Map to directly get the DistanceEntry for a node; otherwise you would need to search for it in the PriorityQueue, which is too slow. Also, Wikipedia says that we could use a Fibonacci heap, which is optimal for the decrease key operation (aka weight update), but those are very complicated to implement and memory intensive.

It turns out that you can entirely avoid the ‘decrease key’ operation if you do a visited.contains check after polling from the queue. This makes your heap bigger, but you avoid the costly update operation and can use simpler data structures. Read the full paper “Priority Queues and Dijkstra’s Algorithm”.
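To make this concrete, here is a minimal sketch of the lazy-deletion trick (this is my own illustration, not GraphHopper’s actual implementation): instead of decreasing a key, we simply insert a duplicate queue entry and skip stale entries after polling.

```java
import java.util.*;

public class LazyDijkstra {
    // edges[node] = list of {neighbor, weight} pairs (adjacency list)
    static int[] shortestDistances(int[][][] edges, int from) {
        int n = edges.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[from] = 0;
        boolean[] visited = new boolean[n];
        // entries are {distance, node}; duplicates (stale entries) are allowed
        PriorityQueue<int[]> queue =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        queue.add(new int[]{0, from});
        while (!queue.isEmpty()) {
            int node = queue.poll()[1];
            // this visited check replaces decrease-key: stale duplicates are skipped
            if (visited[node]) continue;
            visited[node] = true;
            for (int[] edge : edges[node]) {
                int next = edge[0], w = edge[1];
                if (!visited[next] && dist[node] + w < dist[next]) {
                    dist[next] = dist[node] + w;
                    // insert a duplicate entry instead of decreasing the key
                    queue.add(new int[]{dist[next], next});
                }
            }
        }
        return dist;
    }
}
```

The heap holds more entries this way, but a plain binary heap without any update operation suffices.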

What else can we improve?

Now we can tune some data structures:

  1. Make sure that you are traversing your graph at full speed. E.g. using the graph purely in-memory without any persistent storage dependency can massively improve performance. Also, if you use node indices (pointing into an array) instead of node objects you can reduce memory consumption and e.g. use a BitSet instead of a set for the visited collection.
  2. In case your heap gets relatively big (>1000 entries), as for multi-dimensional graphs and even for planar graphs, a cached version with 2 or more stages could give you a 30% boost. If you like more complicated and efficient solutions you could implement the probably faster sequence heap, among others.
  3. If you have a limited range of weights/keys you can try a TreeMap<Key, Set>, which could speed up your code by roughly 10% if you heavily use the decreaseKey method.
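The TreeMap variant from point 3 can be sketched as follows (names and widths are mine, purely illustrative): each distinct weight maps to a set of nodes, so decreaseKey becomes a cheap remove-plus-insert.

```java
import java.util.*;

// Sketch of point 3: weights act as buckets in a TreeMap,
// so decreaseKey is two O(log #distinctWeights) operations.
public class BucketQueue {
    private final TreeMap<Integer, Set<Integer>> buckets = new TreeMap<>();

    public void insert(int node, int weight) {
        buckets.computeIfAbsent(weight, k -> new HashSet<>()).add(node);
    }

    public void decreaseKey(int node, int oldWeight, int newWeight) {
        Set<Integer> old = buckets.get(oldWeight);
        if (old != null) {
            old.remove(node);
            if (old.isEmpty()) buckets.remove(oldWeight);
        }
        insert(node, newWeight);
    }

    // poll one node with the smallest weight
    public int pollMin() {
        Map.Entry<Integer, Set<Integer>> first = buckets.firstEntry();
        Iterator<Integer> it = first.getValue().iterator();
        int node = it.next();
        it.remove();
        if (first.getValue().isEmpty()) buckets.remove(first.getKey());
        return node;
    }
}
```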

For road networks and others you can apply A*, which reduces the number of visited nodes by guessing where the goal is – the path is still optimal IF the estimate is never larger than the real remaining distance (e.g. use the direct linear distance in road networks, which is always smaller than the real distance):
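As a minimal illustration (names are mine), the only change compared to Dijkstra is the queue priority: the distance so far plus a straight-line estimate to the goal, which never overestimates the real road distance and therefore keeps the result optimal.

```java
public class AStarPriority {
    // straight-line (Euclidean) distance: an admissible heuristic,
    // since a road between two points can never be shorter than this
    static double heuristic(double x, double y, double goalX, double goalY) {
        double dx = x - goalX, dy = y - goalY;
        return Math.sqrt(dx * dx + dy * dy);
    }

    // Dijkstra orders the queue by distSoFar alone;
    // A* orders it by distSoFar + heuristic(node, goal)
    static double priority(double distSoFar, double x, double y,
                           double goalX, double goalY) {
        return distSoFar + heuristic(x, y, goalX, goalY);
    }
}
```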


Additionally, if you accept slightly less optimal solutions you can apply heuristics like “don’t explore that many more nodes if you’re close to the destination”.

If you don’t want less optimal paths and still want it faster you could

Running Shortest-Path Algorithms on the German Road Network within a 1.5GB JVM

Update: With changes introduced in January 2013 you only need 1GB – live demo!

In one of my last blog posts I wrote about memory efficient ways of coding in Java. My conclusion was not a bright one for Java: “This time the simplicity of C++ easily beats Java, because in Java you need to operate on bits and bytes”. But I am still convinced that in nearly every other area Java is a good choice. Someone just needs to implement the dirty parts of memory efficient data structures and provide a nice API for the rest of the world. That’s what I had in mind with


GraphHopper does not tackle memory efficient data structures themselves, like Trove4j etc. Instead it focuses on spatial indices, routing algorithms and other “geo-graph” experiments. A road network can already be stored, and you can execute Dijkstra, bidirectional Dijkstra, A* etc. on it.

Months ago I took the opportunity and tried to import the full road network of Germany via OSM. It failed. I couldn’t make it work in the first days due to the massive RAM usage of HashMaps for around 100 mio data points (only 33 mio of which are associated with roads, though). Even using only trove4j brought no success. With Neo4j the import worked, but it was very slow, and when executing algorithms the memory consumption was too high when too many nodes were requested (long paths).

Then, after days, I created a memory mapped graph implementation in GraphHopper and it worked, too. But the implementation is a bit tricky to understand, not thread safe (not even for two reading threads yet) and slower compared to a pure in-memory solution. Even more important: the speed was not very predictable, and it was very ugly to debug when off-heap memory got scarce.

I’ve now created a ‘safe’ in-memory graph, which saves the data after import and reads it once before it starts. At the moment this is read-thread-safe only, as full thread safety would be too slow and is not necessary (yet).

Now, performance-wise on this big network, well … I won’t talk about the speed of a normal Dijkstra; give me some more time to improve the speed-up techniques. For a smaller network you can see below that even for this simplistic approach (no edge contraction or edge reduction at all) the query time is under 150ms, and I guess it will be under 100ms for bidirectional A* (without approximation!).

In order to perform realistic route queries on a road network we would like to satisfy two use cases:

  1. Searching addresses (cities, streets etc)
  2. Clicking directly on the map to create a route query

The first one is simple to solve, although tons of additional RAM are unlikely to be avoidable. But we can solve it very easily with ElasticSearch or Lucene: just associate the cities, streets etc. with the node ids of the graph.

The second use case requires more thinking because we want it to be memory efficient. A normal quad tree is not a good choice as it requires too many references. Even for a few million data points it requires several dozen MB in addition to the graph – e.g. 80MB for only 4 mio nodes.

The solution is to use a raster over the area – which can be a simple array addressed by spatial keys. Per quadrant (aka tile) we store one array index of the graph as an entry point. (In fact this is a quad tree of depth one!) When a click on the map happens, we calculate the spatial key from this point (A), get the entry point from the array and traverse the graph to find the point in the graph closest to A. Here is an implementation, where only one problem remains (which is solved in the new index).
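A minimal sketch of such a depth-one raster index could look like this (all names and the cell layout are my own illustration, not GraphHopper’s actual index): one array cell per tile holds a single graph node id as entry point.

```java
// Sketch of the 'quad tree of depth one' described above (names are mine).
// One graph node id per raster cell serves as entry point into the graph.
public class RasterIndex {
    private final int[] entryNode;        // one node id per cell, -1 = empty
    private final double minLat, minLon, latRange, lonRange;
    private final int parts;              // cells per dimension

    public RasterIndex(int parts, double minLat, double maxLat,
                       double minLon, double maxLon) {
        this.parts = parts;
        this.minLat = minLat;
        this.minLon = minLon;
        this.latRange = maxLat - minLat;
        this.lonRange = maxLon - minLon;
        entryNode = new int[parts * parts];
        java.util.Arrays.fill(entryNode, -1);
    }

    int cellIndex(double lat, double lon) {
        int row = (int) ((lat - minLat) / latRange * parts);
        int col = (int) ((lon - minLon) / lonRange * parts);
        row = Math.min(parts - 1, Math.max(0, row));
        col = Math.min(parts - 1, Math.max(0, col));
        return row * parts + col;
    }

    public void put(double lat, double lon, int nodeId) {
        entryNode[cellIndex(lat, lon)] = nodeId;
    }

    // returns the entry point; a real implementation would now traverse
    // the graph from here to find the node closest to (lat, lon)
    public int entryPoint(double lat, double lon) {
        return entryNode[cellIndex(lat, lon)];
    }
}
```

The whole index is just one int array, so the memory overhead stays tiny compared to a pointer-based quad tree.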

Unfair Comparison

In the last days, just for the sake of fun, I took Neo4j and ran my bidirectional Dijkstra on a small data set – Unterfranken (1 mio nodes). GraphHopper is around 8 times faster and uses 5 times less RAM:

The lower the better – it is the mean time in seconds per run on this road network, where two of the algorithms (BiDijkstraRef, BiDijkstra) are used. The number in brackets is the memory actually used, in GB, for the JVM. The lowest possible memory for GraphHopper was around 160MB, but only for the more memory friendly version (BiDijkstra).

For all Neo4j bashers: this is not a fair comparison, as GraphHopper is highly specialized, will only be usable for 2D networks (roads, public transport) and also does not have transaction support etc., as pointed out by Michael:

But we can learn that sometimes it is really worth the effort to create a specialized solution. The right tools for the right job.


Although it is not easy to create memory efficient solutions in Java, with GraphHopper it is possible to import (2.5GB) and use (1.5GB) a road network the size of Germany on a normally sized machine. This makes it possible to process even large road networks on one machine, and e.g. lets you run algorithms on a single, small Amazon instance. If you reduce the memory usage of your (routing) application, you are also very likely to avoid garbage collection tuning.

There is still a lot of room to optimize memory usage and especially speed, because there is a lot of research on road networks! You’re invited to fork & contribute!

Failed Experiment: Memory Efficient Spatial Hashtable

The Background of my Idea

The idea is to use a hash table for points (aka HashMap in Java) and try to implement neighbor searches on it. First of all you’ll need to understand what a spatial key is. Here you can read the details, but in short it is a binary geohash where you avoid the memory inefficient base-32 representation.

Now that we have the spatial key, you can think of an array which is used to map from indices (like spatial keys) to values. This is the simplest representation of a spatial index, as we don’t need to store the keys at all. But it is only memory efficient if there are no empty entries, which is very unlikely for clustered, real-world GIS data.
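For readers unfamiliar with spatial keys, here is a small sketch of how such a binary geohash can be computed (my own illustration; bit order and method names are assumptions): latitude and longitude bits are interleaved by repeatedly halving the bounding box.

```java
public class SpatialKey {
    // Sketch of a binary geohash: interleave lon and lat decisions,
    // most significant bit first, by bisecting the world bounding box.
    public static long encode(double lat, double lon, int bits) {
        long key = 0;
        double minLat = -90, maxLat = 90, minLon = -180, maxLon = 180;
        for (int i = 0; i < bits; i++) {
            key <<= 1;
            if (i % 2 == 0) { // even bit: longitude half
                double mid = (minLon + maxLon) / 2;
                if (lon >= mid) { key |= 1; minLon = mid; } else maxLon = mid;
            } else {          // odd bit: latitude half
                double mid = (minLat + maxLat) / 2;
                if (lat >= mid) { key |= 1; minLat = mid; } else maxLat = mid;
            }
        }
        return key;
    }
}
```

Nearby points share a long common prefix, which is exactly the property the bucket index below exploits.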

If we would solve this with a normal hash table we encounter two problems:

  • It is very unlikely that points in the same area end up in the same hash bucket – making neighborhood searches slow, i.e. O(n)
  • It would be necessary to store the entire point – not only the associated value. Otherwise it would be impossible in case of a hash-collision to detect which point belongs to which value.

My idea is to use parts of the spatial key for the hash code and avoid storing the entire key. It is implemented in Java (open source and available on GitHub).

As you can see we’re still using an array of buckets and a “somehow” converted spatial key to get

The Bucket Index

Let me explain the necessary bucket index in more detail (see picture above on the right).

We skip the beginning bits of every spatial key, as they are identical for an area a lot smaller than the world boundaries, like Germany.

If we use the first part of the spatial key – in the picture identified as x – then the array is small enough. But with some real world data (also available as OSM) this is not sufficient. Too many overflows would happen; some buckets would have several thousand entries! If we move the used part a bit to the right this gets a lot better, e.g. for 4 entries per bucket we have an RMS error of about 2.

We now have a form of

A Hashtable & Quadtree Mixture

We can tune whether our data structure behaves like a quad tree or a hash table. Moving the bits taken from the spatial key to the left, we get quad-tree-like characteristics; taking the bits more from the right, we get hash-table-like characteristics.

This would be fine if we had massive data. But we need to make this approach practical also for e.g. only 2 mio data points, because the used part of the spatial key is only 19 bits long: if we assume 4 entries per bucket we come to approx. 2 mio (4 * 2^19 = 4 * 524,288). So the bucket index alone is too short. The solution to this problem is a bit operation on the left and the right part of the spatial key:

bucketIndex = x ^ y

Further reduce memory consumption

Besides the fact that we now have some kind of pointer-less or linear quad tree, we can further reduce the memory footprint: we store only the required part (e.g. all bits except y) and not the full spatial key. For this it was necessary that our bit operation (or, more generically, “hashing scheme”) is reversible, i.e. we can regain the full spatial key from only the bucket index and the stored part of the key. And in our case x XOR y is reversible. In fact this memory reduction can be applied to any hashing procedure which fulfills this ‘reversible’ requirement.
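The reversibility can be sketched in a few lines (the 19-bit part width and the names are my own illustrative assumptions): since bucketIndex = x ^ y, storing only x is enough to regain y and therefore the full key.

```java
public class ReversibleBucketIndex {
    // x = high part of the spatial key, y = low part (widths illustrative).
    static final int PART_BITS = 19;
    static final int MASK = (1 << PART_BITS) - 1;

    static int bucketIndex(long spatialKey) {
        int x = (int) (spatialKey >>> PART_BITS) & MASK;
        int y = (int) spatialKey & MASK;
        return x ^ y; // the XOR combination of both parts
    }

    // reverse step: from the bucket index and the stored part x
    // we regain y = bucketIndex ^ x and thus the full spatial key
    static long fullKey(int bucketIndex, int x) {
        int y = bucketIndex ^ x;
        return ((long) x << PART_BITS) | y;
    }
}
```

Any hashing scheme with such an inverse would allow the same trick; XOR just happens to be the simplest one.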

Speed of Neighbor Queries is Bad

Neighborhood searches are very slow – slower than I expected. The naive approach resulted in 60 seconds for a 10km search – 30 times slower than it would take to process all 2 mio entries. After tuning the overflow schema we are now a bit under 2 seconds. Still 10 times slower than a normal quad tree and as slow as processing all entries. The reason why this storage is only good for get and put operations is that the same bucket needs to be parsed several times, as the same bucket index needs to be the home for several different locations – yeah, exactly as intended.

The good news is:

1. When I moved the bucket-index window a lot to the left it got faster and faster, but it took dozens of seconds to create the storage due to the heavy number of overflows, even for my small data set (2 mio). It could be improved a bit by applying different overflow strategies, e.g. not using a linear overflow but skipping every two buckets, or others.

2. Even in this state the idea can be used as a memory efficient spatial key-value storage without the neighbor search. E.g. if you already have a graph of roads but need an entrance like a HashMap<Point, NodeId> for it, then our data structure is an efficient hash table. Also, a simple rectangular neighbor search should be fast: request only the 8 surrounding bounding boxes. Then no tree traversal is necessary and every box can be handled with just a loop through the bucket array.

3. Another possibility is to use a small quad tree as an entry (mapping spatial keys to ids) for a 2D graph, then traverse this graph to find neighbors. This is the way I finally chose, as I already needed a road network for Dijkstra. So I only need an additional 10MB for the small quad tree index – see a possible next blog entry.

4. I’m not alone – you can take my idea and try implementing a more efficient neighbor search yourself 🙂 !


In this post I’ve explained how to create a spatial hash table which is optimized for memory usage. This is achieved by combining two ideas: using a hash-table-like data structure that is still ‘somehow suited’ for neighborhood searches, and reducing the amount of memory by storing only parts of the hash key. The second idea could be applied to every kind of hash table, but only if the hash key creation is reversible.

The ideas are implemented in Java for the GraphHopper project – see the geohash package. Sadly the performance for neighbor searches is really bad, which created a different solution in my mind (see point 3 of the good news).


In the literature similar data structures are called linear or pointer-less quad trees. After this experiment I come to the conclusion that the best way to implement a memory efficient spatial storage which can also perform fast neighbor queries could be a prefix quad tree: still using pointers, but storing two bits in every branch node and avoiding those bits in the leaf nodes. Ongoing work on this is currently done in Spatial4J & Lucene 4.0 – actually without the use of spatial keys.