
Updating the ORS routing graph… frequently!

We’re very happy to announce two more important core features for our services:

  1. Weekly OSM planet.pbf update!
  2. md5 checksum included in response!

How old is the OpenStreetMap planet file that was used to build the openrouteservice road network? This is a question many of you have asked in the past. To answer it, we have added one very simple feature to the API response: the MD5 checksum of the specific planet file that was used. You will find it in the information block – here is a simple example:

"info": {
  "attribution": "openrouteservice.org | OpenStreetMap contributors",
  "osm_file_md5_hash": "3cf93f78507dd63f479e558854b55acb",
  "engine": {
    "version": "4.4.0",
    "build_date": "2018-01-19T13:50:52Z"
  }
}
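If you consume the response programmatically, the checksum is easy to read out. A minimal Python sketch (the response string below is abbreviated sample data, not a live API call):

```python
import json

# Abbreviated sample of a directions response's "info" block
response_text = """
{
  "info": {
    "attribution": "openrouteservice.org | OpenStreetMap contributors",
    "osm_file_md5_hash": "3cf93f78507dd63f479e558854b55acb",
    "engine": {
      "version": "4.4.0",
      "build_date": "2018-01-19T13:50:52Z"
    }
  }
}
"""

response = json.loads(response_text)

# The checksum identifies the exact planet file the graph was built from
osm_hash = response["info"]["osm_file_md5_hash"]
print(osm_hash)  # 3cf93f78507dd63f479e558854b55acb
```

Comparing this hash between requests tells you whether the routing graph has been rebuilt from a newer planet file in the meantime.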

ORS now responds in GPX!

Probably all of you have at some point needed to transform one format into another. There are many ways to do so, but every time it is one more step of work – and honestly, this can become quite annoying. We have therefore decided to extend the API’s capabilities. Our first feature in this direction is a GPX response format. The suggestion came from one of our many users, who wanted to save the API’s route response directly to a TomTom navigation device. Quite handy! To give you an example, try this URL with your api_key:
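As a sketch, a GPX request could be assembled like this. The coordinates below are placeholders and the parameter names follow the ORS directions API of that time; double-check them against the documentation before use:

```python
from urllib.parse import urlencode

# Illustrative only: replace YOUR_API_KEY with your own key before trying this.
base = "https://api.openrouteservice.org/directions"
params = {
    "api_key": "YOUR_API_KEY",
    "coordinates": "8.676581,49.418204|8.692803,49.409465",  # lon,lat pairs
    "profile": "driving-car",
    "format": "gpx",  # request the route as GPX instead of JSON
}
url = f"{base}?{urlencode(params)}"
print(url)
```

The returned GPX file can then be saved as-is and loaded onto a navigation device.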


Control the Snapping Tolerance

One feature the openrouteservice community has requested again and again is control over the snapping tolerance to the underlying street network. Sounds complicated, but it isn’t: by default our engine will look for the nearest street segment within a 50 kilometer radius. You might have noticed this behaviour on openrouteservice.org – add a waypoint that doesn’t lie exactly on a road, and it can still be used to compute a route. You can now control this with the radiuses parameter in your request (please be aware that the length of this list must correspond to the number of waypoints)!

Let’s give this a try by setting radiuses to 500 meters for each waypoint – perfect, it will find a route:


And now we decrease it to 50 meters – aha, it responds with
“Cannot find point 1: 50.720223,10.890477 within a radius of 50.0 meters”!
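A hypothetical sketch of how such requests could be assembled. The parameter names follow the ORS API, and the coordinates are placeholders (the second pair is taken from the error message above):

```python
from urllib.parse import urlencode

def directions_url(coords, radiuses, api_key="YOUR_API_KEY"):
    """Build an ORS directions request; one snapping radius (meters) per waypoint."""
    assert len(coords) == len(radiuses), "radiuses must match the number of waypoints"
    params = {
        "api_key": api_key,
        "coordinates": "|".join(f"{lon},{lat}" for lon, lat in coords),
        "profile": "driving-car",
        "radiuses": "|".join(str(r) for r in radiuses),
    }
    return "https://api.openrouteservice.org/directions?" + urlencode(params)

waypoints = [(10.878823, 50.721992), (10.890477, 50.720223)]
print(directions_url(waypoints, [500, 500]))  # generous snapping: a route is found
print(directions_url(waypoints, [50, 50]))    # strict snapping: may fail as above
```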


Happy routing!

We cordially invite everybody interested to our next open GIScience colloquium talk:

Context-Aware Movement Analysis: An Application to Similarity Search of Trajectories

Dr. Mohammad Sharif
Department of Geographic Information Systems, Faculty of Geomatics Engineering, K. N. Toosi University of Technology, Tehran, Iran

Time and date: Mon, January 22, 2:15 pm
Venue: INF 348, Room 015, Department of Geography, Heidelberg University

Studying movement in geographic information science (GIScience) has received attention in recent years because it plays a crucial role in understanding and modeling various spatial activities and processes. In reality, the movement of an object is embedded in context and is highly affected by both internal and external contexts. The former is any factor related to the object’s characteristics, state, and condition, while the latter refers to the environmental conditions during the move. This consequential influence has created new paradigms for context-aware movement data mining and analysis. Among potential movement analysis topics, studying moving point objects (MPOs) and measuring the similarities between their trajectories have recently attracted interest, because they can form the basis for understanding objects’ behaviors, extracting their movement patterns, and predicting their future movement trends. Despite this importance, little attention has so far been paid to contextualizing the similarity search of trajectories. In this research, after providing a new definition and a taxonomy for context in movement analysis, a series of distance functions has been developed for assessing the similarities of trajectories, by including not only the spatial footprints of MPOs but also a notion of their internal and external contexts. In other words, the degree of similarity between two trajectories is related not only to their spatial and temporal closeness but also to the commonalities in the contexts that they share. The effectiveness of the developed methods has been examined in several experiments on real datasets, i.e., commercial airplanes’, pedestrians’, and cyclists’ trajectories, in separate study areas, while accounting for the internal and external context information during the movement.
The results of these implementations demonstrate the significance of incorporating contextual information in movement studies, as movement is highly affected by context in both positive and negative manners.

Land use data created by humans (OSM) was fused with satellite remote sensing data, resulting in a conterminous land use data set without gaps. The first version is now available for all of Germany at OSMlanduse.org.
Where human input (OSM data) was absent, the missing land use information was generated by a machine that learned from human inputs, using remote sensing time series as the feature space. The method outlined in Schultz et al. (2017) http://www.sciencedirect.com/science/article/pii/S0303243417301605 was used to create the new data set for all of Germany.

Data gaps in the global OSMlanduse.org map were filled for Germany using free remote sensing data, resulting in a land cover (LC) prototype with complete coverage of this area. Sixty OSM tags were used to allocate a CORINE Land Cover (CLC) level 2 land use classification, and the remaining gaps were filled with remote sensing data.
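Conceptually, the gap filling works like a supervised classifier: pixels carrying an OSM land use label serve as training data, their remote sensing time series as features, and unlabeled pixels are then predicted. A toy illustration of that idea (the feature values and classes here are made up, and the real method of Schultz et al. 2017 is far more elaborate):

```python
# Toy 1-nearest-neighbour gap filling: OSM-labelled pixels train the model,
# unlabelled pixels get the class of the most similar spectral time series.
labelled = [
    # (feature vector, e.g. vegetation index values over a season; class label)
    ([0.8, 0.7, 0.9], "forest"),
    ([0.2, 0.3, 0.2], "urban"),
    ([0.5, 0.9, 0.4], "agriculture"),
]

def predict(features):
    def dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda sample: dist(sample[0], features))[1]

# A pixel with no OSM label but a forest-like time series:
print(predict([0.75, 0.72, 0.85]))  # forest
```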
Have a look at the result on osmlanduse.org (the new version is at the moment for Germany only, the other parts of the world currently use only OpenStreetMap without remote sensing data added).
Stay tuned for further versions and improvements!

Schultz, M., Voss, J., Auer, M., Carter, S., and Zipf, A. (2017): Open land cover from OpenStreetMap and remote sensing. International Journal of Applied Earth Observation and Geoinformation, 63, pp. 206-213. DOI: 10.1016/j.jag.2017.07.014.


Related earlier work:

Jokar Arsanjani, J., Mooney, P., Zipf, A., Schauss, A. (2015): Quality assessment of the contributed land use information from OpenStreetMap versus authoritative datasets. In: Jokar Arsanjani, J., Zipf, A., Mooney, P., Helbich, M. (eds.): OpenStreetMap in GIScience: experiences, research, applications. ISBN: 978-3-319-14279-1, pp. 37-58, Springer Press.

Dorn, H., Törnros, T. & Zipf, A. (2015): Quality Evaluation of VGI using Authoritative Data – A Comparison with Land Use Data in Southern Germany. ISPRS International Journal of Geo-Information. Vol 4(3), pp. 1657-1671, doi: 10.3390/ijgi4031657

Jokar Arsanjani, J., Helbich, M., Bakillah, M., Hagenauer, J., & Zipf, A. (2013). Toward mapping land-use patterns from volunteered geographic information. International Journal of Geographical Information Science, 2264-2278. DOI:10.1080/13658816.2013.800871.

This week the GIScience research group Heidelberg and HeiGIT visited the exhibition “MatheLiebe” in Heidelberg. The exhibition demonstrates:
• that mathematics is understandable not only for scientists, but for everyone,
• that mathematics is interesting, useful and full of exciting surprises,
• and that mathematics is important for technological progress and for our everyday lives.
We very much enjoyed the guided tour led by Reinhold Weinmann and colleagues.
The traveling exhibition from Liechtenstein is presented in the MAINS, the mathematics and computer science station of the Heidelberg Laureate Forum Foundation (HLFF).

“Experiencing Mathematics” is an international exhibition, initiated and supported by UNESCO. Since opening in 2004, it has been shown in more than 30 countries and over 150 times. With the exhibition in Heidelberg “Experiencing Mathematics” can be seen in Germany for the first time.

Further Information:


The Heidelberg Laureate Forum Foundation (HLFF) annually organizes the Heidelberg Laureate Forum (HLF), which is a networking event for mathematicians and computer scientists from all over the world. The HLF was initiated by the German foundation Klaus Tschira Stiftung (KTS), which promotes natural sciences, mathematics and computer science, and the Heidelberg Institute for Theoretical Studies (HITS). The KTS and the HITS were brought to fruition by physicist and co-founder of SAP, Klaus Tschira (1940 – 2015). The Forum is organized by the HLFF in cooperation with KTS and HITS as well as the Association for Computing Machinery (ACM), the International Mathematical Union (IMU), and the Norwegian Academy of Science and Letters (DNVA).

Admittedly, the OpenRouteService Matrix API is anything but new. Already implemented in mid-2017, it has been in high demand by numerous clients across the globe ever since. Finally, we want to give it the credit it deserves.

The ORS Matrix API is a building block for solving important problems like the traveling salesman problem, and even more complex use cases logistics companies face every day. While we’re not exactly at the point of solving all of their multi-faceted problems yet, you might still be interested in this tool, which can calculate up to 5.25 million routes per day. Yes, that’s right: 5.25 MILLION PER DAY.

And that happens incredibly fast, too. Within a single request, you can fire off a matrix of 50×50 locations and it won’t take noticeably longer than calculating a single route from Milano to Berlin.

There’s gotta be a catch, you think? Well yes, there is: you will not see any geometry of your routes. You will only be returned what matters most: distance and/or duration. Who has time to examine 5 million routes every day anyway…
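A hypothetical sketch of a matrix request body. The field names follow the ORS Matrix API of that time and the coordinates are synthetic; check the documentation for the exact schema before relying on this:

```python
import json

# Illustrative matrix request: 50 sources x 50 destinations = 2500 routes
# in a single call. Coordinates below are synthetic (lon, lat) pairs.
locations = [[8.6 + i * 0.01, 49.4 + i * 0.01] for i in range(50)]

body = {
    "profile": "driving-car",
    "locations": locations,
    "metrics": ["duration", "distance"],  # no geometry is returned
}

payload = json.dumps(body)
print(len(locations) * len(locations))  # 2500 routes per request
```

Since only durations and distances come back, the response stays compact even for large matrices.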

If we sparked your interest, take a look at the documentation, open your IDE and start stressing our servers!


A recently published paper presents an approach for classifying urban blocks according to their built-up structure based on high-resolution spaceborne InSAR images. Most attributes considered in the classification describe the geometric structure and spatial disposition of the polygon and line features extracted from each block. The feature extraction is carried out on two intensity images acquired at the satellite’s ascending and descending orbits. The strategy used for extracting polygon features is described in detail. We also present a Markov random field model used to perform context-based classification of built-up structures. The model establishes a probabilistic dependency between the class labels of two neighbouring blocks, thereby taking advantage of the fact that blocks with the same structure are frequently clustered. 1695 urban blocks were classified into five general built-up types. It is shown that the context-based classification is up to 6% more accurate than the standard classification on which it is based. We hence provide evidence (1) that urban block-based classifications can potentially be improved if context is considered and (2) that general built-up structures can be distinguished to a good extent using available high-resolution spaceborne radar images.


Novack, T. and Stilla, U. (2018): Context-Based Classification of Urban Blocks According to Their Built-up Structure. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science, Vol. 1 (5), pp. 1-12.


We cordially invite everybody interested to our next open GIScience colloquium talk!

Prof. Dr. Peter Baumann / Computer Science, Jacobs University, Bremen

Friday, January 19 2018, 10.15 am, INF 348, Room 013, Heidelberg University, Institute for Geography

Datacubes form an enabling paradigm for serving massive spatio-temporal Earth data in an analysis-ready way by combining individual files into single, homogenized objects for easy access, extraction, analysis, and fusion – “one cube says more than a million images”. In common terms, the goal is to allow users to “ask any question, any time, on any size”, thereby enabling them to “build their own product on the go”.
Today, large-scale datacubes are becoming reality: For server-side evaluation of datacube requests, a bundle of enabling techniques is known which can massively speed up response times, including adaptive partitioning, parallel and distributed processing, dynamic orchestration of mixed hardware, and even federations of data centers. Known datacube services exceed 600 TB, and datacube analytics queries have been split across 1,000+ cloud nodes. Intercontinental datacube fusion has been accomplished between ECMWF/UK and NCI Australia, as well as between ESA and NASA.
From a standards perspective, as per ISO and OGC, datacubes belong to the family of coverages, aka “spatio-temporally varying objects”. The coverage data model is represented by the OGC Coverage Implementation Schema (CIS) standard, the service model by the OGC Web Coverage Service (WCS) together with its OGC Web Coverage Processing Service (WCPS), OGC’s geo datacube query language. Additionally, ISO is finalizing application-independent query support for massive multi-dimensional arrays in SQL.
In our talk we present the concept of datacubes, the standards that play a role, and existing interoperability successes and issues, based on our work on the OGC Reference Implementation, rasdaman.
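To make the WCPS idea concrete, here is a small example query in the style of rasdaman’s WCPS documentation (the coverage name AvgLandTemp is hypothetical), assembled in Python for illustration:

```python
# A WCPS query extracts and processes a slice of a datacube server-side.
# "AvgLandTemp" is a made-up coverage name; the query selects the time
# slice for July 2014 and asks the server to encode it as a PNG image.
wcps_query = (
    'for $c in (AvgLandTemp) '
    'return encode($c[ansi("2014-07")], "image/png")'
)
print(wcps_query)
```

Such a query is sent to the WCPS endpoint of a datacube server, which evaluates it and returns only the requested result rather than the full cube.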

Further details + dates?

We are looking forward to a large attendance!

Dear students of all semesters and degree programs,

I would like to cordially invite you to two special events in Geography, held as part of the lecture “Geodatenerfassung” (geodata acquisition), which may well interest you:

  • Monday, 22.01.2018, 9:15 am, gHS (INF 230, COS): Special lecture: Official geodata at the City of Heidelberg: acquisition, use and maintenance. Hubert Zimmerer, head of the GIS of the City of Heidelberg, gives an exciting talk from the practice of geodata acquisition in “our” city. He presents the latest examples and what the near future holds for geodata in Heidelberg.
  • Monday, 29.01.2018, 9:15 am, gHS (INF 230, COS): Special lecture: Game-based approaches to geodata acquisition. Heinrich Lorei introduces you to the world of game-based geodata acquisition and also lets you take an active part in it within the scope of his research.

H. Lorei: “Points, levels and high scores – you have surely heard these terms before: they are elementary components of games and enjoy growing popularity. But what makes games so interesting and exciting? The lecture explains this using examples such as Tetris, World of Warcraft and Grand Theft Auto. Gamification, furthermore, means the use of game elements in non-game contexts. It is used more and more in everyday life, e.g. while jogging or doing housework, to turn ‘boring’ activities into exciting challenges and to overcome one’s weaker self. This raises the question: can game elements also be used to bring more variety into the acquisition of geodata? In this way, geodata could be updated more frequently and missing attributes entered by the players. Apps like KORT and StreetComplete have already realized this for OpenStreetMap, but do not exhaust their potential. The last part of the lecture therefore deals with possible extensions to increase their replay value.”

Disaster events damage human infrastructure and its surroundings within seconds. To support humanitarian logistics, the Disaster OpenRouteService needs the latest, most accurate data available. While crowd-sourcing OSM updates during disasters proved very successful, there is not yet a convenient way of automatically accessing up-to-date OSM data for specific regions of interest. Addressing this need, HeiGIT @ GIScience Heidelberg developed a server that provides up-to-date OSM extracts via a web interface that is easy to use.

real-time OSM manages the creation and execution of update tasks, each responsible for extracting, updating and serving OSM data of a user-defined region of the earth. Tasks can be added, modified and deleted easily via an API or a convenient web interface.

We are now working towards deploying this server as a web service hosted by HeiGIT. This will provide you with easy access to the most current data as needed by the disasterORS. Stay tuned for further improvements and updates!

The tool was developed by Stefan Eberlein. The work at HeiGIT is supported by the Klaus Tschira Foundation, Heidelberg.

For more information, visit the real-time OSM Github page.

P.S.: Check out our Open Position: Software Developer: OSM Routing Services

Recently, deep learning has been widely applied in pattern recognition with satellite images. Deep learning techniques like Convolutional Neural Network and Deep Belief Network have shown outstanding performance in detecting ground objects like buildings and roads, and the learnt deep features are further applied in some prediction tasks like poverty and population mapping. On the other hand, such deep learning techniques usually rely on a large set of labeled training samples (i.e., human knowledge) for supervision. Volunteered Geographic Information (VGI) like OpenStreetMap provides a way to easily get a large set of such training data. Meanwhile, utilizing VGI for deep learning brings new technical challenges like
1) how to deal with the noise in VGI data which are usually contributed by common people instead of experts, and
2) how to transfer learnt models from area to area and from time to time, as there is usually a gap between the volunteer labeled targets and the unknown targets waiting for prediction.
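For challenge 1), a simple baseline is to aggregate redundant volunteer labels before training, e.g. by a majority vote per image tile. A toy sketch of that idea (the tile names and labels are made up; real pipelines use more sophisticated noise handling):

```python
from collections import Counter

# Several volunteers label the same tile; a majority vote reduces label noise
# before the tiles are used to supervise a deep network.
votes = {
    "tile_001": ["building", "building", "no_building"],
    "tile_002": ["no_building", "no_building", "no_building"],
    "tile_003": ["building", "no_building", "building"],
}

def majority_label(labels):
    # most_common(1) returns [(label, count)]; keep only the label
    return Counter(labels).most_common(1)[0][0]

training_labels = {tile: majority_label(v) for tile, v in votes.items()}
print(training_labels["tile_001"])  # building
```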
A chapter in a recently published book by Karimi and Karimi (2017) introduces the current work in this field, including satellite image classification with deep learning; challenges and solutions in utilizing VGI data, especially OpenStreetMap (OSM) but also MapSwipe data; domain adaptation and feature transferring; and applications.

First the typical deep learning studies in satellite image classification as well as some classic benchmarks are analyzed, and then the chapter focuses on the problem of automatically extracting big sample sets from VGI data for the supervision of training deep networks. Two main technical challenges about sample noise and domain adaptation as well as their solutions in VGI data quality research and machine learning research are introduced. Finally, several applications where the above techniques and data can be applied are presented.

The chapter builds upon work done in the deepVGI project at HeiGIT (Heidelberg Institute for Geoinformation Technology) at Heidelberg University.

Chen, J., Zipf, A. (2017): Deep Learning with Satellite Images and Volunteered Geographic Information (VGI). In: Karimi, H. A. and Karimi, B. (eds.): Geospatial Data Science Techniques and Applications. Chapter 3, pp. 63-78. CRC Press / Taylor & Francis.

