Inferring indoor room semantics using random forest and relational graph convolutional networks (deep learning)

Semantically rich maps are the foundation of indoor location‐based services. Many map providers such as OpenStreetMap and automatic mapping solutions focus on the representation and detection of geometric information (e.g., the shape of a room) and a few semantics (e.g., stairs and furniture) but often still neglect room usage. To mitigate this issue, a newly published paper (early view) proposes an automated, general room tagging method for public buildings, which can benefit both existing map providers and automatic mapping solutions by inferring the missing room usage from indoor geometric maps.
Two kinds of statistical learning‐based room tagging methods are adopted:
– traditional machine learning (e.g., random forests) and
– deep learning, specifically relational graph convolutional networks (R‐GCNs), based on the geometric properties (e.g., area), topological relationships (e.g., adjacency and inclusion), and spatial distribution characteristics of rooms. In the machine learning‐based approach, a bidirectional beam search strategy is proposed to deal with the fact that the tag of a room depends on the tags of its neighbours in an undirected room sequence.
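The bidirectional beam search idea can be sketched as follows. This is a minimal, illustrative reconstruction, not the paper's implementation: the tag set, the toy `score` function, and the example room features are all assumptions made for the sketch. The key point is that each room's tag score depends both on its own properties and on the tag already assigned to its neighbour, and that the search is run in both directions along the undirected room sequence.

```python
# Illustrative sketch only: tags, scores, and features are hypothetical.
TAGS = ["office", "lab", "corridor"]

def score(room_features, tag, neighbour_tag):
    # Toy compatibility score: base score from the room's own features,
    # plus a penalty for two adjacent corridors (illustrative heuristic).
    s = room_features.get(tag, 0.0)
    if neighbour_tag == tag == "corridor":
        s -= 0.5
    return s

def beam_search(rooms, beam_width=2):
    """Tag a room sequence left to right, keeping only the best
    partial tag sequences (the beam) at each step."""
    beams = [([], 0.0)]  # (tag sequence so far, cumulative score)
    for feats in rooms:
        candidates = []
        for tags, total in beams:
            prev = tags[-1] if tags else None
            for tag in TAGS:
                candidates.append((tags + [tag], total + score(feats, tag, prev)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

def bidirectional_beam_search(rooms, beam_width=2):
    """Run the beam search in both directions and keep the higher-scoring
    result, since in an undirected sequence a room's tag depends on
    neighbours on both sides."""
    fwd = beam_search(rooms, beam_width)
    tags_bwd, total_bwd = beam_search(rooms[::-1], beam_width)
    return max([fwd, (tags_bwd[::-1], total_bwd)], key=lambda c: c[1])

# Hypothetical per-room feature scores derived from geometry:
rooms = [
    {"office": 0.9, "corridor": 0.2},
    {"corridor": 0.8, "office": 0.3},
    {"corridor": 0.7, "lab": 0.6},
]
tags, total = bidirectional_beam_search(rooms)
print(tags)
```

In the paper the per-room scores would come from the trained random forest rather than a hand-written heuristic; the beam search only resolves the neighbour dependency.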
In the R‐GCN‐based approach, useful properties of neighbouring nodes (rooms) in the graph are automatically gathered to classify the nodes. Research buildings are taken as examples to evaluate the proposed approaches on 130 floor plans with 3,330 rooms, using fivefold cross‐validation.
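The core of an R‐GCN layer is that neighbour features are aggregated separately per relation type, each relation (here, hypothetically, "adjacent" and "contains") having its own weight matrix. The following NumPy sketch shows one such layer on a toy room graph; the dimensions, weights, and graph are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

num_rooms, in_dim, out_dim = 4, 3, 2
X = rng.normal(size=(num_rooms, in_dim))  # room features (e.g., area, door count)

# One adjacency matrix per relation type (toy graph, assumed for the sketch):
A = {
    "adjacent": np.array([[0, 1, 0, 0],
                          [1, 0, 1, 0],
                          [0, 1, 0, 0],
                          [0, 0, 0, 0]], dtype=float),
    "contains": np.array([[0, 0, 0, 1],
                          [0, 0, 0, 0],
                          [0, 0, 0, 0],
                          [0, 0, 0, 0]], dtype=float),
}
W = {rel: rng.normal(size=(in_dim, out_dim)) for rel in A}  # per-relation weights
W_self = rng.normal(size=(in_dim, out_dim))                 # self-loop weights

def rgcn_layer(X, A, W, W_self):
    """One R-GCN layer: self-loop transform plus, per relation,
    a degree-normalised mean over that relation's neighbours."""
    H = X @ W_self
    for rel, adj in A.items():
        deg = adj.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0               # avoid division by zero for isolated nodes
        H += (adj / deg) @ X @ W[rel]     # mean of neighbour features per relation
    return np.maximum(H, 0.0)             # ReLU activation

H = rgcn_layer(X, A, W, W_self)
print(H.shape)  # one embedding per room
```

Stacking such layers and ending in a softmax over room tags turns this into the node classification setup described above.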
The experiments show that the random forest‐based approach achieves a higher tagging accuracy (0.85) than the R‐GCN‐based one (0.79).

Hu, X., Fan, H., Noskov, A., Wang, Z., Zipf, A., Gu, F., Shang, J. (2020, early view). Room Semantics Inference Using Random Forest and Relational Graph Convolutional Network: A Case Study of Research Building. Transactions in GIS. DOI: 10.1111/tgis.12664

Earlier related work:

Hu, X., Ding, L., Shang, J., Fan, H., Novack, T., Noskov, A., Zipf, A. (2019). A Data-driven Approach to Learning Saliency Model of Indoor Landmarks by Using Genetic Programming. International Journal of Digital Earth. https://doi.org/10.1080/17538947.2019.1701109

Hu X., Fan H., Noskov A., Zipf A., Wang Z., Shang J. (2019): Feasibility of Using Grammars to Infer Room Semantics. Remote Sensing. 11(13):1535. https://doi.org/10.3390/rs11131535

Goetz, M. & Zipf, A. (2012): Using Crowdsourced Indoor Geodata for Agent-Based Indoor Evacuation Simulations. ISPRS International Journal of Geo-Information. Vol.1(2), pp.186-208. MDPI. DOI:10.3390/ijgi1020186.

Novack T., Vorbeck L., Lorei H., Zipf A. (2020): Towards Detecting Building Facades with Graffiti Artwork Based on Street View Images. ISPRS International Journal of Geo-Information. 9(2):98.

Fan, H., A. Zipf and H. Wu (2016): Detecting repetitive structures on building footprints for the purpose of 3D modeling and reconstruction. International Journal of Digital Earth (IJDE). 1-13. http://dx.doi.org/10.1080/17538947.2016.1252433

Fan, H., Zipf, A., Fu, Q. & Neis, P. (2014): Quality assessment for building footprints data on OpenStreetMap. International Journal of Geographical Information Science (IJGIS). DOI: 10.1080/13658816.2013.867495.

Fan, H., Zipf, A. & Fu, Q. (2014): Estimation of building types on OpenStreetMap based on urban morphology analysis. AGILE Conference, Castellón, Spain, Lecture Notes in Geoinformation and Cartography, “Connecting a Digital Europe through Location and Place”, pp. 19-35. Springer.

Goetz, M. & Zipf, A. (2013): The Evolution of Geo-Crowdsourcing: Bringing Volunteered Geographic Information to the Third Dimension. In: Sui, D.Z., Elwood, S. & Goodchild, M.F. (eds.): Crowdsourcing Geographic Knowledge. Volunteered Geographic Information (VGI) in Theory and Practice. Berlin: Springer. 2013, XII, 396 pp. 139-159