How to use plain OpenData and imagery to train an accurate Deep Learning model, able to detect inconsistencies in the OSM dataset, spot them, and extract features.
And how to make it work at scale, with an OpenSource solution named RoboSat.pink.
Deep Learning approaches have already proven helpful for QA and for MissingMap areas.
RoboSat.pink, an efficient OpenSource Deep Learning toolbox dedicated to geospatial imagery, can help to quickly compare two datasets, such as OSM and an imagery coverage, and do it at scale.
It can spot where the differences are significant enough to be worth a human look.
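The comparison idea above can be sketched in a few lines. This is a minimal illustration, not RoboSat.pink's actual implementation: it assumes you already have, per tile, a binary mask rasterized from OSM and a binary mask predicted from imagery, and it flags tiles where the two disagree beyond a chosen IoU threshold.

```python
# Hedged sketch of tile-level OSM-vs-prediction comparison.
# Mask shapes and the 0.5 threshold are illustrative assumptions,
# not values taken from RoboSat.pink itself.
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / union if union else 1.0  # two empty masks agree

def flag_tiles(osm_masks, predicted_masks, threshold=0.5):
    """Return indices of tiles where OSM and the prediction disagree
    enough to be worth a human look."""
    return [i for i, (osm, pred) in enumerate(zip(osm_masks, predicted_masks))
            if iou(osm, pred) < threshold]

# Toy example: three 4x4 tiles; only tile 1 disagrees
osm = [np.ones((4, 4), bool), np.zeros((4, 4), bool), np.eye(4, dtype=bool)]
pred = [np.ones((4, 4), bool), np.ones((4, 4), bool), np.eye(4, dtype=bool)]
print(flag_tiles(osm, pred))  # -> [1]
```

At scale, the same per-tile metric lets the heavy Deep Learning step run once per tile while the cheap comparison filters out the vast majority of tiles where OSM and the imagery already agree.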
This talk will focus on:
- How to train an accurate model for building and road detection from plain OpenData, without the need to spend too much on hand-labeling features.
- How to generate predictions faster, to lower the IT hardware footprint as much as we can.
The point here is to let anyone with a recent gaming video card play with these tools.
For information, RoboSat.pink's main characteristics:
- Provides several command-line tools that you can combine to build your own workflow
- Follows geospatial standards to ease interoperability and data preparation
- OSM data loader (using PyOsmium)
- Built-in cutting-edge Computer Vision model and loss implementations (which you can replace with your own)
- Supports both RGB and multiband imagery (e.g. multispectral)
- Allows Data Fusion
- Rich and efficient Data Augmentation abilities (using Albumentations)
- Static Web-UI tools to easily display, highlight or select results
- High performance