Feed aggregator

Free and Open Source GIS Ramblings: Porting Processing scripts to QGIS3

Planet OSGeo feeds - Sun, 01/28/2018 - 18:10

I’ll start with some tech talk first. Feel free to jump to the usage example further down if you are here for the edge bundling plugin.

As you certainly know, QGIS 3 brings a lot of improvements and under-the-hood changes. One of those changes affects all Python scripts. They need to be updated to Python 3 and the new PyQGIS API. (See the official migration guide for details.)
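
To give an idea of the kind of change involved, here is a small sketch of one well-known API break (an illustrative example, not taken from the edge bundling script): adding a layer to the project.

# QGIS 2 / Python 2:
#   from qgis.core import QgsMapLayerRegistry
#   QgsMapLayerRegistry.instance().addMapLayer(layer)

# QGIS 3 / Python 3: QgsMapLayerRegistry was merged into QgsProject
from qgis.core import QgsProject

QgsProject.instance().addMapLayer(layer)  # 'layer' is an existing QgsMapLayer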

To get ready for the big 3.0 release, I’ve started porting my Processing tools. The edge bundling script is my first candidate for porting to QGIS 3. I also wanted to use this opportunity to “upgrade” from a simple script to a plugin that integrates into Processing.

I used Alexander Bruy’s “prepair for Processing” plugin as a template but you can also find an example template in your Processing folder. (On my system, it is located in C:\OSGeo4W64\apps\qgis-dev\python\plugins\processing\algs\exampleprovider.)

Since I didn’t want to miss the advantages of a good IDE, I set up PyCharm as described by Heikki Vesanto. This will give you code completion for Python 3 and PyQGIS which is very helpful for refactoring and porting. (I also tried Eclipse with PyDev but if you don’t have a favorite IDE yet, I find PyCharm easier to install and configure.)

My PyCharm startup script qgis3_pycharm.bat is a copy of C:\OSGeo4W64\bin\python-qgis-dev.bat with the last line altered to start PyCharm:

@echo off
call "%~dp0\o4w_env.bat"
call qt5_env.bat
call py3_env.bat
@echo off
path %OSGEO4W_ROOT%\apps\qgis-dev\bin;%PATH%
set QGIS_PREFIX_PATH=%OSGEO4W_ROOT:\=/%/apps/qgis-dev
set GDAL_FILENAME_IS_UTF8=YES
rem Set VSI cache to be used as buffer, see #6448
set VSI_CACHE=TRUE
set VSI_CACHE_SIZE=1000000
set QT_PLUGIN_PATH=%OSGEO4W_ROOT%\apps\qgis-dev\qtplugins;%OSGEO4W_ROOT%\apps\qt5\plugins
set PYTHONPATH=%OSGEO4W_ROOT%\apps\qgis-dev\python;%PYTHONPATH%
start /d "C:\Program Files\JetBrains\PyCharm\bin\" pycharm64.exe

In PyCharm File | Settings, I configured the OSGeo4W Python 3.6 interpreter and added qgis-dev and the plugin folder to its path:

With this setup done, we can go back to the code.

I first resolved all occurrences of import * in my script to follow good coding practices. For example:

from qgis.core import *

became

from qgis.core import QgsFeature, QgsPoint, QgsVector, QgsGeometry, QgsField, QGis

in this PR.

I didn’t even need to run the 2to3 script that is provided to make porting from Python 2 to Python 3 easier. Since the edge bundling code is mostly Numpy, there were almost no changes necessary. The only head-scratching moment was when Numpy refused to add a map() return value to an array. So (with the help of Stackoverflow of course) I added a workaround to convert the map() return value to an array as well:

flocal_x = map(forcecalcx, subtr_x, subtr_y, distance)
electrostaticforces_x[e_idx, :] += np.array(list(flocal_x))
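
To see why NumPy complains, here is a minimal standalone sketch (values are illustrative): in Python 2, map() returned a list, while in Python 3 it returns a lazy iterator that NumPy cannot broadcast against an array.

import numpy as np

forces = np.zeros(3)
local = map(lambda d: d * 2.0, [1.0, 2.0, 3.0])  # a map object (iterator) in Python 3

# forces += local              # worked in Python 2, raises TypeError in Python 3
forces += np.array(list(local))  # materialize the iterator first
print(forces)                    # [2. 4. 6.]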

The biggest change related to Processing is that the VectorWriter has been replaced by a QgsFeatureSink. It’s defined as a parameter of the edgebundling QgsProcessingAlgorithm:

self.addParameter(QgsProcessingParameterFeatureSink(
    self.OUTPUT,
    self.tr("Bundled edges"),
    QgsProcessing.TypeVectorLine))

And when the algorithm is run, the sink is filled with the output features:

(sink, dest_id) = self.parameterAsSink(
    parameters, self.OUTPUT, context,
    source.fields(), source.wkbType(), source.sourceCrs())

# code that creates features

sink.addFeature(feat, QgsFeatureSink.FastInsert)
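
Putting the pieces together, a minimal processAlgorithm() could look roughly like this sketch (the INPUT parameter name and the feature loop are illustrative assumptions, not code copied from the plugin):

def processAlgorithm(self, parameters, context, feedback):
    # INPUT is assumed to be defined as a feature source parameter,
    # analogous to the OUTPUT sink parameter above
    source = self.parameterAsSource(parameters, self.INPUT, context)
    (sink, dest_id) = self.parameterAsSink(
        parameters, self.OUTPUT, context,
        source.fields(), source.wkbType(), source.sourceCrs())

    for feat in source.getFeatures():
        # ... the edge bundling would rewrite feat's geometry here ...
        sink.addFeature(feat, QgsFeatureSink.FastInsert)

    return {self.OUTPUT: dest_id}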

The ported plugin is available on Github.

The edge bundling plugin in action

I haven’t uploaded the plugin to the official plugin repository yet, but you can already download it from Github and give it a try:

For this example, I’m using taxi pick-up and drop-off data provided by the NYC Taxi & Limousine Commission. I downloaded the January 2017 green taxi data and extracted all trips for the 1st of January. Then I created origin-destination (OD) lines using the QGIS virtual layer feature:

To get an interesting subset of the data, I extracted only those OD flows that cross the East River and have a count of at least 5 taxis:

Now the data is ready for bundling.

If you have installed the edge bundling plugin, the force-directed edge bundling algorithm should be available in the Processing toolbox. The UI of the edge bundling algorithm looks pretty much the same as it did for the QGIS 2 Processing script:

Since this is a small dataset with only 148 OD flows, the edge bundling process is pretty quick and we can explore the results:

Beyond this core edge bundling algorithm, the repository also contains two more scripts that still need to be ported. They include dependencies on sklearn, so it will be interesting to see how straightforward it is to convert them.

Just van den Broecke: Emit #2 – On Air Quality Sensors

Planet OSGeo feeds - Sat, 01/27/2018 - 00:36

This is Emit #2, in a series of blog-posts around the Smart Emission Platform, an Open Source software component framework that facilitates the acquisition, processing and (OGC web-API) unlocking of spatiotemporal sensor-data, mainly for Air Quality. In Emit #1, the big picture of the platform was sketched. Subsequent Emits will detail technical aspects of the SE Platform. “Follow the data” will be the main trail loosely adhered to.

In this Emit I will talk a bit about sensors as the data flow originates there. Mind, this is not my area of expertise, but much of the SE platform, in particular data processing (ETL),  is built around challenges of dealing with (cheap) sensors for Air Quality.

Previously I posted about meteo sensors and weather stations (part 1, part 2, part 3): how to connect a weather station to a Raspberry Pi and publish weather data “to the cloud”.  Now this was relatively easy, due to the availability of:

So without any programming, you can be “in business” quite quickly with your personal weather station. In addition, meteo sensors (temperature, humidity, pressure, wind, rain) in general produce relatively “clean”, interpretable data. From a cheap sensor like the $9.95 DHT22 Temperature Humidity Sensor, it is relatively straightforward to read out temperature and humidity via an Arduino board or Raspberry Pi. Personal Weather Stations provide even more internal software, so most meteo data comes out in well-known units (Fahrenheit/Celsius, hectopascal, etc.).
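
As a rough illustration of how simple this is on a Raspberry Pi, here is a sketch using the widely used Adafruit_DHT Python library (the GPIO pin number is just an example):

import Adafruit_DHT  # Adafruit's DHT sensor library for the Raspberry Pi

# DHT22 data pin wired to GPIO4 (pin choice is an assumption for the example)
humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, 4)
if humidity is not None and temperature is not None:
    print('Temperature: {0:.1f} C, Humidity: {1:.1f} %'.format(temperature, humidity))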

Now this is a whole different story for (cheap) Air Quality sensors. It begins with the fact that Air Quality indicators like Carbon Monoxide/Dioxide (CO, CO2), Nitrogen Monoxide/Dioxide (NO, NO2), Particulate Matter (PM, i.e. fine dust) and Ozone (O3) can be measured in many ways, “with both simple chemical and physical methods and with more sophisticated electronic techniques” (from www.enviropedia.org.uk). While techniques for measuring weather data have evolved over maybe hundreds of years, measuring Air Quality is relatively new, mostly within the domain of “chemistry”, and, when it comes to sensors, very expensive.

Recently, this has changed. Not only are governmental environmental agencies facing lower budgets, but, more importantly, a growing number of civilian initiatives want to “take things into their own hands” with respect to measuring Air Quality. As a result, more and more affordable/cheap sensors and creative solutions like the iSpex (measure PM on your iPhone) are entering the market. But given the (chemical) complexity, how reliable are these sensors? Is the data they produce readily usable? Like with Celsius to Fahrenheit, is it a matter of applying some simple formula?

IMHO unfortunately not, but things are getting better as time passes. It also depends on the chemical component you want to measure. For example, Nitrogen Dioxide (NO2) and Ozone (O3) appear to be much harder to measure than CO/CO2. Particulate Matter is a whole story by itself as one deals with, well, “dust” in many shapes and especially sizes (PM10, PM2.5, PM1).

There is ample research in the quest for cheap AQ sensors: their limitations, reliability, particularities. Within the Smart Emission Project, I am working with RIVM, the Dutch National Institute for Public Health and the Environment, and the European Union Joint Research Centre (JRC), who both did extensive research on cheap AQ sensors. There is much more out there, but the main message of this Emit is that “measuring AQ has far more challenges than measuring weather data”. One of the main conclusions, again IMHO, is that, yes, it is possible to use cheap AQ sensors, but one has to do Calibration. Below are some links if you are interested in the state of RIVM and EU JRC research:

Though the latter EU JRC report may be a tough read, it is one of the most detailed and concise reports on the application of low-cost AQ sensors and, I mention it again, the different techniques for Calibration.
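
To make the idea of Calibration concrete, here is a deliberately simplistic toy sketch (all numbers are made up; real calibration, such as in the RIVM/JRC studies or the ANN-based calibration used in the SE Platform, involves far more elaborate models): fit a correction for a cheap sensor against co-located reference measurements.

import numpy as np

# Hypothetical co-located measurements: raw cheap-sensor readings
# versus a reference station (values and units are invented).
raw = np.array([120.0, 150.0, 180.0, 220.0, 260.0])   # sensor, arbitrary units
reference = np.array([18.0, 24.0, 29.0, 38.0, 45.0])  # e.g. NO2 in ug/m3

# Simplest possible calibration: a linear fit mapping raw -> reference
slope, intercept = np.polyfit(raw, reference, 1)

new_raw = 200.0
print('calibrated: {:.1f} ug/m3'.format(slope * new_raw + intercept))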

So back to the Smart Emission Platform: what sensors are we currently using? The SE Platform started with the Smart Emission Nijmegen Project, where participating citizens of the City of Nijmegen would each have their own sensor station that publishes data to the SE Platform.

Our partner in the project, Intemo, develops a sensor station, Josene (nicknamed ‘Jose’), that measures not only AQ but also sound levels (Noise Pollution) and many other indicators, like light.

In the course of the project I was fortunate to follow a workshop at EU JRC for their amazing Open Hardware/Software product AirSensEUR. On the spot, each of us assembled a complete ASE, connecting it to standard web services like SOS. The ASE Open Hardware approach also allows it to embed a multitude of sensor types and brands. The workshop had participants from the major national environmental agencies in Europe. In fact, RIVM is now deploying and testing about 18 AirSensEURs. In the coming year I have the task of deploying five ASEs within The Netherlands. Two of them are already humming at my desk for testing.

AirSensEUR #2 at my desk

Describing AirSensEUR would require a full post by itself. Quoting: “AirSensEUR is an open framework focused on air quality monitoring using low cost sensors. The project started on 2014 from a group of passionate researchers and engineers. The framework is composed by a set of electronic boards, firmware, and several software applications.”

EU JRC AirSensEURs

So currently (January 2018) the SE Platform supports both the Josene/Jose and AirSensEUR sensor devices.

The Air Quality sensor data coming out of these devices still requires cleanup and Calibration. This is part of the data handling within the SE Platform, the subject of one of the upcoming Emits.

This post was meant to give you a taste of the challenges around using (cheap) sensors for Air Quality and introduce the two sensor devices (Josene, AirSensEUR) currently used/supported by the Smart Emission Platform. Many details are still to be uncovered. These will be subjects of upcoming Emits.

 

gvSIG Team: 3rd gvSIG Festival: A new edition of the virtual conference about gvSIG

Planet OSGeo feeds - Fri, 01/26/2018 - 12:30

The third edition of the gvSIG Festival, the virtual conference about the gvSIG project, will be held on March 21st and 22nd, 2018.

Just like in the last edition, a period for sending proposals of projects about the application has been opened, so that users who can’t present their projects at any of the existing face-to-face conferences can do it from their own city.

This event is free of charge and completely online, through the webinar service of the gvSIG Association, with the advantage of having speakers from different countries and presentations in different languages, which users and developers from any part of the world can follow.

If you have done any project with gvSIG and you want to present it at the gvSIG Festival, you can send a summary explaining the project to the following e-mail address: conference-contact@gvsig.com. The summary must be no longer than 300 words, written in Spanish or English, and it must indicate the title and the language of the presentation.

Once the program is configured, it will be published on the event website, and registration will be opened.

We look forward to your proposals!

gvSIG Team: 3rd gvSIG Festival: A new edition of the virtual conference about gvSIG

Planet OSGeo feeds - Fri, 01/26/2018 - 12:30

The third edition of the gvSIG Festival, a new edition of the virtual conference about the gvSIG project, will take place on March 21st and 22nd, 2018.

As in the previous edition, a period for submitting proposals of projects about the application has been opened, so that users who cannot present their projects at the various existing face-to-face conferences can do so from their own city.

This event is free of charge and completely online, through the webinar service of the gvSIG Association, with the advantage of having speakers from different countries and presentations in different languages, which can be followed by users from any part of the world.

If you have carried out a project with gvSIG and want to present it at the gvSIG Festival, you can send a summary explaining it to conference-contact@gvsig.com. The summary must be in Spanish or English, a maximum of 300 words, and it must indicate the title of the presentation and the language in which it would be given.

Once the program is configured, it will be published on the event website and registration will be opened for each presentation.

We look forward to your proposals!

gvSIG Team: gvSIG Workshop at the Universidad Politécnica de Cartagena (Spain)

Planet OSGeo feeds - Fri, 01/26/2018 - 11:07

On January 31st, the gvSIG Association will be at the Escuela Técnica Superior de Arquitectura y Edificación of the Universidad Politécnica de Cartagena (Spain), presenting the gvSIG Suite and giving training workshops.

The program for the day is as follows:

  • Talk on Introduction to the gvSIG Suite (16:00-17:00)
  • Workshop on gvSIG applied to urban planning (17:00-18:45)
  • Workshop on geostatistics in gvSIG (19:15-21:00)

If you are interested in participating, just write an e-mail to gvsigproject@gmail.com confirming whether you want to attend the talk and which workshop you would attend. The workshops have limited places.

If your university is also interested in organizing a similar gvSIG workshop, do not hesitate to contact us: info@gvsig.com

 

CARTO Inside Blog: How to use CARTO.js with React

Planet OSGeo feeds - Fri, 01/26/2018 - 00:00

The beta version of CARTO.js was released at the end of last year and, as part of our testing program, we have created several proofs of concept with different frameworks like React and Angular.

This proof of concept is a map showing prices of AirBNB rentals per night in the city of Madrid. Using CARTO.js, we divide the apartments into seven categories according to their price, assigning one color per category. In addition, we create a dynamic histogram that indicates how many apartments belong to each category in the area of the map we are looking at.

Through this simple example we touch on the basic concepts of CARTO.js and we can see how to integrate it with different frameworks.

Basics of CARTO.js

CARTO.js is designed to work together with the CARTO platform in order to unlock the full potential of your geospatial data through a simple JavaScript API. Of course, the first step is to create a CARTO account and upload the data we want to process. Once you have created an account, your username and an API key are all you need to get started!

Client

The carto.Client is the entry point to CARTO.js. It handles the communication between your app and your CARTO account and it contains the model of our application. In this model two types of objects can exist: layers and dataviews. Remember that these objects are useless by themselves and they must be added to a client in order to be interactive.

// Example of how a client is created
const client = new carto.Client({
  apiKey: '{API Key}',
  username: '{username}'
});

Dataviews

Dataviews are objects used to extract data from a CARTO account in predefined ways (eg: count how many rentals are available, get the average price for an area, …)

This data is considered raw since its form is simply a JSON object from which you can show the data in the way you want. If you want to display this data on a map, you should use a layer.

To create a dataview you just need to indicate the carto.Source and the operation.

// Given the AirBNB dataset, get the value of the most expensive rental
const maxRentalPriceDataview = new carto.dataview.Formula(airbnbSource, 'price', {
  operation: carto.operation.MAX,
});

Once created and added to a client, this object will fire events containing the requested data.

// Add the dataview to the client
await client.addDataview(maxRentalPriceDataview);

// Wait for the server to give the data
maxRentalPriceDataview.on('dataChanged', newData => {
  console.log(`The highest AirBNB rental in madrid costs: ${newData}€`);
});

Layers

Layers are objects used to extract data from a CARTO account and represent it on a map.

As with dataviews, they need a carto.Source that indicates where to extract the data from. They also need a carto.Style that contains the information about how the data should be displayed.

const rentalsLayer = new carto.layer.Layer(airbnbSource, airbnbStyle);

Display carto.Layers in a map

When layers are created they should be added to a client in order to be displayed in a map.

By calling client.getLeafletLayer() you can get a native Leaflet object grouping all carto.layers contained in the client. You just need to add this object to your map to view the data! You can do the same with Google Maps in case you want to use CARTO.js with it as well.

// Display the cartoLayers in a leafletMap
const cartoLeafletLayer = client.getLeafletLayer();
cartoLeafletLayer.addTo(leafletMap);

This object will remain linked to the client. This means that any changes in the client layers will be immediately reflected in the map. (eg: you hide a layer, or you change the layer style, …)

How to integrate CARTO.js in React

You can get the code used throughout this example at the cartojs-react-example repository.

We used create-react-app to scaffold the basics of the application.

Our project structure looks like this:

src/
├── components
│   ├── Histogram.css
│   ├── Histogram.js
│   └── Layer.js
├── data
│   └── airbnb.js
├── index.js
└── utils
    └── index.js
  • index.js: the entry point of our application.
  • components/Histogram: a widget showing how many rentals are in each one of our price categories.
  • components/Layer: a component used to display rentals in a map.
  • data/airbnb.js: contains the source and default style for the AirBNB dataset.
  • utils/index.js: contains a function that creates custom CartoCSS.
Index.js

This is the entry point of the application. It contains the main component of our application, which is initialized with a state and a carto.Client as follows:

// We track the map's center and zoom, and the layer style and visibility
state = {
  center: [40.42, -3.7],
  zoom: 13,
  nativeMap: undefined,
  layerStyle: airbnb.style,
  hidelayers: true
}

// Manages the communication with the server and keeps a list of all layers and dataviews
cartoClient = new carto.Client({ apiKey: '{api_key}', username: '{username}' });

The main component contains a layer and a histogram and its JSX will look similar to this:

<!-- WARNING: Only for learning purposes, don't copy & paste -->
<main>
  <Map center={center} zoom={zoom} ref={node => { this.nativeMap = node && node.leafletElement }}>
    <Basemap attribution="" url={CARTO_BASEMAP} />
    <Layer
      source={airbnb.source}
      style={this.state.layerStyle}
      client={this.cartoClient}
      hidden={this.state.hidelayers}
    />
  </Map>
  <Histogram
    client={this.cartoClient}
    source={airbnb.source}
    nativeMap={this.state.nativeMap}
    onDataChanged={this.onHistogramChanged.bind(this)}
  />
</main>

The Map and the Basemap are created using components provided by the react-leaflet library while the CARTO layer and the histogram are built ad-hoc for this project.

Notice the parameters passed to our custom components:

Layer

  • source: string with a SQL query pointing to the geospatial data.
  • style: a CartoCSS string with information about how the data should be displayed.
  • client: a carto.Client instance.
  • hidden: a boolean attribute controlling the layer's visibility.

Histogram

  • client: a carto.Client instance.
  • source: string with a SQL query pointing to the geospatial data.
  • nativeMap: the leaflet-map element.
  • onDataChanged: a callback function that will be executed when the dataview fetches new data.
Layer Component

A layer component receives the properties listed above.

In the component constructor we use those properties to create the carto.source.SQL and carto.style.CartoCSS required in order to create a carto.layer.Layer.

We finally add our brand new layer to the client.

constructor(props) {
  super(props);
  const { client, hidden, source, style } = props;
  const cartoSource = new carto.source.SQL(source);
  const cartoStyle = new carto.style.CartoCSS(style);
  this.layer = new carto.layer.Layer(cartoSource, cartoStyle);
  client.addLayer(this.layer);
}

According to the React lifecycle we must wait until the component has been mounted before trying to add a leafletLayer to the leaflet map. Once the component has been mounted we know this.context will reference the native leaflet map so we can get a leaflet-layer from the client and add it to our map.

componentDidMount() {
  const { client } = this.props;
  client.getLeafletLayer().addTo(this.context.map);
}

This allows us to view a map like the following:

Histogram Widget

We want to create a histogram displaying the price per night for the rentals in the map.

As you might have guessed, we are going to create a React component wrapping a histogram dataview, so you can see how easy it is to get geospatial data from the CARTO server.

As in the Layer component, all the initialization is done in the constructor. To create the histogram we only need a carto.source.SQL pointing to the rentals data, the column name and the number of bins.

Since building the histogram requires server interaction, the whole process will be asynchronous, and we need to register a callback function that will be executed when the data is available.

Finally, remember to add the widget to the client. Otherwise nothing will happen!

constructor(props) {
  super(props);
  const { source, client } = props;

  // Create a cartoSource from the given source string
  const dataset = new carto.source.SQL(source);

  // Create a 7-bin histogram on the price column
  this.histogramDataview = new carto.dataview.Histogram(dataset, 'price', { bins: 7 });

  // Wait for the server to return data
  this.histogramDataview.on('dataChanged', this.onDataChanged);

  // Register the dataview into the client
  client.addDataview(this.histogramDataview);
}

The simplest onDataChanged callback could be one that just updates the React internal state:

onDataChanged = (data) => {
  this.setState(data);
}

This will cause render to be called with the new state.

render() {
  return <ul>
    {this.state.bins.map(bin => <li>{bin.avg} € - {bin.freq}</li>)}
  </ul>;
}

A simple render function like this will show an unordered list with the average price for every bin and how many rentals are in each bin.

With some CSS & HTML we can improve this visualization even more:

Once we get this… Won’t it be great to have a different color in the layer points according to their histogram bin?

Updating layer style

Once we get the histogram data, we want to update the layer and apply new styles to create a better visualization. The first step will be updating our callback to notify the parent element about the new data arrival.

// Histogram.js
onDataChanged = (data) => {
  this.setState(data);
  // Call the callback function with the new data
  this.props.onDataChanged(data);
}

On the parent element (index.js) we will process this data, generating a new style that should be applied to the layer.

// index.js
onHistogramChanged(data) {
  const newStyle = utils.buildStyle(data);
  this.setState({ layerStyle: newStyle, hidelayers: false })
}

To generate the style, we use a utility function that generates CartoCSS from histogram data:

export const COLORS = ['#fcde9c', '#faa476', '#f0746e', '#e34f6f', '#dc3977', '#b9257a', '#7c1d6f'];

export function buildStyle(data) {
  const rules = data.bins.map((bin, index) => _createRule(bin, COLORS[index])).join('');
  return `
    #layer {
      marker-width: 10;
      marker-fill-opacity: 0.7;
      marker-allow-overlap: false;
      marker-line-width: 0;
      marker-comp-op: multiply;
      ${rules}
    }
  `;
}

function _createRule(bin, color) {
  return `
    [price >= ${bin.start}] {
      marker-fill: ${color};
    }
  `;
}

export default { buildStyle, COLORS };

We won’t explain this in detail since it is not very relevant, but the core concept here is that buildStyle transforms histogram data into CartoCSS like the following:

#layer {
  marker-width: 10;
  marker-fill-opacity: 0.7;
  marker-allow-overlap: false;
  marker-line-width: 0;
  marker-comp-op: multiply;
  if (price >= 0) { marker-fill: green; }
  if (price > 50) { marker-fill: orange; }
  if (price > 100) { marker-fill: red; }
}

This new CartoCSS is assigned to the layerStyle variable in the main app component state, triggering a new render.

This style is passed to the layer as a property.

<Layer
  source={airbnb.source}
  style={this.state.layerStyle}  // <----
  client={this.cartoClient}
  hidden={this.state.hidelayers}
/>

So the layer must be aware of these changes. This is done using the shouldComponentUpdate function, checking if the style has changed.

shouldComponentUpdate(nextProps) {
  return nextProps.style !== this.props.style;
}

So in our render function we only need to update the layer style with the new CartoCSS passed as a property. We can simply use the .setContent function to achieve this.

render() {
  const { style } = this.props;
  const layerStyle = this.layer.getStyle();
  layerStyle.setContent(style);
  return null;
}

Since our client connects everything, the map will be updated on its own:

Listening to map position

As a final step, we want our histogram to reflect the exact data we are seeing in the map.

In order to achieve this we need to filter our dataview to consider only data belonging to our current map area.

Luckily for us, CARTO.js provides this exact functionality through what are known as filters. For this case we want to use a bounding box filter in the Histogram constructor, adding just two lines: one to create the filter and another to add the filter to the widget.

constructor(props) {
  super(props);
  const dataset = new carto.source.SQL(props.source);
  this.histogramDataview = new carto.dataview.Histogram(dataset, 'price', { bins: 7 });

  // Create a bboxFilter attached to the native leaflet map
  const bboxFilter = new carto.filter.BoundingBoxLeaflet(props.nativeMap);

  // Add the filter to the histogram
  this.histogramDataview.addFilter(bboxFilter);

  this.histogramDataview.on('dataChanged', this.onDataChanged);
  props.client.addDataview(this.histogramDataview);
}

And that’s all! Now when we change the map position, the histogram widget will fire a dataChanged event with new data belonging to the visible portion of the map.

gvSIG Team: GIS applied to Municipality Management: Module 7.2 ‘Editing (Derivative geometries)’

Planet OSGeo feeds - Thu, 01/25/2018 - 09:54

The second video of the seventh module is now available, in which we will show a new tool related to the editing part in gvSIG.

The functionality that we are going to see in this video allows us to create shapefiles from other ones. We will be able to create a polygon layer from point or line layers, and also a line layer from points. It is very useful when we have the points of the axis of a street in our municipality, with a field holding the order of the points (if we don’t have that field, we should select the points in the View to check their order), so that we don’t have to digitize the street point by point. The same applies when we have the points that form a parcel: if a parcel is formed by 500 points, with this tool we don’t have to digitize those points one by one to create the polygon; it is created automatically.

The cartography to use in this video can be downloaded from the following link.

Here you have the second videotutorial of this seventh module:

Related posts:

GeoServer Team: GeoServer 2.12.2 Released

Planet OSGeo feeds - Wed, 01/24/2018 - 18:42

We are happy to announce the release of GeoServer 2.12.2. Downloads are available (zip, war, and exe) along with docs and extensions.

This is a stable release recommended for production use. This release is made in conjunction with GeoTools 18.2.

Highlights of this release are featured below; for more information please see the release notes (2.12.2 | 2.12.1 | 2.12.0 | 2.12-RC1 | 2.12-beta).

New Features and Improvements
  • GetLegendGraphic rescale accounts for stroke thickness
  • WPS requests now support the use of CDATA to guard ComplexData
  • CSS expressions with units are now supported
Bug Fixes
  • Importer can now import shapefiles with spaces in the attribute names to PostGIS
  • An intermittent problem with WFS filter encoding has been resolved.
  • User interface improvements for layer group creation
  • Our community modules remain under active development with fixes to scripting, mbstyle, and backup and restore.
About GeoServer 2.12 Series

Additional information on the 2.12 series:

Volker Mische: Joining Protocol Labs

Planet OSGeo feeds - Wed, 01/24/2018 - 16:59

I’m pumped to announce that I’m joining Protocol Labs as a software engineer. Those following me on Twitter or looking at my GitHub activity might have already gotten some hints.

Short term

My main focus is currently on IPLD (InterPlanetary Linked Data). I’ll smooth things out and also work on the IPLD specs, mostly on IPLD Selectors. IPLD Selectors will be used to make the underlying graph more efficient to traverse (especially for IPFS). That’s a lot of buzzwords; I hope it will get clearer the more I blog about this.

To get started I’ve done the JavaScript IPLD implementations for Bitcoin and Zcash. Those are the basis for making easy traversal of the Bitcoin and Zcash blockchains possible.

Longer term

In the longer term I’ll be responsible for bringing IPLD to Rust. That’s especially exciting with Rust’s new WebAssembly backend. You’ll get a high-performance Rust implementation, but also one that works in browsers.

What about Noise?

Many of you probably know that I’ve been working full-time on Noise for the past 1.5 years. It is shaping up nicely and is already quite usable. Of course I don’t want to see this project vanish, and it won’t. At the moment I only work part-time at Protocol Labs, to also have some time for Noise. In addition to that, there’s also interest within Protocol Labs in using Noise (or parts of it) for better query capabilities. So far these are only rough ideas I mentioned briefly at the end of my talk about Noise at the Lisbon IPFS Meetup two weeks ago. But what’s the distributed web without search?

What about geo?

I’m also part of the OSGeo community and the FOSS4G movement. So what’s the future there? I see a lot of potential in the Sneakernet. If geo-processing workflows are based around IPFS, you could use the same tools/scripts whether the data is stored somewhere in the cloud, or access your local mirror/dump if your Internet connection isn’t that fast/reliable.

I expect unreliable connectivity to be a hot topic at the FOSS4G 2018 conference in Dar es Salaam, Tanzania.

Conclusion

I’m super excited. It’s a great team and I’m looking forward to pushing the distributed web a bit forward.

GIS for Thought: Updating A Plugin From QGIS 2 to QGIS 3

Planet OSGeo feeds - Wed, 01/24/2018 - 09:00

I have two plugins in the QGIS plugin repository, and with the release of QGIS 3 looming it was time to upgrade them for QGIS 3.

There is a short guide by the QGIS dev team that is a good starting point at:
https://github.com/qgis/QGIS/wiki/Plugin-migration-to-QGIS-3

But I had not done any development on these plugins for a while, so a more step-by-step guide was useful. So, hopefully, I can write the guide for the first plugin and then follow it step by step for the second.

I am working on Windows, with OSGeo4W.

Before we start, we will need to ensure a couple of extras are installed through the OSGeo4W installer:
Desktop:
qgis-dev
Libs:
python-future

Assuming GitHub is the repo.
In git shell:

git clone https://github.com/HeikkiVesanto/QGIS_Multi_Ring_Buffer.git

There is a conversion script for QGIS plugins provided by the QGIS devs in the main repo.

We can download just the scripts folder using the following link:
https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/qgis/QGIS/tree/master/scripts

Extract that into a location of your choice.

Then we can run the 2to3 script from the OSGeo4W console (cd to the folder you extracted the script to):

python 2to3 C:\path_to_plugin\QGIS_Multi_Ring_Buffer

This will print out changes that need to be made to convert from QGIS 2 to QGIS 3.

My first run resulted in many lines of:

RefactoringTool: Line 31: could not convert: from PyQt4.QtCore import *
RefactoringTool: Cannot handle star imports.

So my plugin’s line of:

from PyQt4.QtCore import *

was impossible for the tool to convert, since I was not 100% sure what I needed from the QtCore library (I was young when I wrote the plugin). I commented out the line for the plugin in QGIS 2.8, booted up QGIS 2.8 and tried running the plugin.

This gave Python errors:
NameError: global name 'QVariant' is not defined
NameError: global name 'Qt' is not defined
and more later. I ended up expanding my other import from QtCore to:

from PyQt4.QtCore import QSettings, QTranslator, qVersion, QCoreApplication, QVariant, Qt

Running the 2to3 script again looked OK, with a number of changes required. These changes can be applied with the -w flag:

python 2to3 C:\path_to_plugin\QGIS_Multi_Ring_Buffer -w

For the next step I booted up my favourite IDE, PyCharm. I created a bat file that launches PyCharm with the QGIS dev environment variables, by copying the “python-qgis-dev.bat” file from the OSGeo4W bin folder.

I changed the final line of:

"%PYTHONHOME%\python" %*

To:

start /d "C:\Program Files\JetBrains\PyCharm Community Edition 2017.2.1\bin\" pycharm64.exe

Then, from File > Settings > Project > Project Interpreter, set the interpreter to “C:\OSGeo4W64\apps\Python36\python.exe”.

It takes a while to update the interpreter.

I only had 2 errors, both for:
QgsMapLayerRegistry.instance().addMapLayer(vl)

There is a list of API breaks between QGIS 2 and QGIS 3 at:
https://qgis.org/api/api_break.html

Looks like QgsMapLayerRegistry was moved to QgsProject. So I edited it to:

QgsProject.instance().addMapLayer(vl)

Then we can update the metadata.txt for QGIS 3:
qgisMinimumVersion=3.0

And increase the version number.
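
For reference, the relevant part of metadata.txt might then look like this (a sketch; the name and version number are just examples):

[general]
name=Multi Ring Buffer
qgisMinimumVersion=3.0
version=1.0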

Then we need to recompile the icon and ui for Python3 and QT5.

I was struggling a bit with the environment variables to get it working, and ended up using a great batch script from StackExchange:
https://gis.stackexchange.com/questions/260743/how-to-compile-qtdesigner-user-interface-ui-and-resource-qrc-files-with-qg

@ECHO OFF
set OSGEO4W_ROOT=C:\OSGeo4W64
set PATH=%OSGEO4W_ROOT%\bin;%PATH%
set PATH=%PATH%;%OSGEO4W_ROOT%\apps\qgis\bin
@echo off
call "%OSGEO4W_ROOT%\bin\o4w_env.bat"
call "%OSGEO4W_ROOT%\bin\qt5_env.bat"
call "%OSGEO4W_ROOT%\bin\py3_env.bat"
@echo off
path %OSGEO4W_ROOT%\apps\qgis-dev\bin;%OSGEO4W_ROOT%\apps\grass\grass-7.2.2\lib;%OSGEO4W_ROOT%\apps\grass\grass-7.2.2\bin;%PATH%
cd /d %~dp0
@ECHO ON
::Ui Compilation
call pyuic5 multi_ring_buffer_dialog_base.ui -o multi_ring_buffer_dialog_base.py
::Resources
call pyrcc5 resources.qrc -o resources_rc.py
@ECHO OFF
GOTO END
:ERROR
echo "Failed!"
set ERRORLEVEL=%ERRORLEVEL%
pause
:END
@ECHO ON

So create the .bat file and run it in the folder of your plugin (editing where needed). Note: your resources_rc may be called resource_rc or something slightly different.

Move the plugin folder to:
C:\Users\USERNAME\AppData\Roaming\QGIS\QGIS3\profiles\default\python\plugins\

Boot up QGIS 2.99/3.

I had a few more issues.

It seems QGIS 3 deals with the icon slightly differently.

icon_rc.py is no longer needed, and it seems it was not used in my other plugin either.

So I removed the reference to it in the main python script:
from . import icon_rc

I still had some errors.

AttributeError: module 'qgis.PyQt.QtGui' has no attribute 'QDialog'

It seems QDialog has moved to PyQt.QtWidgets.

So in my multi_ring_buffer_dialog.py file I needed to change some lines:

Add:

from qgis.PyQt.QtWidgets import QDialog, QDialogButtonBox

Change:
QtGui.QDialog
to:
QDialog
In the two instances in that file.
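
After these changes, the dialog class in multi_ring_buffer_dialog.py might look roughly like this sketch (assuming the standard Plugin Builder layout; the class name is illustrative):

import os

from qgis.PyQt import uic
from qgis.PyQt.QtWidgets import QDialog, QDialogButtonBox

FORM_CLASS, _ = uic.loadUiType(os.path.join(
    os.path.dirname(__file__), 'multi_ring_buffer_dialog_base.ui'))


class MultiRingBufferDialog(QDialog, FORM_CLASS):
    def __init__(self, parent=None):
        # QDialog now comes from qgis.PyQt.QtWidgets, not from QtGui
        super(MultiRingBufferDialog, self).__init__(parent)
        self.setupUi(self)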

Working plugin!

Commit the changes back to the repo. cd to the directory in git shell.

git add -A
git commit -m "Updated for QGIS 3"
git push

Zip the plugin up.
Upload to https://plugins.qgis.org/plugins/

Second plugin:
Same issue with import *
1 error with QgsMapLayerRegistry
My resources_rc file was called resource_rc so the batch script needed to be edited to:
call pyrcc5 resources.qrc -o resource_rc.py
Same issues with QtGui.QDialog

Now time for some improvements.

GeoTools Team: GeoTools 18.2 Released

Planet OSGeo feeds - Wed, 01/24/2018 - 07:30
The GeoTools team is pleased to announce the release of GeoTools 18.2. Thanks to everyone who contributed to this release. This release is made in conjunction with GeoServer 2.12.2.

This release is the current stable release recommended for new development.

Release highlights:
  • Image mosaic fix for heterogeneous mosaics crossing the dateline
  • streaming rendering fix to ensure that preparing geometry for display does not interact with geometry used for expressions
  • streaming renderer fix to ensure points with large mark size are not accidentally clipped when just off screen
  • Improvement to CSS styling allowing dynamic expressions and units to be used together.
For more information see release notes (18.2 | 18.1 | 18.0 | 18-RC1).

Fernando Quadro: Stories about the impact of Open Data in the United Kingdom

Planet OSGeo feeds - Tue, 01/23/2018 - 11:30

In 2010, the year the United Kingdom launched its open data portal, a report by the Transparency and Accountability Initiative highlighted the promise and potential of open data to improve services and create economic growth.

Over the following five years, the UK’s progress in opening up its data has been pioneering and fast, but not without challenges and questions about impact. It is this qualified success that led them to commission this report, in an effort to understand whether the promise and potential of open data are being realized.

The report’s author, Becky Hogge, finds that open data has had a catalytic and significant impact and that time will likely reveal even more value. She also flags critical challenges and obstacles, including closed datasets, valuable data that is not being collected, and important privacy considerations.

Omidyar Network has championed open data as a key ingredient for more effective, efficient and fair governance and a more empowered and engaged citizenry. This report renews the commitment to this space and the admiration for the organizations working daily to maximize the value of open data.

Download the full report here.

Source: Open Data Charter

gvSIG Team: Satellite imagery with gvSIG Desktop: video tutorials

Planet OSGeo feeds - Tue, 01/23/2018 - 11:12

Last year we released a course in English called ‘Learn GIS for free’, which among other modules included those related to working with satellite imagery.

Today we bring you the video tutorials of that module subtitled in Spanish, thanks to the contribution made by Ernesto Salvador Diaz Ponce Davalos, of the Centro de Excelencia Virtual en Monitoreo Forestal en Mesoamérica, whom we thank on behalf of the whole gvSIG Community.

Here are the 3 videos:

Topic 1: Satellite images and available data

Topic 2: RGB color composites

Topic 3: RGB filters and masks

GeoSolutions: GeoSolutions at FOSS4G-IT 2018 in Rome

Planet OSGeo feeds - Tue, 01/23/2018 - 11:10

Dear readers,

GeoSolutions will be present at the Italian conference on free geographic software and free geographic data (FOSS4G-IT 2018), from February 19th to 22nd, 2018, in Rome (more details here).

During the event, the best works on the use, development and dissemination of free and open source (Free and Open Source Software) applications in the GIS field will be presented.

February 19th will be dedicated to introductory workshops on various software packages, including MapStore, GeoNetwork, GeoServer and GeoNode. On the following days (February 20th and 21st) the conference proper will take place, with talks on the main FOSS4G projects. Finally, on February 22nd there will be specific activities on OpenStreetMap. The complete conference program is available at this link.

GeoSolutions will be present with several presentations and workshops on its Open Source products. Below is the program of our workshops and presentations:

Workshop program:

  • OGC services with GeoServer: from first steps to advanced features, 19/2/2018, Room 6, 4h in the morning.
  • Introduction to MapStore: build your maps the easy way, 19/2/2018, Room 6, 4h in the afternoon.
  • Introduction to GeoNode, 19/2/2018, Room 7, 4h in the morning.
Program of our presentations:
  • GeoServer, the Open Source server for the interoperable management of geospatial data, Wednesday 21/2, 9:45
  • GeoNode, the Open Source geospatial CMS, Wednesday 19/2, 10:00
  • GeoNetwork, the open source server for the interoperable management of metadata, Wednesday 19/2, 10:15
The conference will also be an opportunity to talk about how our products and our support plans can meet your needs. We look forward to seeing many of you there! The GeoSolutions team

Fernando Quadro: Neural networks and territorial planning

Planet OSGeo feeds - Mon, 01/22/2018 - 11:30

My friend Luis Sadeck, from the Geotecnologias blog, recently published his master’s dissertation on the subject of neural networks and territorial planning.

The subject is very interesting, and I recommend reading the dissertation to everyone who has any involvement with, or even curiosity about, the topic, as Sadeck is a reference when it comes to geoprocessing in this country.

I would like to take this opportunity to congratulate him on the great work.

gvSIG Team: GIS applied to Municipality Management: Module 7.1 ‘Editing (new layers, graphical and alphanumeric editing)’

Planet OSGeo feeds - Mon, 01/22/2018 - 11:06

The first video of the seventh module is now available, in which we will see the editing tools in gvSIG.

Editing is a very important part in a Geographic Information System, since it allows us to create new vector layers, digitize elements, add alphanumeric information to these geometries… This is what we will see in this first part of the module.

There are many tools available in the gvSIG editing module, and one of the main ones is the creation of new elements (points, lines, polygons…). Once they are created, we can rotate, scale or move them; we can create parallels, lengthen or cut lines, join or split geometries, create autopolygons, etc.

We can digitize both with reference cartography, for example an orthophoto, and by using the editing console to write the coordinates of the point to insert.

The cartography to use in this video can be downloaded from the following link.

Here you have the first videotutorial of this seventh module:

Related posts:

From GIS to Remote Sensing: Semi-Automatic Classification Plugin version 6 officially released

Planet OSGeo feeds - Mon, 01/22/2018 - 00:24
I am glad to announce that the Semi-Automatic Classification Plugin (SCP) version 6 (codename Greenbelt) has been released.
This is the result of a long development effort related to my PhD research. I am really thankful to all the supporters and users of SCP who have motivated me to do my best.



Please note that SCP 6 is compatible with QGIS 3 only; therefore you need to install the QGIS development version until QGIS 3 is officially released. Please read this previous post for a guide about how to install QGIS 3 in Windows OS.
You can install SCP from the plugin installer in QGIS.

In the next few days I'm going to publish the updated user manual and the first video tutorial.
Please consider reporting any bugs or issues that you may encounter.

From GIS to Remote Sensing: How to install QGIS 3 using OSGeo4W in Windows OS

Planet OSGeo feeds - Sun, 01/21/2018 - 16:07
The Semi-Automatic Classification Plugin (SCP) version 6 (codename Greenbelt) will be released very soon.
This new version is compatible with QGIS 3 only; therefore you need to install the QGIS development version until QGIS 3 is officially released.

This post is a brief guide about how to install QGIS 3 in Windows OS using the OSGeo4W installer.


Free and Open Source GIS Ramblings: Creating reports in QGIS3

Planet OSGeo feeds - Sun, 01/21/2018 - 10:00

QGIS 3 has a new feature: reports! In short, reports are the good old Atlas feature on steroids.

Let’s have a look at an example project:

To start a report, go to Project | New report. The report window is quite similar to what we’ve come to expect from Print Composer (now called Layouts). The most striking difference is the report panel on the left side of the screen.

When a new report is created, the center of the report window is empty. To get started, we need to select the report entry in the panel on the left. By selecting the report entry, we get access to the Include report header and Include report footer checkboxes. For example, pressing the Edit button next to the Include report header option makes it possible to design the front page (or pages) of the report:

Similarly, pressing Edit next to the Include report footer option enables us to design the final pages of our report.

Now for the content! We can populate our report with content by clicking on the plus button to add a report section or a “field group”. A field group is basically an Atlas. For example, here I’ve added a field group that creates one page for each country in the Natural Earth countries layer that I have loaded in my project:

Note that in the right panel you can see that the Controlled by report option is activated for the map item. (This is equivalent to a basic Atlas setup in QGIS 2.x.)

With this setup, we are ready to export our report. Report | Export Report as PDF creates a 257-page document:

As configured, the pages are ordered by country name. This way, for example, Australia ends up on page 17.

Of course, it’s possible to add more details to the individual pages. In this example, I’ve added an overview map in Robinson projection (to illustrate again that it is now possible to mix different CRS on a map).

Happy QGIS mapping!

Just van den Broecke: Emit #1 – Into Spatiotemporal

Planet OSGeo feeds - Sat, 01/20/2018 - 18:02

Smart Emission Googled for Photos

One of my new year’s resolutions for 2018 was to “blog more”, as I am not very active on the well-known social media: a bit tired of Twitter, never really into Facebook, a bit of LinkedIn. OSGeo mailing lists, GitHub and Gitter are where you can find me most (thanks, Jody, for the reminder!). And I read many blogs, especially on my Nexus 10 tablet and Fairphone 2 via the awesome Feedly app. If you have not heard of Feedly (or any other blog-feed collector), stop here and check out Feedly! Most blogs (like this one) provide an RSS/Atom feed. Via Feedly you can search for and add RSS feeds and thus create your own “reading table”. My favorite feeds are related to Open Source Geospatial, Python and IoT, like:

Feedly shown in web browser

Enough sidestepping: my goal is to share tech around the Open Source Smart Emission Platform (SE Platform) in a series of posts, dubbed ‘Emits’. This is Emit #1. Since 2014 I have been working on several projects, often through Geonovum, and recently via the European Union Joint Research Centre (JRC), that deal with the acquisition, management, web-API unlocking and visualization of environmental sensor-data, mainly for Air Quality (AQ).

Smart Emission Googled

What made these projects exciting for me is that they brought together many aspects and technologies (read: Open Source projects and OSGeo software) I had been working on through the years. Also, it was the first time I got back into Environmental Chemistry, for which I hold a master’s degree from the University of Amsterdam, co-authoring some publications, yes, many many years ago.

So what is the Smart Emission Platform, and what makes it exciting and relevant? In a nutshell (read the tech doc here): the goal of the SE Platform is to facilitate the acquisition (harvesting) of sensor-data from a multitude of sensor devices and to make this data available via standardized formats and web-APIs (mainly: OGC standards) and viewers. The SE Platform originates from what is now called the award-winning Smart Emission Nijmegen project (2015-2017). Quoting from the paper “Filling the feedback gap of place-related externalities in smart cities”:

“…we present the set-up of the pilot experiment in project “Smart Emission”, constructing an experimental citizen-sensor-network in the city of Nijmegen. This project, as part of research program ‘Maps 4 Society,’ is one of the currently running Smart City projects in the Netherlands. A number of social, technical and governmental innovations are put together in this project: (1) innovative sensing method: new, low-cost sensors are being designed and built in the project and tested in practice, using small sensing-modules that measure air quality indicators, amongst others NO2, CO2, ozone, temperature and noise load. (2) big data: the measured data forms a refined data-flow from sensing points at places where people live and work: thus forming a ‘big picture’ to build a real-time, in-depth understanding of the local distribution of urban air quality (3) empowering citizens by making visible the ‘externality’ of urban air quality and feeding this into a bottom-up planning process: the community in the target area get the co-decision-making control over where the sensors are placed, co-interpret the mapped feedback data, discuss and collectively explore possible options for improvement (supported by a Maptable instrument) to get a fair and ‘better’ distribution of air pollution in the city, balanced against other spatial qualities. …”

So from the outset the SE Platform is geared towards connecting citizen-owned sensor devices. Many similar programs and initiatives are currently evolving, often under the flag of Citizen Science and Smart Cities. Within the Netherlands, where the SE Nijmegen project originated, the Dutch National Institute for Public Health and the Environment (RIVM) was an active project partner, and still stimulates citizens measuring Air Quality via a project and portal: “Together Measuring Air Quality”. In the context of discussions on Air Quality, climate change and shrinking budgets for governmental environmental institutions, citizen participation becomes more and more relevant. A whole series of blogs could be devoted to the social and political aspects of Citizen Science, but I will stick to tech stuff here.

What made working on the SE Nijmegen project exciting and challenging is that I was given time and opportunity by the project partners (see pic) to not just build a one-time, project-specific piece of software, but a reusable set of Open Source components: the Smart Emission Platform (sources on GitHub).

Having had some earlier experience within the Geonovum SOSPilot project (2014-2015), investigating among others the OGC Sensor Observation Service to unlock RIVM AQ data (LML), I was aware of the challenges of dealing with what can be called Spatiotemporal (Big) Data.

 

The figure below shows The Big Picture of the SE Platform. Red arrows denote the flow of data: originating from sensor devices, going through Data Management (ETL), unlocked via various web-APIs, and finally “consumed” in client-apps and viewers.

 

There are many aspects of the SE Platform that can be expanded on. Those are for upcoming Emits. For now, a summary of some of the challenges and applied technologies, to be detailed later:

  • raw data from sensors: requires refinement: validation/calibration/aggregation
  • dealing with Big Data that is both spatial (location-based) and temporal (time-based)
  • applying an Artificial Neural Network (ANN) for sensor-data calibration
  • databases for Spatiotemporal data: PostGIS and InfluxDB and TICK Stack
  • applying the Stetl framework for all data management (ETL)
  • metadata for sensors and sensor networks, always a tough and often avoided subject
  • connecting the Open Hardware EU JRC AirSensEUR AQ sensor-box to the SE Platform
  • using OGC WMS (with Dimensions for Time) and WFS for viewing and downloading sensor data (see the example request after this list)
  • is OGC Sensor Observation Service (SOS) and SWE still viable?
  • how powerful is the OGC SensorThings API (STA) standard?
  • deployment with Docker and Docker Compose
  • Docker and host systems monitoring: Prometheus + Grafana
  • OGC Services Monitoring with GeoHealthCheck
  • Visualizations: custom viewers with Heron/Leaflet/OpenLayers, Grafana dashboards
  • from development to test and production: Vagrant+VirtualBox, Ubuntu, Docker
  • using component-subsets of the platform for small deployments
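
As a hedged illustration of the WMS time dimension mentioned in the list above, a GetMap request for a single time instant might look like the following sketch (host name, layer name and bounding box are made-up examples, not the actual SE Platform endpoint):

http://sensors.example.org/geoserver/wms?SERVICE=WMS&VERSION=1.3.0
    &REQUEST=GetMap&LAYERS=se:no2_measurements&STYLES=
    &CRS=EPSG:4326&BBOX=51.6,5.5,52.0,6.1&WIDTH=800&HEIGHT=600
    &FORMAT=image/png&TIME=2018-01-20T12:00:00Z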

Monitoring SE Docker Containers: Prometheus+cAdvisor+Grafana

A lot of stuff to uncover; hopefully this got your interest if you have read all the way to here. I will try to treat one aspect/technology in each subsequent Emit blog post. And of course the entire SE Platform is Open Source (GNU GPL), so you are free to download and experiment, and maybe even contribute.

 
