Planet OSGeo feeds

Planet OSGeo - http://planet.osgeo.org

CARTO Inside Blog: ETL into CARTO with ogr2ogr

Mon, 02/12/2018 - 11:39

The default CARTO data importer is a pretty convenient way to quickly get data into the platform, but for enterprises setting up automated updates it has some limitations:

  • there’s no way to define type coercions for CSV data;
  • some common GIS formats like File Geodatabase aren’t supported;
  • the sync facility is “pull” only, so data behind a firewall is inaccessible; and,
  • the sync cannot automatically filter the data before loading it into the system.

Fortunately, there’s a handy command-line tool that can automate many common enterprise data loads: ogr2ogr.

Basic Operation

ogr2ogr has a well-earned reputation for being hard to use. The command-line options are plentiful and terse, the standard documentation page lacks examples, and format-specific documentation is hidden away with the driver documentation.

Shapefile

The basic structure of an ogr2ogr call is “ogr2ogr -f format destination source”. Here’s a simple shapefile load.

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln interesting \
  -f Carto \
  "Carto:pramsey" \
  interesting_things.shp

The parameters are:

  • --debug turns on verbose debugging, which is useful during development to see what’s happening behind the scenes.
  • --config is used to pass generic “configuration parameters”. In this case we pass our CARTO API key so we are allowed to write to the database.
  • -t_srs is the “target spatial reference system”, telling ogr2ogr to convert the spatial coordinates to “EPSG:4326” (WGS84) before writing them to CARTO. The CARTO driver expects inputs in WGS84, so this step is mandatory.
  • -nln is the “new layer name”, so the name of the uploaded table can differ from that of the input file.
  • -f is the format of the destination layer, so for uploads to CARTO, it is always “Carto”.
  • Carto:pramsey is the “destination datasource”, so it’s a CARTO source, in the “pramsey” account. Change this to your user name. (Note for multi-user accounts: you must supply your user name here, not your organization name.)
  • interesting_things.shp is the “source datasource”, which for a shapefile is just the path to the file.
File Geodatabase

Loading a File Geodatabase is almost the same as loading a shapefile, except that a file geodatabase can contain multiple layers, so the conversion must also specify which layer to convert, by adding the source layer name after the data source. You can load multiple layers in one run by providing multiple layer names.

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln cities \
  -f Carto \
  "Carto:pramsey" \
  CountyData.gdb Cities

In this example, we take the “Cities” layer from the county database, and write it into the “cities” table of CARTO. Note that if you do not re-map the layer name to all lower case, you’ll get a mixed case layer in CARTO, which you may not want.

Filtering

You can use OGR on any input data source to filter the data prior to loading. This can be useful for loads of large inputs that are “only the data since time X” or “only the data in this region”, like this:

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln cities \
  -f Carto \
  -sql "SELECT * FROM Cities WHERE state_fips = 53" \
  "Carto:pramsey" \
  CountyData.gdb Cities

Since the filter is just a SQL statement, it can both reduce the number of records and transform the output on the way: dropping columns, reworking values, anything that is possible using the SQLite dialect of SQL.
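
For instance, a sketch of a filtered load that also trims and reworks columns might look like the following (the -dialect switch explicitly requests the SQLite engine; the column and table names here are hypothetical, not taken from the original example):

# Illustrative only: with -sql the statement itself selects the source layer,
# so no layer name is passed after the datasource
ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln big_cities \
  -f Carto \
  -dialect SQLITE \
  -sql "SELECT city_name, UPPER(county_name) AS county, pop_2010 FROM Cities WHERE pop_2010 > 10000" \
  "Carto:pramsey" \
  CountyData.gdb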

Overwrite or Append

By default, ogr2ogr runs in “append” mode (you can force it with the -append flag), so if you run the same translation multiple times, you’ll get rows added to your table. This can be useful for processes that regularly take the most recent entries and copy them into CARTO.

For translations where you want to replace the existing table, use the -overwrite mode, which will drop the existing table, and create a new one in its place.
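
As a hedged illustration, the earlier File Geodatabase load can be switched to replacement behaviour simply by adding the flag:

# Illustrative only: -overwrite drops and recreates the "cities" table on every run
ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln cities \
  -overwrite \
  -f Carto \
  "Carto:pramsey" \
  CountyData.gdb Cities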

Because of some limitations in how the OGR CARTO driver handles primary keys, the OGR -update mode does not work correctly.

OGR Virtual Format

As you can see, the command-line complexity of an OGR conversion starts high. The complexity only goes up as advanced features like filtering and arbitrary SQL are added.

To contain the complexity in one location, you can use the OGR “virtual format”, VRT files, to define your data sources. This is handy for managing a library of conversions in source control. Each data source becomes its own VRT file, and the actual OGR commands become smaller.

CSV Type Enforcement

CSV files are convenient ways of passing data, but they are under-defined: they supply column names, but not column types. This forces CSV consumers to do type guessing based on the input data, or to coerce every input to a lowest common denominator string type.

Particularly for repeated and automated uploads it would be nice to define the column types once beforehand and have them respected in the final CARTO table.

For example, take this tiny CSV file:

longitude,latitude,name,the_date,the_double,the_int,the_int16,the_int_as_str,the_datetime
-120,51,"First Place",2018-01-01,2.3,123456789,1234,00001234,2014-03-04 08:12:23
-121,52,"Second Place",2017-02-02,4.3,423456789,4234,00004234,"2015-05-05 09:15:25"

Using a VRT, we can define a CSV file as a source, and also add the rich metadata needed to support proper type definitions:

<OGRVRTDataSource>
    <OGRVRTLayer name="test_csv">
        <SrcDataSource>/data/exports/test_csv.csv</SrcDataSource>
        <GeometryField encoding="PointFromColumns" x="longitude" y="latitude"/>
        <GeometryType>wkbPoint</GeometryType>
        <LayerSRS>WGS84</LayerSRS>
        <OpenOptions>
            <OOI key="EMPTY_STRING_AS_NULL">YES</OOI>
        </OpenOptions>
        <Field name="name" type="String" nullable="false" />
        <Field name="a_date" type="Date" src="the_date" nullable="true" />
        <Field name="the_double" type="Real" nullable="true" />
        <Field name="the_int" type="Integer" nullable="true" />
        <Field name="the_int16" type="Integer" subtype="Int16" nullable="true" />
        <Field name="the_int_as_str" type="String" nullable="true" />
        <Field name="the_datetime" type="DateTime" nullable="true" />
    </OGRVRTLayer>
</OGRVRTDataSource>

This example has a number of things going on:

  • The <SrcDataSource> is an OGR connection string, as defined in the driver documentation for the format. For a CSV, it’s just the path to a file with a “csv” extension.
  • The <GeometryField> line maps coordinate columns into a point geometry.
  • The <LayerSRS> confirms the coordinates are WGS84. They could also be some planar format, and OGR can reproject them if requested.
  • The <OpenOptions> let us pass one of the many CSV open options.
  • The <Field> elements use the “type” attribute to explicitly define types, including obscure ones like 16-bit integers.
  • Column renaming, in the “a_date” <Field>, maps the source column name “the_date” to “a_date” in the target.
  • Null enforcement, in the “name” <Field>, creates a target column with a NOT NULL constraint.

To execute the translation, we use the VRT as the source argument in the ogr2ogr call.

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -f Carto \
  "Carto:pramsey" \
  test_csv.vrt

Database-side SQL

Imagine you have an Oracle database with sales information in it, and you want to upload a weekly snapshot of transactions. The database is behind the firewall, and the transactions need to be joined to location data in order to be mapped. How to do it?

With VRT tables and the OGR Oracle driver, it’s just some more configuration during the load step:

<OGRVRTDataSource>
    <OGRVRTLayer name="detroit_locations">
        <SrcDataSource>OCI:scott/password@ora.company.com</SrcDataSource>
        <LayerSRS>EPSG:26917</LayerSRS>
        <GeometryField encoding="PointFromColumns" x="easting" y="northing"/>
        <SrcSQL>
            SELECT sales.sku, sales.amount, sales.tos,
                   locs.latitude, locs.longitude
            FROM sales
            JOIN locs ON sales.loc_id = locs.loc_id
            WHERE locs.city = 'Detroit'
            AND sales.transaction_date > '2018-01-01'
        </SrcSQL>
    </OGRVRTLayer>
</OGRVRTDataSource>

Some things to note in this example:

  • The <SrcDataSource> holds the Oracle connection string
  • The coordinates are stored in UTM17, in northing/easting columns, but we can still easily map them into a point type for reprojection later.
  • The output data source is actually the result of a join executed on the Oracle database, attributing each sale with the location it was made. We don’t have to ship the tables to CARTO separately.

The ability to run any SQL on the source database is a very powerful tool to ensure that the uploaded data is “just right” before it arrives on the CARTO side for analysis and display.

As before, the VRT is run with a simple execution of the ogr2ogr command line:

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -f Carto \
  "Carto:pramsey" \
  test_oracle.vrt

Multi-format Sources

Suppose you have a source of attribute information and a source of location information, but they are in different formats in different databases: how to bring them together in CARTO? One way, as usual, would be to upload them separately and join them on the CARTO side with SQL. Another way is to use the power of ogr2ogr and VRT to do the join during the data upload.

For example, imagine having transaction data in a PostgreSQL database, and store locations in a Geodatabase. How to bring them together? Here’s a joined_stores.vrt file that does the join in ogr2ogr:

<OGRVRTDataSource> <OGRVRTLayer name="sales_data"> <SrcDataSource>Pg:dbname=pramsey</SrcDataSource> <SrcLayer>sales.sales_data_2017</SrcLayer> </OGRVRTLayer> <OGRVRTLayer name="stores"> <SrcDataSource>store_gis.gdb</SrcDataSource> <SrcLayer>Stores</SrcLayer> </OGRVRTLayer> <OGRVRTLayer name="joined"> <SrcDataSource>joined_stores.vrt</SrcDataSource> <SrcSQL dialect="SQLITE"> SELECT stores.*, sales_data.* FROM sales_data JOIN stores ON sales_data.store_id = stores.store_id </SrcSQL> </OGRVRTLayer> </OGRVRTDataSource>

Some things to note:

  • The “joined” layer uses the VRT file itself in the <SrcDataSource> definition!
  • Each <OGRVRTLayer> is a full-fledged VRT layer, so you can do extra processing in them. Apply type definitions to CSV, run complex SQL on a remote database, whatever you want.
  • The join layer uses the “SQLite” dialect, so anything available in SQLite is available to you in the join step.
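
As with the earlier examples, the joined result can then be pushed to CARTO by pointing ogr2ogr at the VRT. A sketch of the command (the target table name “joined_sales” is just an example) might be:

# Illustrative only: upload just the "joined" layer defined in joined_stores.vrt
ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln joined_sales \
  -f Carto \
  "Carto:pramsey" \
  joined_stores.vrt joined
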
Almost an ETL

Combining the ability to read and write multiple formats with the basic functionality of the SQLite engine, and chaining operations through multi-layer VRT files, ogr2ogr provides the core functions of an “extract-transform-load” engine, in a package that is easy to automate and maintain.

For users with data behind a firewall, who need more complex processing during their loads, or who have data locked in formats the CARTO importer cannot read, ogr2ogr should be an essential tool.

Getting ogr2ogr

OGR is a subset of the GDAL suite of libraries and tools, so you need to install GDAL to get ogr2ogr (a sample install command follows the list below).

  • For Linux, look for “gdal” packages in your Linux distribution of choice.
  • For Mac OS X, use the GDAL Framework builds.
  • For Windows, use the MS4W package system or pull the MapServer builds from GISInternals and use the included GDAL binaries.
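
As a concrete example, on a Debian/Ubuntu system the following is typically enough to get the command-line tools and confirm the version (package names can vary between distributions and releases):

# "gdal-bin" is the usual Debian/Ubuntu package providing ogr2ogr
sudo apt-get install gdal-bin
ogr2ogr --version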

gvSIG Team: GIS applied to Municipality Management: Module 11 ‘Reprojecting vector layers’

Mon, 02/12/2018 - 09:29

The video of the eleventh module is now available, in which we will show how to reproject vector layers.

Sometimes municipalities need external geographic information in order to work, for example cartography published by another administration, such as a regional or national one. That cartography may be in a different reference system from the one the municipality’s technicians usually work with. If we don’t take the reference systems into account, the two sets of cartography will not overlap correctly.

Municipal technicians may also use old cartography that is in an obsolete reference system and need it in an up-to-date one. For this, it will be necessary to reproject that cartography.

In module 2 you can consult all the information related to the reference systems.

Apart from reprojecting from one reference system to another, sometimes it is necessary to apply a transformation to improve the reprojection. For example, in the case of Spain, to reproject a layer from ED50, the official reference system until a few years ago, to ETRS89, the current official system, it is necessary to apply a grid transformation; otherwise there would be a difference of about 7 meters between these layers.

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:


From GIS to Remote Sensing: Available the User Manual of the Semi-Automatic Classification Plugin v. 6

Sat, 02/10/2018 - 16:34
I've updated the user manual of the Semi-Automatic Classification Plugin (SCP) for the new version 6. This updated version contains the description of all the tools included in SCP, as well as a brief introduction to remote sensing definitions and the first basic tutorial.
The user manual in English is available at this link. Also, other languages are available, although some translations are incomplete. 
I'd like to deeply thank all the volunteers that have translated the previous versions of this user manual, and I invite you to help translating this new version to your language.
It is possible to easily translate the user manual into any language, because it is written in reStructuredText markup (using Sphinx). Therefore, your contribution is fundamental for the translation of the manual into your language.
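
For anyone curious about the mechanics, the usual Sphinx/sphinx-intl translation workflow looks roughly like the sketch below; the SCP manual's own build configuration may differ in detail.

# Rough sketch of a standard Sphinx translation workflow (details depend on the project setup)
pip install sphinx sphinx-intl
make gettext                                  # extract translatable strings into .pot files
sphinx-intl update -p _build/gettext -l es    # create/update .po files for, e.g., Spanish
# ...edit the generated .po files, then build the translated manual:
make -e SPHINXOPTS="-D language='es'" html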


Jackie Ng: One MapGuide/FDO build system to rule them all?

Fri, 02/09/2018 - 17:38
Before the bombshell of the death of Autodesk Infrastructure Map Server was dropped, I was putting the finishing touches on making (pun intended) the MapGuide/FDO build experience on Linux a more pleasant one.

Namely, I'd had it with autotools (some may describe it as autohell) as the way to build MapGuide/FDO on Linux. For FDO, this was the original motivation for introducing CMake as an alternative. CMake is a more pleasant way to build software on Linux than autotools because:
  1. CMake builds your sources outside of the source tree. If you're doing development on a SVN working copy this is an absolute boon as it means when it comes time to commit any changes, you don't have to deal with sifting through tons of autotools-generated junk that is left in your actual SVN wc.
  2. CMake builds are faster than their autotools counterpart.
  3. It is much easier to find and consume external libraries with CMake than it is through autotools, which makes build times faster because we can just source system-installed copies of the thirdparty libraries we use, instead of wasting time building those copies (in our internal thirdparty source tree) ourselves. If we are able to use system-installed copies of libraries when building FDO, then we can take advantage of SVN sparse checkouts and skip downloading whole chunks of thirdparty library sources that we never have to build!
Sadly, while this sounds nice in theory, the CMake way to build FDO had fallen into a state of disrepair. My distaste for autotools was sufficient motivation to get the CMake build back into working condition. Several weeks of bashing at various CMakeLists.txt files later, the FDO CMake build was operational again and had several major advantages over the autotools build (in addition to what was already mentioned):
  • We can set up CMake to generate build configurations for Ninja instead of standard make. A Ninja-powered CMake build is faster than standard make ^ (a minimal sketch of this workflow follows this list).
  • On Ubuntu 14.04 LTS (the current Ubuntu version we're targeting), all the thirdparty libraries we use were available for us to apt-get install in the right version ranges, and the CMake build can take advantage of all of them. Not a single internal thirdparty library copy needs to be built!
  • We can easily light up compiler features like AddressSanitizer and link with the faster gold instead of ld. AddressSanitizer in particular easily helped us catch some issues that had flown under the radar.
  • All of the unit tests are build-able and more importantly ... executable outside the source tree, making it easier to fix up whatever was failing.
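
For readers unfamiliar with that workflow, an out-of-source build with the Ninja generator boils down to something like the following sketch (paths and options here are placeholders, not the project's actual build instructions):

# Illustrative only: configure and build out-of-source with Ninja
mkdir -p build && cd build
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release /path/to/fdo/source
ninja
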
Although we now had a functional FDO CMake build, MapGuide was still built on Linux using autotools. So, for the same reasons and motivations, I started the process of introducing CMake into the MapGuide build system for Linux.

Unlike FDO, MapGuide still needed some of the internal thirdparty libraries built.
  • DBXML - No ubuntu package available, though we can get it to build against a system-provided version of xerces, so we can at least skip building that part of DBXML.
  • Apache HTTPD - Ubuntu package available, but having MapGuide be able to integrate with an Ubuntu-provided httpd installation was not in the scope of this work, even though this is a nice thing to have.
  • PHP - Same reasons as Apache HTTPD
  • Antigrain Geometry - No ubuntu package available. Also the AGG sources are basically wedded to our Renderers project anyways.
  • DWF Toolkit - No ubuntu package available
  • CS-Map - No ubuntu package available
For everything else, Ubuntu provided the packages in the right version ranges for CMake to take advantage of. Another few weeks of bashing various CMakeLists.txt files into shape and we had both FDO and MapGuide build-able on Linux via CMake. To solve the problem of still needing to build some internal thirdparty libs while retaining the CMake quality of everything being built outside the source tree, wrapper shell scripts are provided that copy the applicable thirdparty library sources out of the current directory, build them in their copied directories, and then invoke CMake with all the required parameters so that it knows where to look for the internal libraries to link against when it comes time to build MapGuide proper.

This was also backported to FDO, so that on distros where we do not have all our required thirdparty libraries available, we can selectively build internal copies and be able to find/link the rest, and have CMake take care of all of that for us.

So what's with the title of this post?

Remember when I wrote about how interesting vcpkg was?

What is best used with vcpkg to easily consume thirdparty libraries on Windows? Why CMake of course! Now building MapGuide on Windows via CMake is not on the immediate horizon. We'll still be maintaining Visual Studio project files by hand (instead of auto-generating them with CMake) for the immediate future, but can you imagine being able to build FDO and MapGuide on both Windows and Linux with CMake and not have to waste time on huge SVN checkouts and building thirdparty libraries? That future is starting to look real possible now!

For the next major release of MapGuide Open Source, it is my plan to use CMake over autotools as the way to build both MapGuide and FDO on Linux.

^ Well, the ninja-powered CMake build used to be blazing fast until Meltdown and Spectre happened. My dev environment got the OS security patches, and whatever build performance gains were made through Ninja and CMake were instantly wiped out; we were back to square one in terms of build time. Still, the autotools build performed worse after the Meltdown patches, so while CMake still beats the autotools build in terms of build time, we ultimately gained nothing on this front.

Thanks Intel!!!

gvSIG Team: GIS applied to Municipality Management: Module 10 ‘How to convert cartography from CAD to GIS’

Thu, 02/08/2018 - 10:48

The video of the tenth module is now available, in which we will show how to load and manage cartography in CAD format on gvSIG.

Many municipalities have their geographic information in CAD format, and in many cases there’s a single file for the whole municipality containing every type of information, such as power lines, parcels, the drinking water system, the sewage system…, each in a different layer.

That sometimes makes the information difficult to manage; we may even have to divide the municipality into sheets, losing the view of the municipality as a whole. In that case, to run queries, calculations and so on, we would have to open the different files.

The advantage of working with a Geographic Information System is that each type of information is stored in a separate file (the optimal way to work), and we can overlay the different files (the ‘layers’ of our GIS) in the same View in order to carry out analyses, queries…

Another important advantage is that the vector layers in a GIS have an associated attribute table, and in the .SHP format, the most common in GIS, we can add as many fields as we want to that attribute table (length, area, owner, release date…). This gives us a great amount of alphanumeric information about the different elements.

With this alphanumeric information it is easy, for example, to know the areas of all the parcels in our municipality at once; we don’t have to select them individually as we would in a CAD. We can also run queries on them: for example, with a simple sentence we can query for the parcels whose area is larger than 1000 square meters, and they will appear selected directly.

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:


GIS for Thought: QGIS Multi Ring Buffer Plugin Version 1

Thu, 02/08/2018 - 00:23

After about 3 years of existence, I am releasing version 1 of the Multi Ring Buffer Plugin.

QGIS plugin repository.

With version 1 come a few new features:

  • Ability to choose the layer to buffer after launching the plugin
  • Ability to supply buffer distances as comma separated text string
  • Ability to make non-doughnut buffers
Doughnut buffer:

Non-doughnut buffer (regular):

    gvSIG Team: Adding new colour tables in gvSIG Desktop, more options to represent our data

    Wed, 02/07/2018 - 17:30

    The colour tables are used in gvSIG Desktop to represent both raster data (for example, a Digital Elevation Model) and vector data (they can be applied in legends such as unique values or heat maps). By default gvSIG Desktop has a small catalog of colour tables. But most of the users don’t know that it’s very easy to add new colour tables. Do you want to see how easy it is?

    First of all you have to know that the colour tables used by gvSIG Desktop are stored as xml files in the ‘colortable’ folder, inside the ‘gvSIG’ folder. So, if you delete some of these xml, those tables will no longer be available in gvSIG Desktop.

    Let’s see now how we can add new colour tables in gvSIG Desktop.

    To download new colour tables we will access this website:

    http://soliton.vm.bytemark.co.uk/pub/cpt-city/

    As you will see, that website contains hundreds of colour tables, many of them applicable to the world of cartography, which can be downloaded in a wide variety of formats, including the ‘ggr’ (GIMP gradient) format supported by gvSIG Desktop. We will download some of the colour tables offered in that ‘ggr’ format.

    We launch the ‘Colour table’ tool in gvSIG Desktop and in the new window we press the ‘Import library’ button… then we select the ggr files that we have downloaded, and they are now available.

    Finally, in the video we will see how they can be applied to raster and vector data once imported.

    And if you want to download ALL the colour tables in ‘ggr’ format and in a zipped file… they are available here.

    gvSIG Team: Webinar on “gvSIG Suite: open source software for geographic information management in agriculture – Technology and case studies” (February 15)

    Wed, 02/07/2018 - 13:16

    GODAN and the gvSIG Association invite you to join the webinar on “gvSIG Suite: open source software for geographic information management in agriculture – Technology and case studies“, on February 15th at 2 PM GMT.

    This webinar will deal with the gvSIG Suite, the whole catalog of open source software solutions offered by the gvSIG Association, and with case studies of the different products in forestry and agriculture.

    With free registration, this event is appropriate for all users interested in knowing how to work with an open source Geographic Information System in agriculture and forestry sectors.

    We will speak about gvSIG Desktop, the Desktop GIS to manage geographic information and make vector and raster analysis, gvSIG Mobile, for field data gathering with mobile devices, and gvSIG Online, an integrated platform for Spatial Data Infrastructure, to create geoportals in an easy way and manage cartography between different departments in an organization.

    Attendees will be able to interact with the speakers by sending their comments and questions through chat.

    Registrations are available from: https://app.webinarjam.net/register/24718/18244a0afc

    The webinar details are:

    gvSIG Team: Adding new colour tables to gvSIG Desktop, expanding the options to represent our data

    Wed, 02/07/2018 - 12:16

    Colour tables are used in gvSIG Desktop both to represent raster data (for example, a Digital Terrain Model) and to represent vector data (they can be applied in legends such as unique values or heat maps). By default gvSIG Desktop has a small catalog of colour tables. What most users don’t know is that it is very easy to add new colour tables. Do you want to see how easy it is?

    First of all you should know that the colour tables used by gvSIG Desktop are stored as xml files in the ‘colortable’ folder, inside the ‘gvSIG’ folder. So, if you delete some of these xml files, those tables will no longer be available in gvSIG Desktop.

    In the demo video accompanying this post we have deleted all of them except the one called ‘Default’, which you should always be careful not to delete. As shown in the video, after deleting the xml files only one colour table remains that we can apply to our raster and vector layers.

    Now let’s look at the really interesting part, which is not how to delete existing colour tables but how to add new ones.

    To download new colour tables we will use this website:

    http://soliton.vm.bytemark.co.uk/pub/cpt-city/

    As you will see, it contains hundreds of colour tables, many of them applicable to the world of cartography and downloadable in a wide variety of formats, including some, such as ‘ggr’ (GIMP gradient), supported by gvSIG Desktop. Of the hundreds of colour tables the site offers, we will download a few in this ‘ggr’ format.

    We launch the ‘Colour table’ tool in gvSIG Desktop and in the window that appears we press the ‘Import library’ button… then we select the ggr files we have downloaded and voilà!… they are now available.

    Finally, in the demo video we will see how, once imported, they can be applied to both raster and vector data.

    And if you want to download ALL the colour tables in ‘ggr’ format in a single compressed file… they are available here.

    gvSIG Team: Sentilo and gvSIG: Agreement to collaborate

    Tue, 02/06/2018 - 13:37

    We are pleased to announce that Sentilo and gvSIG communities have reached an agreement to collaborate closely in order to make it easier for users, partners and developers of both communities to deploy an integrated sensor platform and a Geographic Information System, both based on open source.

    Sentilo is an open source sensor and actuator platform designed to fit in the Smart City architecture of any city who looks for openness and easy interoperability. It is the piece of architecture that will isolate the applications that are developed to exploit the information “generated by the city” and the layer of sensors deployed across the city to collect and broadcast this information.

    It’s built, used, and supported by an active and diverse community of cities and companies that believe that using open standards and free software is the first smart decision a Smart City should take. In order to avoid vertical solutions, Sentilo is designed as a cross platform with the objective of sharing information between heterogeneous systems and to easily integrate legacy applications.

    The collaboration agreement will provide mutual priority support among and for members of the two communities who wish to integrate Sentilo and gvSIG in their projects.

    Both gvSIG and Sentilo were awarded in the Sharing & Reuse Conference 2017, organized by the European Commission, in the “Cross Border” category (gvSIG won the first prize and Sentilo won the third prize).

    gvSIG Team: Changing the ‘look and feel’ of gvSIG Desktop in a couple of steps

    Mon, 02/05/2018 - 20:02

    I have occasionally been asked how to change the appearance that gvSIG Desktop ships with. The truth is that there have always been some options, although unknown to most users. With gvSIG Desktop 2.4 they are extended, since new icon sets can be created and used.

    Here is a small example of how to change the default ‘style’ of gvSIG Desktop 2.4 in a couple of steps.

    First, we are going to change the Java theme that gvSIG uses. To do this, go to the ‘Preferences’ button and select ‘General’ and then ‘Appearance’ in the options tree. Then select the theme called ‘Texture’.

    After restarting, you will see something similar to the image.

    Second step: using the ‘Add-ons manager’, with the ‘Installation from URL’ option, install the icon set called ‘TreCC 22×22’. Although it tells us that a restart is needed, in this case it is not. Go back to ‘Preferences’, select ‘General’ and then ‘Icon set’ in the options tree, and choose the ‘TreCC 22×22’ set we have just installed.

    Restart gvSIG Desktop and you will see something similar to the following video:

    gvSIG Team: GIS applied to Municipality Management: Module 9 ‘Hyperlink’

    Mon, 02/05/2018 - 12:35

    The video of the ninth module is now available, in which we will show how to work with hyperlinks in gvSIG.

    This tool allows us to associate images, text files, pdf files, folders, web pages… with the geometries of our vector layer. For that we must have one or more fields in the attribute table of that shapefile containing the file or folder paths or the web page URLs.

    When we create the field for the paths it’s important to give it a large size (for example 200 characters), since if we link to a very long path and the field length is smaller, that path will be cut off and the linked element will not be found.

    This functionality will be very useful to show reviews, pictures, reports … of our geographical entities.

    The cartography to follow this video can be downloaded from this link.

    Here you have the videotutorial of this new module:


    From GIS to Remote Sensing: Basic tutorial 1: Land Cover Classification of Landsat Images

    Mon, 02/05/2018 - 00:41
    This is a basic tutorial about the use of the new Semi-Automatic Classification Plugin version 6 for QGIS for the classification of a multispectral image. It is recommended to read the Brief Introduction to Remote Sensing before this tutorial. The purpose of the classification is to identify the following land cover classes:
    1. Water;
    2. Built-up;
    3. Vegetation;
    4. Bare soil.
    The study area of this tutorial is Greenbelt (Maryland, USA) which is the site of NASA’s Goddard Space Flight Center (the institution that will lead the development of the future Landsat 9 flight segment).


    gvSIG Team: gvSIG 2.4 RC4 is now available to download

    Fri, 02/02/2018 - 10:23

    gvSIG 2.4 RC4, the fourth Release Candidate of gvSIG 2.4, is now available to download from the gvSIG website.

    With the publication of this new build we encourage you to test it and to report any errors and suggestions you find through the users mailing list.

    The main new features of this version can be found in the different posts published on the gvSIG blog, including, among others, the download of OpenStreetMap data and access to the H2 administration tools from gvSIG Desktop.

    Thank you for your collaboration.

    gvSIG Team: gvSIG 2.4 RC4 is available to download now

    Fri, 02/02/2018 - 10:23

    gvSIG 2.4 RC4, the fourth gvSIG 2.4 Release Candidate is now available to download from the gvSIG website.

    With the release of this new build we encourage you to test it and send us any errors and suggestions in the users mailing list.

    The main new features of this version have been published at the gvSIG blog during the last weeks. Some of them are the possibility to download data from OpenStreetMap and access to the H2 administration tools from gvSIG Desktop.

    Thanks for your collaboration.

    GeoSolutions: New release of MapStore with Charts and Revised Filtering

    Thu, 02/01/2018 - 15:58

    Dear Reader,

    we are pleased to announce the release 2018.01.00 of MapStore, our flagship Open Source webgis product. The full list of changes for this release can be found here, but the most interesting additions are the following:

    • Charts: you can now add charts to your maps for data analysis.
    • New Simplified Query Builder with Cross Layer Filtering: support for cross layer filtering from the query builder.
    • Various bug fixes and performance improvements.
    More on charts

    With this release we added an important data analysis tool that can enhance your maps with useful data. MapStore now allows you to quickly generate charts (pies, lines, bars, gauges) from a layer's data. Using GeoServer's powerful services, you can aggregate data and add it to the map. You can play with this map to get a feel for these new functionalities.

    You can create a chart, and add it to the map, directly from the Table of contents, as shown below.

    Every chart can be configured to be in sync with the map viewport, which means the data will be filtered using the current map viewport. Charts will then update every time you pan and zoom the map, to reflect the data that falls within the viewport.

    [Figure: chart in sync with the map]

    You can even provide additional filters using the query builder to refine the data that powers your charts (see below). More work is planned on the Charts functionality to provide additional chart types and enhance the current ones. We also aim to add more elements that go beyond pure charts, hence we decided to call these elements widgets, to account for future additions.

    Revised Query Builder and Cross Layer Filtering

    You will notice a new look and feel for the query builder.

    [Figure: new look and feel for the query builder]

    [Figure: filter all roads that intersect New York's Central Park]

    In addition, you can now filter data using the geometries of another layer on the map, thanks to the brand new "Layer Filter". Select the layer you want to use as the filter and the geometric operation to match the data. You can also add an attribute filter to the filter layer.

    This greatly increases the analysis possibilities. You can simply find the roads that intersect New York Central Park (like below) or make more complex filters combining cross layer, spatial filter and attribute filter.

    This feature can also be used to filter the data for the charts, so you can generate charts directly from the data filtered using the cross layer.

    Advanced filtering, data aggregation and charts make MapStore a useful tool for data analysis and visualization that goes beyond pure maps. For future releases we plan to enhance these functionalities with new widgets and new analysis features.

    News for developers/custom projects

    Developers will notice that we changed the build files and added more documentation to the application to support the following functionalities:

    • JS/CSS versioning: now JavaScript and CSS are loaded by version, so if you're doing hard client-side caching you no longer need to empty the browser cache to see changes. Learn how to migrate your project here.
    • Configurable and Documented I18N: now you can configure the languages you want in the configuration file. Learn How

    You can also refer to the MapStore developer documentation to learn more about this feature.

    Future work

    For the next releases we plan to (in sparse order):

    • Improve existing charts and add new widgets (text, counters and statistics, dashboard...)
    • Integration with GeoNode
    • Integrated styler for GeoServer
    • Support for layers with TIME
    • Support for more general map annotations, beyond simple markers
    Stay tuned for additional news on the next features!

    If you are interested in learning about how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

    The GeoSolutions team,

    GRASS GIS: GRASS GIS 7.4.0 released

    Thu, 02/01/2018 - 15:55
    We are pleased to announce the GRASS GIS 7.4.0 release

    Markus Neteler: GRASS GIS 7.4.0 released

    Thu, 02/01/2018 - 15:09
    We are pleased to announce the GRASS GIS 7.4.0 release

    GRASS GIS 7.4.0: Wildfire in Australia, seen by Sentinel-2B

    What’s new in a nutshell

    After a bit more than one year of development the new update release GRASS GIS 7.4.0 is available. It provides more than 480 stability fixes and improvements compared to the previous stable version 7.2. An overview of the new features in the 7.4 release series is available at New Features in GRASS GIS 7.4.

    Efforts have concentrated on making the user experience even better, providing many small, but useful additional functionalities to modules and further improving the graphical user interface. Users can now directly download pre-packaged demo data locations in the GUI startup window. Several modules were migrated from addons to the core GRASS GIS package and the suite of tools for ortho-rectification was re-implemented in the new GRASS 7 GUI style. In order to support the treatment of massive datasets, new compression algorithms were introduced and NULL (no-data) raster files are now also compressed by default. For a detailed overview, see the list of new features. As a stable release series, 7.4.x enjoys long-term support.
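
    As a small illustration of the new compression support (a sketch only; see the GRASS documentation for the authoritative options), the compressor can be chosen per session and existing rasters recompressed:

    # Illustrative only: use the ZSTD compressor for newly written rasters, then
    # recompress an existing map (run inside a GRASS session; "elevation" is an example map)
    export GRASS_COMPRESSOR=ZSTD
    r.compress map=elevation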

    Binaries/Installer download:

    Source code download:

    More details:

    See also our detailed announcement:

    About GRASS GIS

    The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

    The GRASS Development Team, Feb 2018


    gvSIG Team: New gvSIG event at the Escuela Técnica Superior de Ingenieros Agrónomos y de Montes (ETSIAM) of the Universidad de Castilla La Mancha (Albacete)

    Thu, 02/01/2018 - 14:02

    Next Thursday, February 8th, a gvSIG event will be held at the Escuela Técnica Superior de Ingenieros Agrónomos y de Montes (ETSIAM) of the Universidad de Castilla La Mancha, in Albacete.

    The event will be free of charge, with limited places, and registration can be done by sending an e-mail to jornadagvsigab2018@gmail.com, indicating name, surname, e-mail, organization, and which talk and/or workshop(s) you wish to attend. The programme is as follows:

    • 9:00-10:00: Talk “Introduction to the gvSIG Suite”
    • 10:00-11:30: Workshop “Introduction to gvSIG Desktop”
    • 11:30-13:00: Workshop “Introduction to scripting with gvSIG Desktop“
    • 13:00-14:30: Workshop “Geostatistics with gvSIG and R“

    We look forward to seeing you!

    gvSIG Team: GIS applied to Municipality Management: Module 8.2 ‘Creation of point layers from tables (Event layers)’

    Thu, 02/01/2018 - 13:33

    The second video of the eighth module is now available, in which we continue showing how to create point layers from a table. In this case we will create an event layer, that is, a point shapefile created from a table with coordinates.

    For example, the table can be composed of geographic coordinates that we could get from a topography survey with GPS.

    This functionality is another way to generate our cartography in a town hall, in this case when we only have the coordinates of the points.

    The cartography to follow this video can be downloaded from this link.

    Here you have the second videotutorial of this eighth module:

