Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

The 3D Arms Race and Thoughts About Analytics – Part 2 (of 3)

June 26th, 2008 by MDob

Today I would like to continue an exploration prompted by Richard Waters’ article in the Financial Times titled “Way to go? Mapping looks to be web’s next big thing”. The article can be found here.

It is clear to me that the public’s potential fascination with 3D egocentric data (what does the world look like around me?) will drive the future growth of many companies that collect data for spatial analysis in its various guises. What seems to be lacking in the reporter’s brief overview is attention to “non-finding”, analytic uses of 3D data to answer more complicated questions about the environment – that is, questions dealing not only with my location, but with how I can interact with it. I think it is these more analytic questions that will generate significant consumer interest. For example, a 3D roads database, like the one being generated by Intermap, could answer the question “How can I get to a destination using the least gasoline?”
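To make the “least gasoline” question concrete, here is a minimal sketch of what a 3D roads database enables that a flat one does not: routing where the cost of an edge depends on elevation gain, not just distance. This is not Intermap’s (or anyone’s) actual product; the fuel-consumption constants, the cost model, and the toy network are all invented for illustration.

```python
import heapq

# Assumed, purely illustrative fuel model: a flat per-km consumption
# plus a penalty per metre of climb. A 3D roads database supplies the
# elevation gain that makes the second term possible.
FLAT_L_PER_KM = 0.07      # litres per km on level ground (assumed)
CLIMB_L_PER_M = 0.0015    # extra litres per metre climbed (assumed)

def fuel_cost(length_km, elev_gain_m):
    """Estimated litres to traverse an edge; descents earn no credit here."""
    return length_km * FLAT_L_PER_KM + max(elev_gain_m, 0.0) * CLIMB_L_PER_M

def least_fuel_route(graph, start, goal):
    """Dijkstra over fuel cost. graph: node -> [(neighbor, km, elev_gain_m)]."""
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return cost, path[::-1]
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, km, gain in graph.get(node, []):
            new_cost = cost + fuel_cost(km, gain)
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(heap, (new_cost, nbr))
    return float("inf"), []

# Toy network: the shorter A->B->D route climbs a 120 m hill,
# the longer A->C->D route stays flat.
roads = {
    "A": [("B", 2.0, 120.0), ("C", 3.0, 0.0)],
    "B": [("D", 2.0, -120.0)],
    "C": [("D", 3.0, 0.0)],
}
cost, path = least_fuel_route(roads, "A", "D")
```

On this toy network the flat but longer route wins, which is exactly the kind of answer a distance-only (2D) database cannot give.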

In the Waters article, Erik Jorgensen of Microsoft is quoted as saying that Microsoft is building a “digital representation of the globe to a high degree of accuracy” that will bring about “a change in how you think about the internet”. He adds, “We’re very much betting on a paradigm shift. We believe it will be a way that people can socialise, shop and share information.”

I think Erik Jorgensen is right, but he misses an important issue. The 3D revolution is not going to change how we think about the Internet; it is going to change how we think about geographical data and how we use the Internet to develop new Map-Like-Objects (MLOs*) and new functionalities to explicate our place in that environment. Although not all the applications and functionalities for these data are yet clear, it is obvious that the race is on to build the best 3D representation of the world in which we live.

While there is a fascination today with “building footprints” and architectural facades (part of answering the “finding” questions), I believe that there will soon be an equally important market in providing more accurate horizontal positions and vertical elevations. These data will help us to correctly position other 3D data for “finding”, but more importantly for personal analytics – a field that languishes today because Navteq and TeleAtlas data quality does not appear to provide a base for these more advanced spatial applications.

During my review of the Waters article it occurred to me that one of the issues of importance in this discussion is that the Internet is the great equalizer. Mapping was limited for centuries because not many people knew how to do it (only cartographers) and the data was too expensive to distribute on a wide scale (paper maps). Later, GIS generated the need for spatial data, and GIS developers began to incorporate mapping techniques into the software, freeing spatial analysts from the need to understand the details of mapping while reducing the distribution cost (software and networking). Just as the GIS revolution freed the masses, so too does Internet mapping free even more people to use spatial data and MLOs, while not requiring them to know anything about how it works. 3D spatial data will create even greater popularity. I suspect the user will think, “This map stuff looks just like…reality, and I already know how to use that!”

Later in his article, Waters opines that the company that controls the map interface on the Internet could one day own something “…as prevalent and powerful as Google’s simple search box.”
Gosh, I am getting to be a real nitpicker here! No one is going to control the interface to MLOs on the Internet. The more interesting issue is “who can produce the data that is needed to feed the 3D-oriented spatial functionality used on the Internet today and tomorrow?” Clearly, Nokia’s interest in Navteq and TomTom’s interest in TeleAtlas are driven by their realization that it is the data, not the interface, that drives the application. In addition, the robustness of the data is a limiting factor in the effectiveness of the application.

One issue that seems to elude Waters and, perhaps, several of the Internet companies involved in the 3D Arms Race is that many of the data types and elements being collected are not static. Collecting data that mutates over time is a difficult task. Both TeleAtlas and Navteq have trouble updating the roads they now map and will have even greater problems if they need to re-map these data at higher levels of horizontal and vertical positional accuracy. In addition, there is the issue of keeping up to date all the data that will be layered on top of the base data. Who will update these data? Who will be able to afford the cost?

One of the answers to the updating problem (and one that Mike Liebhold, who is quoted in the article, believes in) is User Generated Content – similar to TomTom’s MapShare. As you know from my blogs, I think there is a significant role for UGC in the collection of some data – new streets and roads, name changes, address changes, POI changes, etc. – but the accuracy and currency requirements for some data types (especially 3D spatial data, such as precise elevations and horizontal positions) will continue to be the province of the commercial data collection organizations.

Next time, let’s think about the market for 3D spatial data and conclude our exploration of the Waters article.

* MLOs is a term introduced to me by Barbara Bartz Petchenik during one of our lunches at Gordon in Chicago a number of years ago. We (both I and the profession) miss you, Barbara!


Posted in Data Sources, Geospatial, Geotargeting, Google, Mapping, Microsoft, Mike Dobson, Navteq, Nokia

One Response

  1. Duane Marble

    Your comment to the effect that:
    “Just as the GIS revolution freed the masses, so too does Internet mapping free even more people to use spatial data and MLOs, while not requiring them to know anything about how it works.”
    is certainly correct. The problem that this generates here (as it did in GIS when it became affordable) is that in using spatial data and doing spatial analysis there is a critical difference between what one can do and what one should do.
    Just because spatial data is available in digital form does not mean that it represents “truth” and being able to easily create “mashups” of different spatial data sets does not mean that anything meaningful or useful will result.

    – from MDob – Duane – your comment points to a common interest in “neogeography”. I have seen maps that defy all of the conventions of data handling and symbolism but seem to be accepted by the rest of the audience as valid input. In an age where spatial data and mapping tools are widely available, what would be the best practice for helping to improve the use of maps and spatial data by those who may not have the enormous amount of experience that you and other professionals possess? – Thanks for your comment – Mike