Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

Better Maps Through Local Thinking – Conclusion

August 10th, 2010 by MDob

Is it possible that the failure of national, street-level, navigable database companies to provide robust, quality spatial data across the breadth of their database coverage might serve as the impetus for the development of street map companies focused on producing navigable map databases of local or regional areas? After reading the two previous installments on this topic, you only have to read this one to end it all. Aren’t we glad!

Many believe that all efforts at mapping are fatally flawed and that the degree of failure increases exponentially as the size of the area mapped increases. I suppose this is true, since the data necessary to represent a place may include a near-infinite number of observations. However, if we stipulate that what we want to build is a database for navigation across a limited spatial domain, such as a city or county, we have made the problem more digestible and, at least theoretically, less likely to be as erroneous as similar attempts across larger domains, such as nations or continents.

When most people use maps and navigation databases, they assume that the quality of the database is uniform across the extent of the coverage. However, this simply is not the case. Places that are “established” as more important than others will generally have accurate and current maps. Places that are rated as “less” important will generally be updated less thoroughly and less frequently than locations considered more important.

Even though a hierarchy of preferential treatment exists in the world of commercial mapping, it can be quite unstructured in practice. Usually everyone in top management has a favorite, though little-known, place that they visit from time to time, and the researchers will hear about the errors in the map of this location until they get it right. In addition, key customers will complain if there are problems in the areas where they have test tracks or in the areas they use for testing (of course this includes the areas around their offices), and so on. However, the major producers of navigable map databases actively attempt to harmonize the quality of their spatial data across the coverages they offer, and it is this discipline that distinguishes the players from the pretenders in the world of maps and mapping.

In part, it is the active attempt to maintain consistency and currentness across their database that leads to some providers of navigation map databases being considered authoritative. However, being authoritative may not be the same as being correct. We need only look to many of the older USGS quadrangles to realize that some USGS products are both authoritative and inaccurate. Mike Goodchild in his writings on Volunteered Geographic Information (what many would call crowdsourcing or User Generated Content) discusses the difference between “authoritative” and “asserted” data. In today’s world, most creators of map data are asserting that their data is accurate and whether we ever regard these companies as “authoritative” is unclear, although, curiously, most of us do consider them “references”.

So what makes a source authoritative in the world of mapping? One way of thinking is that “authoritative” sources, such as national mapping organizations, set standards, collect data according to these standards and revise their products on a consistent basis so they remain synchronized with the set standards. In the world of commercial navigation database providers, we might consider NAVTEQ and Tele Atlas authoritative sources. Google might be another, although no one seems to know the specific mapping standards to which the company aspires.

In previous blogs, we have contended that “authoritative” producers of navigation databases, even though they rely on field data collection crews, instrumented data collection vans, high-resolution aerial or satellite imagery and data mining techniques, are likely never to get the story quite right. Although data quality is an integral part of the process for NAVTEQ, Tele Atlas and Google, none of these companies can afford to spend unlimited amounts of money in the pursuit of spatial accuracy across the vast spatial domains for which they provide map coverage. Perhaps more importantly, it is unlikely that any of these companies really knows how they measure up against common map accuracy standards. Instead, they do the best that they can in a manner designed to maximize the time value of money, while, hopefully, increasing the integrity of their databases.

The reality is that today’s “authoritative sources” in mapping, although guided by self-imposed data quality standards, field collect what data they can, use imagery to find what they cannot afford to collect in the field and augment their map coverages through data mining. In areas where their teams do not directly collect data through field methods, they attempt to use alternative field-data collection methods, including crowdsourcing and for-hire field representatives (map-temps), but, in the main, rely on imagery and data mining to fill gaps in databases initially built by field observations.

While mapping producers presumably have a vague idea of where their data are weak in terms of coverage and quality, it is less clear that they employ useful and diagnostic measures of data quality (e.g. measures that might help them understand which techniques could be used to resolve the specific weaknesses found in their data coverage). Looking at the kinds of errors commonly found in the offerings of the major map and navigation services, it defies imagination that these companies have actually taken the time to develop, or at least implement, overarching data quality measures or to practice any type of forensics on their data capture methods and sources.

A couple of years ago, Steve Guptill, a colleague on a current project and friend of mine for quite some time, tried to convince me that we should start a map quality business, an endeavor similar to providing a Good Housekeeping seal on a consumer household goods product. At the time, I told Steve that setting this up would be expensive and difficult to maintain. Today, I think Steve was right, but now we might need to put a tag on a map database similar to Underwriters Laboratories, certifying the database fit for use (you know, something like “low chance of this database producing routes that would kill you”, “moderate chance of it finding where you are going”, “likely chance of geocoding error”, “occasional wrong turn on one-way streets,” etc). But I digress and will simply note that someone needs to establish (not just propose) realistic standards to evaluate the quality of commercial map navigation databases.

Map Compilation

Map compilation models generally can be described as a three-legged stool. The legs of the stool are field data collection, imagery, and data mining. While properly conducted field data collection is the “gold standard” of the mapping world, it is also the most expensive way to gather data. At least one of the major providers does all that it can to keep from sending people into the field. While this company understands the value of field observations, it has concluded that crowdsourcing, data mining and imagery analysis are preferable alternatives and that these methods provide a “reasonable” solution at a reduced cost. Others realize the value of the gold standard, but even then cannot afford to apply this technique in as many areas as they would like.

Up-to-date satellite or aerial imagery is a valuable compilation tool and an efficient method for finding “missing” elements in map coverages. Unfortunately, imagery provides little in the way of attribute information other than geometry and some clues on potential road class identification. The twin problems in using aerial or satellite imaging are that it is expensive and time-sensitive. It is rare for a mapping company to task an image provider with producing specific coverage just for them. Instead, most rely on the most current off-the-shelf imagery due to cost sensitivities. In addition, economic considerations force these companies to limit the coverages imaged by vans and other mobile platforms.

High-resolution street-level photography (like Google Street View and the kinds of imagery available from the Tele Atlas vans and the new generation of vans fielded by NAVTEQ) can capture additional attributes such as road width, slope, house addresses, some postal addresses, street names and other information gathered from imaging signs and intersections. While a promising technology, street-level imaging is yet another expensive endeavor, requiring sophisticated technology and the training of field teams in the deployment of these systems.

Data mining for map compilation takes several forms. In today’s environment, it is considered a discovery process in which online sources of spatial information are interrogated and imported for conflation, presuming these data meet quality standards. In addition, data mining includes the concept of “sourcing”, an omnibus term for the body of spatial data sources that can be discovered by canvassing government agencies, institutes, regional authorities, professional organizations and other entities that may be able to provide leads on useful data.
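
To make the conflation step a little more concrete, here is a minimal sketch, in Python, of the kind of screening that might precede importing a mined road segment: accept a candidate only if it sits close to an existing segment and carries a similar name. The thresholds, field names and matching rules below are illustrative assumptions of my own, not a description of any vendor’s actual pipeline.

import math
from difflib import SequenceMatcher

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_conflation_candidate(db_seg, mined_seg, max_offset_m=25.0, min_name_sim=0.8):
    """Treat the mined segment as describing the same road only if it lies
    nearby and its (lowercased) name resembles the name already on file."""
    offset = haversine_m(db_seg["lat"], db_seg["lon"], mined_seg["lat"], mined_seg["lon"])
    name_sim = SequenceMatcher(None, db_seg["name"].lower(), mined_seg["name"].lower()).ratio()
    return offset <= max_offset_m and name_sim >= min_name_sim

# Example: a record mined from a county GIS site versus a segment already in the database.
db_seg = {"name": "W 39th Street", "lat": 39.0561, "lon": -94.5930}
mined_seg = {"name": "W 39th St", "lat": 39.0562, "lon": -94.5931}
print(is_conflation_candidate(db_seg, mined_seg))  # True: within 25 m and the names are similar

Segments that fail this sort of screen are exactly the ones that need a human, or a field crew, to look at them before they are conflated into the database.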

Each of the compilation sources described above comes with obvious benefits and limitations. However, due to the limitations of these methods, everyone in the mapping business thinks about how to deal with the issue of map updating, knowing that their map coverages are not of uniform quality and that the data collected previously may have changed. In many cases, this means reusing the tools described in the three-legged stool, but with a different perspective.

All map updating is focused on change detection. As a map database producer, you are gratified to know that your data is correct, but, after all, accuracy is the quality level to which you aspired. Researchers tend to be interested in indicators capable of isolating locations where they did not get it “quite right”, so that they can find it and fix it without having to sift through the entire database in the process. Every major provider of navigable map databases fixes some part of their database each quarter, but they are not interested in looking for things that are correct; they are looking for things that are wrong, and change detection is a useful tool for this exploration.
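
For readers who like to see an idea reduced to its bones, here is a toy sketch, in Python, of change detection as a diff rather than a re-survey: compare the shipped database against a newer reference extract (say, a fresh municipal centerline file) and queue only the differences for review. The segment IDs and attributes are invented for illustration and do not reflect any producer’s actual process.

# Shipped database versus a newer reference extract; only differences go to review.
shipped = {
    "seg-1041": {"name": "Troost Ave", "one_way": False, "speed_limit": 35},
    "seg-1042": {"name": "Main St", "one_way": False, "speed_limit": 30},
}
reference = {
    "seg-1041": {"name": "Troost Ave", "one_way": False, "speed_limit": 35},
    "seg-1042": {"name": "Main St", "one_way": True, "speed_limit": 30},   # converted to one-way
    "seg-1043": {"name": "New Pkwy", "one_way": False, "speed_limit": 45},  # newly built road
}

review_queue = []
for seg_id, ref_attrs in reference.items():
    if seg_id not in shipped:
        review_queue.append((seg_id, "missing from shipped database"))
    elif shipped[seg_id] != ref_attrs:
        changed = [k for k in ref_attrs if shipped[seg_id].get(k) != ref_attrs[k]]
        review_queue.append((seg_id, f"attributes differ: {changed}"))
for seg_id in shipped.keys() - reference.keys():
    review_queue.append((seg_id, "no longer present in reference"))

for item in review_queue:
    print(item)  # only the flagged segments ever reach a researcher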

There are many ways to approach change detection (comparing map layers with imagery layers, maps with other maps, etc.), but the best approach is to establish a feedback loop with those who actually use your database for its intended purpose. In fact, this is what Map Share brilliantly accomplishes for TomTom and Tele Atlas. NAVTEQ employs the same approach in the form of feedback provided by users of its data in the logistics market. OSM, in a sense, is completely based on user feedback, as crowdsourcing most often reflects a user’s self-interest and their desire to have the streets, roads, trails and other paths that they take shown correctly on the maps that they use.
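
Since I have argued that user feedback is the best error signal available, here is a hypothetical sketch, again in Python, of how such reports might be triaged before anyone edits the database: corrections reported independently by several users for the same segment rise to the top of the queue. This illustrates the feedback-loop idea only; it is not a description of how Map Share or any other production system actually works.

from dataclasses import dataclass

@dataclass
class CorrectionReport:
    segment_id: str
    issue: str        # e.g. "wrong one-way direction", "street name misspelled"
    reporter_id: str

def triage(reports, min_corroboration=2):
    """Keep only the (segment, issue) pairs that at least `min_corroboration`
    distinct users have reported, most widely reported first."""
    reporters = {}
    for r in reports:
        reporters.setdefault((r.segment_id, r.issue), set()).add(r.reporter_id)
    corroborated = [(key, len(users)) for key, users in reporters.items()
                    if len(users) >= min_corroboration]
    return sorted(corroborated, key=lambda kv: kv[1], reverse=True)

reports = [
    CorrectionReport("seg-2210", "wrong one-way direction", "user-a"),
    CorrectionReport("seg-2210", "wrong one-way direction", "user-b"),
    CorrectionReport("seg-0788", "street name misspelled", "user-c"),
]
print(triage(reports))  # only the corroborated one-way error makes the cut

The point of requiring corroboration is simply to keep a single mischievous or mistaken report from rewriting the map.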

It is my contention that there is no better measure of error in maps built for commercial purposes than consumer complaints. However, lodging a complaint, having it fixed and seeing the result are not common experiences for most users.

The “why” behind this problem is complex and, in part, may be related to the update schedules of the majors. While Google tries to do a better job of this than others, it has a track record of making valid changes and then reverting them, back and forth, so many times that those who contribute corrections just give up and use another product. However, one mitigating factor is the size of the problem. Google, NAVTEQ and Tele Atlas are trying to map the world with a high degree of accuracy. Indeed, they have collectively created more accurate coverage of the world than has ever before been available. Unfortunately, the size of the endeavor means that they have also generated more map errors than have ever been created in the history of map making. While some users will attempt to work with you to correct errors in your data, it is my sense that none of the major producers has yet gotten the equation quite right in terms of customer input and customer satisfaction with the process.

One way of looking at the problems that NAVTEQ, Google and Tele Atlas face is to suggest that the mapping errors in their databases are functionally related to the amount of coverage the companies are attempting to map. At some point, it would seem that the energy (personnel, resources, money, etc.) they have to expend on the task will fail to resolve the problem. In fact, that is my bottom line for the types of errors that I now see in the databases of the major producers of navigable databases. Perhaps more perilously, Tele Atlas excepted, I think consumer sentiment works against users contributing map corrections to Google, and the lack of an obvious, well-advertised venue for contributing corrections works against NAVTEQ. While I like the TomTom/Tele Atlas approach, it may be that the cost of buying a TomTom device and using it to advise TA of map errors limits the audience that will be willing and able to contribute information. (On the other hand, TA might say, “Well, we’ve got over two trillion GPS points from Map Share, Mike Dobson. How many have you collected from your stinking blog?”)

And Now For the News

Well, by this point, I have probably exhausted you and you are ready to submit. Yep, the best solution to the map accuracy and incomplete coverage problems is to break the universe into numerous, discrete, localized units and make mapping a local business. By reducing coverage to a local area (say Kansas City, or Chicago, where you might have to cover nine or more counties) you simplify the problem of focus and increase the probability that you can find and leverage local sources to help improve your database. In addition, by developing meaningful relationships with local sources, you will discover information more quickly than your geographically distant competitors.

However, “being local” is only one part of the solution. The other is that you need to make the users in the local area your customers. You need to develop a superior product that is constantly updated. Give your customers a workable crowdsourcing system that allows them to interact with you on their terms and let them help you solve their mapping problems. If you do this, then the local area and its residents, businesses and governments become your field office.

Everybody has seen the examples of the local company that decided to grow and was so successful at it that it lost its brand identity and its connection with its roots. Usually these businesses fail after a few years and everybody remembers how it used to be. The modern world of mapping is different. Our major companies started large and are growing even larger. They never had a local base and never had the local values that go with it.

I realize that some of you will say, “Well, I would not be able to use that local company of yours to find out how to navigate in some other city.” I agree, but you could use it to find the best routes and most accurate database in the area where you live and spend the majority of your time navigating. Isn’t that a choice you would make if the alternative existed?

I suppose you are chafing and sputtering out ideas on why this would not work. If so, think about this for a second. When the map service that you are using does not work, contains an error and for some reason gets you lost while you are using one of its routes, who do you ask for directions? Do you call Google at 1 Net Neutrality? Or do you explain to the 411 operator that you want NAVTEQ customer service? Hah! Well, I ask a local, since they are the only ones around who really know where we are located.

In some sense, this topic on local map databases is yet another example of Waldo Tobler’s First Law of Geography: “Everything is related to everything else, but near things are more related than distant things.” You should take a few minutes to think about Waldo’s First Law. It helps explain why “local” mapping could prosper and why local should be a low-cost, up-to-date solution. Yep, local mapping could trump national mapping, and a consortium of local mappers could be an interesting venture. The issue that clouds the future of local mapping is distribution. Of course, with Google and Verizon guiding Net Neutrality, maybe local will never gain the access it would need to be the next new thing.

You know, the only people who should be sputtering about this article are those who believe that OSM is the model I have just described. I am not sure that’s true and may write my explanation in a future blog, but for now, I am sure you are as tired of this topic as I am.




One Response

  1. Mary Bowling

    Mike, Thanks for such a well thought out treatment of what I also believe is the underlying problem with Local Search. The more we come to rely on internet mapping, the more critical its weaknesses become. It can’t work well without adding up-to-date local knowledge to the equation.

    Hi, Mary – thanks for your comment. I, too, think local knowledge is key.

    Mike