Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

Does Anyone Need to Know Anything About Maps Anymore (2)?

February 20th, 2014 by admin

(This is NOT the blog I had planned next for the series, but it is one that may help clarify why this topic is of such significance. If you were not wild about the last blog, you might skip this one.)

In a comment on my last blog regarding cartographic knowledge, Pat McDevitt, VP of Engineering at AOL, formerly with MapQuest, TomTom and Tele Atlas, mentioned his interest in “map-like-graphics”, such as subway maps (see my response to his comment for more detail). In the 1980s, Barbara Bartz Petchenik coined a term for such displays by naming them “map-like-objects”, or MLOs. MLOs sacrifice some aspect of cartographic accuracy to promote easier understanding and use by a selected population. Let’s explore this concept a bit, as a discussion may help to further illustrate the points I was making in my last blog.

The class of MLOs that represents subway maps comprises purpose-built graphics designed to help riders understand how rail lines connect stations so that they can plan journeys. Since riders can board and exit trains only at specific stops, the actual geometry of the network (in terms of distance and direction) matters less than creating a display that is readable, interpretable and actionable in a manner that allows the user to ride between an origin and an intended destination. The argument here is that while MLOs may sacrifice cartographic accuracy, they are tools that can be more effective than an accurate, detailed map of the same spatial objects. If only the use-case were so simple! Let’s explore by personal example.

I have visited London at least 20 times during the course of my adult life. I usually explore the city by riding the London Underground to a station near my planned destination. I admit, with some shame, that of all the urban geographies I have explored, I know London’s the least well. I find this curious, since London is one of my favorite travel destinations. It is also a destination I have visited more frequently than other urban areas that I seem able to navigate with little problem.

During my visits to London I was bothered that the objective reality I gained while walking its streets seemed to conflict with where I expected the city’s spatial features to be located. While I was certain that some time/space perturbation was afoot, I was not sure if popping out of the Underground’s “wormholes/tube stations” so distorted my mental map of London that it could not be remediated.

More recently I started exploring the notion that my ill-conceived geography of London was actually a result of traveling by Underground. I realized, after some consideration of the issue, that my “relative mental spatial reference” for the locations of features of interest in London was likely based on where the nearest tube station was positioned. What is problematic here is that my sense of the geography of the tube stations was informed by the Tube map. Was it really possible that I had used my knowledge of where stations appear on the ubiquitous Tube map to inform my sense of where I probably was during my above-ground wanderings? Sounds like science fiction, but could it be true?

To that point, my irrational view of London’s geography might be because the Tube map includes a variety of planned misrepresentations, which you can read about in the article What does London’s Tube Map Really Look Like? Of additional relevance is a 2011 study by Zhan Guo called Mind the Map (a parody of Mind the Gap – signage familiar to all who have ridden the Tube). Guo concluded that 30 percent of passengers take longer routes because the Tube map misrepresents distances between stations. (You can read a concise version of Guo’s report in the Daily Mail.)

Based on this brief diversion we might conclude that while MLOs can be useful, they can also be extremely misleading. Many would say that the problems generated by MLOs result from users employing them for purposes for which they were not intended. If that is so, maybe these map-like-objects should come with a use warning, like those on the mirrors of some American cars – perhaps something like:

This map probably represents a spatial area that is considerably larger than this paper/display screen. True distances, directions and spatial context are not represented correctly or reliably. Reliance on this map for any use, even riding the Tube, is not recommended and may result in serious injury, lost time, exposure to buskers, or other inconveniences. The publisher and producer of this map and related licensees are not responsible for errors of omission, commission, or other misrepresentations resulting from lack of cartographic knowledge, incompetency, lack of moral fortitude regarding international border disputes, editorial policies, advertorial policies or, more commonly, frequent cost avoidance cutbacks in map compilation efforts.

While such a warning might sound humorous (hopefully), the multiple-use issue is of considerable concern. While those who create MLOs may realize the shortcomings of this type of spatial display, I am not sure that the users of these maps do. It is likely that a large proportion of the population that uses MLOs will be unaware of the limitations that complicate extending them beyond the use environment the original MLO was designed to serve. In some ways the problem is similar to that of the twenty-six percent of U.S. citizens who, having observed the sky (or not), concluded that the sun revolves around the earth!

The problem of representing spatial reality in maps is extremely difficult. People use maps in one of several manners, but all of these uses involve, to some extent, answering the question “where?” In many cases map use is data driven, prompting people to browse the map in a manner that helps them organize it into familiar, understandable patterns.

To illustrate this case, imagine that you are viewing an election map displaying states that voted Republican (colored red) or Democratic (colored blue). Most people would explore this display by examining their home state, comparing nearby states, and then looking for the states that voted their preference, followed by those that supported the opposite side. The recollection that most people would have of this map is the pattern made by red and blue states and their spatial clustering across the extent of the map. Even the most cursory inspection of a map usually results in the acquisition of a pattern that is matched against other map patterns the user has acquired. The unfortunate complication here is that users do not know when they are observing an MLO that works well only for a selected purpose, or when they are observing a cartographic display that has been tightly controlled to produce a spatially accurate representation of the variable being mapped.

Perhaps more pernicious is the hybrid MLO. The American Airlines map that I showed last time was designed to function as an MLO, but was based on a highly accurate cartographic display. In addition, the map was created by a production system that was designed to produce both reference and detailed street maps, but apparently not to produce advertisements or MLOs. Imagine teasing the cartographic reality out of that map. Someone who had not seen a world map before might assume that the globe really does look like what was shown in that display. Well, so what?

I recently read an interesting article by Henry Petroski titled “Impossible Points, Erroneous Walks” (American Scientist, March–April 2014, Volume 102, Number 2, available only by subscription) that was brought to my attention by Dr. Duane Marble shortly after I published my last blog. Petroski, a noted author (he is a Professor of both Civil Engineering and History at Duke University), was railing about an illustration in the New York Times that incorrectly represented the scallops on a sharpened pencil. His thoughts on the seriousness of this seemingly modest error apply equally to MLOs. He wrote:

Books, newspapers, and magazines are also teachers, as are television and radio and the web, as well as the inescapable advertisements. Whether or not we are consciously aware of it, the whole of our everyday experience is an ongoing teaching event.

This is why it is important that what we and our children are exposed to in the broader culture be presented and represented accurately. The words and images that we encounter in common spaces can be no less influential in shaping our perception of the world than what we learn in a formal classroom setting. If we find ourselves surrounded by incorrect depictions of objects, our sense of how things look and work can become so skewed that we lose some of our sense of reality.

Petroski continues:

This is not to say that there is no room for imagination and creativity in both engineering and artistic work. But even the most abstract of ideas and renderings of them should follow rules of geometry, grammar and aesthetics that make them meaningful to venture capitalists and museum-goers alike. (Petroski, 2014, pp. 1–2, “Impossible Points, Erroneous Walks”)

There we have it. In the context of maps, we might substitute, “But even the most abstract of spatial ideas and renderings of them should follow the rules of cartography, map grammar and the design of displays representing spatial distributions…” That, of course, returns us to the title of my last blog, which was “Does Anyone Need to Know Anything About Maps Anymore?” Of course they should! Next time, let’s consider why this lack of cartographic insight will become a greater problem in the future of online mapping.

Thanks for visiting,

Dr. Mike


Posted in Authority and mapping, Geospatial, Mapping, Mike Dobson, map compilation | 1 Comment »

Does Anyone Need to Know Anything About Maps Anymore?

February 10th, 2014 by admin

As many of you may have noticed, Exploring Local has not been updated recently. For the last year I have been engaged as an expert witness in a matter involving the type of subject I usually comment on in Exploring Local. Due to the sensitivity of the proceedings, I decided not to write any new blogs while the matter was under way. It has recently concluded, and I intend to focus some of my time on issues related to mapping and location-based services that interest me and that I would like to share with you. So, here we go –

A few months ago I saw a blurb on my LinkedIn page about a debate that was going on regarding maps in a forum titled “GIS and Technological Innovation.” You can find the article and some of the comments here, in case you do not belong to LinkedIn.

I cringed at the pejorative title of the argument, which was, “Do Programmers Really Make the Best Cartographers?” While this is not quite as ill-phrased as “Do Code Monkeys Really Make Better Maps than Wonky Map Makers?”, the original title did not quite set the right tone. The most problematic issue with the original question, at least for me, was the lack of context. For example, my first reaction to the comparison was, “When doing what?” In essence, was the original question designed to explore 1) who writes the best code for cartographic applications, or 2) who makes the best maps using available applications? In my opinion, both questions are unproductive.

Let’s substitute these questions instead. First, “Does anyone know how to ‘make’ maps (or mapping software) that effectively communicate the spatial information they were designed to convey?” If someone does know how to do this, the question of interest then becomes, “Do mapping systems permit their users to exercise these capabilities?” A third important question is, “Does anyone compile the spatial data that fuel mapping systems in a manner that accurately reports these data as they exist in the real world?”

Now, for purposes of continuing this argument, let’s make an assumption, though clearly untrue, that all spatial databases are of equivalent quality. If we accept this position for purposes of exposition, then the next meaningful question is, “Does the mapping system inform the reader of the spatial information it is designed to map in a manner that retains the fidelity of spatial relationships as they occur in the real world?” This leads us conceptually to a two-sided map-making platform: on one side we have the mapping functionality, and on the other we have the actor who uses that functionality to prepare maps.

Analyzing the capabilities provided by software-based mapping programs will lead us to conclude that some level of cartographic practice has been embedded in every software system designed to produce maps. I think we can agree that software mapping tools convey someone’s (or some development team’s) understanding, hopefully informed by cartographic knowledge, of the functional requirements of a mapping system. These requirements, for example, might include consideration of the goals that use of the mapping tools should accomplish, how the tools should operate, how the desired capabilities of the tools might be formalized as functional software, and whether or not user input is allowed to modify the functionality in any meaningful way.

We should also acknowledge that some end-users of these systems may have knowledge of the cartographic process and seek to use these systems to create a map that melds the capabilities of the available software functionality with their personal experience in rendering spatial data. In practice, the use-situation is often constrained because many mapping applications, for example Bing Maps, Apple Maps, and Google Maps, are structured to meet a specific publishing goal that influences how the available software interacts with spatial data. While this potential limitation may influence how a person uses an online system to create maps other than those normally provided by the system, it does not detract from the general tenet that knowledge of cartographic theory and practice should underlie how well maps function in communicating spatial information, regardless of who makes them or who creates the software functionality.

If software developers and modern cartographers have some degree of cartographic knowledge, where do they get it? Although there is a small (and declining) worldwide cadre of academic cartographers who continue to research improvements in the communication of spatial data using maps, there are just not that many people who benefit from or are even aware of these efforts. Conversely, even if the developer of an online mapping system has discovered accepted cartographic theory and practice and used it to shape the functionality of their software, the real test is whether that functionality can be harnessed to present data advantageously, that is, in a manner that accurately represents the spatial data. I think this is the critical question that pervades all modern map use. Restated, we might ask, “Are the capabilities that mapping systems offer us today based on mapping engines whose developers and users (map makers) have been adequately informed by cartographic theory and practice?”

My response to this question is mixed. For example, most online mapping systems appear to have been developed by people who understand the mathematics of map projections, although they appear not to appreciate the use-limitations of projection types. Conversely, most online systems seem to have been developed without a clear understanding of the complexities of data categorization, classification and symbolization.
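To make the classification point concrete, here is a minimal sketch, with invented values, of how the choice between equal-interval and quantile class breaks changes which areas land in which class on a choropleth. It illustrates the general technique only; it is not a description of any particular vendor’s code, and every number in it is made up for the example.

```python
# A minimal sketch of how class-break choice changes a choropleth's message.
# All data values below are invented for illustration only.

def equal_interval_breaks(values, k):
    """Split the data range into k equal-width classes (returns k-1 interior breaks)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [lo + width * i for i in range(1, k)]

def quantile_breaks(values, k):
    """Place roughly the same number of observations in each class."""
    ordered = sorted(values)
    n = len(ordered)
    return [ordered[int(n * i / k)] for i in range(1, k)]

def classify(value, breaks):
    """Return the class index (0..k-1) for a value, given interior breaks."""
    for i, b in enumerate(breaks):
        if value <= b:
            return i
    return len(breaks)

if __name__ == "__main__":
    # A skewed variable (say, a rate per county): most values low, a few high.
    rates = [2, 3, 3, 4, 5, 5, 6, 7, 9, 12, 15, 48, 55, 60]
    for name, fn in (("equal interval", equal_interval_breaks),
                     ("quantile", quantile_breaks)):
        breaks = fn(rates, 4)
        classes = [classify(v, breaks) for v in rates]
        print(f"{name:14s} breaks={['%.1f' % b for b in breaks]} classes={classes}")
```

Run it and the equal-interval scheme dumps nearly every observation into the lowest class and leaves two classes empty, while the quantile scheme spreads the same data across all four classes. Same data, two very different maps – which is exactly the kind of decision a map maker should understand before choosing a legend.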

If I could get the online mappers to listen to me I would plead for them to include the famous “That’s Stupid” functionality, which automatically erases your map when you have created an illogical presentation or one that is misleading due to errors in representation, symbolization, generalization, classification, technique, etc. Of course, if such functionality were ever implemented, there might be no online mapping whatsoever.

Laugh if you will, but take a look at this fine example of modern online mapping brought to us by American Airlines as part of a recent promotion urging people to travel worldwide. The map appears to have been created by Microsoft, and it is copyrighted by both Nokia (HERE) and Microsoft (Bing).

American Airlines, Microsoft and Nokia give you the world and more.

Click here for a larger version of this map.

You may have noticed that you have a choice of visiting any of the approximately twenty-seven, apparently non-unique, continents (one representation of Europe seems to have mysteriously disappeared into the seam at the right edge of the map and does not show up on the left continuation). The map is exquisitely crafted using shaded relief, although I suppose this could be a representation of the earth during a previous ice age since there are no countries shown, nor airports with which to fly American Airlines.

I am not certain of the distances involved on the map, as there is no scale. We do know that the equatorial circumference of the earth is, oh – a) 24,901 miles (Google), b) 24,859.82 miles (Geography-About.com), c) 25,000 miles (AstroAnswers), d) 24,902 miles (Lyberty.com), or e) 24,900 miles (dummies.com). Don’t even ask about the polar circumference! Well, some measurement must be appropriate, but which one applies to the map in question? Further, where does it apply, and how does it change over space?

Perhaps my interest in scale has been rendered a historical artifact, replaced by the ubiquitous use of “zoom level”? I presume you have heard modern “zoom level” conversations: “These two towns are about an inch apart on my screen at zoom level 17. How far apart are they at zoom level 12?” “I don’t know. I don’t use Bing, I use Apple Maps, and my screen has more pixels per inch than yours. Is that important?”
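For the record, a zoom level can still be translated back into something like a scale, but only if you also know the latitude and the physical pixel density of the screen. The sketch below uses the standard Web Mercator tile arithmetic (256-pixel tiles, WGS84 equatorial radius); the latitudes, zoom levels and the 96 pixels-per-inch figure are merely illustrative assumptions, not properties of any particular mapping service.

```python
import math

EARTH_RADIUS_M = 6378137.0   # WGS84 equatorial radius used by Web Mercator
TILE_SIZE = 256              # pixels per tile edge in the common tiling scheme

def ground_resolution(lat_deg, zoom):
    """Meters of ground distance per screen pixel at a given latitude and zoom."""
    return (math.cos(math.radians(lat_deg)) * 2 * math.pi * EARTH_RADIUS_M
            / (TILE_SIZE * 2 ** zoom))

def map_scale_denominator(lat_deg, zoom, screen_ppi=96):
    """Approximate representative-fraction denominator (1 : N) for a given screen."""
    meters_per_inch = 0.0254
    return ground_resolution(lat_deg, zoom) * screen_ppi / meters_per_inch

if __name__ == "__main__":
    # "An inch apart at zoom 17" means very different ground distances
    # depending on latitude and on the screen's pixels per inch.
    for lat in (0.0, 51.5):            # equator vs. roughly London's latitude
        for zoom in (12, 17):
            res = ground_resolution(lat, zoom)
            scale = map_scale_denominator(lat, zoom)
            print(f"lat {lat:5.1f}  zoom {zoom}  {res:8.3f} m/pixel  ~1:{scale:,.0f}")
```

The point of the exercise is simply that “zoom level” by itself tells you nothing about distance: the same zoom level yields a different ground resolution at the equator than at London’s latitude, and a different apparent scale on a 96 ppi monitor than on a high-density phone screen.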

Why does this matter?

Without further belaboring the numerous problems with today’s most common mapping systems, it is important to note that online mapping is about to take a significant turn from street maps and simple navigation towards GIS and what might be called spatial inquiry systems. Users will benefit from a move beyond street maps to geographical inference engines that can answer user questions in a highly targeted spatial manner. However, much of the promise of these types of systems is based on understanding spatial data and methods used to represent it. In the next few blogs I will discuss where I think this evolution will take us in the online world of mapping and how we might get there by solving some interesting problems. However, I will likely mix in a few product reviews along the way, as there are a number of companies claiming some remarkable, but unlikely, potentials.

Until next time –

Best,

Dr. Mike


Posted in Apple, Bing maps, Geospatial, Mapping, Microsoft, Nokia, map compilation | 2 Comments »

Waze-Crazy – Would Facebook Drop a Billion on Waze?

May 9th, 2013 by admin

As often happens lately, I had no intention of writing about anything in the news due to lack of interest. However, Marc Prioleau wrote an article this morning on the rumor that Facebook would pay one billion dollars for Waze, and then wrote to ask my thoughts. I then saw an article berating Google for not being the company buying Waze. At that point I began thinking that I must have missed some major development at Waze. In turn, this prompted me to do some research and write a commentary on the potential acquisition.

In the spirit of openness, I was contacted by someone representing themselves as Facebook’s “location” guy shortly after my blog about the problems associated with the release of Apple Maps in 2012. We never connected. So, I do not have any contacts at Facebook, nor do I have any contacts at Waze with whom I am in communication. Also, in the spirit of openness, I thought about titling this essay “Startup Crowd-Sources Corporate Value.” So, let’s get going.

Waze describes itself as follows:

“After typing in their destination address, users just drive with the app open on their phone to passively contribute traffic and other road data, but they can also take a more active role by sharing road reports on accidents, police traps, or any other hazards along the way, helping to give other users in the area a ‘heads-up’ about what’s to come.”

“In addition to the local communities of drivers using the app, Waze is also home to an active community of online map editors who ensure that the data in their areas is as up-to-date as possible.”

At the end is a video, which can be reached from the above-referenced About Us page on the Waze website. The video ends with a note to this effect – “Keep in mind that Waze is a driver-generated service. Just as with other user-generated projects, it depends on user participation. Your patience and participation are essential.”
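Waze does not publish the details of that passive contribution, but the basic idea is easy to sketch: timestamped GPS fixes imply a speed over a stretch of road, and that speed can be compared with an expected free-flow speed to flag congestion. The code below is my own illustration under those assumptions, with invented coordinates and an arbitrary threshold; it is not Waze’s actual pipeline.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimated_speed_kmh(fixes):
    """Average speed implied by a list of (unix_time, lat, lon) GPS fixes."""
    dist = sum(haversine_m(a[1], a[2], b[1], b[2]) for a, b in zip(fixes, fixes[1:]))
    elapsed = fixes[-1][0] - fixes[0][0]
    return 3.6 * dist / elapsed if elapsed > 0 else 0.0

def flag_slowdown(fixes, free_flow_kmh, threshold=0.5):
    """Flag the trace as congested if its speed falls below threshold * free-flow speed."""
    speed = estimated_speed_kmh(fixes)
    return speed, speed < threshold * free_flow_kmh

if __name__ == "__main__":
    # Invented fixes, thirty seconds apart, on a street with a 50 km/h free-flow speed.
    trace = [(0, 51.5007, -0.1246), (30, 51.5010, -0.1240), (60, 51.5012, -0.1236)]
    speed, congested = flag_slowdown(trace, free_flow_kmh=50)
    print(f"estimated speed {speed:.1f} km/h, congested: {congested}")
```

Notice how little each individual driver contributes: the value lies entirely in having many such traces, in many places, at many times of day – which is exactly why the geographic distribution of active users matters so much to any valuation.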

I don’t know about you, but if Waze is going to pick up a billion bucks based on my labor, I’d want more than a note indicating that my participation and patience were essential to their success. However, the more interesting question is whether or not Waze is worth $1,000,000,000.00.

To get my arms around valuing Waze, I decided to go through a brief acquisition checklist.

What is it that is worth a billion dollars at Waze?

Brand? No.

Waze is a minor brand that remains generally unknown around the world. I think it might be difficult to put a high valuation on a company whose product is crowd-sourced and whose brand represents the industrious endeavors and lacks of its audience. Note that the use of “lacks” here does not indicate that these people are dolts, rather that the user profile is likely not uniform (standardized) or distributed uniformly across geographical space. In turn, this suggests that the product is not uniformly accurate across that same space. As a consequence, the brand’s value may exhibit significant spatial and temporal variation.

Distribution/Users? No.

Wikipedia claims that Waze was downloaded 12 million times worldwide by January 2012 and 20 million times by July 2012. By the end of 2012, according to Waze, that number had increased to 36 million drivers. Today, there are apparently 44 million users. To be honest, I am not sure how to parse the information on downloads. Downloads do not indicate active users. The notion of downloads, also, does not indicate geographic coverage or the ability to harvest GPS traces or active crowdsourced updates in specific geographies.

Next, I am not sure how Waze measures users, nor was I able to find any definitive source for this information. I doubt that it has 44 million active users. An article in the Huffington Post indicates that Berg Insight, a Swedish market research firm, says Waze has from 12 million to 13 million monthly active users. If Berg Insight is correct, then Waze contributors are likely spread thin on a worldwide basis and concentrated in a modest number of large urban areas. In addition, based on the reported number of users and the growth of the company, the length of time that active users have been contributing GPS traces or active updates appears to be limited.

So distribution remains unknown, except, perhaps, to Waze. However, even if they could validate the number of reliable active users, it remains unclear how those users are distributed across the geographical space of Waze’s target markets.

Finally, another problem is the type of driving usually performed by Waze users. Are the majority of the miles traced those showing a repetitive journey to work five days a week? I suspect this is a large portion of the tracks they receive. If this is true, then their distribution is likely quite limited in terms of the uniformity and the extent of geographic coverage.

Intellectual Property, Trade Secrets, Know-how? No.

Waze has 100 employees. I am sure that they are bright, energetic and extremely capable. I doubt that what they may know, have codified, filed as patent applications or hold as trade secrets is worth anything near a billion dollars. After all, it is not as if other people are ignorant of how to build a crowdsourced mapping system.

Map Database? No.

Waze claims that in 2012 its users made 500 million map edits and upgraded the map to reflect 1.7 million changes on-the-ground that took place in 110 countries with community-edited maps. Ok, just what does this stuff really mean?

Updates may merely reflect the poor quality of a map base, or even the lack of a map base available to Waze for its customers’ use. The number of countries involved does not necessarily indicate that the company has complete, up-to-date coverage in any of them. More problematically, I suspect that Waze has no objective method of assessing the accuracy of its maps against other sources. For those of you who need a short primer on spatial data quality, see my blog on the Apple Maps fiasco, as data quality is the reason Apple got a failing grade on its product roll-out.

Again, the issue here is how many users have been contributing GPS traces and active edits and over what period of time. It appears to me that the time horizon of Waze is too short to have created a map database of considerable value.

Other Assets (intangibles)? No.

Waze has some uniquely capable people and assets, but, for me, they do not tip the scales at a billion dollars.

Is the whole worth more than the sum of the parts? No.

I just can’t get to the billion-dollar number no matter how I combine the basic facts. I have read the articles indicating that Facebook needs its own map base so it can customize it for mobile advertising, or that it needs its own map database in order to compete in the mobile location market. I suppose a company can convince itself of anything, and Facebook may have crossed the chasm based on these types of assumptions. If so, I think they are wandering in a labyrinth of strategic blunders.

Yes, they could wind up with their own map database, but I suspect that this purchase will, from day one, be a headache in terms of spatial data quality. Facebook will spend more money fixing and tuning the Waze database than if they had licensed a database from Nokia or TomTom, or from a collection of companies, as Apple has done. In turn, the adoption of their “mapping product” by the market might be significantly delayed.

The more serious issue is that dealing with the quality of the Waze database and integrating it with other Facebook applications will subtract cycles from efforts that are core to building a successful Facebook mobile business. In the end, Facebook will come down with a serious case of buyer’s remorse, as they will eventually ask the question, “Why wasn’t anyone else willing to pay a billion dollars for Waze?”

In a final check of the Waze site tonight I noticed that the Waze homepage (http://www.waze.com/) redirects to http://world.waze.com/?redirect=1 , which is a complete and absolute blank. Perhaps the deal is done. Or, it might simply be a map tribute to Lewis Carroll.

Best,

Mike


Posted in Apple, Facebook, Google maps, Local Search, Mapping, Nokia, TomTom, User Generated Content, Volunteered Geographic Information, Waze, crowdsourced map data, map compilation, map updating | 5 Comments »

Unintended Consequences – The Roles of Google and Crowdsourcing in GIS

March 18th, 2013 by admin

The following blog is a concise, non-illustrated version of a keynote address I gave at the North Carolina GIS conference last month in Raleigh, NC.

There is little doubt that Google has created an incredibly successful mapping product, but it is at this point that the law of unintended consequences may take hold and not only diminish the success of Google Maps, but also hinder mapping and GIS in the wider world.

Let’s start by looking at what I mean by “unintended consequences.” In simple terms, an unintended consequence is not a purposive action; it is an outcome. Outcomes can be positive, such as a windfall. Outcomes can be negative, such as a detriment. Or, results can be perverse, in which case the outcome is contrary to what was expected. My focus in this blog is on the negative outcomes, although some may typify them as a case of the glass half-full.

The romantic notion that cartographers wandered the world with charts and map tables so they could compile map data as they explored is the stuff of history. For countless decades map publishers have created map manuscripts by compiling data collected from sources considered authoritative, and it is this model that Google has adopted. From a practical perspective, it is impossible for any single company to map the entire world at a street and road level without the help of contributors from the public sector, the private sector and ordinary citizen scientists interested in maps, geography and transportation.

It is my belief that Google, due to the success of its search engine and the pursuit of its corporate mission “…to organize the world’s information and make it universally accessible and useful”, has been unusually successful in convincing authoritative sources around the world to allow Google to use their data in its mapping products. In some cases this has involved licensing or use agreements, and Google has advantaged itself by integrating data from sources that it considers the “best of breed” to enhance its products.

Most of these “trusted” sources are “official sources”, such as the USGS, the Bureau of the Census and other governmental agencies at all levels of administration from around the world. In areas where Google has been unable to reach agreement to use specific data, or in those locations where “trusted” data does not exist, it has relied on its own industrious endeavors to compile these data, although it has been helped tremendously by crowdsourcing.

It is clear to me that Google turned to licensing and crowdsourcing to remedy the unpalatable variations in spatial data quality in the map data supplied to it during the years when Google Maps was primarily based on data licensed from Tele Atlas (now TomTom) and Navteq (now Nokia). Google’s transition to being an able compiler of navigation-quality map databases appears to have been quite successful. However, I wonder if this success is not unlike a magnificent willow tree with a tremendous girth and abundant leaves on massive flowing branches, but slowly dying of decay from the inside.

Google’s move into GIS by providing the power of the GoogleBase as a GIS engine is an attractive notion to many organizations, and for good reason. However, people who are responsible for funding budgets in these organizations (such as legislators) are beginning to ask these overly simplified questions: “Why are we paying people to do this mapping stuff when Google is giving it away for free?” “Can’t we just use their data?” I am sure you are all thinking, “Nobody could be that shortsighted.” I guess you have not spent much time with politicians.

Recent events have led me to conclude that Google has now realized this very flaw in its approach to mapping. Did any of you think it was unusual that Google recently released two different strategic studies showing the economic benefits of geospatial data (see 1 at the end of this blog)? You know, Google is always releasing its strategic studies. Why, the last one I can remember was in …..hmmmm?

In a study commissioned by Google and carried out by the Boston Consulting Group, it was indicated that the U.S. Geospatial industry generated $73 billion in revenues in 2011, comprised 500,000 jobs and throughout the greater U.S. economy helped drive an additional $1.6 trillion in revenue and $1.4 trillion in cost savings during the same period.

A second study, by Oxera, was equally interesting and focused on the direct effects, positive externalities, consumer effects and wider economic effects, including the gross-value-added effect, of “Geo services.” One section of this report that caught my eye was a discussion (page 15) of Geo services as an intermediate good – goods that are not normally valuable in themselves, “…but help consumers engage in other activities.” When discussing the “economic characteristics of Geo,” the Oxera report indicates (page 5) that, “This question is relevant because it has implications for the rationale for public funding of certain parts of the value chain and for the market structure of other parts.”

Neither of the released reports (at least in the form they were published) mentions Google, its mapping business or how these studies should be viewed by the Google-ettes.

While Google may have had many reasons for funding these two reports, I think that the “law of unintended consequences” is rearing its head in Google land. If the public/governmental sources that provide data to Google through license can no longer afford to produce the data because their funding sources think that collecting and presenting map data is something the private sector (such as Google) can handle better, the data underpinnings of the geospatial world will start to collapse. Yes, I know that Google does not do what its licensors do with spatial data, but have you seen the decision tree of a politician who really understands the complexities of GIS and mapping – why they cost so much, take so long and cannot be shared across the enterprise?

OK – let’s turn to crowdsourcing. While Google did not invent crowdsourcing, it certainly knows how to use it to its advantage. Now that its users are willing to compile and correct the Google mapbase for free, how will anyone else in the business make money compiling data using a professional team of map data compilers? The economics weigh against it, and it may be a practice whose time has come and gone. The reasons for this are, in their entirety, more complex than I have described. However, without developing the argument further in this essay, I will simply skip ahead to my conclusion, which is that professional map database compilers are an endangered species. It is likely that their “retirement” will not be noticed – at least not until crowdsourcing falls out of vogue, as it will when people begin wondering why Google cannot afford to keep its own damn maps up-to-date.

As all of you know, maps are near and dear to my heart. The problem of unintended consequences, in regard to what Google and crowdsourcing mean for GIS and mapping, is nearly as worrisome to me as a planet-wide loss of electricity. I’m going to squirrel away a cache of paper maps, just in case. Laugh if you want, but when you need to buy one from me you will begin to understand the meaning of monopoly, as well as to really appreciate the concept of unintended consequences.

1. Links to both studies can be found in this article at Directions Magazine

http://apb.directionsmag.com/entry/google-shares-oxeras-report-on-impact-of-geospatial-services-on-the-wo/306916


Posted in Authority and mapping, Data Sources, Geospatial, Google, Google Map Maker, Google maps, Mapping, Mike Dobson, Navteq, Nokia, Tele Atlas, TeleAtlas, TomTom, crowdsourced map data, map compilation, map updating | 2 Comments »

Google Maps announces a 400 year advantage over Apple Maps

September 20th, 2012 by admin

UPDATE September 24, 2012: The Comment Cycle for the following entry is now closed. Thanks to everyone who has contributed.

I had a call from Marc Prioleau of Prioleau Advisors this morning and speaking with him prompted me to look into the uproar over Apple’s problems with its new mapping application. So, this column is Marc’s fault. Send any criticisms to him (just kidding). While you are at it, blame Duane Marble who sent me several articles on Apple’s mapping problems from sources around the world.

In my June blog on Apple and Mapping, I postulated that the company would find building a high-quality mapping application very difficult to accomplish. Among the points I made were these:

• However, it is not (mapping) San Francisco that will give Apple heartburn. Providing quality map coverage over the rest of the world is another matter completely.

• Currently Apple lacks the resources to provide the majority of geospatial and POI data required for its application.

• My overall view of the companies that it (Apple) has assembled to create its application is that they are, as a whole, rated “C-grade” suppliers.

• Apple seems to plan on using business listing data from Acxiom and Localeze (a division of Neustar), supplemented by reviews from Yelp. I suspect that Apple does not yet understand what a headache it will be to integrate the information from these three disparate sources.

• While Apple is not generating any new problems by trying to fuse business listings data, they have stumbled into a problem that suffers from different approaches to localization, lack of postal address standards, lack of location address standards and general incompetence in rationalizing data sources.

• Apple lacks the ability to mine vast amounts of local search data, as Google was able to do when it started its mapping project.

Unfortunately for Apple, all of these cautions appear to have come true. So much for the past.

In this blog, after setting the scene, I will suggest what Apple needs to do to remedy the problems of their mapping service.

Given the rage being shown by iOS 6 users, Apple failed to clear the bar that was in front of them. I have spent several hours poring over the news for examples of the types of failures and find nothing unexpected in the results. Apple does not have a core competency in mapping and has not yet assembled the sizable, capable team that they will eventually need if they are determined to produce their own mapping/navigation/local search application.

Perhaps the most egregious error is that Apple’s team relied on quality control by algorithm and not a process partially vetted by informed human analysis. You cannot read about the errors in Apple Maps without realizing that these maps were being visually examined and used for the first time by Apple’s customers and not by Apple’s QC teams. If Apple thought that the results were going to be any different than they are, I would be surprised. Of course, hubris is a powerful emotion.

If you go back over this blog and follow my recounting of the history of Google’s attempts at developing a quality mapping service, you will notice that they initially tried to automate the entire process and failed miserably, as has Apple. Google learned that you cannot take the human out of the equation. While the mathematics of mapping appear relatively straightforward, I can assure you that if you take the informed human observer who possesses local and cartographic knowledge out of the equation, you will produce exactly what Apple has produced – a failed system.

The issue plaguing Apple Maps is not mathematics or algorithms; it is data quality, and there can be little doubt about the types of errors that are plaguing the system. What is happening is that Apple’s users are measuring data quality. Users look for familiar places on maps and use them to orient themselves, as well as to test the goodness of the maps. They compare maps with reality to determine their location. They query local businesses to find local services. When these actions fail, the map has failed, and this is the source of Apple’s most significant problems. Apple’s maps are incomplete, illogical, positionally erroneous, out of date, and suffer from thematic inaccuracies.

Perhaps Apple is not aware that data quality is a topic that professional map makers and GIS professionals know a lot about. In more formal terms, the problems that Apple is facing are these:

Completeness – Features are absent, and some features that are included seem to have erroneous attributes and relationships. I suspect that as the reporting goes on, we will find that Apple has not only omissions in its data, but also errors of commission, where the same feature is represented more than once (usually due to duplication by multiple data providers).

Logical Consistency – the degree of adherence to logical rules of data structure, attribution and relationships. There are a number of sins included here, but the ones that appear most vexing to Apple are compliance with the rules of the conceptual schema and the correctness of the topological characteristics of a data set. An example would be having a store’s name, street number and street name correct, but mapping it in the wrong place (town).

Positional Accuracy – the closeness of a coordinate value to values accepted as being true.

Temporal Accuracy – particularly in respect to temporal validity – are the features that they map still in existence today?

Thematic Accuracy – particularly in respect to non-quantitative attribute correctness and classification correctness.

When you build your own mapping and POI databases from the ground up (so to speak), you attempt to set rules for your data structure that enforce the elements of data quality described above. When you assemble a mapping and POI database from suppliers who operate with markedly different data models, it is unwise to assume that simple measures of homogenization will remedy the problems of disparate data. Apple’s data team seems to have munged together data from a large set of sources and assumed that somehow they would magically “fit.” Sorry, but that often does not happen in the world of cartography. Poor Apple has no one to blame but itself.
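As one small illustration of the kind of cross-supplier check that catches errors of commission before customers do, consider flagging probable duplicate POIs when two suppliers’ records have similar names and nearby coordinates. This is only a sketch of the general idea, with invented records and arbitrary thresholds; it is not a description of Apple’s, or anyone else’s, production pipeline.

```python
import math
from difflib import SequenceMatcher

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def likely_duplicates(pois_a, pois_b, max_dist_m=75, min_name_sim=0.8):
    """Pair up POIs from two suppliers that are probably the same real-world place."""
    matches = []
    for a in pois_a:
        for b in pois_b:
            sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
            dist = haversine_m(a["lat"], a["lon"], b["lat"], b["lon"])
            if sim >= min_name_sim and dist <= max_dist_m:
                matches.append((a["name"], b["name"], round(dist, 1), round(sim, 2)))
    return matches

if __name__ == "__main__":
    # Invented records standing in for two suppliers' listings of the same block.
    supplier_a = [{"name": "Joe's Coffee", "lat": 37.7793, "lon": -122.4192}]
    supplier_b = [{"name": "Joes Coffee",  "lat": 37.7795, "lon": -122.4189},
                  {"name": "City Hardware", "lat": 37.7801, "lon": -122.4170}]
    for pair in likely_duplicates(supplier_a, supplier_b):
        print("possible duplicate:", pair)
```

A production system would of course do far more – address parsing, category matching, human review of borderline cases – but even this toy version makes the point that fusing suppliers’ data is a deliberate engineering and QC exercise, not something that happens “magically.”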

Recommendations

1. Unfortunately for Apple, they need to take a step back and re-engineer their approach to data fusion and mapping in general.

2. I suspect that the data and routing functionality they have from TomTom, while not the best, is simply not the source of their problems. Their problem is that they thought they did not have a problem. From my perspective, this is the mark of an organization that does not have the experience or know-how to manage a large-scale mapping project. Apple needs to hire experts in mapping – people who are experienced in mapping and understand the problems that can and do occur when compiling complex spatial databases designed for mapping, navigation and local search.

3. Apple does not have enough qualified people to fix this problem and needs to hire a considerable number of talented people who have the right credentials. They also need to develop a QA/QC team experienced in spatial data. They could establish a team in Bangalore and steal workers from Google, but if they want to win, they need to take a different approach, because this is where Google can be beaten.

4. Apple appears not to have the experience in management to control the outcome of their development efforts. They need to hire someone who knows mapping, management and how to build winning teams.

5. Apple needs to get active in crowdsourcing. They must find a way to harness local knowledge and invite their users to supply local information, or at least lead them to the local knowledge that is relevant. This could be accomplished by setting up a service similar to Google Map Maker. However, it could also be accomplished by buying TomTom, and using its MapShare service as part of the mapping application to improve the quality of data. I think something like Map Share would appeal to the Apple user community.

6. Speaking of acquisitions, Apple could buy one of a number of small companies that integrate mapping and search services into applications for use by telephone carriers. The best of these, Telmap, was snapped up by Intel earlier this year, but other companies might be able to do the job. Perhaps Telenav? Hey, here is an interesting idea – how about ALK, now being run by Barry Glick who founded MapQuest?

7. I suppose Apple will want to hire Bain or some other high power consulting group to solve this problem. That would be the biggest mistake they have made yet, but it is one that big business seems to make over and over. As an alternative, I suggest that Apple look to people who actually know something about these applications.

Conclusions

There is no really quick fix for Apple’s problems in this area, but this should not be news to anyone who is familiar with mapping and the large scale integration of data that has a spatial component.

Of course, there appears to be nowhere to go but up for Apple in mapping. I wish them the greatest of success and suggest that they review this blog for numerous topics that will be of assistance to them.

If you want to know more about map data quality, see ISO (International Organization for Standardization), Technical Committee 211. 2002. ISO 19113, Geographic Information – Quality principles. Geneva, Switzerland: ISO. Available online from http://www.isotc211.org/

And, I urge Apple to keep a sense of humor about these problems, as have some of its users. I had a great laugh at a comment about Apple’s mistaking a farm in Ireland for an airport. The comment was, “Not only did #Apple give us #iOS6… They also gave us a new airport off the Upper Kilmacud Road! Yay!”

Until next time.

UPDATE on September 24, 2012. I have closed the comments period for the Apple Maps Blog. Thanks to all who have contributed.

Mike


Posted in Apple, Authority and mapping, Data Sources, Geotargeting, Google Map Maker, Google maps, MapQuest, Mapping, Personal Navigation, TomTom, crowdsourced map data, map compilation, map updating | 106 Comments »

The Atlantic Magazine Reveals How Google Builds its Maps – Not.

September 19th, 2012 by admin

At last! We are close to delivering our final report to the US Census Bureau on their Master Address File and I now have time to devote to one of my favorite pastimes – writing blogs for Exploring Local. Hooray. What this means in consultant speak is that I am “on the beach” or between assignments, although truth be told, I am not looking very hard for anything to do for a while. I have my personal projects all laid out.

Over the last few weeks I have read a number of articles about maps and mapping that have renewed my interest in what’s going on in the profession. I guess that is something of a misstatement, since there are no longer enough people who actually are cartographers to make a profession, at least one that has any hope of future growth. However, not to worry. Alexis C. Madrigal, a noted madrigal singer – oops, I meant the senior editor at The Atlantic, where he oversees the Technology Channel – recently wrote a marketing piece, oops, I meant an “even-handed” review of the encyclopedia of all cartographic wisdom – Google Maps. His article “How Google Builds Its Maps – and What It Means for the Future of Everything” is a monumental tribute to revisionist history.

I suspect that The Atlantic magazine will soon be renamed “The Atlantic & Pacific”, since Googleland appears to be the heart of the cartographic universe. While reading the article I thought “Wow, he gets paid for writing this crap” and as I read further, I began to wonder “But who pays him?” The entire read resembled a poorly thought out advertorial.

I guess Apple’s entry into the mapping arms race has the big “Goog” upset and they decided to get ahead of the curve by bringing in a cub reporter, who knew little about mapping, to whom they could show why Apple doesn’t have a chance. Why Googs did not make these stunning revelations to a writer from a real tech magazine is an interesting question, but one to which we all know the answer. Madrigal’s article was enough to make me want to change my opinion of the problems that Apple must overcome to be a player in this venue. Well, let’s start down that road by focusing on the startling new truths that Google revealed to Mr. Madrigal about the world of mapping.

I know that it may be hard for some of you to realize that mapping was not discovered by Google. Last February I was examining a floor-mosaic map in Madaba, Jordan, dating from the 6th century AD, that was designed to show pilgrims the layout of the Holy Lands. I can assure you that it did not have Google’s copyright notice anywhere on it, and I can also assure you that it is not particularly old as maps go.

Dating from the 6th century AD, this mosaic map was designed to provide an overview of the Holy Lands to those on pilgrimages.

It may come as a further shock to you that Google did not invent map projections, including the one they use, nor did they invent tiling, symbolization, generalization, decluttering, zooming, panning, layers, levels, satellite imagery, crowd-sourcing (both active and passive), their uniquely named “Atlas tool”, or most everything else associated with Google Maps. Even Google’s Street View had its origins in robotic sensing systems developed to enhance computer vision research, although Google, with the help of Sebastian Thrun, was smart enough to figure out how it could give them a competitive advantage in compiling street maps.

Where to start on the Madrigal article? How about the byline, which reads “An exclusive look inside Ground Truth (no, Google did not invent this either), the secretive program to build the world’s best accurate map.” Phew, I was glad to learn that Google was not attempting to build the world’s worst accurate map. I had hoped that the Googie was going to attempt to build the world’s most accurate map, but I guess they just wanted the world to know that what they were building would be “bester” than whatever Apple could supply.

Did you notice the secret information that the former NASA rocket scientist mentioned in the article told Madrigal? It was a howler; as a matter of fact, it sounded like something I wrote in a blog when I was being snarky about Google. Anyway, here is the exclusive/secret info from the horse’s mouth, revealed exclusively to Madrigal of The Atlantic:

“So you want to make a map,” Weiss-Malik tells me as we sit down in front of a massive monitor. “There are a couple of steps. You acquire data through partners. You do a bunch of engineering on that data to get it into the right format and conflate it with other sources of data, and then you do a bunch of operations, which is what this tool is about, to hand massage the data. And out the other end pops something that is higher quality than the sum of its parts.”

Wow, was that informative. Before we go any further, I would like to note that Mr. Madrigal might have received a better education on Google Maps by reading this blog than by visiting Google, but then that would be shameless self-promotion. So, will you tell him instead?

For some reason the opening figure in my copy of the article is a photo of two people playing ping pong. That had me stumped for a while, as I could not figure out what it had to do with project “Ground Truth”. Well, it still has me stumped, but I am working on it. I thought we might get a photo of a person at a workstation with a map on a monitor and details of the environment at the Google Maps office in Mountain View, CA. Apparently ping-pong is more interesting and newsworthy and the great Googs was not about to reveal any detailed information about how they compile maps to Mr. Madrigal.

I was more than mildly surprised that Madrigal seemed not to understand that there is more to Google Maps, or any map for that matter, than meets the eye. How did he think that routing occurs? Did he really believe the “Men in Black” idea that there were tiny aliens inside Google servers supplying routes on demand as they were requested? Does he know that computerized routing has been available on a commercial basis since the early 1970s? Did he ever hear of Barry Glick, the founder of MapQuest, hawking online routing capabilities before Google was founded? Does he have any idea what NAVTEQ does with its highly instrumented vans and imaging systems? Has he ever looked at Bing Maps, or the hundreds of other services out there that compete with Google in the mapping sector? Put more simply, did the author of this article have the least little bit of inquisitiveness about what Google was telling him? My conclusion is a big, “Nope.”

I was also stunned to read Manic (okay, that’s supposed to be Manik) Gupta’s comment that the information in the offline, real world in which we live is not entirely online. When did this happen? Why is it allowed? Wow, this is beginning to sound like a science fiction thriller in which there is no distinction between offline and online. Maybe Google really is an earth-changing company, in more ways than we realize. Hopefully Tom Cruise will play the part of Gupta in the movie version of this thriller.

Gupta’s follow-up quote was even better – “Increasingly as we go about our lives, we are trying to bridge that gap between what we see in the real world and [the online world] and Maps really plays that part.” Hmmm. I had always thought that maps were a representation of the real world, not the original thing. Based on the Madrigal article, it appears that he thinks maps can and should serve as the real world. I guess Mr. Madrigal may not understand the real nature of project “Ground Truth”, or the use-warnings Google puts on its maps and routes. I don’t know about you, but I have heard that trusting ground truth is usually a better strategy than ignoring it and trusting its representation, whether that representation is online, printed, in a box on your dash, in your phone, hiding in the ether encoded in a radio wave, or packaged as a new innovation labeled “Ground Truth” created by Google (and thousands of people before them).

I smiled when I read that Madrigal was stunned to learn that humans were involved in Google’s map making process. Yes. Humans are apparently needed to remedy problems that software cannot seem to solve. Imagine, data compiled from various sources that does not quite fit. Is that possible? Hmmm. Did Google invent that too? And is using crowd-sourcing to mine knowledge another Google innovation? No, I don’t think so. Is there no end to Madrigal’s naiveté? Well, the answer to that also appears to be “no.”

I hope you noticed his comment that, “This is an operation that promotes perfectionism.” I, also, liked this one “The sheer amount of human effort that goes into Google Maps is just mind boggling.” Followed by this, “I came away convinced that the Geographic Data that Google has assembled is not likely to be matched by any other company.” Well, guys, apparently it’s time to give up on mapping. Google, according to Madrigal, appears to have thought every thought about mapping that could ever be thought. Well, maybe not.

I hope you noticed the section of the article with this comment: “The most telling moment for me came when we looked at a couple of the several thousand user reports of problems with Google Maps that come in every day.” A couple of thousand error reports every day? Is that like saying Google Maps has only 347 million known errors left to remedy? It seems that, just like the rest of us, Google will not be the first to achieve perfection in mapping. If you read my series about Google’s map correction program, you know more about this than Mr. Madrigal, so you should consider applying for his position at The Atlantic.

I wonder why there appeared to be only one workstation for Mr. Madrigal to observe in Mountain View. According to Madrigal’s article, hundreds of operators are required to map a country, and many of them are in Google’s Bangalore office. Hmmm. So much for local knowledge. In part, this remoteness of operators from the geography they are editing is why there are so many errors on Google Maps. In addition, maybe all those Google-driven innovations in mapping don’t quite help when the sources that Google harvests contain incorrect information to begin with. Adding erroneous information to the edit queue of an operator who must use other surrogate information to validate it can be a recipe for disaster, as Google has proven time and time again.

I do applaud Mr. Madrigal for realizing that Street View is an important distinction in the marketplace for mapping services. Whether Google is actually using Street View for all of the processes mentioned in the article is unclear to me. Didn’t it sound like something a VP would say when Brian McClendon indicated that “……We’re able to identify and make a semantic understanding of all the pixels we’ve acquired”? Wow, that’s great, but do you think they really do this? Someone should send McClendon some articles to read on image processing, as well as some older texts on information theory – it seems as if they are doing a lot of work they do not need to do. And how about the number of businesses, addresses and logos they have collected with Street View? If only they could create a comprehensive address database with this stuff, but they can’t, because of footprint limitations related to the deployment of their Street View assets. However, whether Street View provides a sustainable competitive advantage is something that Apple, Microsoft, NAVTEQ and TomTom will have to decide. It may be a competitive advantage today, but I can assure you that whether or not it is sustainable will not depend on Google’s wants.

So to Apple and all the apparently second level mapping companies – Don’t give up the map race quite yet. The fact that Google thinks you can’t catch them may be the best news you have had this year.

Finally, shame on Google for participating in a public relations effort masquerading as a report on technological innovation. While I have great respect for what Google has achieved with Google Maps, the interview behind the Madrigal article was not designed to reveal any details on Google’s technological innovations in mapping. Instead, it was an interview strategically planned to denigrate Apple’s mapping capabilities by implying that it could not compete with the great Googie. Revealing old news to someone who did not have a background in mapping, GIS or navigation is pandering and something I had not expected from Google. Just what is it about Apple’s mapping program that has them so scared? Hmmm. Something to think about.


Posted in Apple, Authority and mapping, Data Sources, Google, Google Map Maker, Google maps, Navteq, TomTom, User Generated Content, routing and navigation | 4 Comments »

Interview on GPSBites

July 31st, 2012 by admin

Hi everybody:

Just a brief note that there is an interview with yours truly on GPSBites. The interview is focused on seven questions related to LBS, GPS and mapping, plus a couple about my company TeleMapics and the type of work we do. If you have time, give it a read.

Thanks,

Mike


Posted in Geospatial, Mapping, Mike Dobson, Personal Navigation, Tele Atlas, TeleAtlas, TomTom, shameless self-promotion | 1 Comment »

Apple and Mapping?

June 13th, 2012 by admin

I have been quite amused by all of the hoopla concerning Apple’s entry into the world of mapping and navigation. As I read the accolades pouring in as a result of the announcement, I could not help but wonder, “Do any of these people know anything about mapping?” Hmmm.

I watched a video of the “map” presentation at Apple’s WWDC and was struck by the fact that the presenter used the term “beautiful” to describe the map display on at least five separate occasions. Since my roots are in cartography, I appreciate a well-designed display, but only when the data represented on the display are a fair representation of the real-world. In other words, the most significant problem in creating a navigation/mapping application is data quality.

Yesterday, I read an article on CNN titled “Apple’s Secret Weapon”, by John Brownlee. I thought it was an interesting and insightful view of what makes Apple great. Brownlee reasoned that it almost seems as if Steve Jobs and Apple created a time machine that allowed them to create products that are years ahead of their competitors. Brownlee hits the nail on the head when he indicates there is no “flux capacitor” at Apple, only the ability to actualize the, “…revolutionary, magical machines it dreams up.” Yes, the iPod, iPhone, iPad, retina displays, etc. do show the ability of Apple to actualize dreams and make them realities that appeal to millions of potential purchasers.

Well, I guess this is true, if one is willing to make an exception for the mapping app that Apple intends to launch as a feature of iOS 6. Apple’s demonstration of the application at the WWDC showed little innovation and a lot of copying. However, since this is a software service, rather than a physical product, maybe Apple’s vaunted reputation for product development does not apply. After all, this is the company that brought you “MobileMe”, a product that even the late Steve Jobs described as, “Not our finest moment.”

It is my opinion that Apple decided to produce a mapping/routing/local search service on the basis of branding, not on the basis of this being an area in which the company possesses, or could ever hope to wield, a significant competitive advantage. Apple realized that it was losing brand recognition and revenue by using Google for its mapping needs and decided to bring in some “caulk” to stop the leak. The weakness with this approach is that Apple likely has little insight on what makes a great mapping application, or an appreciation that the development of a mapping application will be unlike anything else it has ever attempted. While its legions of designers and artists may be able to make the app beautiful, it is data quality and not image quality that is the major differentiator in the mapping arms race that they have entered.

Unfortunately, Apple has limited expertise in mapping, and may not understand the problems it faces. Further, it is unlikely to be able to “actualize” a new standard for navigation or local search that will reshape the industry in a manner that reflects Apple’s leading edge capabilities in function and design of products intended for the consumer electronics space. For those of you who are doubters, did you see anything in the WWDC demo of their mapping application that you have not seen before or of which you were completely unaware?

It is important to remember that what we saw at the WWDC was an early stage development representing San Francisco as a base. I wonder how many people were asked to QC that map space before the demo? If I had a dollar for every time San Francisco was used for a map demo that I have personally witnessed, I would be a very rich man today. However, it is not San Francisco that will give Apple heartburn. Providing quality map coverage over the rest of the world is another matter completely.

Over the past three years, Apple has acquired several small companies that were focused on parts of the mapping equation (Placebase – GIS and database driven mapping, C3 – 3D imagery and mapping, Poly 9 – projection, web mapping). Note that these companies are not data companies. Currently Apple lacks the resources to provide the majority of geospatial and POI data required for its application. Traffic, however, will be based on the GPS paths recorded from iPhone users to build both historical and real-time models of traffic flows.
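To make the traffic point concrete, here is a minimal sketch, in Python with entirely invented field names, of how GPS probe paths from phones are typically aggregated into historical and real-time speed estimates per road segment. It illustrates the general technique, not Apple’s actual implementation.

```python
from collections import defaultdict
from statistics import median

# Each probe point: (segment_id, epoch_seconds, speed_kph), produced upstream
# by map-matching a phone's GPS trace onto road segments (not shown here).

def bucket(epoch_seconds, minutes=15):
    """Assign a timestamp to a 15-minute time-of-week bucket for the historical model."""
    seconds_per_week = 7 * 24 * 3600
    time_of_week = epoch_seconds % seconds_per_week
    return int(time_of_week // (minutes * 60))

def historical_speeds(probes):
    """Median observed speed per (segment, time-of-week bucket): the historical model."""
    samples = defaultdict(list)
    for segment_id, ts, speed in probes:
        samples[(segment_id, bucket(ts))].append(speed)
    return {key: median(vals) for key, vals in samples.items()}

def realtime_speed(probes, segment_id, now, window_s=300):
    """Median speed on one segment over the last five minutes: the real-time estimate."""
    recent = [s for seg, ts, s in probes if seg == segment_id and now - ts <= window_s]
    return median(recent) if recent else None
```

The interesting engineering, of course, is in the map matching and in filtering out stopped vehicles and bad fixes, which the sketch glosses over.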

In order to develop its mapping application, Apple has scoured the world for content that would allow it to develop a comprehensive, up-to-date map database with the coverage required to provide services to its worldwide markets. My overall view of the companies that it has assembled to create its application is that they are, rated as a whole, “C-grade” suppliers. I have focused my comments on the two categories of suppliers described below, but note that their imagery and parcel data suppliers are of “A” quality.

Navigable Map Database Suppliers

The data supplier of most importance to Apple is TomTom, which provides the navigation database built by Tele Atlas, a company TomTom acquired in 2008. It is my sense that Tele Atlas has not prospered under TomTom ownership. TomTom’s fortunes declined as the market for PNDs unexpectedly, at least to TomTom, dropped shortly after the acquisition. Besides limiting the company’s expenditures on improving the quality and coverage of TomTom’s data, the drop in the number of PNDs sold decreased the update data available to Tele Atlas for map compilation purposes from TomTom’s excellent Map Share product. Put another way, this is the company Google dumped because it was unhappy with the quality of the data delivered.

While TomTom’s Map Share held the promise of revolutionizing the navigation map industry, its progress has not met that promise. Tele Atlas has lost many of its key employees and it is my impression that its data quality has declined since 2008. I question the use of Tele Atlas data as the backbone for the Apple mapping service. It may be that Apple felt that TomTom was the only viable alternative, since it had already ruled out the use of Google and Navteq is tied up with Nokia, although I suspect that association may soon change. Apple may learn the hard way that choosing data suppliers based on brand strategy and not data quality does not result in the best possible solution.

In coverage areas where TomTom does not have the appropriate data, it appears that Apple will turn to other suppliers such as DMTI, a company that provides relatively high-quality data for Canada, or Map Data Sciences, a company providing quality data for Australia and New Zealand. Unfortunately, the other map data suppliers involved, in my opinion, do not meet these same standards and I would expect Apple’s map data for much of the rest of the world to be lacking in detail, coverage and currency.

I understand that Apple is planning to use Waze and perhaps OSM where appropriate (appropriate in this case likely means where TomTom does not have data). Those of you who have read other items in this blog know that I am a proponent of hybrid-crowdsourcing that blends traditional compilation techniques with both active and passive crowdsourcing. However, Apple does not have the assets to advantage itself in this area and must rely instead on importing crowdsourced data that may not meet its standards. Time will tell, but a major issue that Apple must address is related to how the company works with its suppliers to update areas where users have noted errors.

I suspect TomTom will be responsive to making changes, as it needs the business. As most of you know, there is no organization behind crowd-sourced systems that can guarantee that a map error will be researched, recompiled and pushed to live in a specific amount of time. Of course, one of the things Apple has not revealed is how its database correction procedures will be implemented. Passing vectors to be rendered on the user device may portend a “live” mapping database behind the scenes at Apple Central, but as of now, this is conjecture on my part.
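For readers unfamiliar with the distinction, here is a rough sketch of why passing vectors to the device can imply a “live” database on the server: the client requests raw features for its viewport and styles them locally, so a correction committed on the server simply shows up on the next fetch, with no raster tiles to re-render. The endpoint and field names below are invented for illustration and are not Apple’s protocol.

```python
import json
import urllib.request

STYLE = {"motorway": ("red", 4), "primary": ("orange", 3)}  # toy style table

def fetch_vector_tile(z, x, y, base_url="https://maps.example.com/tiles"):
    """Request raw vector features (GeoJSON here) for one tile from a hypothetical server."""
    with urllib.request.urlopen(f"{base_url}/{z}/{x}/{y}.json") as resp:
        return json.load(resp)

def style_features(tile):
    """The device styles the geometry itself; an edit committed in the server's
    database simply arrives as different coordinates on the next fetch."""
    styled = []
    for feature in tile.get("features", []):
        coords = feature["geometry"]["coordinates"]
        road_class = feature["properties"].get("class", "unknown")
        styled.append((coords, STYLE.get(road_class, ("grey", 1))))
    return styled
```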

Business Listing Suppliers

Apple seems to plan on using business listing data from Acxiom and Localeze (a division of Neustar), supplemented by reviews from Yelp. I suspect that Apple does not yet understand what a headache it will be to integrate the information from these three disparate sources. Hopefully they will need to employ the readers of this blog to solve this problem, because it is one that can destroy the efficacy of their business strategy for mapping.

While Apple is not generating any new problems by trying to fuse business listings data, they have stumbled into a problem that suffers from different approaches to localization, lack of postal address standards, lack of location address standards and general incompetence in rationalizing data sources. But, hey, this is one area where having billions in the bank might help, at least it might help if you had some idea of what you were doing. As you may have gleaned from the tone of this blog I am not sure Apple understands the mess it is creating, at least at this stage of the development.
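To give a feel for why fusing these listings is hard, here is a minimal sketch, assuming made-up record layouts for the three feeds, of the normalization and fuzzy matching such a pipeline needs before it can even begin. Real systems also have to handle localization, geocoding, category reconciliation and conflicting phone numbers.

```python
import re
from difflib import SequenceMatcher

# Invented example records; the real Acxiom, Localeze and Yelp feeds differ in
# schema, address conventions and completeness, which is exactly the problem.
acxiom   = {"name": "Joe's Pizza Inc.", "addr": "100 Main Street, Suite 4"}
localeze = {"name": "JOES PIZZA",       "addr": "100 Main St Ste 4"}
yelp     = {"name": "Joe's Pizza",      "addr": "100 Main St."}

ABBREV = {"street": "st", "suite": "ste", "avenue": "ave", "boulevard": "blvd"}

def normalize(text):
    """Lower-case, strip punctuation and corporate suffixes, normalize abbreviations."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    words = [ABBREV.get(w, w) for w in text.split() if w not in {"inc", "llc", "corp"}]
    return " ".join(words)

def same_business(a, b, threshold=0.85):
    """Crude match: average of normalized name similarity and address similarity."""
    name_sim = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    addr_sim = SequenceMatcher(None, normalize(a["addr"]), normalize(b["addr"])).ratio()
    return (name_sim + addr_sim) / 2 >= threshold

print(same_business(acxiom, localeze), same_business(acxiom, yelp))
```

Even this toy version shows the trade-off: set the threshold too high and you get duplicate listings; set it too low and two different pizzerias on the same block become one.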

Futures

Mike Blumenthal, author of a popular blog on Google Places & Local Search (http://blumenthals.com/blog/), recently asked me if I thought that Apple was in the same place as Google in 2007 when it was using Tele Atlas data. Mike’s interest was in pondering whether Apple, like Google, might be on the road to developing a navigable map database.

This is an interesting question. I think Apple has more problems to solve, and less ability to solve them, than Google had with respect to creating a navigable map database. Among my concerns are these:

1) Apple lacks the ability to mine vast amounts of local search data, as Google was able to do when it started its mapping project.
2) Apple does not have someone like Sebastian Thrun, the world-famous robotics expert (and Stanford professor) who developed Street View and Google’s fleet of autonomous vehicles that now gather map data.
3) Apple does not currently have a program like Google Map Maker that has been a valuable source of local data for Google. Using OSM as a substitute may cause Apple major headaches. (Speaking of headaches, I get one every time I try to read the interchanges in the Legal-talk Digest that discusses OSM licensing issues.)
4) Google developed mapping as another way to forward integrate into advertising. Google is in the mapping business only because it advantages its advertising business. I do not see the correlate at Apple, since Apple’s ability to penetrate the mobile advertising market appears limited, at least at present.

Summary

Apple has a long road ahead of it in the company’s attempt to create a competitive navigation system aimed at mobile devices. However, one measure of the greatness of companies is how well they respond to significant challenges. Perhaps Apple will hurdle the bar in front of it and set a new standard for maps and navigation. After all, someone should.


Posted in Apple, Authority and mapping, Google, Google Map Maker, Google maps, Local Search, Mapping, Microsoft, Mike Dobson, Navteq, Nokia, OSM, Tele Atlas, TomTom, Volunteered Geographic Information, crowdsourced map data | 8 Comments »

Contrary to Popular Opinion

June 10th, 2012 by admin

Nope, I have not fallen asleep. Nor have I retired. I have been busy at work with Steve Guptill, John Jensen and Dave Cowen (luminaries in the world of geography) on a series of projects for the Geography Division of the U.S. Census. In addition to this joint effort, I have been busy with my TeleMapics clients on projects that I cannot write about in this blog. All this work has made Mike a dull boy.

I did finally get to Egypt and Jordan in February and will soon add sections on each country to my ThereArePlaces travel website. Canon should hire me, as I took over 8,000 pictures with their equipment during my three-week sojourn.

However, the reason I am surfacing here briefly is to let you know that I have a chapter coming out in a book on Volunteered Geographic Information, edited by Dan Sui, Sarah Elwood and Mike Goodchild (more luminaries from the world of geography). My chapter is titled, “VGI as a compilation tool for navigation map databases.” You can find information about the book here, but be warned that it carries a $180 price tag.

Another factor that caused me to stop writing Exploring Local is that I could not find anything interesting happening in the world of mapping, Local Search and LBS. However, there is now an enormous amount of activity going on behind the screens of the Great Lord OZ and I may just be prompted to comment on some of it.

I hope all of you are well and prospering.

Mike


Posted in Authority and mapping, User Generated Content, Volunteered Geographic Information, crowdsourced map data, map compilation, routing and navigation | 1 Comment »

Google, Navteq and Map Compilation

July 6th, 2011 by MDob

And Then There Was…..One?

For this blog I had intended to write an in-depth analysis of the efforts of Google and Navteq related to the map compilation strategies they use to produce navigation-quality map databases. However, I have decided to focus on the major differences in their approaches, namely field observation and the use of crowdsourcing, since these should be the deciding factors in determining which company is able to produce maps with the desired coverage, currency and accuracy. Unfortunately, there are a couple of additional factors that may have more bearing on the outcome of this competition than any of the technical issues. Of course, I have learned something from journalists and novelists over the years and will conclude this blog by integrating these mitigating factors into my thoughts on which company will dominate the world of maps, mapping and, perhaps, spatial data.

Fans of OSM might be disappointed to learn that I do not consider OSM a contender for this crown, although I do think the organization has a bright future, but perhaps one that is not as map-centric as it is today. Fans of TomTom will wonder why I have not included Tele Atlas as a participant in the “Map Champion of the World” competition. Well, it is my opinion that TomTom has all of the components to be a great competitor, but lacks the financial ability to implement its map compilation strategy in a comprehensive, robust manner.

Google and Navteq

Those of you who have paged through the PowerPoint in my last blog will have noted that I am convinced that the winners in the world of map compilation will be those who wield a hybrid approach to the subject. The hybrid compilation approach that I envision melds 1) traditional compilation techniques (e.g. field work/field observation, data gathering/data mining, use of imagery, conflation, data editing, data QC/QA) as practiced by staff with a professional training in mapping/GIS, or by staff who have received training in map compilation techniques with 2) crowdsourced gathering of map data. Make no mistake, crowdsourced map data is a required ingredient for success, just as much as is the use of established map compilation techniques.
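To make the idea of a hybrid pipeline concrete, here is a toy sketch in Python, with invented record shapes, of one way crowdsourced reports might be triaged into the traditional compilation workflow: changes corroborated by several independent users go to automated acceptance pending QC, while singletons go to a trained editor. This is a simplification for illustration, not anyone’s production system.

```python
from collections import Counter

def triage(crowd_reports, min_corroboration=3):
    """Split crowdsourced change reports into auto-accept candidates and an editor queue.

    Reports are invented dicts: {"segment_id": ..., "change": ..., "source": ...}.
    Corroborated changes (several independent users describing the same thing) move
    toward automated acceptance pending QC; singletons go to an editor trained in
    the compilation rules, mirroring traditional sourcing, editing and QA steps.
    """
    tally = Counter((r["segment_id"], r["change"]) for r in crowd_reports)
    auto_accept, editor_queue = [], []
    for (segment_id, change), count in tally.items():
        target = auto_accept if count >= min_corroboration else editor_queue
        target.append({"segment_id": segment_id, "change": change, "reports": count})
    return auto_accept, editor_queue
```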

How do Google and Navteq differ on the two factors of greatest importance?

The field work advantage goes to Navteq.

Field observation can be structured in several ways, but most often it is directed either in a top-down, conceptually driven manner or in a bottom-up, data-driven manner. My belief is that the best map compilation programs must mix a spoonful of each approach to create the winning elixir. In other words, map compilers need to have a conceptually driven database update program that reflects their best guess about areas covered by their map base that are of strategic importance to leading customers, areas of unusually fast growth/economic development, or areas of significant change. In addition, most “players” in the map compilation business have an established program to “refresh” all coverages in their database on a cyclical basis, based on the notion that data change over time, that new sources evolve and that all areas need to be reviewed for changes on a known cycle (although these cycles may differ by geography).
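A back-of-the-envelope version of that top-down prioritization, under assumed weights and invented area attributes, might look like the sketch below: score each coverage area on strategic importance, growth and observed change, then fold in how overdue it is for its cyclical refresh. The formula is mine for illustration, not anyone’s published scheme.

```python
def refresh_priority(area, cycle_years=2.0,
                     w_strategic=0.4, w_growth=0.3, w_change=0.2, w_overdue=0.1):
    """Blend conceptually driven signals with the cyclical refresh schedule.

    `area` is an invented dict, e.g.:
    {"strategic": 0.9, "growth": 0.4, "change": 0.7, "years_since_refresh": 3.1}
    All weights are assumptions chosen for illustration.
    """
    overdue = min(area["years_since_refresh"] / cycle_years, 2.0) / 2.0  # cap at 1.0
    return (w_strategic * area["strategic"] + w_growth * area["growth"]
            + w_change * area["change"] + w_overdue * overdue)

areas = [
    {"name": "Metro A", "strategic": 0.9, "growth": 0.4, "change": 0.7, "years_since_refresh": 1.0},
    {"name": "Rural B", "strategic": 0.2, "growth": 0.1, "change": 0.1, "years_since_refresh": 3.5},
]
for a in sorted(areas, key=refresh_priority, reverse=True):
    print(a["name"], round(refresh_priority(a), 2))
```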

Conversely, data compiled in a previous field canvass can be riddled with errors (either errors in the original compilation or data that have changed since the last compilation). Most often these errors are discovered by map users or the customers of the map database provider, and this is an example of data-driven compilation. If the area (such as the boondoggle on I-195 in Providence, Rhode Island) is important enough to customers and users, a field team will usually be deployed to unravel the ground truth when the data cannot be collected and corrected through the use of surrogate data.

I think it is an underappreciated fact that having customers with a vested interest in the accuracy of a company’s map database is an important part of the success of map compilation efforts. Unsatisfied customers have leverage in the map compilation process, as they are extremely unlikely to accept blame from their users for a map error that is not their fault. They can and often do wave their contractual agreement at the map compilation company to help the data providers understand the “need” to update maps in areas the customer feels contain particularly egregious errors.

While “most” companies listen to their customers and the users of their data, customers, especially ones who contribute significantly to revenue streams, command attention. The more important the customer, say BMW, the likelier it is that the offending change will be corrected sooner rather than later; and if there is an offending area with numerous compilation errors, the more likely it is that a field team will be deployed, or tactical field representatives contacted and deployed, to determine the actual conditions on the ground.

Smaller customers command less attention, and mere users of products, like you and me, are often even further down the “sensitivity” chain. The reason that I make this distinction is that Navteq has a number of customer accounts that are extremely important to it and that can snap the compilation whip when they are unhappy with the current state of the Navteq database. Google, on the other hand, while having customers, only indirectly provides support to them (e.g. by the location of their business as a POI or as a map of the location in a Google “local search”). Google’s map compilation system is really a data-driven system led by users rather than customers. In other words, the impetus for Google to make changes to its maps is often public embarrassment about map gaffes highlighted by users, rather than the direct requirements of customers who are deploying a map database in an attempt to solve a specific problem such as in-car navigation.

I think this distinction is important because it is likely that many map errors never get communicated to Google by its users. For instance, how many times have you corrected a Google Map? Of the errors that do get communicated to Google, some users attempt to use Google Map Maker to correct those errors, and the corrections are then vetted by Google’s crack editing team based in India, who, of course, have a complete lack of local knowledge on which to evaluate these proposed changes. The Google error reporting process is one that is unlikely to generate a field response. Yes, the Google Street View vehicles do operate on a top-down deployment, but it is unlikely that they are deployed as a data driven response mechanism.

While electronic and image sensing activities are conducted by both Google and Navteq, the Navteq field teams and contacts are prepped with specific objectives and provided with task lists before they enter the field, and human observation of spatial characteristics is often a significant part of their activity. If Google cannot sense these data, or find them through data mining or imagery classification, it is unlikely that it will ever discover the richness of data that Navteq operators are able to observe in the field. In essence, if Google cannot find the solution using an algorithm, it needs to be found through crowdsourcing or not at all.

We could spend more time discussing why Navteq’s field operations return better data accuracy than those of Google, but that would be kicking a dead horse – something for which I am known, but do not feel like doing today.

The crowdsourcing advantage goes to Google.

Oh stop! Navteq can tell me a million times how much crowdsourced data it now receives and that still will not convince me of the benefits that this is bringing the company.

There are two types of crowdsourced data. Active data is contributed by a user when they find an error or omission on the map that they are willing to fix, or at least to contribute information on what they observed on the ground, as opposed to what they saw in the mapped database. Passive crowdsourcing is the use of GPS signals from PNDs, car navigation units, smartphones or other devices equipped with GPS or capable of tracking other RF (such as Wi-Fi). While a significant amount of data can be extracted from paths, it is mostly related to geometry, position, data flows (for traffic analysis) and the like. Passive data does not provide attributes such as names, addresses, zip codes, contact numbers, roadside furniture (see, I told you once that you needed to read that GDF manual), and other important information. The vast majority of the crowdsourced data received by Navteq and incorporated into its databases is passive. Navteq has very limited inputs of active crowdsourced data and gets very little benefit from the real-world knowledge held by its users and the users of its customers’ products and services.
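The contrast is easier to see in data terms. Below is a minimal sketch with invented record shapes: a passive probe trace yields positions, times and derived speeds, from which geometry and flow can be inferred, while an active report carries the attribute content (names, error descriptions) that no GPS trace can supply.

```python
from math import radians, sin, cos, asin, sqrt

# Passive: just timestamped positions from a device. No names, no addresses.
passive_trace = [
    (1625000000, 41.878, -87.636),
    (1625000030, 41.879, -87.634),
    (1625000060, 41.880, -87.632),
]

# Active: a user telling you what the map got wrong, in words a trace cannot provide.
active_report = {
    "lat": 41.879, "lon": -87.634,
    "error_type": "wrong_attribute",
    "observed": "W Adams St is signed as one-way westbound",
}

def haversine_m(p1, p2):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def speeds_kph(trace):
    """All a passive trace really gives you: geometry and derived speed/flow."""
    out = []
    for (t1, *p1), (t2, *p2) in zip(trace, trace[1:]):
        out.append(haversine_m(p1, p2) / (t2 - t1) * 3.6)
    return out

print(speeds_kph(passive_trace))      # geometry-derived information
print(active_report["observed"])      # attribute information only a human supplies
```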

Google, on the other hand, receives significant amounts of both active and passive crowdsourced data. Its active crowdsourcing through map corrections and Google Map Maker contributions provides the company an enormous benefit, although it has not yet managed to create a system that provides it with all the benefits that active crowdsourcing can supply. However, my purpose here is not to expound on a better system, but merely to point out that the advantage in the use of crowdsourcing clearly goes to Google Maps, while Navteq’s efforts in active crowdsourcing are too limited to provide any significant benefit, other than that these data may be used as change-detection beacons to help target its field activities.

Crowdsourcing operations are relatively inexpensive to create and maintain. The primary factor is whether your map can be distributed to enough users to bring in a beneficial number of map corrections through active crowdsourcing. Google’s distribution channel is massive in terms of size and reach, while Navteq’s distribution channel is modest, being limited to its Map Reporter and similar, relatively unknown resources. Yes, Navteq’s customers can and do refer their users to Map Reporter or provide other access for contributing error reports, but, in total, these inputs are insignificant.

So Why Are These Things Important?

Field operations are extremely expensive. While the data quality edge that Navteq now maintains over Google and everyone else is a competitive advantage, it is unclear to me that Navteq has the financial resources to sustain this expensive advantage.

If you remember back to 2008, one of the reasons that Nokia acquired Navteq was that Nokia was contributing mightily to the Navteq revenue stream and was trying to negotiate lower rates, but Navteq would not budge. The negotiation then took an unexpected turn and Nokia told Navteq, in effect, “Your data is too expensive, so we’re going to buy the company.”

Guess what happened next. Yep, those post-acquisition intercompany transfer prices produced the pricing concessions that Navteq would not offer in the license negotiations. Guess what? Yes, Navteq’s revenue stream has suffered because of this process.

Nokia’s latest innovation is that it has decided not to run Navteq like a for-profit, stand-alone business, but to integrate it into some ill-conceived Nokia business unit called “Location and Commerce” (LAC – now they just need to add knowledge and they can call the business LACK) under Michael Halbherr. Larry Kaplan, who as CEO has managed Navteq for the last couple of years and understood the complexities of the business and its business model, will depart the company at the end of the year. Michael Halbherr apparently has limited experience in building map databases or in managing map database companies, but heck, he has familiarity with selling products that use the data from his stint as CEO of gate5 AG, another of Nokia’s “successful” acquisitions. Good luck, Michael, and please try not to kill off the only substantive alternative to Google Maps.

Hmm. I guess that means that Navteq better get used to hearing “Your field operations cost too much and you need to reduce expenses because the Location and Commerce business isn’t producing any revenue other than the revenue you are generating. Of course, your revenue is what is funding our amazing, Finnish-style, sauna-driven management structure and market-mismanagement structure. And, by the way, why aren’t you using more of that low-cost active crowdsourcing as a replacement for your field operations?” (If this question is asked, those of you at Navteq should take a quick peek at the slideshow mentioned in my last blog.)

You may remember that in 2008 I told you that Nokia wanted to become an advertising company. This statement was based on my assumption that the only way to make money with maps and phones would be to sell advertisements to people who are searching for local attractions. I concluded then that Nokia did not have the wherewithal to make this transition, and I think that opinion is still valid. Nokia’s CEO Stephen Elop indicated that “Focusing on location and commerce is a natural next step in Nokia’s Services journey.” I guess somebody must be familiar with the amazing successes of “Nokia’s Services Journey,” but I am having a hard time coming up with any service-related success that the company has achieved.

In my last blog, I mentioned that the SS Navteq might be set afloat, based on rumors that I had heard from contacts at several conferences held over the last few weeks. We all knew that something unpopular was going on at Navteq, but my guess was made before I became aware of the Nokia reorganization of Navteq. As a consequence, a change in ownership might not be in the immediate future. On the other hand, I suspect that within the next two years Nokia itself will be sold, or will decide that it is unprepared to successfully compete in the location and commerce business and sell that business to someone, say, like NavInfo.

The real problem here is that Nokia is slowly beating the competitiveness out of Navteq and its staff. In addition, I suspect that Google will eventually find the right strategy in crowdsourcing and data mining and begin to create data that can actually be used for navigation, as opposed to routing over a table of points, as it does today. When this happens, Google will be able to compete with Navteq across a wide variety of markets. Navteq, of course, could maintain, or even extend, its current lead over Google, but it appears that malaise has gripped the organization and it may be that the company’s workforce no longer believes that it can maintain its market-leading position. From a practical point of view, I believe that sentiment to be untrue, but I can understand why the Navteq workforce has doubts about the future.

Let’s see how things have worked out in the world of navigation map databases. TomTom acquired and then killed Tele Atlas through mismanagement brought on by financial difficulties. Nokia acquired and is killing Navteq through general incompetence and financial difficulties.

Who will be left in the world of mapping? Google, that’s who. So Microsoft, if you want to have mapping and routing in the future, either buy Nokia or adapt OSM and do what you can with it. Of course, it might just be cheaper to buy MapQuest, since they already seem to know what to do with OSM.

Happy trails – and Navteq, I’m hoping you will not abandon your market leading position without a fight, although that might involve fighting Google and Nokia.

Now, it’s time for me to fight the final boss in Red Faction Armageddon. Hope I survive.

Next time, unless someone beats me on the head, I plan to explore some new topics. Stay tuned.



Posted in Authority and mapping, Google, Google Map Maker, Google maps, Mapping, Microsoft, Navteq, Nokia, Nokia Ovi Maps, OSM, TeleAtlas, TomTom, User Generated Content, Volunteered Geographic Information, openstreetmap | 5 Comments »
