Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

More On Google’s User Generated Content Tower Of Power

January 28th, 2010 by MDob

As I noted last time, User Generated Content could be the data gathering tool that lets Google surpass NAVTEQ and TeleAtlas in quality of data and spatial coverage. The potential “fly in the ointment” is this: how “good” does the map data that Google is collecting need to be? This “fitness for use” question is difficult for an outsider to answer, but we can make some assumptions. Presumably, Google considers the data in the Google-Mapbase to be fit for mapping, routing, navigation and route guidance. If not, why would it have dumped TeleAtlas?

On the other hand, the data in the current version of the Google-Mapbase appears to me to be of lesser quality than that provided by TeleAtlas and a wider gap may exist between Google and NAVTEQ in terms of map accuracy, especially in the currentness of map attributes. How should we think about this issue?

Perhaps this conundrum is an example of the situation that other pundits are calling “good enough”: their belief that the navigation market may be driven to produce map databases of less accuracy than the “high precision map databases” that will be needed to support Advanced Driver Assistance Systems (ADAS) targeted at driver safety and the advanced Energy Management (EM) systems targeted at efficient use of the power/drive train in vehicles.

I guess this recently exposed “good enough” argument means that Google got rid of the data from its supplier of navigable map databases (TeleAtlas) so they could collect and publish inferior data. You know, more people should read this blog.

Google, by its own admission, could not find a way to get its supplier (TeleAtlas) to actively work to enhance the quality of its map database and resolve the inadequacies that Google had been complaining about over a lengthy period. Although Google parted ways with NAVTEQ for strategic reasons after Nokia acquired NAVTEQ, it was quite clear that Google was unimpressed by the quality of NAVTEQ’s data during the period it was a licensee of NAVTEQ.

Clearly, Google took a run at creating a navigable map database in order to improve the accuracy of their maps, navigation and route guidance capabilities. Does anyone honestly think that the company will not industriously endeavor to enhance the Google-Mapbase? Really!

Another group of pundits, now that they have had an opportunity to really examine the Google Maps Navigation application (a beta), is claiming that it is “behind” on several features and that Google will need to update its application to be competitive with features offered by other providers of navigation services. Give me a break. Does anyone honestly think that Google is going to stop its cycle of continuous improvement? Google’s application will improve, and Google will continue to deliver innovative products and concepts as part of programs designed to enhance its ability to deliver targeted advertising to its customers wherever they may be, on whatever device they may be using, even when they switch between devices.

It is my opinion that the “good enough” argument and the “inferior application” argument both reflect a lack of understanding of the potential revolution that Google is attempting in the collection of map data for navigation quality databases. Google’s current applications may not have all of the features of PNDs or even other navigation systems on phones. The reason they lack these features is that Google does not yet have the attribute data that would allow them to provide posted speed limits, avoid toll routes, take scenic routes, or other features that are data dependent. So, the really interesting question is “Does Google have the right approach to maintaining a navigation quality map database?” In essence, “Can Google overcome the 25 year head start enjoyed by NAVTEQ and TeleAtlas?” Of course it can.

Last time we looked at my proposed model of Google’s data revision activities; now we will discuss the model in more detail.

I noted that User Generated Content (UGC) will become the “quarterback” of Google’s data collection efforts. My belief is that UGC, in the form of customer complaints, map corrections, map additions, business listing updates, probe data, Google Map Maker and Street View Advertisements, will help focus Google on the weaknesses of their map database and prescribe where Google needs either to mine its Street View data or send its Street View vehicles to gather additional data. Alternatively, Google could use UGC as a diagnostic indicating where it needs to search for and, then, conflate better attribute data than it used in its original pass through a geographic area.
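To make the diagnostic idea concrete, here is a minimal sketch of how reported corrections might be bucketed and ranked to decide where to send collection assets. Everything in it (the tile size, the data shapes, the function name) is invented for illustration; I have no knowledge of how Google actually does this.

```python
from collections import Counter

def rank_areas_for_resurvey(reports, tile_size=0.05):
    """Bucket user-submitted corrections into coarse lat/lon tiles and
    rank the tiles by report volume. The busiest tiles are candidates
    for a Street View re-drive or a fresh conflation pass."""
    counts = Counter()
    for lat, lon in reports:
        tile = (int(lat // tile_size), int(lon // tile_size))
        counts[tile] += 1
    return counts.most_common()  # highest-volume tiles first

# Example with three invented correction reports, given as (lat, lon):
reports = [(40.7128, -74.0060), (40.7130, -74.0055), (34.0522, -118.2437)]
for tile, n in rank_areas_for_resurvey(reports):
    print(tile, n)
```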

The potential use of “billboard” space in Street View images appears in a patent issued to Google recently and could benefit Google by enhancing its address database and business listing database through improved communication with business owners and neighborhood interest groups likely to use this advertising service. (Some may not know this, but Google AdWords has a branch that makes online advertising available to charitable institutions in an effort to help these groups publicize their charities. Imagine all of the good local information Google might be able to learn by assisting charities with their “local” advertising activities using Street View as inventory.)

Mining the “collective intelligence of Google users” is a longer term play for the company, but one that will reap substantial benefits when the Google Maps Navigation application succeeds in convincing users of phones equipped with the application to use Google rather than other navigation alternatives. If the price (free) were not enough of an incentive, users will likely find that Google’s database is more up-to-date and more accurate than the databases provided by others in the marketplace. This argument takes us back to the notion that Google doesn’t have better algorithms; it just has more data to mine, and more data mining means improved navigation data over time.

UGC, however, is generally unstructured in a spatial sense, meaning the user controls the changes and selects the area where change data is reported. In essence, when UGC is an “active” process, Google responds to the changes reported but has no ability to direct the geographic tendencies of its map error reporters. This contrasts with the field efforts of several mapping companies, in which the company actively directs its field teams to canvass geographic areas based on reports of errors and, also, on the basis of a comprehensive collection process that attempts to re-canvass all map coverage over time.

It is here that we need to remember that UGC works best when it is governed by the law of large numbers – when you reach the tipping point, all bugs become shallow or, with respect to the present topic, all map corrections become shallow. The most important advantage that will accrue to Google in updating its Google-Mapbase is its future use of probe data, based on the potential input of the predicted number of users of a free navigation service available on the Android platform.

Probe data (following the bread-crumb trail of GPS signals registered by and locating your phone in space) can best be thought of as a change detection generator. When roads are closed for construction, probe data will immediately reflect the situation. When new roads are opened, probe data will immediately reflect this change by providing traces of movement in areas previously empty of such traces. If a traffic artery has been converted to one-way, probe data will immediately reveal the absence of the previously normal two-directional traffic. While probe data is not a panacea for map revision practices, it is a mechanism that will take much of the guesswork out of where Google should deploy field collection assets, like Street View, to create improved map data.
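For readers who like to see the mechanics, here is a minimal sketch of this kind of change detection: compare probe observations against the stored map and flag closures, one-way conversions and new roads. The data shapes, the threshold and the function name are my own inventions for the example; nothing here describes any company’s actual system.

```python
from collections import defaultdict

def detect_changes(map_segments, probe_events, min_traces=50):
    """Flag suspected map changes from probe traces.

    map_segments: {segment_id: {"one_way": bool, "open": bool}}  (the stored map)
    probe_events: iterable of (segment_id, direction), direction in {"fwd", "rev"}
    """
    directions_seen = defaultdict(set)
    trace_count = defaultdict(int)
    for seg, direction in probe_events:
        directions_seen[seg].add(direction)
        trace_count[seg] += 1

    flags = []
    for seg, attrs in map_segments.items():
        if attrs["open"] and trace_count[seg] == 0:
            # An open road with no traces at all suggests a closure.
            flags.append((seg, "possible closure: no probe traces"))
        elif not attrs["one_way"] and len(directions_seen[seg]) == 1 \
                and trace_count[seg] >= min_traces:
            # Heavy traffic in only one direction on a two-way road.
            flags.append((seg, "possible one-way conversion"))
    for seg in trace_count:
        if seg not in map_segments:
            # Traces off the known network suggest a new road.
            flags.append((seg, "possible new road"))
    return flags
```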

Of course, the law of large numbers may work to Google’s disadvantage, especially if a large number of users object to having their paths tracked and saved in a Google data center. Even today, TomTom/TeleAtlas, whose MapShare program benefits from probe data collected from users who have opted-in to the service, strips the first two and last two minutes of travel from the probe paths they capture, to provide some degree of anonymity to the contributors of their data. While the data is contributed anonymously, it is clear that the only person leaving 6 Sesame Street each morning and returning to 6 Sesame Street each evening is likely a resident of that address. We will have to see how this issue plays out, but if it plays out in Google’s favor, it will be a game changer, especially when added to the other practices they use to gather map data.
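The trimming practice itself is simple to sketch. Assuming a probe trace is just a time-ordered list of (timestamp, lat, lon) points (my assumption for illustration, not a description of TomTom’s format), dropping the sensitive endpoints looks something like this:

```python
TRIM_SECONDS = 120  # drop the first and last two minutes, MapShare-style

def anonymize_trace(trace):
    """trace: list of (timestamp_sec, lat, lon) points, sorted by time.
    Returns the trace with both ends trimmed, so the origin and the
    destination (e.g. someone's home address) never enter the archive."""
    if not trace:
        return []
    start, end = trace[0][0], trace[-1][0]
    return [p for p in trace
            if p[0] - start >= TRIM_SECONDS and end - p[0] >= TRIM_SECONDS]
```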

Other Initiatives

I fully expect that Google will soon consider paying a select group of its UGC contributors for the data corrections they provide. Google, by creating an effective and feedback-equipped UGC map correction system, has enlisted the efforts of a large group of people known in the industry as those suffering from Cartosis! These poor folks are map-a-holics who just cannot get enough cartography. They love maps and would like nothing better than to spend their day editing the darn things. (Believe me, as former Chief Cartographer for Rand McNally, I was inundated with their letters demanding corrections, additions, deletions and various insights on our cartographic practices, as well as my personal lineage).

Every article I read on Google’s mapping efforts suggests that Google is benefitting from the contributions of these geo-specialists in ways that are eluding others in the map database field. Google is earning the good will of these people by incorporating their comments and making the changes they contribute visible on Google’s map displays in a relatively short time. If Google can harness the good-will of these folks, perhaps with a modest stipend (or simply an acknowledgement), they just might become the company’s best mapping-buds. While this may sound humorous to some, these people are very good at knowing their local area and sometimes exhibit significant levels of familiarity with broad geographic areas.

I suspect you are wondering why I am spending so much time on this group. Well, the one big problem for Google is that it does not really have a field collection force to gather data in areas where it knows it is weak. Yes, it can send out Street View vehicles, but this is an inefficient way to resolve spatial problems that can be solved by local people. If I were Google, I’d be thinking very hard about how to incentivize these UGC data pros.

And Now, Apple?

How about this for a change in direction? Now that Apple has released their iPad (what an awful name), maybe they will turn to thinking about how they are going to use PlaceBase, the small mapping company they acquired last year. PlaceBase did some nice data integration and was known for its data visualization efforts. However, the company did not support routing directly, although its API provided hooks for integrating routing from other services. It seems to me that Apple needs to work on mapping, routing and geography in general, but no one seems to know what they might be doing with PlaceBase. Any ideas? And don’t suggest the PlacePad, but maps on the iPad – now that’s a good idea.


Posted in Apple, Data Sources, Geo-patents, Google, Google Map Maker, Mapping, Mike Dobson, Navteq, TeleAtlas, TomTom, User Generated Content, crowdsourced map data, local search advertising, map updating

4 Responses

  1. Mike Moore

    Hey Mike, what a provocative post! And one that I think deserves some reply… I definitely think there is a good discussion to be had here.

    First off, let me say that many people may recognise my name from NAVTEQ’s Developer Web site at NN4D.com, I am a technical lead there and take some pride both in the high quality of the products that NAVTEQ produces and also in the breadth of the support, both technical and commercial, that NAVTEQ offers to our developers. However, in this context, what follows are my personal opinions, which do not necessarily reflect NAVTEQ’s views.

    It seems to me that the fundamental premise of your blog post is wrong. You seem to assume that innovation will be found in the search for advertising revenue. My personal experience is that the pursuit of advertising revenue simply is not enough of a driver to ensure a high quality product – although “good enough” is sometimes achieved.

    I live in the UK and my natural analogy here is the difference between BBC free-to-air broadcasting (which is essentially a subscription model) and commercial free-to-air broadcasting (funded by advertising)… a typical night on BBC broadcasting is, again in my personal opinion, generally much better than the typical night on a commercial channel. Not that there aren’t exceptions to the rule of course – there are some great individual programs on commercial TV, and some really bad nights on the BBC :-)

    To achieve high quality takes a focused commitment to the product. It takes a real step into the “what if…” innovation that a pure advertising model could never justify at a commercial level.

    I suppose the simplest example is the one that you specifically call out:

    “Google does not yet have the attribute data that would allow them to provide posted speed limits, avoid toll routes, take scenic routes, or other features that are data dependent.”

    Say what? Google have driven all those roads and didn’t collect this information? I’m sorry, but where I come from, this is one of the basic reasons WHY we drive the roads… not to mention the other 260 attributes that we collect, per road segment.

    You also seem to assume that traditional map companies such as NAVTEQ do not use User Generated Content… What is a “User”? What kind of “User” would you expect to Generate high quality and consistent Content? The User of a “free” product? Of course NAVTEQ allows Users to Generate Content; in fact, we have a whole Web site devoted to the activity, mapreporter.com. But we also have an army of highly trained professional Users (customers) and staff that have a vested interest in Generating Content, and not just any old Content, but highly relevant and accurate Content.

    And finally, probe data collection is not the exclusive province of Google. NAVTEQ has been involved in probe data collection since 2008 to my knowledge, perhaps earlier. The sheer potential of the “collective intelligence of NAVTEQ users” is, it seems to me, many orders of magnitude greater than any other “collective intelligence” in this arena.

    NAVTEQ is, by definition, an innovative company. When we started collecting map data 25 years ago, GPS satellites were still a military experiment and the Internet was barely big enough to even need a search engine. Over the years, NAVTEQ has invented over 200 patentable techniques for map data collection and processing. This corporate culture is not going to stop. Over the next 25 years that innovation will continue. Our business focus is high quality, navigable maps. It always has been.

    Mike Moore
    NAVTEQ Network for Developers
    http://NN4D.com

    Mike – Thanks for your comments. You make many interesting points and I appreciate the time and effort you spent to contribute an alternative point of view. I have responded to several of your points below. Perhaps I will get to some of the others in a future blog.

    Just so you know, I met Navigation Technologies in early 1986 when I joined Russ Shields (who started NAVTEQ) on what was then the SAE Committee on Automobile Navigation. I met with Russ, numerous NAVTEQ presidents, developers, and managers over the succeeding 15 years that I was with Rand McNally and have had contact with them as a potential customer (when I was the CTO of go2 Systems) and, later, as an industry analyst. In other words, NAVTEQ is not an unknown quantity to me.

    In my blogs I believe that I have maintained that NAVTEQ has the best data quality of any of the current providers. My interest, however, is not in the current state-of-the-art, but in the future development of spatial databases. While I admire your defense of the company, if I had to rank NAVTEQ’s efforts in UGC compared to your competitors Google and TeleAtlas, the company would be the last name on the list. The NAVTEQ Map Reporter and the NAVTEQ website reflect an “old-school” approach to UGC. I suspect that the error corrections you receive from the site are very limited and that its traffic is very low compared with your competitors’ methods of soliciting UGC. Perhaps more important, NAVTEQ is not really a customer-facing company and has little in the way of customer presence. UGC requires a different model than NAVTEQ is executing, although I realize there is a corporate effort underway to change that situation.

    Yes, I know that NAVTEQ has experience with probe data and ingesting probe data from many of its partners. In addition, I know that NAVTEQ could not make any sense of the information it was receiving from UPS drivers, and I was told by a NAVTEQ employee that it was NAVTEQ that ended this cooperative program. Perhaps it is unfair of me to mention this, but NAVTEQ is not looked at as the paragon of partnering by its customers, a rumor probably well-known to you. However, I hope that NAVTEQ will be able to benefit from the probe data that could be generated by the use of Nokia’s phones, presuming they are used to access a service that uses NAVTEQ data.

    Next, I find it difficult to believe that NAVTEQ continues to make the claim for its database of “260” attributes per road segment. While NAVTEQ may have fields in its database for 260 attributes, the company certainly does not collect, on a regular basis, 260 attributes for the majority of road segments in its database. For some road segments the attributes are quite thin, as, I am sure, everyone who has ever looked at your data knows. So, the interesting question is “What is the real difference in attribution between Google and NAVTEQ?”

    I am quite sure that NAVTEQ would convincingly win this battle if it were held today. However, I am equally impressed that Google was able to build a significant spatial database in a very short period of time, compared to the 25 years taken by NAVTEQ and TeleAtlas. The future will certainly tell this story, as well as whether or not Google decides to collect the attribute data that it is now missing. The next interesting question is “How long might it take to collect these data?”, and we may have common ground in our thoughts on this process.

    Finally, I am very impressed with NAVTEQ, its patent portfolio and its products – in other words, I am a fan. Nor am I forecasting the demise of the company, as other analysts seem to be suggesting. However, NAVTEQ has long had a not-invented-here attitude, and that applies to the company’s record of innovation. It is my belief that NAVTEQ’s lack of meaningful attention to UGC may be a dangerous exposure and one that I urge the company to remedy.

    Thanks again for your comments, which I found interesting and insightful.

    Mike

  2. Mike Moore

    (Editor’s note – Mike Moore of NAVTEQ has clearly spent a great deal of time thinking about his company and its position. In his response to my response, he has raised a number of interesting points. In order to provide a more meaningful interchange, I have inserted several of my comments into his document. My comments are in italics. Also, please note that Mike Moore is speaking as an individual and not as a corporate spokesperson for NAVTEQ. As a consequence, I apologize if any of my comments place Mike in a difficult position. However, you have to appreciate a guy who correctly uses the word “penultimate”.)

    Hi Mike, thanks for your vote regarding the quality of NAVTEQ data… as you have posted elsewhere, this is not just an idle concern. Consumers demand “excellence” in their navigation, and it is clear that “good enough” is simply “not enough”. (Once more, the obligatory reminder that these are my personal opinions, which do not necessarily reflect NAVTEQ’s views!)

    I was originally thinking that we would be having a discussion about the relative merits of creating products specifically for ad revenue generation – which in my view would always tend towards the “good enough” – versus creating products that have a true intrinsic value that is worth paying for – which naturally leads to products of high quality. In my mind, the very fact that there are many, many millions of users that do NOT use Google Maps but use MapQuest, Bing, AAA, etc. is testimony that even on-line, consumers are demanding something Google simply isn’t providing.

    Mike Dobson’s counter -
    Google’s revenue from advertising last quarter was over $2 billion and it is likely that Local Search on cellular devices will significantly enhance that number. You need to re-examine what you think you know about advertising in the past and look at how Google’s model works and the incredible network of opportunities they have to build an integrated, targeted distribution channel. Clearly Google was willing to spend more than they were paying TeleAtlas (or NAVTEQ at one time) to build their new map database. Since Google has a history of NOT OWNING data, but has focused on metadata (principally data about where data can be found and its validity), one has to wonder why they would abandon that stance to produce map data. (Especially since they will continue to pay TeleAtlas for data they are NOT using over the next four years.) I think the answer is “Because they could not achieve the quality levels they need to advantage their advertising business with the data from existing suppliers.” No one goes into the map database business on a lark and Google is not an exception to this rule. The fact that they can offer their maps and navigation applications for free is yet another example of my belief that they are not competing with NAVTEQ, TeleAtlas or any other map supplier. They are focused on the advertising market, and maps and map databases are one of many tools that they will need to optimize their interest in Local Search over mobile devices.

    More of Mike Moore’s email –
    Be that as it may, I’d like to respond to the two big issues (at least, big to me personally) that you specifically raise: attributes and UGC.

    Attributes: NAVTEQ collects over 260 attributes per road segment. This is our standard practice, and the number of these attributes will only rise as we begin to collect data for new products.

    Perhaps some explanation is needed… an “attribute” is a feature of the environment that is specifically related to navigation on that segment of road, the existence (or not) of that feature is what is recorded. A segment of road can be as short as 30 meters. If any one of those many attributes changes then we start a new segment in the data.
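    (Editor’s aside: the segmentation rule Mike describes is easy to sketch in code. What follows is purely my illustration of the idea, a new segment beginning wherever any attribute value changes; it does not represent NAVTEQ’s actual schema or tooling.)

    ```python
    def split_into_segments(road_points):
        """road_points: ordered (position_m, attributes_dict) pairs along one road.
        A new segment starts wherever any attribute value changes."""
        segments, current, prev_attrs = [], [], None
        for pos, attrs in road_points:
            if prev_attrs is not None and attrs != prev_attrs:
                segments.append(current)
                current = []
            current.append(pos)
            prev_attrs = attrs
        if current:
            segments.append(current)
        return segments

    # A mid-road speed-limit change splits one road into two segments:
    road = [(0, {"speed_limit": 30, "lanes": 2}),
            (30, {"speed_limit": 30, "lanes": 2}),
            (60, {"speed_limit": 50, "lanes": 2})]
    print(len(split_into_segments(road)))  # -> 2
    ```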

    What attributes do we collect? Well, for a start, all those things that you have said Google do not collect, speed limits, whether or not the road is a toll road, is the road on a recognised scenic route, etc.

    Examples of other attributes we collect include the actual speed achievable on the road (which may be less than the legal limit due to, say, road humps), the number of lanes, the presence of a physical divider, the house numbering system along the road, the name of the road, whether the road carries one way or two way traffic, the presence of a stop sign or traffic light, the presence (or not) of a sidewalk, and on and on for another 250+ items.

    Attribute collection is equally valid regardless of the presence of the specific item in reality. That is, the fact that a road is NOT emergency vehicle access only is as valuable a piece of information as knowing that a road IS emergency vehicle access only (especially to emergency vehicles).

    Coverage such as I describe above is available in 78 countries (as of December 2009) and this number is always increasing.

    Of course, some of this is a work in progress; there will always be some countries where, in any given quarter, full coverage has only been collected for a portion of the country because we are still mapping the country… NAVTEQ field crews drive millions of miles each year (with no exaggeration)… For the vast majority of the 78 countries, however, data collection at this level is 100%.

    Mike Dobson’s counter –
    I guess we could answer this question quite easily. How many of the road and street segments in the NAVTEQ database have over 260 “attributes”?

    More of Mike Moore’s email –
    UGC: It seems that perhaps I may not have explained myself well enough originally. UGC comes from consumers to our customers, who then provide that information to us. In some cases that information is provided instantaneously, using APIs that we provide to our customers for the purpose.

    So, perhaps you are right (I have no idea in fact) that direct consumer traffic on mapreporter.com is lower than it would be if NAVTEQ was a B2C company… but Map Reporter is much, much, more than just a consumer facing Web site. In addition, many of NAVTEQ’s customers are professional cartographers. The UGC we get from them is extremely high quality, accurate and reliable.

    On a penultimate note: You seem to think that Google’s map building prowess is somewhat extraordinary. I would only like to point out that Street View (on which the map data appears to be based) was announced in May 2007… at the time, they had already completed “mapping” four cities, so presumably they actually started collecting data before then, for the sake of round numbers, let’s say January 2007. So, it has, in fact, taken Google at least three years to reach the point they are at now (which I might add is also US only data). And since you have already stated that what they have is certainly far from navigation quality, I think you can begin to see the true enormity of the task remaining.

    Mike Dobson’s counter –
    Mike, regardless of when Google started Street View, it was not until later in the process that they started thinking about building their own map database. I’ll bet that if we looked at NAVTEQ’s strategic plan in 2007, 2008 or 2009, it did not focus on Google creating a map database that might compete with NAVTEQ’s core products. The speed with which they built the U.S. database is astounding – although that does not mean that it is yet competitive with either NAVTEQ or TeleAtlas. However, if we went back and looked at how long it took NAVTEQ to create coverage of the United States, we would have to acknowledge that they started in the mid-80s, did not finish until well into the next decade, and might have taken even longer if the U.S. Bureau of the Census had not released TIGER, which was used by NAVTEQ to fill in all the areas where it did not have map coverage.

    The point I was making was that Google has made surprising progress. I would find it hard to believe that this issue has escaped NAVTEQ’s senior management team and has not caused Nokia to experience an even deeper case of buyer’s remorse.

    I agree that Google has a challenge facing them in terms of map accuracy, but with the User Generated Tower of Power (I could not resist), they may be able to scale that hill more quickly than some think. Just so you know, I think all companies in the map business have a challenge with map accuracy, coverage, comprehensiveness and the currentness of their data. In other words Google is no exception. The interesting issue is how will the companies involved respond. Thanks for your comments!

    More of Mike Moore’s email –
    And finally: I’d like to say that I have found your viewpoint challenging! This is a good thing. You are letting us know precisely where we can improve and what you are interested in. I hope I have been able to show that some of what you want is already there, we maybe just aren’t that good at communicating it, but there is always room for improvement and I will make sure I point all the right people at your site.

    Mike Moore
    NAVTEQ Network for Developers
    http://NN4D.com

  3. Mike Moore

    (Editor’s note: In order to help you follow Mike Moore’s new comment, I chose to add a sentence in bold to identify the section of my previous comment to which he was responding.)

    Question being addressed by Mike Moore in respect to this reply that I made on his second comment.

    Mike Dobson’s counter -
    Google’s revenue from advertising last quarter was over $2 billion and it is likely that Local Search on cellular devices will significantly enhance that number. You need to re-examine what you think you know about advertising in the past and look at how Google’s model works.

    MM: I am addressing that very point. In general terms, for any company with an advertising business, the customer is not the consumer or, in this discussion, the application developer – the customer is the advertiser. Such a company is NOT a B2C company, it is a B2B company. Free-to-air radio, for example, needs to keep listeners happy to generate ratings, but ratings are not the goal of the station – ratings are merely one selling tool to use with their customers.

    The business plan for any such company is to do whatever it takes to keep the advertising revenue coming in. That business plan specifically does NOT allow for innovative “research and development” and “product improvement” in the underlying (I can’t think of a single radio station that also manages bands or owns a record label – although it may be possible there is one, somewhere)… instead, it is entirely focused on “reducing cost of delivery”.

    To be specific about the mapping business, if such an advertising business decided that they needed to collect their own mapping data, it is solely because they believe they can do it at less cost than with their current supplier (this calculation of course generally means looking into the short to medium term future rather than the “here and now” usage).

    Having come down seemingly so hard on both Google and advertising as a business model, I’d like to just clarify that I am not actually “anti-Google” or even “anti-advertising”. Google have, in fact, been innovative. However, my view is that – with the very notable exception of Web search – their innovation is not in WHAT they offer, but HOW they offer it. Google’s core business is in the advertising industry, within which they are (or were?) a disruptive force, and they are now looking to grow their “ratings” by taking existing technologies and services (e.g. word processing software, news feeds, map data, books, etc.) and offering them through a new channel. Google most assuredly have not innovated, or even improved on, any of the underlying. Any one of these “ratings hooks” will be dropped at a moment’s notice if it does not continue to promote advertising revenue.

    And I have absolutely nothing against advertising revenue as part of a business plan, if the company concerned is offering something of intrinsic value in the first place. NAVTEQ is already monetizing its data through advertising, Traffic.com being the prime example and LocationPoint Advertising (e.g. on mobile devices) being another.

    However, because NAVTEQ’s business is focused on maps, this advertising revenue – as with the data product revenue – funds investment in research and development and future product improvement. An advertising business generally uses its dollars to find ways to increase ratings, not improve the underlying. I hope this distinction is clear; it is the entire point of my expressing my (personal) opinion here!

    Question being addressed by Mike Moore in respect to this reply that I made on his second comment.

    Mike Dobson’s counter –
    I guess we could answer this question quite easily. How many of the road and street segments in the NAVTEQ database have over 260 “attributes”?

    MM: Given that there are several million miles of roads in the database and most roads have more than one segment, I’m not sure that it is at all “easy” to answer the question… But given that most of the 260 attributes fall into the “Yes/No” category (there either is a road divider or there isn’t) then the obvious answer would have to be, if not “all”, then to all intents and purposes, “practically all”. If you’re interested in a full list of the attributes then you can download the complete data specification at http://NN4D.com

    Exceptions I can think of are a few attributes where the default case is another attribute (e.g. speed category – the measure of actual speed achievable – is only relevant on those road segments where the speed category is different to the legal speed limit) therefore you might argue that many road segments don’t “have” this attribute… but the point is that on a NAVTEQ Full Coverage map, someone actually drove along the road and made the assessment that the legal speed limit was, in fact, achievable.

    There may be some (few) segments where it is difficult to collect the information for some reason (e.g. a road with no posted speed limits and no available county records), or the information is genuinely not recordable (e.g. a house numbering system with a random pattern) and there may be roads that “exist” but simply aren’t navigable, and therefore we don’t collect data along that road.

    I’m not a map data expert, I’m a developer that writes applications for mobile phones – I use the data – so maybe I’m missing other possible exceptions.

    Also, not all products contain all data… Not everyone is interested in height, slope and road curvature, so this information is in a separate ADAS product; pedestrian attribution (sidewalks, crosswalks, etc.) is similarly part of the separate Discover Cities product… and we also have “entry level” products that contain very little of this attribute data and have a correspondingly reduced cost.

    Mike

    Mike Dobson reply:

    Dear Mike:

    I am going to reply to your third comment, but will not comment on additional notes from you on this specific topic (“More On Google’s User Generated Content Tower Of Power”), as we are, in my opinion, rehashing the material included in the original blog and in my replies to your first two comments on the topic. Feel free to comment on any future blog that catches your eye.

    It is clear to me that you have a fundamental misunderstanding of Google, its growth strategies, its advertising programs, as well as why the company built its Google-Mapbase. Your view of the traditional model of advertising does not reflect how Google’s AdWords, AdSense or other advertising programs work. Once you have spent some time inside that world, you might change your opinion of what Google has built and where it is going. However, I agree that Google could be vulnerable, but not from NAVTEQ, TeleAtlas or anyone else in the world of mapping. And not from Nokia!

    I do not understand, nor can I make sense of, your comment (referring to Google): “That business plan specifically does NOT allow for innovative ‘research and development’ and ‘product improvement’ in the underlying (I can’t think of a single radio station that also manages bands or owns a record label – although it may be possible there is one, somewhere)… instead, it is entirely focused on ‘reducing cost of delivery’.”

    However, instead of trying to make sense of that, Mike, ask yourself the following question – “Why did Google decide to create its own Google-Mapbase?” Then ask this – “Was developing its own navigable map database an attempt by Google to reduce its cost of delivery?” If your answer is yes, then our views are so divergent that any amount of information I write will not help you to “cross the chasm.” However, you may find the line of reasoning described below to be helpful.

    The expense that Google accrued in creating the Google-Mapbase far exceeds what it was paying TeleAtlas or what it would have paid TeleAtlas over a span of many years to license the TeleAtlas database. In addition, Google is still paying TeleAtlas and, apparently, will do so for the remaining five years of their contracts, even though they are only using the TA data in Europe. While Google is planning to replace the TA European product with their own content, they will still be required to honor the terms of their joint contract. Google did not take their leap into the world of navigation databases and mapping as a cost reduction effort. If you and NAVTEQ believe that, your corporate future may not be as successful as I suggested in my last comment.

    In regards to attributes in the NAVTEQ database, you made this statement: “Given that there are several million miles of roads in the database and most roads have more than one segment, I’m not sure that it is at all ‘easy’ to answer the question…” Mike, you work with computers – if you have a database in which there is a schema that has over 260 attributes for every segment, simply count the segments that have all 260 attribute fields populated.
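    To sketch the count I have in mind (assuming the segments were exported as simple attribute records; this is an illustration only, since I make no claim about NAVTEQ’s actual schema):

    ```python
    def count_fully_attributed(segments, expected_attributes):
        """segments: list of dicts mapping attribute name -> value (None if unpopulated).
        Returns how many segments have every expected attribute populated."""
        return sum(
            all(seg.get(attr) is not None for attr in expected_attributes)
            for seg in segments
        )
    ```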

    If you did that, I suspect you would find that the vast majority of your database does not have 260 populated fields per segment. However, the meaningful question is “How many attributes does NAVTEQ actively collect that are currently not in the Google-Mapbase?” NAVTEQ certainly does have a lead here; the question of interest to me is, “Can Google reach parity with NAVTEQ and TeleAtlas using its present research techniques?” If so, when might we expect this to happen? If not, then Google will continue to have a second-rate map database.

    Mike, I’ve enjoyed the interchange, but let’s move on.

  4. Mike Moore

    I agree, we seem to have reached an impasse, so let’s move on… if anyone wants to check out the attributes actually available in the NAVTEQ data for themselves, then they can download (for free) both the data specification as well as sample data at http://NN4D.com… I would also be quite happy to carry on this conversation there if anyone is so inclined.

    But if I don’t correct one point you made in your last response then I won’t have a job… These are my PERSONAL opinions, which DO NOT necessarily reflect NAVTEQ’s views.

    My apologies to Mike Moore. After suggesting that we “move on” in our discussion, I did not notice a further message from him, which I discovered just this morning while responding to another reader on another topic. Although I thought Mike had made it clear that he was not speaking for NAVTEQ, but as a reader of this blog, Mike wants to be sure that the rest of you know that the comments were his and not those of NAVTEQ. –Mike