Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

Google Maps and Business Listings – Better, but not quite there

December 16th, 2014 by admin

It’s time for that long winter’s nap – or perhaps a long read on a topic of interest to everyone – business listings and Google Maps. Thrill your boss, intrigue your friends and drown the conversation with data – all of this can be found below and it’s free, but only for the next millennium. Why, you might even gift your boss with a copy. I am sure they will thank you. Sorry about the size of the graphics and their extending out of the text frame, but I wanted you to be able to see them.

I have been following Google’s interest in fielding autonomous cars, their use of AI and maps to improve their navigation, and how Google thinks it will be pretty easy to expand their current map and related databases to support these efforts. (See this effort for an example of the coverage). In addition, I recently perused what was a very complimentary review in Wired of Google’s Ground Truth.

Hmmm. Perhaps I have slept through some major revolutions in mapping and map data quality. Well, actually I was in Italy for the month of October and maybe everything new happened then? Actually, I don’t think so because I had reason to use Google and Google Maps to find services I needed in the cities I was visiting. I was amazed. The business listings shown on Google Maps for several cities in Italy appeared to be even more incomplete than the Google Map to which I referred at home. Even more worrying, I was in established urban cores that should be well known to any serious providers of maps. Hmmm.

While in Italy, I had occasion to search for laundries, dry cleaners, stores, and business services, you know, the kinds of things related to being away from home for an extended period. Since I was not familiar with the geography of businesses in the cities and regions I visited, I had intended to consult Google Maps to find locations so that I might be able to acquire those services and goods without undue exploration of the city. However, after using Google Maps in Italy this notion appeared to be a cruel hoax.

I had difficulty accepting that the types of services for which I was looking were not available, say in Rome or Florence. As I walked through the cities, I noticed a number of businesses that provided what I was looking for, but they were not shown on Google Maps. In addition, many of these stores had not appeared when I used Google Search to interrogate a category that I thought should contain a specific type of business. Often I could perform a Google Search on the name of a business that I saw while passing by and find that it would then be displayed on the map as a search result. This process seemed sort of backwards to me, since I was using Google Maps to try and find potential targets in the real world, not using reality to find locations that could be symbolized on Google Maps if I searched for the name I saw on a storefront.

On arriving back in the US I decided to take a closer look at business listings on Google Maps. I was hesitant to once again look into business listings, as it is the scab of the mapping world that never seems to heal. However, as the venerable Yogi Berra is reputed to have said, “You can see a lot just by looking.” So, I headed for two shopping areas near my home after having equipped myself with a camera, GPS, note pad and a trusty sidekick who was willing to work in return for lunch.

About every two years, for one client or another, I take a close look at the state of business listings on maps or navigation systems, shudder, and tell them things are not any better than two years before. Once, I even designed a business listings system for a major player in the electronics space, but when they saw the complexity and expense of becoming a market leader they decided not to pursue their intended strategy. On the other hand, maybe it would be different this time, and it sounded like a fun way to spend a few days.

Note: the study reported here was just for my information and does not have the detail of a client report. I think you might find it interesting – at the very least it will save you from having to go out and look for yourself – and if you are interested in business listings it will give you a lot to think about.

The study areas examined for this research

Figure 1. The two shopping areas included in the present study are located in Laguna Niguel, California. Basemap courtesy of Google Maps. The areas were chosen based on accessibility to the author and because I had estimated that they would contain close to 100 businesses. Not a large sample, but ample to provide some food for thought on the topic.

Both shopping areas are outdoors and open, rather than an enclosed mall setting. Each consists of a large main building offset from the road, several pads or minor buildings scattered through the shopping area, and acres of parking. In both sites a few shops are located in buildings adjacent to nearby streets. In most cases the businesses in these centers are set back far enough from the road to render reading business names impossible while driving by. Exceptions to this general rule are the huge signs for the anchor stores in each shopping area (e.g. Home Depot, Hobby Lobby or Wal-Mart).

I started the research by creating an inventory of the businesses operating in each of the shopping centers. I enumerated all businesses in the respective centers, noting their individual position, name and general category of business. I entered the names of the businesses in list form with address information and also noted their location on aerial imagery. I also had maps of the businesses in each shopping area that I retrieved from the management web sites for the properties, and used these to determine details of the geometry of the shops for further comparisons with building outlines and partitions that might be included on Google Maps. Although the maps provided by the management web sites were relatively up-to-date in terms of tenancy, my list of businesses was based on field observations.

In order to determine which businesses were shown on Google Maps I used a desktop computer (several, actually) to interrogate Google Maps. The examination involved zooming, scrolling, examining images, examining Street View (when and where it existed), using the various “developer’s consoles” associated with several browsers to determine details about the map tiles and Google servers involved, as well as using the Google Index to search for businesses that did not appear on Google Maps. When an Index search was successful, I then clicked the inset map that accompanied the search results and examined the location in Google Maps.

Note that the use of Google Search and Google Maps is influenced by prior searches, prior Google Maps use, and other information that Google knows about you as a user. The research results described here may be specific to my computers (several different machines were used during the process) and may not reflect the experiences of others performing similar searches. However, without being more open about why, I have concluded that the results of my map viewing and searching should not be significantly dissimilar to those of most other users of Google Maps who view the Google products using an Internet browser and a desktop computer. Although I checked some of the business listings using mobile devices, this was only to ensure that there was basic compatibility between the business listings on Google Maps across devices.

Note that after I had finished my original analyses at the end of last week, Google updated its map database and made changes to the businesses in both centers. I incorporated these changes in a second analysis that I will provide as part of this report. Finally, after having received a note from a friend on events at Apple Maps, I decided to include the business listings they had for the areas of interest, and will also comment briefly on their efforts in the area of business listings at the end of this blog.

Research Questions and Considerations.

It is my belief that Google, or any company providing detailed street maps across a wide range of scales, should, at the appropriate scale or scales, attempt to name all businesses that occur within the boundaries of the areas they have mapped. The limitations on map content related to the lack of space available on printed maps are irrelevant in digital mapping systems designed to show extreme detail in local areas. Users of modern mapping systems should be able to use large-scale maps as surrogates for reality. In the case of business listings this would equate to naming every business in a local shopping area, so that the user could find and navigate to any of the potential targets available and of interest to them.

It is clearly the case that not all businesses in an area are represented on Google Maps. One could postulate several possible reasons for the lack of a complete list of local businesses, although not all are equally likely:

1) Data quality issues – Google might not possess the required data to describe and/or locate businesses accurately within a specific area
2) Geocoding issues – Google might be unable to adequately determine the validity of the address data that must be geocoded to locate businesses.
3) Street View coverage limitations – Some entities, particularly shopping areas with internal streets and roads, may not allow Google vehicles access or, perhaps, Google has instructed drivers not to capture Street View in particular commercial locations (e.g. Google Street View imagery was unavailable for the majority of the internal roads in the shopping areas involved in this study). Alternatively, Google may have the necessary Street View data but not yet have processed or published them.
4) Google wants their users to search for data on unidentified businesses as they make considerable profits from such search activities versus the limited income directly generated by their mapping operations.
5) Google’s map design appears to me to be heavily influenced by their desire to capture more revenue from mobile devices, whose limited screen real estate could reduce the usefulness of burdening the display with the names and positions of all business locations. Alternatively, there could be aesthetic design reasons for limiting the number of displayed business tags on Google’s map.

I thought that looking more closely at what Google was doing in my local area might be helpful in understanding Google’s display strategy in regards to business listings. I had six general questions that I wanted to answer during the course of my research:

1. How many businesses were in the shopping area and how many of these businesses were shown on Google Maps?
2. What was the accuracy of the locations of business shown on Google Maps?
3. When businesses were located in the correct building, were they correctly located within that building?
4. When businesses were not located on Google Maps, could Google Search map them and what was the accuracy of the locations on Google Maps revealed through Google Search?
5. When “searched” business locations were shown in the correct building, were they located correctly within that building?
6. What could Google’s performance in displaying business names on maps tell me about their strategies and challenges?

Let’s look at the details revealed by the research. There is a lot of data here, so I provide a series of graphics and brief captions, rather than spend a great deal of time in detailed explanation. The graphs generally follow the order of the questions provided above.

Plaza de la Paz

Businesses at Plaza de la Paz shown on Google Maps

Figure 2. Plaza de la Paz comprises 52 businesses. Google has mapped a portion of these and included 2 additional businesses that once operated here but are now out of business. Approximately half of the businesses at Plaza de la Paz were shown on the Google Map of this area.

Chart of the variability of the locations used to represent the business listings

Figure 3. While most of the mapped locations were relatively correct (near where they should be located on the map or easily findable from that position if at the shopping area) those that were misplaced or significantly misplaced appeared to me to be errors resulting from some aspect of the geocoding process. Frequently, these locations were symbolized along external roads bounding the shopping areas, but generally not symbolized within the buildings that actually house these businesses and which Google shows on its map.

Percent of businesses mapped in the correct building

Figure 4. As noted previously, most of the businesses that were mapped were located in the correct building, although some of these were considerably misplaced within the building that they occupied. In some instances this was not a major problem, but in other cases, it put the business on a side of the building that was not visible if you were looking for the business near the location where Google had placed it on its map. I moved a business into the misplaced category when it was unlikely that most people could figure out where the business was based on the symbol location. Note – the main building in this shopping area is several blocks long, so the measure of “occurring within the correct building” is a very liberal measurement.

Locations that required the use of Google Search to discover

Figure 5. One-hundred percent of the businesses not shown on Google Maps had locations within the shopping area that were known to Google. In order to find these businesses I entered each business name (known to me only because I had observed the name in the field) in Google Search (not Map Search) and then clicked on the map that appears to the right of the listing on the search engine response page. In most cases, these businesses were located in the correct building, although several were shown along streets or in parking lots, rather than within the building outlines that Google had extracted from satellite imagery.

Search locations mapped in the correct building

Figure 6. Searching the Google Index revealed eighteen businesses that were mapped in the correct building. A modest number were misplaced within the building, and shoppers would likely experience difficulty finding these businesses using the locations provided on Google Maps.

Next, let’s see the results to the same questions for the second study area.


Businesses at the Marketplace in Laguna Niguel

Figure 7. Google had a much better grasp of the businesses that were located at the Marketplace as compared to Plaza de la Paz. In part, this may be because the layout of the shopping center is “open”: the buildings surround central parking, rather than the businesses being embedded within large buildings surrounded by parking areas, as at Plaza de la Paz. Google did not map any out-of-business locations at the Marketplace.

Location of the businesses at the Marketplace Laguna Niguel

Figure 8. Not only did Google list more of the businesses in the Marketplace, but it was able to locate them more accurately than in Plaza de la Paz.

Mapped in the correct building

Figure 9. Unfortunately, while able to identify the correct building containing the business, Google was less able to accurately locate the businesses within the building. My conclusion was that these were geocoding errors resulting from attempting to geocode the address data of the businesses without detailed reference information for the units involved. For example, these businesses have addresses such as 27150 A, 27150C, 27150F. There are no units labelled B, D, or E. Without knowledge of the omitted unit identifiers, it would be difficult to properly geocode the actual addresses for units that do exist or to even know where to start geocoding within a building.
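To make the unit-lettering problem concrete, here is a minimal sketch of how a naive geocoder might go wrong. The building length, the even-spacing assumption, and the function names are all my own invention for illustration; the only facts taken from the study are that units A, C, and F exist while B, D, and E were never assigned.

```python
# Hypothetical sketch: placing unit letters along an elongated building.
# A naive geocoder that spreads the full alphabet range A-F evenly along
# the footprint will misplace every unit when B, D, and E do not exist.

BUILDING_LENGTH_M = 120.0  # assumed footprint length, not measured


def naive_unit_offset(unit: str) -> float:
    """Assume units A-F occupy six even slots along the building."""
    slots = "ABCDEF"
    i = slots.index(unit)
    return BUILDING_LENGTH_M * (i + 0.5) / len(slots)


def informed_unit_offset(unit: str, existing: str) -> float:
    """Space only the units that actually exist (requires reference data)."""
    i = existing.index(unit)
    return BUILDING_LENGTH_M * (i + 0.5) / len(existing)


existing_units = "ACF"  # B, D, and E were never assigned
for u in existing_units:
    naive = naive_unit_offset(u)
    informed = informed_unit_offset(u, existing_units)
    print(f"unit {u}: naive {naive:5.1f} m, informed {informed:5.1f} m, "
          f"error {abs(naive - informed):4.1f} m")
```

The point of the sketch is not the specific numbers but the dependency: without the leasing plan that says which letters exist and where, every placement inside the building is a guess.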

Search locations at the Marketplace

Figure 10. Only one of the existing Marketplace businesses omitted from Google Maps could not be found and mapped using Google Search. Most of these results were correctly located.

Search locations in correct building

Figure 11. In general, location accuracy of the results from Google Search was commendable. Although several locations were misplaced within the correct building, it should be noted that the sample size was small.

Thoughts so far

Google Maps clearly did not represent all businesses in the study areas. Approximately one-third of the businesses in the areas studied were not included on Google Maps. Moreover, when Google Maps was able to locate the business in the correct building structure, it frequently mislocated some of these businesses within those confines. In addition, approximately one-quarter of the businesses shown on Google Maps were either shown in the wrong building, symbolized in the parking lot or along an adjacent road, or were, in a small number of cases, no longer in business at the shopping areas examined. The businesses not on the map could be searched and mapped if you knew the name of the business from some other source. In general, the accuracy of the locations of businesses found through search was slightly inferior to that of the businesses appearing on Google Maps but not requiring search to symbolize them. However, when these “searched” businesses were located in the correct building, they were as likely to be accurately located within that building, as those businesses initially shown on Google Maps in the correct building. The last finding was unexpected. If the locations that required search can be shown on the map with approximately the same accuracy as those that are normally shown, why not show all of these data on the map to begin with?
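The bookkeeping behind these comparisons is simple set arithmetic: a field inventory on one side, the names shown on the map on the other. The sketch below uses invented business names and counts purely for illustration; it mirrors the structure of the comparison, not the study's actual data.

```python
# Toy version of the inventory-vs-map comparison. All names are made up.
field_inventory = {"Trattoria Roma", "Quick Cleaners", "Pet Palace",
                   "Nail Studio", "Hardware Anchor", "Corner Pharmacy"}
on_map = {"Trattoria Roma", "Hardware Anchor", "Nail Studio",
          "Ghost Video"}  # "Ghost Video" closed long ago: a stale listing

missing = field_inventory - on_map   # businesses that require search to surface
stale = on_map - field_inventory     # out-of-business entries still on the map
coverage = len(field_inventory & on_map) / len(field_inventory)

print(f"coverage: {coverage:.0%}")
print(f"missing from map: {sorted(missing)}")
print(f"stale listings:   {sorted(stale)}")
```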

More detail

In order to get a better understanding of Google’s strategies in naming businesses, I needed to see a little more data.

I decided to investigate if there were categories of businesses that appeared to be favored by Google and other categories that seem disadvantaged by Google’s approach. As is well known, attempting to categorize almost anything results in instant argument. And so it goes. I decided on the categories to use and feel that they adequately represent the variety of businesses that were discovered in the field.

First, I categorized the businesses in each of the two study areas and, then, lumped them into one pool of businesses representing both locations. The results were as follows.

Business segment percentages

Figure 12. It appears that there is a good mix of business types in the two areas studied. Dining dominates, followed by Specialty Retailing (mainly shopping for general stuff), Services (banking, FedEx, etc.), Salon Services (beauty shops, nails, etc.,) and Home Furnishing, Decorating and Supplies.

Business segments on Google Maps

Figure 13. Wow; I guess Google feels that people in “South County” like to dine out, since Google Maps really emphasizes the Food and Restaurants category. It appears that the business names published by Google on its maps reflect the general distribution of these categories within the two shopping areas. The segments emphasized by Google are those commonly thought to be of most interest to mobile users.

Mapped businesses by market segment

Figure 14. This is probably the most interesting of the graphs, as it shows the performance of Google Maps in terms of businesses mapped in each of the business categories. Google provides good coverage of the most popular category, Food and Restaurants. In the two shopping areas studied it does a relatively poor job of representing Pet Services and Supplies and Pharmacy & Medical Services.

Indeed, in several categories it seems to be almost a fifty-fifty chance that an individual business will appear. Oh wait, it’s generally worse than that since, in most categories, national or regional chains/brands are well represented on Google Maps, at the expense of …small businesses. However, my experience is that this problem really exists at the level of business listing aggregators who supply their lists to Google and other providers. It is quite easy for these companies (or Google) to contact major retailers and solicit the location of their stores and outlets (or scrape them from the web). Finding the small business owners is much harder and that appears to be reflected by the lack of representation of small businesses on Google Maps.

Well, now that I had a better grasp of how Google Maps was performing, I had one more issue to deal with before concluding:

What about those Google Maps updates?

Before I could post this blog last weekend, Google decided to push out new map updates early Saturday morning. So, realizing that this might be an interesting opportunity, I processed the new data and made some new graphs. It appears that Google Maps really is trying to improve its business listings, but still under-represents them. The update made only one minor adjustment to the Marketplace shopping area, and this was to indicate that a Redbox existed in the parking lot (not where it is actually located). Since the businesses mapped at the Marketplace were otherwise unchanged, I did not update the graphs for the Marketplace. Plaza de la Paz, however, received a significant upgrade to the number of businesses listed.

Plaza de la Paz (after Google Update)

Businesses on Google Maps after update

Figure 15. Mapped locations in Plaza de la Paz increased by seventeen percent in last weekend’s Google Maps update.

location and updated business listings

Figure 16. The number of mapped locations at Plaza de la Paz increased from 28 to 39 in the most recent Google Maps update, but some of the additions were misplaced and the number of erroneous business entries increased from two to three.

Updates and correct building location

Figure 17. While the number of businesses mapped in the correct building increased by ten, the number of businesses locationally misplaced in the correct buildings increased as well.

Next, I wondered how these updates were applied across the business segments.

detail by market segment on Google Maps update

Figure 18. The recent Google Maps update was focused on the Food and Restaurants segment. With this update Google has identified all of the businesses in this category that exist in the two shopping centers! Other changes were the addition of businesses to the Service category, as well as single business additions to Salons, Home Furnishing and Pharmacy.

As can be seen on the graph, a number of segments are still under-served and in most of these the businesses that are not represented are small businesses. One interesting exception is that Google continues to snub a sizable AT&T shop in Plaza de la Paz, although you can see the unnamed individual aisles in the Home Depot store instead, if that would be of interest to you. All in all, Google is making good progress in putting businesses on the map, but one has to wonder how soon they will accomplish the goal of complete representation and accurate location of local businesses.

Some thoughts about the location accuracy of listings.

The accuracy of business location on Google Maps is generally good for major chains (which often occupy a stand-alone unit that makes it easier to identify the business), but less accurate for small businesses that are often clustered along the length, or sometimes the perimeter, of a building.

In most cases in this study Google appears to have relied on geocoding to locate the addresses of their business listings (since they appear not to have Street View Data for these areas) and in many cases the quality of the address data, the map match, or the geocoding algorithm produced, or combined to produce, insufficient accuracy in the name placement results. In the previous sentence, the “map match” refers to the detail available to place addresses within a building in which all of the units may have similar addresses. For example, in both shopping areas there was one large, elongated main building that ran almost the length of the entire shopping area. Unless you have access to the building plans that detail the size and location of the units that house individual businesses, you will have difficulty matching the address to a reasonable location within the larger unit.

In both shopping areas, the quality of location data, as shown by business placement on Google Maps was quite unimpressive and less advanced than I had hoped. While Google was able to identify the building outlines within both shopping areas, it was often unable to accurately determine where the business was within the unit. If Google actually desires to field autonomous cars that can deliver customers to a commercial destination, it is my belief that the accuracy of the business location placement on Google Maps will need to be vastly improved.

Street View is probably a portion of the required elixir, but being up-to-date would require the vehicles to explore all shopping centers on a recurring update cycle. To be of benefit, the update cycle should be based on a “change” index calibrated to economic indicators and other local and regional trends that affect the provision of services and retailing.

While Street View would provide some of the required geometry, I find myself wondering why Google does not appear to gather the details concerning store placement that are available from most property management companies. People who make their money leasing property know the units, their details, and even their clients. Of course, they may be pursuing these data and may not yet have compiled it for the areas studied here.

An aside

Before I end this already overly long blog, I feel compelled to comment on the utterly unhelpful “what’s here” tab that shows up when you right click on or near a location on Google Maps. In my experience the result is an address and a Street View image, if one is available. Oh, that’s really helpful. Once in a while you will see an address and a business name, but that happens infrequently. Given the details about locations that Google has amassed, why is this button’s functionality so lame?

Over a decade ago I gave a presentation about the utility of the GIS-powered “What’s Here” functionality. When you click a spot on the map the system responds with map details showing what is around the point you have selected and links allowing further exploration. This is a trivial spatial search that would add a world of utility to maps such as Google’s. Why hasn’t Google or anyone else invested the time and effort to generate a space filling functionality that would make those empty maps incredibly useful to anyone searching for anything close by the point of interest?
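The spatial search described above really is trivial. The sketch below shows one way it could work: take a clicked point, return every known business within a radius, nearest first. The coordinates and business names are invented, and a production system would consult a spatial index (such as an R-tree) rather than scan a flat list; this is only a sketch of the idea.

```python
# Toy "What's Here" query: businesses within a radius of a clicked point.
# All coordinates and names are invented for illustration.
import math

businesses = [
    ("Coffee Corner", 33.5225, -117.7076),
    ("Dry Cleaner",   33.5228, -117.7081),
    ("Pet Supplies",  33.5301, -117.7150),  # well outside the radius below
]


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def whats_here(lat, lon, radius_m=250):
    """Return (name, distance_m) for nearby businesses, nearest first."""
    hits = [(haversine_m(lat, lon, blat, blon), name)
            for name, blat, blon in businesses]
    return [(name, round(d)) for d, name in sorted(hits) if d <= radius_m]


print(whats_here(33.5226, -117.7078))
```

Even this linear scan answers the question the current “what’s here” button does not: what is around the point I just clicked?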

Now back to the topic

One final note, a colleague mentioned some news about Apple Maps and the charge it is making in displaying business listings on its maps. Well, take a look at the next figure.

Apple Maps shows some business listings

Figure 19. Apple critically lags Google in providing business names on its maps. Worse yet, Apple does not appear to provide building outlines, nor have access to any specific referential data that might improve its geocoding. As a result, it relies on street-face geocoding to place most of the businesses it names. The result is that the majority of the businesses are named along the surrounding city streets, even though this is not where the businesses are located. In the shopping areas examined in this study, street face geocoding and mislocating the business is better than not locating the business at all, but not very valuable to the consumer. Unless Apple Maps changes its current strategy for naming and locating businesses, it will remain severely behind Google in the Names Race. One ray of hope, Apple showed some of the small businesses that Google is missing. How about that?
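Street-face geocoding, the fallback Apple appears to rely on here, is the classic address-range interpolation technique: an address is placed along a street segment in proportion to where it falls within the segment's address range, which is why the resulting points sit on the road rather than in the shopping center. The segment endpoints, address range, and target address below are all invented for illustration.

```python
# Sketch of street-face (address-range) interpolation. The point lands on
# the street centerline, not at the business's actual building.


def street_face_geocode(addr, lo, hi, start, end):
    """Interpolate a (lat, lon) along a street segment whose addresses
    run from lo at `start` to hi at `end`."""
    t = (addr - lo) / (hi - lo)
    return (start[0] + t * (end[0] - start[0]),
            start[1] + t * (end[1] - start[1]))


# Invented block numbered 27100-27200 between two endpoints.
pt = street_face_geocode(27150, 27100, 27200,
                         (33.5220, -117.7100), (33.5220, -117.7080))
print(pt)  # midway along the block, on the street, regardless of setback
```

Because the interpolation knows nothing about the parcel behind the street face, a business set hundreds of feet back behind a parking lot is still symbolized at the curb.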

General conclusions

Near the start of this blog I asked what Google’s performance in displaying business names on maps could tell me about their strategies and challenges. My conclusions are quite simple:

1. Google is clearly working on improving the quality and abundance of the business listings shown on the basic Google Maps.
2. Google Maps continues to suffer from incomplete coverage of business listings and complicates this with less than accurate locations used to show the businesses that do appear on their maps.
3. Google Maps clearly under-serves the small business community. Although it has worked on strategies for improving this coverage it does not appear to have a significant advantage in this segment.
4. While the businesses in an area that are not shown on their maps appear to be known to Google, for some reason the company appears not to have enough confidence in these listings to present them as included content on their basic maps.
5. It remains unclear to me whether the incomplete listings reflect a quality control issue or indicate that Google does not feel a need to provide complete coverage of businesses on its maps.
6. The completeness of business listings on Google Maps is a critical issue with implications for the utility of Google maps, as well as their ability to support navigation and autonomous vehicles in the future.
7. Google is obviously spending cycles trying to improve the quality and comprehensiveness of their business listing data and likely spending a ton of money to do so. What are its competitors going to do to compete if Google creates a highly accurate business listings database to go with its highly accurate maps? Either pay the toll, be shut out of the market, or continue to wonder why inferior products don’t excel in the marketplace.

There were several additional interesting items in the data collected for this study, but this blog is already too long and I have some Christmas shopping waiting for me. So with that, I send my warmest wishes for a Happy Holiday Season and may 2015 be your best year yet!

Dr. Mike


Posted in Authority and mapping, Categorization, Google, Google maps, Local Search, Mapping, Mike Dobson, business listings, google map updates, map updating, mapping business listings, updating business listings | No Comments »

Rumors Run Rampant – MapQuest on the Outside – another map engine on the inside?

September 10th, 2014 by admin

I heard what I will call a “rumor” this morning, but suspect that it was a statement of fact. If I am wrong, I apologize in advance. As you know, there are shades of rumors, so I will add some color where I have additional information.

It appears that AOL has quietly begun the shutdown of parts of the MapQuest operation. A few weeks ago the announcement was made at MapQuest’s back-end operations hub in Lancaster, PA. Some members of the engineering staff were let go then, more will be released in November, and the operation in Lancaster is scheduled to close in March of 2015. Perhaps most important here is the fact that Lancaster is the back-end mapping operation for MapQuest. One would think that if they intended to keep the engineering operation in-house, they would have moved the engineering team to Denver (where the rest of the MapQuest group operates). It is my opinion that AOL intends to contract with another service to provide the mapping engine for MapQuest. Well, whatever the case, this is a rumor, but, I think, specific timing notwithstanding, the strategy in the story has been in the oven at AOL for quite some time.

As some of you may know, MapQuest was once the leading provider of online maps and routes. Its historical trail involved a number of companies headed by Barry Glick and culminated in the property that eventually became MapQuest being acquired by Donnelley Cartographic Services, an organization that made maps for print publishers, but was wise enough to see the future of online mapping. Though the future was unclear, Barry and his successors navigated the road ahead and took MapQuest to a successful IPO, followed by the acquisition of the company by AOL.

MapQuest was the King of the Road in online mapping until it began to encounter a headwind from Google Maps. Of course, there were other earlier competitors than Google, but one-by-one these pretenders became irrelevant, fell into decline and ceased operations. The few that survived continue in business, but remain minor footnotes in the market.

It is somewhat interesting to note that the demographic that was attracted to MapQuest on the Internet was an older than average, mature audience. It is thought by many that the original audience for MapQuest continues with the service even today, with some of those loyal customers still printing out routing instructions rather than using route guidance through smart phones or other personal navigation devices.

The problem that nagged MapQuest’s planned IPO was a lack of revenue. Suffering from a real-world case of the “Innovator’s Dilemma,” MapQuest was a product that no one requested. When launched it was a “give-away,” a status that it could not escape once the genie was out of the bottle. Indeed, the remains of the numerous map-making companies littering the roadsides today are a result of “free” Internet maps. Unfortunately, while MapQuest was able to overcome the perception of the “operating at a loss” problem by up-selling its popularity when the stock market for Internet properties was “incandescent,” the problem did not disappear. The lack-of-revenue issue was knowingly acquired by AOL, whose executives were sure they could monetize the product line. Unfortunately, the strategies they implemented to cure the revenue problem failed and, when altered, failed again.

Those of you who read this blog may have noted that on several occasions I have indicated that Google is in the advertising business and that mapping, a side-line, was integrated into their strategy as a method of selling more advertising, especially location-based advertising. While MapQuest tried the advertising gambit, even Google advertising, it could not generate enough revenue to cover operating expenses. It’s likely that even Google has made its investment in mapping with little hope of recovering its map compilation and serving expenses. However, its map base provides advantages to the company in advertising and beyond that prompt it to continue its massive investment, at least for a while.

The important note here is that online maps have produced a state of disequilibrium in the market – one in which the revenue results not from the sale or the use of maps, but from the advantages that maps can bring to other product lines over long periods of time. I think you all know the budget battles that must ensue at Google about who is paying for what and why this new mapping initiative deserves to be funded. If Google has not had these arguments yet, I guarantee you that they will in the future.

What we are left with is an unbalanced market where Google, HERE (Nokia) and Apple will remain the major players. I suspect that AOL will replace MapQuest with either HERE maps or Google Maps, but in any event, MapQuest, an American original, will soon be no more than a shell of its past glories.

My hat is off to the stalwarts that created, popularized and polished MapQuest. You did your profession and your company a great service. AOL? Well, they never seemed to understand mapping, or the use of spatial data. Perhaps more importantly, it appears that the executives did not understand how to manage MapQuest to success. I understand that both MapQuest and AOL Search report to the AOL Chief Analytics Officer. I am sure that monetizing spatial data is not one of his competencies. After all, map data and map engines are generic – at least to those who know nothing about either!

It is likely that switching the MapQuest engine is “merely” a matter of expense for AOL. Too bad they did not see the promise of MapQuest, but “buyer’s remorse” is a terrible thing and usually leads, as it did in this case, to limited investment for new product development. Speaking of “buyer’s remorse” it is an issue that may be endemic in the mapping industry, as HERE continues to muddle making a success of the former Navteq and TomTom is rumored to be in a dither about mapping expenses from the former Tele Atlas.

One more thing – there are some highly talented software engineers from MapQuest now available. I found a few of them on LinkedIn – take a look if interested.

I hope your Labor Day Holiday was relaxing and rewarding,

Dr. Mike

Bookmark and Share

Posted in Apple, Geospatial, Google, Google maps, HERE Maps, MapQuest, Mapping, Mike Dobson, Navteq, Nokia, Personal Navigation, Tele Atlas, TomTom, local search advertising, map compilation | 2 Comments »

Hear HERE – “Halbherr is Not Here Anymore.”

August 21st, 2014 by admin

You know, sometimes you just get a feeling about things and I have that feeling about HERE, so I wrote this blog. I do not have powerful data to support my opinions, but my time in mapping tells me that my feelings on the issues are right, or as close to right as one can be without having inside information – which I do not have. This is not a “How to win friends and attract clients” topic, but let’s have a go anyway.

Yesterday I read an article detailing the departure of Michael Halbherr who was the head of HERE, Nokia’s mapping division.

I chortled to myself when the reporter of the article noted that HERE, according to nameless analysts, “…could fetch around $6 billion if Nokia decided to sell the unit.” What the reporter did not mention is that Nokia paid $8.1 billion for NAVTEQ (now HERE) in 2008. Presuming a conservative, cumulative rate of inflation between then and now, the purchase price in 2014 dollars would represent approximately $9 billion. Through superior management, groundbreaking strategic planning, and the reduction of budgets for compiling map databases, Nokia has managed to lop off $3 billion in value in 6 years. In addition, some rights related to the use of HERE’s map data were provided to Microsoft as part of the interestingly engineered purchase of Nokia’s handset division. Whether a potential acquirer of HERE would pay $6 billion might depend on confidential aspects of the grant of rights that Microsoft negotiated with Nokia regarding their future use of HERE data. Guess this means that the loss might be even greater than $3 billion after the bids fly.
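The inflation adjustment above is easy to check with a back-of-the-envelope calculation. The ~1.8% average annual inflation rate below is my own illustrative assumption, not a figure from the article:

```python
# Rough check of the purchase-price adjustment discussed above.
# The 1.8%/yr average US inflation rate for 2008-2014 is an assumed figure.
purchase_2008 = 8.1          # $ billions, Nokia's price for NAVTEQ
annual_inflation = 0.018     # assumed average annual rate
years = 6

adjusted = purchase_2008 * (1 + annual_inflation) ** years
rumored_value = 6.0          # $ billions, the analysts' estimate

print(f"2008 price in 2014 dollars: ~${adjusted:.1f}B")
print(f"Implied loss in value:      ~${adjusted - rumored_value:.1f}B")
```

With those assumptions the adjusted price lands right around the $9 billion cited above, and the implied loss around $3 billion.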

According to the Times article, Halbherr, who reportedly had clashed with Nokia’s new Chief Executive Rajeev Suri, was leaving the company “…to pursue his own ‘entrepreneurial interests outside the company’.” The departure should generate a doozy of a non-disclosure, non-compete agreement. Imagine that, leaving a unit that could be the target for an acquisition to pursue your own entrepreneurial interests. I think I would have waited to see what happened – unless of course nothing was going to happen. On the other hand, I might leave if I was interested in joining a group that was interested in buying HERE – especially if the potential acquiring entity was a financial buyer (more later).

I am sure that Halbherr was frustrated with Nokia’s management. I am equally certain that Nokia was frustrated with Halbherr’s lack of strategic focus.

Let’s look at the situation from a few perspectives.

2007 – 2012

While thinking about Halbherr’s departure I noted that I had written some sage words about the deal at the time that it happened. In a 2007 report I opined that the acquisition would create new competitors in the mapping industry. The reasons for my conclusion were:

    1. The deal (along with the acquisition of Tele Atlas by TomTom) would create uncertainty surrounding the supply of data in various industry segments (Automotive, Mobile and Online).
    2. PNDs (personal navigation devices such as Garmin or TomTom units) exploded into the market in 2005-2007, but it was likely that the volume sold would rapidly decline in the face of the migration of navigation and mapping to the smartphone.
    3. The deal would generate pricing concerns in the industry due to the consolidation of map database suppliers.
    4. Other strategic considerations surrounding the acquisition might reduce NAVTEQ’s effectiveness in the marketplace (competition, brand management, integration with Nokia, etc.)
    5. A belief that new technologies and new approaches to map compilation might lower the cost of map data collection.

I note, in retrospect, that all of these events transpired and none of the outcomes were beneficial to Nokia.

My understanding is that life in NAVTEQ-land came to a standstill after the acquisition. Nokia botched the integration and then fired many of NAVTEQ’s managers who understood both mapping and the target industries that could have been further exploited by Nokia. Examples of marketing errors abound, but let’s not forget one of my personal favorites – OVI Maps.

Several companies integrated new technologies to reduce the cost of data collection. Google developed its mobile location pod that generated Street View and numerous other pieces of data that could be used to reduce the cost of map compilation, while expanding the currentness and breadth of the data that was needed to create a modern mapping system. Crowdsourcing changed the face of the industry and its economics, but Navteq was slow to embrace it, and even slower to find a way to adequately compete with Google’s fleet of low-cost map data collection vehicles.

PNDs took a calamitous nose dive as expected, but Nokia was not quick enough in integrating mapping applications into its phone-based ecosystem.

2012 – 2014

Given the declining performance of HERE in the online market and its seeming inability to keep competitors out of its niche in the dashboards and data buses of motor vehicles, it is my opinion that HERE’s main weakness is (and has for some time been) underfunding the expansion and continued development of its map/navigation database.

My belief is that HERE has grown into a marketing driven data company that wears blinders. The industry reports are that HERE does not listen to its customers. Furthermore, the company has been slow to produce the data and innovative services that its customers need.

Lastly, HERE has been seen by Nokia’s management as an embarrassment to, as well as a leaderless entity within, the Nokia family of companies.

Future Issues

I doubt that Nokia can afford to compete in the mapping wars with Google and Apple.

The last potentially meaningful data we have on Navteq’s database development and delivery was that it spent $273 million in the first nine months of 2007 in the pursuit of a better database. At that time the amount that NAVTEQ spent during a partial year far outstripped the combined expenses of all of its competitors for several years. I doubt that HERE is now spending at a comparable rate to maintain and expand its database.

While the amounts that Google is spending to expand its mapping/navigation/location databases may not be sustainable in the long run, the large investments in mapping that they continue to be willing to make have put them far ahead of anyone in the mapping arms race. Perhaps the more important issue is that few potential acquirers will understand that the amount of money HERE is spending on database development is grossly inadequate to grow the business in the mapping/navigation/location marketplace or to compete with Google or Apple in the future.

Buyer, buyer, who has the buyer?

First, I am not sure that there are many strategic buyers who would be interested in HERE.

I have heard from my contacts that Samsung was once interested, but its recent financial difficulties seem to preclude taking such a step. Microsoft could be a potential buyer, but seems to be looking inward and may have already made a bad investment in Nokia that could sour future interest in another Nokia-related acquisition. Certainly it is possible that some strategic investor may try to low-ball a bid for an entity that Nokia no longer wants, but I am not sure any company that might make such a bid will have either the wherewithal to manage the company to success or the bankroll to make it competitive.

Intel tried to get in to mapping/location with its acquisition of Telmap, but has already closed that company and taken another approach to the location market. Other potential buyers may be out there, perhaps in the form of companies in the mobile phone ecosystem.

The alternative scenario is that a financial buyer will see HERE as an opportunity to make beaucoup bucks (esoteric financial term). What a horrible mistake that would be for them and HERE!

Strategic buyers often find it necessary to reduce the expenses of an acquired company to make it look like an acquisition is working to generate new profits, but in the case of the underfunded HERE, this could be a disaster. Financial buyers usually take the same approach, but often complicate the situation by adding a layer of management representing the values of the financial investors. Often, the new CEO will be a person who met a principal of the new owners while playing golf. The financial manager will note that this potential CEO once used a folded map and says that they really like Google Maps and Street View. He or she will have been the head of marketing for a consumer products company.

Oh, my head already hurts for HERE. The common approach of financial buyers is that they will flip rather than fix companies that underperform. It appears that turning a company around degrades the time value of money too much. My guess is that it would take at least three years to transform HERE into the formidable machine that was once NAVTEQ.

The sales pitch during the acquisition will be, “We don’t know anything about your business (true), and we will not try to manage it for you (false).” They may not know about HERE’s business, but their bid will be based on how well they think the HERE business could/should perform and how far they think they can beat the fat out of its management based on expense considerations, not strategic considerations. And when the business does not perform well after these reductions, they will cut the budget again – and, at that point, it will become abundantly clear that the acquirer knew little about the business of HERE.

Of course, in this case such a lack of experience is to be expected. Who does know much about running a map and navigation business nowadays? Neither Google nor Apple is an independent mapping company. They are companies that forward-integrated into mapping to expand their core businesses. Perhaps this is the road that needs to be taken with HERE, although this is the failed road that Nokia took in 2007.


After I finished reading the Halbherr resignation article I laughed once again, but this time at the irony of the whole thing. After all, the logical endpoint of Nokia crashing HERE is a duopoly in mapping of Google and Apple. Imagine that – in 2007 the duopoly fears focused on NAVTEQ and Tele Atlas going to Nokia and TomTom, companies that would have an insurmountable lead in the world of mapping. So much for insurmountable leads. TomTom…TeleAtlas…. Don’t even ask!

By the way, I have been working on a piece about symbolizing maps for those lacking a background in the language of maps. Every time I pick it up I realize that it would take a book to do the topic justice, but I do not want to write a book. So I start rewriting. Someday I hope to finish the article, but it’s a pain trying to find the right approach and tell the story concisely. Sorry for the delay.

One more thing – it’s late (about 2:15 AM PDT) and I apologize for any typos in this blog. My eyes are just too tired to “see” them.

Until next time.

Dr. Mike

Bookmark and Share

Posted in Apple, Garmin, Google, HERE Maps, Mapping, Microsoft, Mike Dobson, Navteq, Nokia, crowdsourced map data, map compilation | Comments Off

eParking – Some Things You Need to Know

May 21st, 2014 by admin

Slightly over a year ago I had two brief, unplanned, interesting exposures to the world of eParking services. First, a former colleague at go2 Systems asked me to meet with the CEO and co-founder of ParkMe. As is usual for these types of invitations, I spent serious research time on the company, its personnel and their business model. Subsequently I met with members of the management team of ParkMe to explore opportunities, but did no work for them. Several months later I was asked to conduct a brief study of another eParking business in which a former colleague from Rand McNally had invested. The target company of that examination was ParkWhiz.

To be honest, before I had visited these companies I was not particularly interested in the “parking” space. Due to these two chance encounters I came to realize that parking services represent an interesting market whose optimization may be a possible key to reducing traffic congestion, while providing a much needed improvement in customer satisfaction in directed, automobile-based travel situations.

It is my opinion that the unexpected encounters with ParkMe and ParkWhiz exposed me to two of the leading competitors in this market, whose strategies, though significantly different, typify the distinction in the eParking market. Both companies approach eParking as a national market, although in ParkMe’s case, they collect international data (and claim to include Antarctica). Strong competitors in the eParking space include SpotHero, Parking Panda, Gotta Park, ParkHub, Click and Park, as well as numerous others. My interest here, however, is not to discuss the merits of any specific company in this space, but merely to recommend that you spend some time looking at this market, as I think that parking services in the near future will become a must-have feature for companies providing navigation and routing services (e.g. Google, Apple, Nokia, TomTom).

Some Details on eParking

1. The size of the parking industry has been estimated at $30 billion. As might be expected, there is a strong correlation with population. According to research by the National Parking Association*, five states (California, New York, Texas, Florida and Illinois) generate slightly more than half of the total parking revenues from lots/garages. In addition, the states mentioned include at least half of the parking facilities in the nation. The leading segments providing parking are commercial/owner-operator facilities, colleges and universities, hospitals, municipalities and retail/shopping centers. For practical purposes the market can be further broken down into on-street and off-street (lot/structure) parking. The market for off-street parking is approximately twice as large as the on-street market.

2. Parking has remained a local product and many owner-operators do not advertise their service. Instead, they rely on local knowledge of their business to attract steady customers (weekly, monthly) and rely on location and access to nearby facilities to “capitalize” on transient customers (hourly), such as those generated by service- or shopping-oriented businesses.

3. The eParking world appears divided in its approach to monetizing the world of parking. Some companies are attempting to build large inventories of data on parking facilities (even at the street-space level) including attributes such as location, hours of operation, spaces available, cost, etc. These companies appear to be focused on becoming the “premier” supplier of parking data to the navigation industry, believing that the addition of data on parking is a natural extension of navigation and routing systems. (Note that several of the “data” companies have felt the heat from the next category of providers I describe and have affiliated with some of these companies to provide services that expand beyond data.)

Other companies, while amassing large, detailed, databases of parking data view that information as a component of a service business primarily designed to allow users to book (reserve) parking ahead of time, for example while on the road as part of a journey. Companies using this strategy see eParking as both a service business and data licensing enterprise. Drivers can use this type of service to save time, money and gain peace of mind when they need to find a parking space near their destination. In addition, the participating parking owner/operators may benefit from this association through improved inventory management and branding.

It is my opinion that this latter class of competitor wants to influence the distribution of parking information on a just-in-time basis. This segment of the eParking space hopes to serve the parking lot owner/operator by managing their parking spaces in a manner that reflects demand propagated by exposure to a broader audience of potential customers than could be generated by the parking enterprise acting alone. In addition, those playing this intermediary-role could provide valuable services to the lot owner, in addition to the obvious advantages in yield management for selling inventory. In essence, the players in this transactional segment of the eParking market want to become integrators providing value added services that make customers of: drivers needing parking services, owner/operators of parking facilities needing to fill parking spaces, and navigation services providers looking to build offerings integrating new, spatially-targeted advertising opportunities.

4. Parking services, while considered a national market, operate mainly as a local business and strong local competitors exist. The same divisions are true in eParking. However, the end-game for most participants in eParking is acquisition by a company that could benefit from owning a parking data provider or parking services provider. It is in this sense that national data and distribution may better position players in the eParking segment for the ultimate end-game.

5. Compiling data on parking spaces, lots, garages and other facilities is a difficult task. All companies in eParking use specific forms of data compilation and many use hybrid methods that combine aspects of the hands-on and hands-off approaches. Which techniques are used usually depends on the nuances of specific markets.

Hands-off techniques view the compilation task solely as a data-gathering operation. Usually field teams (often stringers) canvass an area, gather visible information from inspection (address and other contact information, signage, costs, capacity, etc.) and take photos of the location for later data mining.

The hands-on approach often encompasses the above actions, as well as site visits to determine attributes not immediately visible from the street. In addition, these visits usually include a dialog with the owner operator about the specifics of their facility, as well as discussion concerning the integration methods for representing the parking inventory in an online reservation system.

6. Some players in the eParking market actively partner with automated, real-time municipal parking reporting systems, since doing so allows the end-user of the service to determine how many parking spaces are available at a facility in near-real-time. Coupling this knowledge with pricing makes an effective tool for consumers looking for economical parking availability throughout the day (as parking spaces are often day-parted, increasing in cost at the most congested hours).
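As a minimal sketch of how near-real-time space counts might be combined with day-parted rates to guide a consumer, consider the following. The facility names, rate table, and feed format are all invented for illustration; no real municipal reporting system is being modeled:

```python
# Hypothetical sketch: combine a municipal real-time space count with
# day-parted rates to rank nearby facilities for a driver.
# Facility names, rates, and the feed format are invented for illustration.

DAY_PART_RATES = {            # hour ranges -> $/hour (hypothetical day-parting)
    (0, 6): 1.00,
    (6, 10): 4.00,            # morning peak
    (10, 16): 2.50,
    (16, 19): 4.50,           # evening peak
    (19, 24): 1.50,
}

def hourly_rate(hour):
    """Return the posted rate for the day-part containing `hour`."""
    for (start, end), rate in DAY_PART_RATES.items():
        if start <= hour < end:
            return rate
    raise ValueError(f"no day-part covers hour {hour}")

def rank_facilities(facilities, hour):
    """Keep facilities with free spaces, sorted by effective cost."""
    open_now = [f for f in facilities if f["spaces_free"] > 0]
    return sorted(open_now, key=lambda f: f["multiplier"] * hourly_rate(hour))

feed = [  # stand-in for a near-real-time municipal reporting feed
    {"name": "Lot A", "spaces_free": 12, "multiplier": 1.0},
    {"name": "Garage B", "spaces_free": 0, "multiplier": 0.8},
    {"name": "Lot C", "spaces_free": 3, "multiplier": 1.4},
]

for f in rank_facilities(feed, hour=17):
    print(f'{f["name"]}: ${f["multiplier"] * hourly_rate(17):.2f}/hr, '
          f'{f["spaces_free"]} free spaces')
```

At the 5 PM peak, the full-but-cheap garage drops out of the list entirely, which is exactly the kind of coupling of availability and pricing described above.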

7. In general, companies in the eParking space do not provide their own mapping service. Google appears to be the preferred provider (and, perhaps, the preferred acquirer) of many, but not all, of the companies mentioned in this article.

Navigation – Why Parking Information is an Outstanding Need

1. Donald Shoup, a noted researcher in the field of traffic studies, indicated that, on the average, people cruising for a parking space account for a thirty-percent share of local traffic.** If Shoup’s finding is true, the amounts of wasted time, gasoline, pollution and frustration are reasons enough to want to solve the “parking” problem. Of course, dire factoids rarely convince anyone of anything, so let’s think some more about the eParking opportunity.

2. If you were to ask my opinion on an “ideal” navigation system, it would be one that solicited my intended destination, and, then, suggested nearby parking for my consideration, before routing me to my parking preference closest to my intended destination. Yeah, that’s right – how many places do you navigate to and not park?

I usually arrive at a destination and then spend too much time figuring out where I can leave my car without getting a ticket – or trying to create a path back to the address (location) of my destination from where I parked my car.

Like everybody else, I use routing to find locations I have not visited before. If I am traveling in a suburban area and do not know where the address is located, it’s almost certain that I do not know where nearby parking is located, but I always assume that street parking should be available. Usually I arrive and then drive around for a while trying to find a parking place, relenting and choosing a parking lot or garage when the no-cost option fails. For locations in cities (and when on business) I don’t even consider street parking; I visually hunt for the parking garage closest to my destination and take a space, if the cost is not stratospheric. If it is and, if I have time, I may continue my hunt for lower-priced parking.

I do not own a PND or know of a routing application that offers me truly integrated parking services – that is, the option to navigate to the closest parking to the address I have entered as a destination. I am not saying that systems cannot route me to parking lots whose address information I could find near a destination by searching the map, or by searching for a parking lot near a destination using one of the companies mentioned above. However, I want an eParking reservation service that provides information on the parking available (rates, etc.) near my destination, integrated into a navigation engine, so that I can enter my destination, choose my parking option (reserve/pay), and be routed to the parking facility. I also want a walking map from the parking garage to my destination, and I want that route to describe restaurants, points of interest and other information (even ads) that might be of use to me as I pass by on my way to my end-destination.
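The flow just described – destination in, parking choice, then a driving leg to the lot and a walking leg to the door – can be sketched in a few lines. Everything here is a hypothetical stand-in (the lot inventory, the naive "cheapest nearby" policy, the field names); no real navigation or eParking API is being modeled:

```python
# Sketch of a parking-aware trip planner, as described above.
# All data and policy choices are invented for illustration.

def find_parking(lots, max_walk_minutes=10):
    """Return lots within walking range of the destination, cheapest first."""
    nearby = [l for l in lots if l["walk_minutes"] <= max_walk_minutes]
    return sorted(nearby, key=lambda l: l["rate_per_hour"])

def plan_trip(destination, lots):
    """Destination in; a reserved lot plus drive and walk legs out."""
    options = find_parking(lots)
    if not options:
        return {"destination": destination, "parking": None}
    lot = options[0]                          # naive policy: cheapest nearby lot
    return {
        "destination": destination,
        "parking": lot["name"],               # a reserve/pay step would go here
        "drive_to": lot["name"],              # driving route ends at the lot
        "walk_minutes": lot["walk_minutes"],  # garage-to-door walking leg
    }

lots = [  # invented example inventory near the destination
    {"name": "Main St Garage", "rate_per_hour": 3.0, "walk_minutes": 4},
    {"name": "Harbor Lot", "rate_per_hour": 2.0, "walk_minutes": 8},
    {"name": "Remote Lot", "rate_per_hour": 1.0, "walk_minutes": 45},
]

print(plan_trip("123 Elm St", lots))
```

Note that the cheapest lot overall loses to the cheapest lot within walking range – the integration of the walking leg into the decision is the whole point of the "ideal" system described above.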

I realize that I could go to one of the parking services mentioned here, enter an address and get a map showing the parking details, but I want a route to the parking location, integrated with traffic, and other query capabilities. If Google, Apple, or Here is planning to provide this type of service, then it is likely that they will acquire one or more of the companies mentioned above to help them with this challenge.

3. I presume that I am not the only poor, lost soul looking for local parking and hoping that some major mapping/navigation/routing player integrates eParking services into their offerings.

4. Based on my personal experience Evanston, Illinois takes the cake when it comes to “Cruising for Parking”. Hmm – sounds like the title of a zippy new reality series. You read it here first!

If you have time off this Memorial Day weekend, I hope you enjoy it.


*Search for “Parking in Perspective: The Size and Scope of Parking in America”
**Shoup, Donald C., 2006, Cruising for Parking, Elsevier,Transport Policy 13 (2006) 479-486

Bookmark and Share

Posted in Data Sources, Geospatial, Geotargeting, Google maps, Local Search, Mike Dobson, Navteq, Nokia, TomTom, eParking, landmarks and navigation, local search advertising, map compilation | 3 Comments »

Google Maps and Search – Just what is that red line showing?

April 24th, 2014 by admin

In an earlier blog in this series I contemplated a future sea-level change in online mapping that would develop as an adjunct to the popular mapping systems that are provided by Google, HERE, Bing, Apple and others. These database mapping systems currently are mainly oriented towards providing detailed street-level coverage, since this information meets the fundamental needs of users for geo-search and navigation.

Most mapping products are designed to meet the needs of map providers for generating income. For instance, HERE generates income by providing mapping databases and software that cater to the in-car navigation markets, as well as to ADAS, and other systems designed to make car travel safer and more efficient. Google, on the other hand, generates significant income by integrating its mapping activities into various aspects of its complex system of advertising. In addition, Google is obviously interested in other markets for spatial data, such as those focused on GIS and intelligent/autonomous cars.

All of the companies mentioned above, also, have users/customers interested in viewing maps that tell stories by showing aspects of geography or geographical aspects of a company’s services. For example, the American Airlines map that was shown in the original article in this series was an example of American Airlines attempting to show its global reach using online maps as the story telling device.

Recently, the online mapping providers mentioned above have begun attempting to increase the functionality of their mapping systems by providing data that allows the generation of “quasi-reference maps” whose objectives appear to be similar in approach to those formerly popular as printed world atlas products. It is my opinion that attempts to create a dual- or multi-purpose production mapping system provisioned with the capability to publish both detailed street maps and world reference maps have been less than impressive. In part, this is due to these providers’ lack of familiarity with the intricacies of supporting the objectives, methods and presentational formats required to publish a wide range of mapping products from an integrated spatial database (e.g. street, reference and thematic maps).

In my opinion Google has made the most progress on dual-use maps and evidences a lot of promise for continued innovation. Even so, what they have created for us often does not make sense. Let’s look at one simple, fundamental object – geopolitical borders.

See the figures below for common examples of the complexity of harmonizing and generalizing multiple-purpose, multiple-source data bases. In these examples we can see that there are multiple representations of a feature in a database that has been designed to allow a variety of zoom levels. Each of the images was generated by typing a state or country name into the search bar that is a part of the Google Maps interface.

The targets of the searches are returned as a red outline, and this appears to be true whether you are searching for countries, states, counties, cities or other categories of political or, where available, postal geography. These representations of boundaries symbolized in red first appear in a small-scale representation encompassing the geography of the unit that was searched. Initially it appeared to me that each of these border representations disappeared at a preset zoom level that showed more map detail than the initial view, but the levels at which the boundaries symbolized in red disappeared (or at least segments of these borders disappeared) seemed to change both within and between classes of boundaries (e.g., international, state, county, city, etc.). Next, the levels at which the red lines on Google Maps disappear may be influenced by factors such as browser type and screen resolution, although I did not experiment specifically with these elements.
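The disappearing-red-line behavior is consistent with how zoom-gated styling generally works in tiled mapping systems: each layer is drawn only when the current zoom falls inside a [min_zoom, max_zoom) window. The sketch below illustrates the mechanism with invented values; it makes no claim about Google's actual style rules:

```python
# Hypothetical zoom-gated styling, of the kind that could produce the
# observed behavior: the red search outline vanishes past a zoom
# threshold while the gray administrative border persists.
# All zoom windows below are invented, not Google's actual rules.

STYLE_RULES = [
    {"layer": "search_highlight", "color": "red",  "min_zoom": 3, "max_zoom": 9},
    {"layer": "admin_border",     "color": "gray", "min_zoom": 6, "max_zoom": 20},
]

def visible_layers(zoom):
    """Return the boundary layers a renderer would draw at this zoom."""
    return [r["layer"] for r in STYLE_RULES
            if r["min_zoom"] <= zoom < r["max_zoom"]]

print(visible_layers(5))   # small scale: only the red search outline
print(visible_layers(12))  # zoomed in: outline gone, gray border remains
```

Per-layer zoom windows like these would also explain why segments of a boundary can vanish at different levels: nothing forces every feature class (international, state, county, city) to share the same window.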

Let’s search for “Canada” as an example of using the place name search functionality in Google Maps.

Result of Canada Search on Google Maps

For a larger version of this image, click here

Yes, we are presented with a representation of Canada on which a red line demarcates a boundary.

Let’s search for the United States.

Results for a search for the US when using Google Maps

Hmm, no red border. Maybe this omission is a geolocation feature? If so, Google should note that a recent piece of research suggested that some Americans living in the US thought that Ukraine was located within the US border. Perhaps showing the red border when someone searches for the U.S., similar to what is shown for other countries, might be useful. On the other hand, as we shall see, there is some question as to the nature of the “geography” that is actually being represented by the red outlines returned by Google and symbolized on their maps.

Look at the screenshots below and evaluate if any of these examples would inform someone who knew little geography and wanted to use Google maps as a reference source to help them understand the location of geographical borders.

Let’s replicate someone searching for the entity “California.”

California Search Result

To see a wider view, click here

What’s that dent in the northern border of California at Goose Lake? I didn’t know that Goose Lake was not part of California.

What specific border quality is that red line showing? If you look closely at Goose Lake you can see a gray dash symbolized across the lake that seems to follow the border between the two states as represented on official highway maps.

Let’s zoom in.


Hmm, guess I zoomed a little too much since the red line disappeared. However, what is shown appears to be one representation of the CA border that is similar to the one shown on the official state highway map. So what was that red line in the previous image?

Let’s zoom back out to bring back the red line.

Search and zoom out

It would appear that the red line, in this case, may demarcate the “land” boundary of California.

Let’s search for “Oregon” for support.

Oregon Search

Here Goose Lake is shown as excluded from Oregon by the red line.

Okay, so it looks like the red line is not a standard political border, but the land-water boundary that follows the land side of the border of the entity named. Yep, I searched for “California” again and Lake Tahoe was excluded by the red border. When I searched for “Nevada,” Lake Tahoe was not shown as part of NV.

Gee, that’s great, but where does Google tell the casual user the quality that the red line represents? In a page I found on Google Map legends the only reference to “red boundaries” was a note that “disputed” international borders were shown in red. However, I am not sure that the page I examined was authoritative, although it appeared on one of Google’s URLs. I doubt that the casual user would have any idea where to find out this sort of information. Indeed, I suspect that the casual user may, for example, search for California and conclude that Goose Lake, Lake Tahoe and various other water bodies are simply not part of California.

Oh, one other thing. It appears that the red boundaries are shown until you zoom down to the level at which another feature takes precedence and replaces the red line. In essence, the occurrence of the red line is variable and tied to the local geography represented in the viewport. For example, at the 20-mile zoom the Oregon boundary is still red.

Oregon boundary zoom

For a larger version, click here

But at the 10 mile zoom level the red is replaced by a gray dash that disappears when a more dominant feature in Google’s display hierarchy, such as a state or county road, is coincident with the border.

10 mile

For a larger version of this image, click here

This would seem to indicate that Google has implemented scale-variable presentation, a neat trick, but one that may make it difficult for the user searching for an entity and examining it at variable scales. However, when I searched for the borders of a few more states, I ran into additional situations that further clouded the identity of Google’s red line.
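For readers curious how scale-variable presentation is typically built, it usually amounts to zoom-conditioned style rules plus a display hierarchy that lets higher-priority features suppress lower ones. The sketch below is purely hypothetical: the zoom thresholds, style names and road-precedence rule are my guesses at the behavior observed above, not Google’s actual logic.

```python
# Hypothetical sketch of scale-variable symbology. Thresholds and style
# names are invented for illustration; they are not Google's rules.

def border_style(zoom_level, coincident_road=False):
    """Pick a border symbol for the current zoom, mimicking the observed
    behavior: a red outline at smaller scales, a gray dash when zoomed in,
    and nothing at all when a higher-priority feature (here, a coincident
    road) wins in the display hierarchy."""
    if coincident_road:
        return None            # the road symbol takes precedence on shared geometry
    if zoom_level <= 11:       # roughly the "20 mile" view and smaller scales
        return "red-solid"
    return "gray-dash"         # roughly the "10 mile" view and larger scales

print(border_style(10))                        # red-solid
print(border_style(13))                        # gray-dash
print(border_style(13, coincident_road=True))  # None
```

The trouble, as the Oregon examples show, is that each threshold crossing changes what the user believes the line *means*, not just how it looks.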

Here is a good example. Let’s search for “Maryland.”


For a larger version, click here

Oh, my – Look at those straight segments of red line in Chesapeake Bay. I am not sure what representational logic applies here since the red line no longer appears to be demarcating the same “landed-ness” quality that it appeared to demarcate in the maps of California, Oregon or Nevada.

Let’s take a closer look.


Yowee! How does this red line relate to the others we have viewed?

If I look at Google Earth’s coverage of Maryland, at a similar scale, I see the same border in white. Hmmm, this is getting more confusing.

Maryland in Google Earth

Well, at this juncture I was thoroughly confused about the “identity” of Google’s red line. I decided that I needed more data and returned to the start – Canada, one of my favorite places. So, I searched Google Maps for Canada, zoomed in, panned around, and found more interesting, but not reference-quality, data. Take a look at this –


For a larger version, click here

When I zoomed further most of the red border disappeared, but not all of it, a new red line behavior that I had not seen in our previous examples.


For a larger image, click here

How about that! What condition is the red line now representing? Hang on, it’s about to get stranger.

Even Stranger

To see a broader coverage extent, click here

I am not sure I understand anything about the red line now. In addition, I note that it does not appear to be coincident with some of the islands and other coastlines that it is supposed to follow. Who is editing this crap?

My head hurts

For a wider extent, click here

Look, more mismatches. Why the red line looks like it might be…oh DCW (Digital Chart of the World) or something similar. Anyway, maybe it’s just old, generalized data created from a spatial database designed for other circumstances. I guess this means that Google’s multiple representations of the same feature set have not been harmonized.

For a larger version, click here

Maybe the data on which the red line is based is so old that climate change has altered sea level or the extent of the icepack?
See this image

For a wider extent, click here

Well, regardless of the physical reality, it’s pretty sloppy representational work for an aspiring reference publisher.

I am sure that there is some reasonable explanation of the rationale for showing whatever it is that Google is showing with the red line. What the examples shown here point out is that it is difficult to compile a cartographic database and provision it with the diverse types of content needed to provide the range of data types and data elements required for presentation in a system that maps both detailed street data and more generalized regional or world reference information. It is, perhaps, even more difficult to harmonize (in terms of currentness and theme) and generalize these unique types of content when they are to be used in systems that provide output for numerous and wide-ranging map scales.

It is likely that all companies attempting this evolution from street map to reference map will run into numerous and substantive data quality problems. Two critical data quality issues that are clearly a problem for Google as evidenced by the above images are logical consistency (issues related to the correctness of the characteristics of a data set) and thematic accuracy (particularly in respect to non-quantitative attribute correctness and classification correctness). Unfortunately, this is just the tip of the data quality iceberg that Google and others are facing. It is the user and geographic literacy that suffer such attempts at experimentation.

From the perspective of the user, accepting the messages from these maps will depend on whether or not the spatial data is authoritative, coherently presented and understandable. We asked a simple and basic geography question and Google failed. For a company that has as a goal creating a perfect map of the world – well, they seem to have a very long road ahead.

As a final note, the question of revenue generation always is an issue with the production of maps. The red borders that Google shows may be as confusing as they appear to be because searches for the borders of countries and states may not serve the company’s financial interests in geolocation and navigation, which bring in lots of advertising revenue. Local borders are clearly more important. Why, just look at this coherent border for Charleston, South Carolina.


For a larger version, click here

You may have noted that on some of the illustrations linked to above (the “larger” illustrations hidden unless you click the link) the search tab often contains a sub-tab labelled “terrain.” If you click the “terrain” tab in the live version of a Google Maps search, the system will show you a version of the map with terrain shading and the correct geopolitical border for the entity searched. If only the red lines that Google presents when you search for a geographic entity showed the same borders.

Well, it seems that Google needs help. Send your border info to “wherezit@.” It doesn’t matter where you send your data because Google or the NSA will be able to find it. Of course, this brings us back to authoritative data, trusted data and the whole conundrum we discussed years ago. It makes me feel good knowing that my blogs are “timeless”. Hah!

Duane Marble would prefer my blogs to be “Typo-less”, but I would miss his caustic notes containing edits, so no go.

Until next time,

Dr. Mike


Posted in Apple, Authority and mapping, Bing maps, Categorization, Geospatial, Google, Google maps, HERE Maps, MapQuest, Mapping, Microsoft, Mike Dobson, Navteq, Nokia, Technology, map compilation, map updating | Comments Off

Does Anyone Need to Know Anything About Maps Anymore (2)?

February 20th, 2014 by admin

(This is NOT the blog I had planned next for the series, but it is one that may help clarify why this topic is of such significance. If you were not wild about the last blog, you might skip this one.)

In a comment on my last blog regarding cartographic knowledge, Pat McDevitt, VP of Engineering at AOL, formerly with MapQuest, TomTom and Tele Atlas, mentioned his interest in “map-like-graphics”, such as subway maps (see my response to his comment for more detail). In the 1980s, Barbara Bartz Petchenik coined a term for such displays by naming them “map-like-objects”, or MLOs. MLOs sacrifice some aspect of cartographic accuracy to promote easier understanding and use by a selected population. Let’s explore this concept a bit, as a discussion may help to further illustrate the points I was making in my last blog.

The class of MLOs that represent subway maps includes purpose-built graphics designed to help riders of these transportation systems understand how rail lines connect stations in a manner that can be used to plan journeys. Since the rider only can access and exit the trains at specific stops, the actual geometry of the network (in terms of distance and direction) is of inferior importance to creating a display that is readable, interpretable and actionable in a manner that allows the user to ride between an origin and an intended destination. The argument here is that while MLOs may sacrifice cartographic accuracy, they are tools that can be more effective than using an accurate, detailed map of the same spatial objects. If only the use-case were so simple! Let’s explore by personal example.
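First, though, the connectivity-over-geometry point can be made concrete with a toy sketch. The network and station names below are invented; the point is that a journey planner needs nothing but the station adjacency, exactly as a reader of a Tube-style map does, with no coordinates, distances or directions anywhere in sight.

```python
# A subway MLO only needs to preserve connectivity, not geometry. This toy
# route-finder plans a journey using nothing but the station graph.
# Station names are invented for illustration.
from collections import deque

# Adjacency list: which stations are directly linked by a train.
network = {
    "Ashford": ["Brook", "Castle"],
    "Brook":   ["Ashford", "Dover"],
    "Castle":  ["Ashford", "Dover"],
    "Dover":   ["Brook", "Castle", "Esher"],
    "Esher":   ["Dover"],
}

def plan_journey(origin, destination):
    """Breadth-first search: fewest stops, ignoring real-world distance."""
    queue = deque([[origin]])
    seen = {origin}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in network[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable on this network

print(plan_journey("Ashford", "Esher"))  # ['Ashford', 'Brook', 'Dover', 'Esher']
```

Note what is missing: nothing in this sketch tells the rider how far apart the stations really are, which is precisely the gap that gets riders (and my mental map of London, below) into trouble.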

I have visited London at least 20 times during the course of my adult life. I usually explore the city riding the London Underground to travel to a location near my planned destination. I admit, with some shame, that of all the urban geographies I have explored I know London’s geography the least well. I find this curious since this location is one of my favorite travel destinations. It is, also, a destination I have visited more frequently than other urban areas that I seem to be able to navigate with little problem.

During my visits to London I was bothered that the objective reality I gained while walking its streets seemed to conflict with where I expected the city’s spatial features to be located. While I was certain that some time/space perturbation was afoot, I was not sure if popping out of the Underground’s “wormholes/tube stations” so distorted my mental map of London that it could not be remediated.

More recently I started exploring the notion that my ill-conceived geography of London actually was a result of traveling using the Underground. I realized, after some consideration of the issue, that my “relative mental spatial reference” for the locations of features of interest in London was likely based on where the nearest tube station was positioned. What is problematic here is that my sense of the geography of the tube stations was informed by the Tube map. Was it really possible that I had used my knowledge of where stations were shown on the ubiquitous Tube map to inform the reality of my above ground wanderings on my probable location? Sounds like science fiction, but could it be true?

To that point, my irrational view of London’s geography might be because the Tube map includes a variety of planned misrepresentations, which you can read about in the article What does London’s Tube Map Really Look Like? Of additional relevance is a 2011 study by Zhan Guo called Mind the Map (a play on Mind the Gap – signage familiar to all who have ridden the Tube). Guo concluded that 30 percent of passengers take longer routes because the Tube map misrepresents distances between stations. (You can read a concise version of Guo’s report in the Daily Mail.)

Based on this brief diversion we might conclude that while MLOs can be useful, they may be extremely misleading. Many would say that the problems generated by MLOs result from the users of these maps employing them for purposes for which they were not intended. If that is so, maybe these map-like-objects should come with a use warning, like those on the mirrors of some American cars – perhaps something like:

This map probably represents a spatial area that is considerably larger than this paper/display screen. True distances, directions and spatial context are not represented correctly or reliably. Reliance on this map for any use, even riding the Tube, is not recommended and may result in serious injury, lost time, exposure to buskers, or other inconveniences. The publisher and producer of this map and related licensees are not responsible for errors of omission, commission, or other misrepresentations resulting from lack of cartographic knowledge, incompetency, lack of moral fortitude regarding international border disputes, editorial policies, advertorial policies or, more commonly, frequent cost avoidance cutbacks in map compilation efforts.

While such a warning might sound humorous (hopefully), the multiple-use issue is of considerable concern. While those who create MLOs may realize the shortcomings of this type of spatial display, I am not sure this knowledge is shared by users of the map. It is likely that a large proportion of the population that uses MLOs will be unaware of the limitations that complicate extending the use environment that the original MLO was designed to allow. In some ways the problem is similar to that experienced by the twenty-six percent of U.S. citizens who, having observed the sky (or not), concluded that the sun revolves around the earth!

The problem of representing spatial reality in maps is extremely difficult. People who use maps do so in one of several manners, but all of these uses involve, to some extent, answering the question “where?” In many cases map use is data driven, prompting people to browse the map in a manner that helps them organize it into familiar/understandable patterns.

To illustrate this case, imagine that you are viewing an election map displaying states that voted Republican (colored red) or Democrat (colored blue). Most people would explore this display by examining their home state, comparing other nearby states and then looking for the states that voted their preference, followed by those that supported the opposite side. The recollection that most people would have of this map is the patterns made by red and blue states and their spatial clustering across the extent of the map. Even the most cursory inspection of a map usually results in the acquisition of a pattern that is matched with other map patterns that users have acquired. The unfortunate complication here is that users do not know when they are observing an MLO that works well only for a selected purpose, or when they are observing a cartographic display that has been tightly controlled to produce a spatially accurate representation of the variable being mapped.

Perhaps more pernicious is the hybrid MLO. The American Airlines map that I showed last time was designed to function as an MLO, but was based on a highly accurate cartographic display. In addition, the map was created by a production system that was designed to produce both reference and detailed street maps, but apparently not to produce advertisements or MLOs. Imagine teasing the cartographic reality out of that map. Someone who had not seen a world map before might assume that the globe really does look like what was shown in that display. Well, so what?

I recently read an interesting article by Henry Petroski titled “Impossible Points, Erroneous Walks” (American Scientist, March-April 2014, Volume 102, Number 2, available only by subscription) that was brought to my attention by Dr. Duane Marble shortly after I published my last blog. Petroski, a noted author (he is a Professor of both Civil Engineering and History at Duke University), was railing about an illustration in the New York Times that incorrectly represented the scallops on a sharpened pencil. His thoughts on the seriousness of this seemingly modest error were equally true of MLOs. He wrote:

Books, newspapers, and magazines are also teachers, as are television and radio and the web, as well as the inescapable advertisements. Whether or not we are consciously aware of it, the whole of our everyday experience is an ongoing teaching event.

This is why it is important that what we and our children are exposed to in the broader culture be presented and represented accurately. The words and images that we encounter in common spaces can be no less influential in shaping our perception of the world than what we learn in a formal classroom setting. If we find ourselves surrounded by incorrect depictions of objects, our sense of how things look and work can become so skewed that we lose some of our sense of reality.

Petroski continues:

This is not to say that there is no room for imagination and creativity in both engineering and artistic work. But even the most abstract of ideas and renderings of them should follow rules of geometry, grammar and aesthetics that make them meaningful to venture capitalists and museum-goers alike. (Petroski, 2014, pp. 1-2, “Impossible Points, Erroneous Walks”)

There we have it. In the context of maps, we might substitute “But even the most abstract of spatial ideas and renderings of them should follow the rules of cartography, map grammar and the design of displays representing spatial distributions…” That of course would return us to the title of my last blog, which was “Does Anyone Need to Know Anything About Maps Anymore?” Of course they should! Next time, let’s resume discussing why this lack of cartographic insight will become a greater problem in the future of online mapping.

Thanks for visiting,

Dr. Mike


Posted in Authority and mapping, Geospatial, Mapping, Mike Dobson, map compilation | 1 Comment »

Does Anyone Need to Know Anything About Maps Anymore?

February 10th, 2014 by admin

As many of you may have noticed, Exploring Local has not been updated recently. For the last year I have been engaged as an expert witness in an issue involving the type of subject matter that I usually comment on in Exploring Local. Due to the sensitivity of the proceedings, I decided not to write any new blogs while I was engaged in the proceeding. Recently the matter concluded, and I intend to focus some of my time on issues related to mapping and location-based services that are of interest to me and that I would like to share with you. So, here we go –

A few months ago I saw a blurb on my LinkedIn page about a debate that was going on regarding maps in a forum titled “GIS and Technological Innovation.” You can find the article and some of the comments here, in case you do not belong to LinkedIn.

I cringed at the pejorative title of the argument, which was, “Do Programmers Really Make the Best Cartographers?” While this is not quite as ill-phrased as, “Do Code Monkeys Really Make Better Maps than Wonky Map Makers?”, somehow the original title seemed to not quite set the right tone. The most problematic issue with the original question, at least for me, was the lack of context. For example, my interest in the comparison was, “When doing what?” In essence, was the original question designed to explore 1) who writes the best code for cartographic applications, or 2) who makes the best maps using available applications? In my opinion, both questions are non-productive.

Let’s substitute these questions instead. First, “Does anyone know how to “make” maps (or mapping software) that effectively communicates the spatial information they were designed to convey?” If someone does know how to do this, the question of interest then becomes, “Do mapping systems permit their users to exercise these capabilities?” A third important question is, “Does anyone compile the spatial data that fuel mapping systems in a manner that accurately reports these data as they exist in the real world?”

Now, for purposes of continuing this argument, let’s make an assumption, though clearly not true, that all spatial databases are of equivalent quality. If we accept this position for purposes of exposition, then the next meaningful issue is, “Does the mapping system function to inform the reader of the spatial information it is designed to map in a manner that retains the fidelity of spatial relationships as they occur in the real world?” This leads us conceptually to a two-sided map-making platform; on one side we have the mapping functionality and on the other we have the actor who uses the functionality to prepare maps.

Analyzing the capabilities provided by software-based mapping programs will lead us to conclude that some level of cartographic practice has been embedded in all software systems designed to produce maps. I think we can agree that the software mapping tools convey someone’s (or some development team’s) understanding, hopefully informed by cartographic knowledge, of the functional requirements of a mapping system. These requirements, for example, might include consideration of the goals that use of the mapping tools should accomplish, how the tools should operate, how the desired capabilities of the tools might be formalized as functional software, and whether or not user input is allowed to modify the functionality in any meaningful way.

We should, also, acknowledge that some of the end-users of these systems may have knowledge of the cartographic process and seek to use these systems to create a map that melds the capabilities of the available software functionality modified by their personal experience with rendering spatial data. In practice, the use-situation is often constrained because many mapping applications, for example Bing Maps, Apple Maps, and Google Maps, are structured to meet a specific publishing goal that influences how the available software interacts with spatial data. While this potential limitation may influence how a person uses an online system to create maps other than those normally provided by the system, it does not teach away from the general tenet that knowledge of cartographic theory and practice should underlie how well maps function in communicating spatial information, regardless of who makes them or who creates the software functionality.

If software developers and modern cartographers have some degree of cartographic knowledge, where do they get it? Although there is a small (and declining) worldwide cadre of academic cartographers who continue to research improvements in the communication of spatial data using maps, there are just not that many people who benefit from or are even aware of these efforts. Conversely, even if the developer of an online mapping system has discovered accepted cartographic theory and practice and used it to shape the functionality of their software, the true test is whether or not its functionality can be harnessed to present data advantageously, that is, in a manner that accurately represents the spatial data. I think that this is the critical question that pervades all modern map use. Restated, we might ask, “Are the capabilities that mapping systems offer us today based on mapping engines whose developers and users (map makers) have been adequately informed on cartographic theory and practice?”

My response to this question is mixed. For example, most online mapping systems appear to have been developed by people who understand the mathematics of map projections, although they appear not to appreciate the use-limitations of projection types. Conversely, most online systems seem to have been developed without a clear understanding of the complexities of data categorization, classification and symbolization.

If I could get the online mappers to listen to me I would plead for them to include the famous “That’s Stupid” functionality, which automatically erases your map when you have created an illogical presentation or one that is misleading due to errors in representation, symbolization, generalization, classification, technique, etc. Of course, if such functionality were ever implemented, there might be no online mapping whatsoever.

Laugh if you will, but take a look at this fine example of modern online mapping brought to us by American Airlines as part of a recent promotion urging people to travel on a worldwide basis. The map appears to have been created by Microsoft and is copyrighted by both Nokia (HERE) and Microsoft (Bing).

American Airlines, Microsoft and Nokia give you the world and more.

Click here for a larger version of this map.

You may have noticed that you have a choice of visiting any of the approximately twenty-seven, apparently non-unique, continents (one representation of Europe seems to have mysteriously disappeared into the seam at the right edge of the map and does not show up on the left continuation). The map is exquisitely crafted using shaded relief, although I suppose this could be a representation of the earth during a previous ice age since there are no countries shown, nor airports with which to fly American Airlines.

I am not certain of the distances involved on the map as there is no scale. Although we know that the equatorial circumference of the earth is, oh – a) 24,901 miles (Google), b) 24,859.82 miles (Geography-About.com), c) 25,000 miles (AstroAnswers), d) 24,902 (Lyberty.com), or e) 24,900 (dummies.com). Don’t even ask about the polar circumference! Well, some measurement must be appropriate, but which one applies to the map in question? Further, where does it apply and how does it change over space?

Perhaps my interest in scale has been rendered a historical artifact, replaced by the ubiquitous use of “Zoom Level?” I presume you have heard modern “zoom level” conversations, as in, “These two towns are about an inch apart on my screen at zoom level 17. How far apart are they at zoom level 12? I don’t know, I don’t use Bing, I use Apple Maps and my screen has more pixels per inch than yours. Is that important?”
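For what it is worth, a zoom level can be converted into something scale-like, but only after you also pin down tile size, latitude and screen resolution, which is precisely the problem. Here is a sketch under standard Web Mercator assumptions (256-pixel tiles, WGS84 equatorial circumference); the dpi values are illustrative, not tied to any particular device.

```python
import math

TILE = 256                   # standard Web Mercator tile size in pixels
EQUATOR_M = 40_075_016.686   # WGS84 equatorial circumference in meters

def meters_per_pixel(zoom, latitude_deg):
    """Ground resolution of a Web Mercator map at a given zoom and latitude."""
    return EQUATOR_M * math.cos(math.radians(latitude_deg)) / (TILE * 2 ** zoom)

def representative_fraction(zoom, latitude_deg, dpi=96):
    """Approximate scale denominator: meters of ground per meter of screen."""
    pixels_per_meter = dpi / 0.0254   # 0.0254 m per inch
    return meters_per_pixel(zoom, latitude_deg) * pixels_per_meter

# The same zoom level means different ground resolutions at different latitudes...
print(round(meters_per_pixel(12, 0), 2))    # 38.22 m/px at the equator
print(round(meters_per_pixel(12, 60), 2))   # 19.11 m/px at 60 degrees north
# ...and different map scales on screens with different pixel densities.
print(round(representative_fraction(12, 0, dpi=96)))   # roughly 1:144,000
print(round(representative_fraction(12, 0, dpi=326)))  # roughly 1:490,000
```

Two screens at the same zoom level can thus display genuinely different map scales, which is why “an inch apart at zoom level 17” answers no question about distance.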

Why does this matter?

Without further belaboring the numerous problems with today’s most common mapping systems, it is important to note that online mapping is about to take a significant turn from street maps and simple navigation towards GIS and what might be called spatial inquiry systems. Users will benefit from a move beyond street maps to geographical inference engines that can answer user questions in a highly targeted spatial manner. However, much of the promise of these types of systems is based on understanding spatial data and methods used to represent it. In the next few blogs I will discuss where I think this evolution will take us in the online world of mapping and how we might get there by solving some interesting problems. However, I will likely mix in a few product reviews along the way, as there are a number of companies claiming some remarkable, but unlikely, potentials.

Until next time –


Dr. Mike


Posted in Apple, Bing maps, Geospatial, Mapping, Microsoft, Nokia, map compilation | 2 Comments »

Waze-Crazy – Would Facebook Drop a Billion on Waze?

May 9th, 2013 by admin

As often happens lately, I had no intention of writing about anything in the news due to lack of interest. However, Marc Prioleau wrote an article this morning on the rumor that Facebook would pay one billion dollars for Waze, and then wrote me to ask my thoughts. I, then, saw an article berating Google for not being the company that was buying Waze. It was at that point that I began thinking that I must have missed some major development at Waze. In turn, this idea prompted me to do some research and write a commentary on the potential acquisition.

In the spirit of openness, I was contacted by someone representing themselves as Facebook’s “location” guy shortly after my blog about the problems associated with the release of Apple Maps in 2012. We never connected. So, I do not have any contacts at Facebook, nor do I have any contacts at Waze with whom I am in communication. Also, in the spirit of openness, I thought about titling this essay “Startup Crowd-Sources Corporate Value.” So, let’s get going.

Waze describes itself as follows:

“After typing in their destination address, users just drive with the app open on their phone to passively contribute traffic and other road data, but they can also take a more active role by sharing road reports on accidents, police traps, or any other hazards along the way, helping to give other users in the area a ‘heads-up’ about what’s to come.”

“In addition to the local communities of drivers using the app, Waze is also home to an active community of online map editors who ensure that the data in their areas is as up-to-date as possible.”

At the end is a video, which can be linked to from the above referenced About Us page on the Waze website. The video ends with a note to this effect – “Keep in mind that Waze is a driver-generated service. Just, as other user-generated projects, it depends on user participation. Your patience and participation are essential.”

I don’t know about you, but if Waze is going to pick up a billion bucks based on my labor, I’d want more than a note indicating that my participation and patience were essential to their success. However, the more interesting question is whether or not Waze is worth $1,000,000,000.00.

To get my arms around valuing Waze I decided to go through a brief acquisition checklist

What is it that is worth a billion dollars at Waze?

Brand? No.

Waze is a minor brand that remains generally unknown around the world. I think it might be difficult to put a high valuation on a company whose product is crowd-sourced and whose brand represents the industrious endeavors and lacks of its audience. Note that use of “lacks” here does not indicate that these people are dolts, rather that the user profile is likely not uniform (standardized) or distributed uniformly across geographical space. In turn, this suggests that the product is not uniformly accurate across that same space. As a consequence, the brand’s value may exhibit a significant spatial and temporal variation.

Distribution/Users? No.

Wikipedia claims that Waze was downloaded 12 million times worldwide by January 2012 and 20 million times by July 2012. By the end of 2012, according to Waze, that number had increased to 36 million drivers. Today, there are apparently 44 million users. To be honest, I am not sure how to parse the information on downloads. Downloads do not indicate active users, nor do they indicate geographic coverage or the ability to harvest GPS traces or active crowdsourced updates in specific geographies.

Next, I am not sure how Waze measures users, nor was I able to find any definitive source for this information. I doubt that it has 44 million active users. An article in the Huffington Post indicates that Berg Insight, a Swedish market research firm, says Waze has from 12 million to 13 million monthly active users. If Berg Insight is correct, then the Waze contributors are likely spread thin on a worldwide basis and likely concentrated in a modest number of large urban areas. In addition, given the reported number of users and the growth of the company, the length of time that active Waze users have been contributing GPS traces and active updates appears to be quite limited.

So distribution remains unknown, except, perhaps, to Waze. However, even if they could validate the number of reliable active users, it remains unclear how those users are distributed across the geographical space of Waze’s target markets.

Finally, another problem is the type of driving usually performed by Waze users. Are the majority of the miles traced those showing a repetitive journey to work five days a week? I suspect this is a large portion of the tracks they receive. If this is true, then their distribution is likely quite limited in terms of the uniformity and the extent of geographic coverage.

Intellectual Property, Trade Secrets, Know-how? No.

Waze has 100 employees. I am sure that they are bright, energetic and extremely capable. I doubt that what they may know, have codified, filed as patent applications or hold as trade secrets is worth anything near a billion dollars. After all, it is not that other people are ignorant on the topic of how to build a crowdsourced mapping system.

Map Database? No.

Waze claims that in 2012 its users made 500 million map edits and upgraded the map to reflect 1.7 million changes on-the-ground that took place in 110 countries with community-edited maps. Ok, just what does this stuff really mean?

Updates may merely reflect the poor quality of a map base or even the lack of a map base available to Waze for its customers’ use. The number of countries involved does not necessarily indicate that the company has complete, up-to-date coverage in any of these countries. More problematically, I suspect that Waze has no objective method of assessing the accuracy of its maps compared to other sources. For those of you who need a short primer on spatial data quality, see my blog on the Apple Maps fiasco; spatial data quality is the reason Apple got a failing grade on its product rollout.

Again, the issue here is how many users have been contributing GPS traces and active edits and over what period of time. It appears to me that the time horizon of Waze is too short to have created a map database of considerable value.

Other Assets (intangibles)? No.

Waze has some uniquely capable people and assets, but, for me, they do not tip the scales at a billion dollars.

Is the whole worth more than the sum of the parts? No.

I just can’t get to the billion dollar number no matter how I combine the basic facts. I have read the articles indicating that Facebook needs its own map base so it can customize it for mobile advertising, or that it needs its own map database in order to compete in the mobile location market. I suppose a company can convince itself of anything, and Facebook may have crossed the chasm based on these types of assumptions. If so, I think they are wandering in a labyrinth of strategic blunders.

Yes, they could wind up with their own map database, but I suspect that this purchase will, from day one, be a headache in terms of spatial data quality. Facebook will spend more money fixing and tuning the Waze database than if they had licensed a database from Nokia or TomTom, or from a collection of companies, as has Apple. In turn, the adoption of their “mapping product” by the market might be significantly delayed.

The more serious issue is that dealing with the quality of the Waze database and integrating the database with other Facebook applications will subtract cycles from their efforts in areas that are core to building a successful Facebook mobile business. In the end, Facebook will come down with a serious case of buyer’s remorse, as they will eventually ask the question “Why wasn’t anyone else willing to pay a billion dollars for Waze?”

In a final check of the Waze site tonight I noticed that the Waze homepage (http://www.waze.com/) redirects to http://world.waze.com/?redirect=1 , which is a complete and absolute blank. Perhaps the deal is done. Or, it might simply be a map tribute to Lewis Carroll.




Posted in Apple, Facebook, Google maps, Local Search, Mapping, Nokia, TomTom, User Generated Content, Volunteered Geographic Information, Waze, crowdsourced map data, map compilation, map updating | 5 Comments »

Unintended Consequences – The Roles of Google and Crowdsourcing in GIS

March 18th, 2013 by admin

The following blog is a concise, non-illustrated version of a keynote address I gave at the North Carolina GIS conference last month in Raleigh, NC.

There is little doubt that Google has created an incredibly successful mapping product, but it is at this point that the law of unintended consequences may occur and diminish not only the success of Google Maps, but also hinder mapping and GIS in the wider world.

Let’s start by looking at what I mean by “unintended consequences.” In simple terms an unintended consequence is not a purposive action, it is an outcome. Outcomes can be positive, such as a windfall. Outcomes can be negative, such as a detriment. Or, results can be perverse, in which case the outcome is contrary to what was expected. My focus in this blog is on the negative outcomes, although some may typify them as a case of the glass half-full.

The romantic notion that cartographers wandered the world with charts and map tables so they could compile map data as they explored is the stuff of history. For countless decades map publishers have created map manuscripts by compiling data collected from sources that were considered authoritative, and it is this model that Google has adopted. From a practical perspective, it is impossible for any single company to map the entire world at a street and road level without the help of contributors from the public sector, private sector and ordinary citizen scientists interested in maps, geography and transportation.

It is my belief that Google, due to the success of its search engine and the pursuit of its corporate mission “…to organize the world’s information and make it universally accessible and useful”, has been unusually successful in convincing authoritative sources around the world to allow Google to use their data in its mapping products. In some cases this has involved licensing or use agreements, and Google has advantaged itself by integrating data from sources that it considers the “best of breed” to enhance its products.

Most of these “trusted” sources are “official sources”, such as the USGS, the Bureau of the Census and other governmental agencies at all levels of administration from around the world. In areas where Google has been unable to reach agreement to use specific data, or in those locations where “trusted” data does not exist, it has relied on its own industrious endeavors to compile these data, although it has been helped tremendously by crowdsourcing.

It is clear to me that Google turned to licensing and crowdsourcing to remedy the unpalatable variations in the levels of spatial data quality in the map data that were supplied to it in the years when Google Maps was primarily based on data licensed from Tele Atlas (now TomTom) and Navteq (now Nokia). It appears that Google’s transition to able compilers of navigation quality map databases has been quite successful. However, I wonder if this success is not unlike the magnificent willow tree with a tremendous girth and abundant leaves on massive flowing branches, but slowly dying of decay from the inside.

Google’s move into GIS by providing the power of the GoogleBase as a GIS engine is an attractive notion to many organizations, and for good reason. However, people who are responsible for funding budgets in these organizations (such as legislators) are beginning to ask these overly simplified questions: “Why are we paying people to do this mapping stuff when Google is giving it away for free?” “Can’t we just use their data?” I am sure you are all thinking, “Nobody could be that shortsighted.” I guess you have not spent much time with politicians.

Recent events have led me to conclude that Google has now realized this very flaw in its approach to mapping. Did any of you think it was unusual that Google recently released two different strategic studies showing the economic benefits of geospatial data? (See note 1 at the end of this blog.) You know, Google is always releasing its strategic studies. Why, the last one I can remember was in …..hmmmm?

In a study commissioned by Google and carried out by the Boston Consulting Group, it was indicated that the U.S. Geospatial industry generated $73 billion in revenues in 2011, comprised 500,000 jobs and throughout the greater U.S. economy helped drive an additional $1.6 trillion in revenue and $1.4 trillion in cost savings during the same period.

A second study by Oxera was equally interesting and focused on the direct effects, positive externalities, consumer effects and wider economic effects, including the gross value added effect of “Geo services.” One section of this report that caught my eye was a discussion (page 15) of Geo services as intermediate goods – ones that are not normally valuable in themselves, “…but help consumers engage in other activities.” When discussing the “economic characteristics of Geo” the Oxera report indicates (page 5) that, “This question is relevant because it has implications for the rationale for public funding of certain parts of the value chain and for the market structure of other parts.”

Neither of the released reports (at least in the form they were published) mentions Google, its mapping business or how these studies should be viewed by the Google-ettes.

While Google may have had many reasons for funding these two reports, I think that the “law of unintended consequences” is rearing its head in Google land. If the public/governmental sources that provide data to Google through license can no longer afford to produce the data because their funding sources think that collecting and presenting map data is something that can be handled better in the private sector (as Google is doing), the data underpinnings of the geospatial world will start to collapse. Yes, I know that Google does not do what its licensors do with spatial data, but have you seen the decision tree of a politician who really understands the complexities of GIS and mapping, why they cost so much, take so long and can’t be shared through the enterprise?

OK – let’s turn to crowdsourcing. While Google did not invent crowdsourcing, it certainly knows how to use it to its advantage. Now that its users are willing to compile and correct the Google Mapbase for free, how will anyone else in the business make money compiling data using a professional team of map data compilers? The economics weigh against it and it may be a practice whose time has come and gone. The reasons for this are, in their entirety, more complex than I have described. However, without developing the argument more in this essay, I will simply skip ahead to my conclusion, which is that professional map database compilers are an endangered species. It is likely that their “retirement” will not be noticed – at least not until crowdsourcing falls out of vogue, as it will, when people begin wondering why Google cannot afford to keep its own damn maps up-to-date.

As all of you know, maps are near and dear to my heart. The problem of unintended consequences, in regard to what Google and crowdsourcing may do to GIS and mapping, is nearly as worrisome to me as the planet-wide loss of electricity. I’m going to squirrel away a cache of paper maps, just in case. Laugh if you want, but when you need to buy one from me you will begin to understand the meaning of monopoly, as well as to really appreciate the concept of unintended consequences.

1. Links to both studies can be found in this article at Directions Magazine



Posted in Authority and mapping, Data Sources, Geospatial, Google, Google Map Maker, Google maps, Mapping, Mike Dobson, Navteq, Nokia, Tele Atlas, TeleAtlas, TomTom, crowdsourced map data, map compilation, map updating | 2 Comments »

Google Maps announces a 400 year advantage over Apple Maps

September 20th, 2012 by admin

UPDATE September 24, 2012: The Comment Cycle for the following entry is now closed. Thanks to everyone who has contributed.

I had a call from Marc Prioleau of Prioleau Advisors this morning and speaking with him prompted me to look into the uproar over Apple’s problems with its new mapping application. So, this column is Marc’s fault. Send any criticisms to him (just kidding). While you are at it, blame Duane Marble who sent me several articles on Apple’s mapping problems from sources around the world.

In my June blog on Apple and Mapping, I postulated that the company would find building a high quality mapping application very difficult to accomplish. Among the points I made were these:

• However, it is not (mapping) San Francisco that will give Apple heartburn. Providing quality map coverage over the rest of the world is another matter completely.

• Currently Apple lacks the resources to provide the majority of geospatial and POI data required for its application.

• My overall view of the companies that it (Apple) has assembled to create its application is that they are, as a whole, rated “C-grade” suppliers.

• Apple seems to plan on using business listing data from Acxiom and Localeze (a division of Neustar), supplemented by reviews from Yelp. I suspect that Apple does not yet understand what a headache it will be to integrate the information from these three disparate sources.

• While Apple is not generating any new problems by trying to fuse business listings data, they have stumbled into a problem that suffers from different approaches to localization, lack of postal address standards, lack of location address standards and general incompetence in rationalizing data sources.

• Apple lacks the ability to mine vast amounts of local search data, as Google was able to do when it started its mapping project.

Unfortunately for Apple, all of these cautions appear to have come true. So much for the past.

In this blog, after setting the scene, I will suggest what Apple needs to do to remedy the problems of their mapping service.

Given the rage being shown by iOS 6 users, Apple failed to hurdle the bar that was in front of them. I have spent several hours poring over the news for examples of the types of failures and find nothing unexpected in the results. Apple does not have a core competency in mapping and has not yet assembled the sizable, capable team that it will eventually need if it is determined to produce its own mapping/navigation/local search application.

Perhaps the most egregious error is that Apple’s team relied on quality control by algorithm and not a process partially vetted by informed human analysis. You cannot read about the errors in Apple Maps without realizing that these maps were being visually examined and used for the first time by Apple’s customers and not by Apple’s QC teams. If Apple thought that the results were going to be any different than they are, I would be surprised. Of course, hubris is a powerful emotion.

If you go back over this blog and follow my recounting of the history of Google’s attempts at developing a quality mapping service, you will notice that they initially tried to automate the entire process and failed miserably, as has Apple. Google learned that you cannot take the human out of the equation. While the mathematics of mapping appear relatively straightforward, I can assure you that if you take the informed human observer who possesses local and cartographic knowledge out of the equation, you will produce exactly what Apple has produced – a failed system.

The issue plaguing Apple Maps is not mathematics or algorithms, it is data quality and there can be little doubt about the types of errors that are plaguing the system. What is happening to Apple is that their users are measuring data quality. Users look for familiar places they know on maps and use these as methods of orienting themselves, as well as for testing the goodness of maps. They compare maps with reality to determine their location. They query local businesses to provide local services. When these actions fail, the map has failed and this is the source of Apple’s most significant problems. Apple’s maps are incomplete, illogical, positionally erroneous, out of date, and suffer from thematic inaccuracies.

Perhaps Apple is not aware that data quality is a topic that professional map makers and GIS professionals know a lot about. In more formal terms, the problems that Apple is facing are these:

Completeness – Features are absent and some features that are included seem to have erroneous attributes and relationships. I suspect that as the reporting goes on, we will find that Apple has not only omissions in its data, but also errors of commission where the same feature is represented more than once (usually due to duplication by multiple data providers).

Logical Consistency – the degree of adherence to logical rules of data structure, attribution and relationships. There are a number of sins included here, but the ones that appear to be most vexing to Apple are compliance to the rules of conceptual schema and the correctness of the topological characteristics of a data set. An example of this could be having a store’s name, street number and street name correct, but mapping it in the wrong place (town).

Positional Accuracy – the closeness of a coordinate value to values accepted as being true.

Temporal Accuracy – particularly in respect to temporal validity – are the features that they map still in existence today?

Thematic Accuracy – particularly in respect to non-quantitative attribute correctness and classification correctness.
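To make the positional accuracy dimension concrete, here is a minimal sketch of the kind of check a QA/QC team might run: compare each mapped coordinate against an accepted ("ground truth") coordinate and flag records whose great-circle error exceeds a tolerance. The POI records, field names and tolerance below are hypothetical illustrations, not anything from Apple's or Google's actual pipelines.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def positional_errors(records, tolerance_km=0.1):
    """Flag records whose mapped coordinate falls farther than
    tolerance_km from the accepted reference coordinate."""
    flagged = []
    for rec in records:
        err = haversine_km(rec["map_lat"], rec["map_lon"],
                           rec["true_lat"], rec["true_lon"])
        if err > tolerance_km:
            flagged.append((rec["name"], round(err, 3)))
    return flagged

# Hypothetical sample: the second POI is mapped well over a kilometer
# from its accepted location and should be flagged.
pois = [
    {"name": "Dry Cleaner", "map_lat": 41.9028, "map_lon": 12.4964,
     "true_lat": 41.9029, "true_lon": 12.4965},
    {"name": "Airport?",    "map_lat": 53.2900, "map_lon": -6.2200,
     "true_lat": 53.2900, "true_lon": -6.2000},
]
print(positional_errors(pois))
```

Of course, the hard part in practice is obtaining trusted reference coordinates at all; the arithmetic above is the easy ten percent.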

When you build your own mapping and POI databases from the ground up (so to speak), you attempt to set rules for your data structure that enforce the elements of data quality described above. When you assemble a mapping and POI database from suppliers who operate with markedly different data models, it is unwise to assume that simple measures of homogenization will remedy the problems with disparate data. Apple’s data team seems to have munged together data from a large set of sources and assumed that somehow they would magically “fit”. Sorry, but that often does not happen in the world of cartography. Poor Apple has no one to blame but themselves.
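A toy sketch shows why fused listings from suppliers with different data models produce the errors of commission noted above. Even two records describing the same store rarely match byte-for-byte, so naive merging yields duplicates unless the fusion step normalizes names and addresses before comparing. The providers, listings and normalization rules below are invented for illustration; real pipelines need locale-aware address parsing, which is exactly what Apple's sources lacked a shared standard for.

```python
import re
from collections import defaultdict

def normalize(text):
    """Crude normalization: lowercase, strip punctuation, and collapse
    a few common street-type variants. A sketch only; production
    record linkage needs far more locale awareness than this."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    for long_form, short in [("street", "st"), ("avenue", "ave"), ("road", "rd")]:
        text = re.sub(rf"\b{long_form}\b", short, text)
    return " ".join(text.split())

def find_duplicates(listings):
    """Group listings by normalized (name, address); any group drawing
    entries from more than one record is a likely duplicate POI."""
    groups = defaultdict(list)
    for item in listings:
        key = (normalize(item["name"]), normalize(item["address"]))
        groups[key].append(item["provider"])
    return {k: v for k, v in groups.items() if len(v) > 1}

# Hypothetical listings: providers A and B describe the same pizzeria
# with different spelling and address abbreviations.
listings = [
    {"provider": "A", "name": "Joe's Pizza",  "address": "12 Main Street"},
    {"provider": "B", "name": "Joes Pizza",   "address": "12 Main St."},
    {"provider": "A", "name": "City Laundry", "address": "4 Oak Avenue"},
]
print(find_duplicates(listings))
```

Without the normalization step, the two pizza records would survive as two map features – one of the duplication failure modes described under Completeness above.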


1. Unfortunately for Apple, they need to take a step back and re-engineer their approach to data fusion and mapping in general.

2. I suspect that the data and routing functionality that they have from TomTom, while not the best, is simply not the source of their problems. Their problem is that they thought they did not have a problem. From my perspective, this is the mark of an organization that does not have the experience or know-how to manage a large-scale mapping project. Apple needs to hire experts who are experienced in mapping and who understand the problems that can and do occur when compiling complex spatial databases designed for mapping, navigation and local search.

3. Apple does not have enough qualified people to fix this problem and needs to hire a considerable number of talented people who have the right credentials. They also need to develop a QA/QC team experienced in spatial data. They could establish a team in Bangalore and steal workers from Google, but if they want to win, they need to take a different approach, because this is where Google can be beaten.

4. Apple appears not to have the experience in management to control the outcome of their development efforts. They need to hire someone who knows mapping, management and how to build winning teams.

5. Apple needs to get active in crowdsourcing. They must find a way to harness local knowledge and invite their users to supply local information, or at least lead them to the local knowledge that is relevant. This could be accomplished by setting up a service similar to Google Map Maker. However, it could also be accomplished by buying TomTom and using its MapShare service as part of the mapping application to improve the quality of data. I think something like MapShare would appeal to the Apple user community.

6. Speaking of acquisitions, Apple could buy one of a number of small companies that integrate mapping and search services into applications for use by telephone carriers. The best of these, Telmap, was snapped up by Intel earlier this year, but other companies might be able to do the job. Perhaps Telenav? Hey, here is an interesting idea – how about ALK, now being run by Barry Glick who founded MapQuest?

7. I suppose Apple will want to hire Bain or some other high power consulting group to solve this problem. That would be the biggest mistake they have made yet, but it is one that big business seems to make over and over. As an alternative, I suggest that Apple look to people who actually know something about these applications.


There is no really quick fix for Apple’s problems in this area, but this should not be news to anyone who is familiar with mapping and the large scale integration of data that has a spatial component.

Of course there appears nowhere to go but up for Apple in mapping. I wish them the greatest of success and suggest that they review this blog for numerous topics that will be of assistance to them.

If you want to know more about map data quality see ISO (International Organization of Standardization), Technical Committee 211. 2002. ISO 19113, Geographic Information – Quality principles. Geneva, Switzerland: ISO. Available online from http://www.isotc211.org/

And, I urge Apple to keep a sense of humor about these problems, as have some of its users. I had a great laugh at a comment about Apple’s mistaking a farm in Ireland for an airport. The comment was “Not only did #Apple give us #iOS6… They also gave us a new airport off the Upper Kilmacud Road! Yay!”

Until next time.

UPDATE on September 24, 2012. I have closed the comments period for the Apple Maps Blog. Thanks to all who have contributed.



Posted in Apple, Authority and mapping, Data Sources, Geotargeting, Google Map Maker, Google maps, MapQuest, Mapping, Personal Navigation, TomTom, crowdsourced map data, map compilation, map updating | 106 Comments »
