Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

More on Spatial Databases for Autonomous Vehicles

August 31st, 2016 by admin

Many of the companies now collecting spatial data to support the development of autonomous and semi-autonomous vehicles appear to be evolving their approach to match sentiments espoused in nineteenth-century literature, where Mein Herr, a character in Lewis Carroll’s story “Sylvie and Bruno Concluded” (1893), said, “And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”

“Have you used it much?” I enquired.

“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.” (Footnote 1)

In today’s world of proprietary database building, numerous companies are imaging the environment in both archival and real-time modes. The purpose is to create highly accurate spatial databases of the environment that can be used to support the operation of autonomous vehicles. Arguably, however, the industry might be better off building one common spatial database, or instrumenting our roads, streets and associated “road furniture,” than creating multiple representations of the same thing. Unfortunately, the decision by major players to make spatial data one of their monetizable, competitive advantages bodes poorly for a common spatial database or a smart transportation infrastructure to support the operation of autonomous vehicles.

Is Multiple Representation Really a Problem?

Yes! Many companies are building proprietary master spatial databases to support their interests in autonomous vehicles. In addition, their vehicles are creating real-time positional databases. If one could merge the data from these efforts and map/overlay them, do you suppose the data for geographical positions would precisely align? How about comparing spatial databases created by different parties? The sad state of affairs is that it might be impossible to meaningfully integrate these multiple representations of the same data due to differences in collection technologies, methods, measurement, classification, design, and a host of other variables. Comparing the “rightness” of one representation of reality to another can be a very difficult task. Let’s take a look.
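To see how quickly “precisely align” breaks down, consider a minimal sketch. Two providers report WGS84 coordinates for what is nominally the same lane-marker point (the coordinates and the decimeter tolerance below are hypothetical illustrations, not any vendor’s figures), and even a small disagreement dwarfs the tolerance that lane-level guidance would demand.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Two providers' representations of the "same" lane-marker point (hypothetical).
provider_a = (37.774929, -122.419416)
provider_b = (37.774941, -122.419430)

offset = haversine_m(*provider_a, *provider_b)
TOLERANCE_M = 0.10  # assumed decimeter-level requirement for lane keeping

print(f"offset = {offset:.2f} m, aligned = {offset <= TOLERANCE_M}")
```

A disagreement of well under two meters — invisible on a consumer map display — is more than an order of magnitude outside the assumed tolerance, which is the multiple-representation problem in miniature.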

Guess we should take a step back for some perspective

Compiling spatial data to support vehicle navigation is based on a long and illustrious history of innovating techniques and methods to deal with the issues of capturing, storing, handling, and using attributes critical for representing identified transportation arteries and their surroundings. Although we might now smile at some of the resulting products, such as pocket globes, wall maps, folded maps, road atlases, auto-photo-guides that showed a photo at every intersection between an origin and destination, and TripTiks, these and other displays were based on spatial data compilation systems that once helped solve the navigation problems of the traveling public.

When digital spatial databases were developed to create “computerized” navigation, routing and presentation systems, established and mature methods of spatial data handling served to help create the modern tools required to build the database and rendering methods supporting these efforts. Many of these efforts were essentially data-driven systems replacing the previous conceptually-driven drawing methods that were focused on creating a map or some related form of spatial display. During these developments cartographers and spatial scientists met and began wrestling with the problems generated by multiple representations of the same geographic locations.

Tomorrow’s autonomous vehicles may provide, but will not require, visual map displays for their operation. Rather they will implement geographical data in a manner that can serve to provide the spatial information required for the autonomous vehicles to safely and reliably manage a vehicle’s operation without the intervention of a human pilot/driver.

An important consideration in this pursuit is that many companies seem prepared to rely on sensing or imaging the transportation networks and appear to imagine that by doing so on a global or regional basis, they may be able to solve the operational problems required to support the emergence of autonomous cars that will actually be “fit” for some intended use. It is important to note that not all data about the environment required for controlling and navigating autonomous vehicles are visible. For example, sirens that provide drivers information about not yet visible approaching service vehicles that legally have the right of way cannot be sensed by most imaging systems. Similarly, postal codes, addresses in high rise office complexes, jurisdictional boundaries, some toponyms (place names), etc. often require diverse collection methods other than imaging from a road surface. The existence of these “escapees” from current sensor capabilities means that multiple spatial databases (multiple representations) will need to be merged in order to provide the comprehensive view of the real world required to operate autonomous vehicles.

There are a host of spatial data issues that need to be addressed by those developing systems to guide the performance of autonomous vehicles. Today, I will cover three of these topics. We will get back to multiple representations, after we take a look at requirements.

The Importance of Setting Requirements for Spatial Data

Starting your development for an autonomous vehicle with, “We’ve got the know-how to make A and B and can create some of the spatial data for this system using the technologies C and D,” is not a useful approach to solving the problems the system might need to negotiate. For example, and in my opinion (see Footnote 2), if Tesla’s Autopilot system could not: 1) sense a white semi-trailer with black tires sitting astride a road, 2) decide the object was not in its spatial database, 3) process it as a dangerous spatial exception, and 4) take evasive action, then specific performance and safety requirements may have been missed when planning the system. If the ranging capabilities of Tesla’s cameras and radar were not sufficient with respect to the distance over which it could observe navigation-critical variables, or if its threat processing was not capable of informing its operating system that an action must be taken due to this fault (if sensed), then requirements were either missing or not met. Designing a system whose components do not solve common spatial problems or spatial threat situations even infrequently encountered by ordinary drivers is not a positive option for an industry that needs to convince its user base to have confidence in its products.

Purposeful, disciplined, comprehensive design documents for spatial data and its use will be required to build vehicles fit for autonomous functionality. I hope the industry is thinking about this challenge, as much as it is thinking about the race to market. Spatial data has not been something that the automotive industry has traditionally incorporated in its vehicle build process and, as a result, new challenges face those intent on competing in this market. I suggest the players who are considering building either autonomous vehicles or guardian angel types of applications for vehicles pay some attention to the following topic when considering the use of spatial data.

Multiple Representations – or – this data item is another representation of this geographical descriptor, except maybe when it doesn’t quite match or you can’t even decide if they are representing the same thing (Footnote 3)

At some point, there may only be one spatial database operating in or provided to an autonomous vehicle, but that time is not now. Today, we have companies using LIDAR, radar and optical sensing techniques to build highly precise master positional spatial databases for navigation that are attributed with additional information. The moving vehicle platform itself is creating a GPS (perhaps INS-based) path of where the vehicle is being guided for purposes of matching its location with the master spatial database. When the highly precise positional spatial database is not spatially comprehensive, other spatial databases are used to calculate paths, travel time, congestion and similar information. Concurrently, the sensors with which a vehicle may be equipped are also mapping a path along which the car is moving, recording such things as position, distance between vehicles, lane marker position, speed, and other variables that can be used to safely control the operation of the vehicle. It is in this sense that the spatial databases that are used in autonomous cars provide an example of the simultaneous use of multiple representations of the same geographical space – specifically along the path that the vehicle is currently traversing.

How dissimilar must the spatial data in the onboard/in-cloud master spatial database and that collected by the on-board sensors be before a fault is called? Which source is regarded as the failsafe and is it always considered the failsafe? What are the results of a fault being identified? While the sensors would seem to have the upper hand, what if they are not operating within required tolerances? How are these distinctions to be measured and prioritized during vehicle operation? If neither data source is regarded as optimal or reliable at a brief instant in time, how is the issue resolved to the benefit of the vehicle and its occupants? What is the fallback when a vehicle does not provide a steering wheel, brakes, or gearshift, but its autonomous system can no longer navigate the vehicle due to discrepancies in spatial data?
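The arbitration questions above can be sketched as a simple decision rule. This is an illustrative toy, not any manufacturer’s actual logic: the thresholds, the `Observation` record, and the self-reported sensor-health score are all assumptions introduced for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    TRUST_MAP = "trust master map"
    TRUST_SENSORS = "trust live sensors"
    FAULT = "declare fault, degrade to a safe state"

@dataclass
class Observation:
    lane_offset_m: float   # lateral position as reported by the live sensors
    sensor_health: float   # self-reported confidence, 0.0 to 1.0 (assumed metric)

def arbitrate(sensed: Observation, mapped_offset_m: float,
              deviation_limit_m: float = 0.5,
              min_health: float = 0.8) -> Action:
    """Decide which representation to trust when map and sensors disagree."""
    deviation = abs(sensed.lane_offset_m - mapped_offset_m)
    if deviation <= deviation_limit_m:
        return Action.TRUST_MAP       # the two representations agree within tolerance
    if sensed.sensor_health >= min_health:
        return Action.TRUST_SENSORS   # healthy sensors: the road has probably changed
    return Action.FAULT               # neither source is reliable at this instant

print(arbitrate(Observation(1.4, 0.95), 0.2))  # healthy sensors, large deviation
print(arbitrate(Observation(1.4, 0.40), 0.2))  # degraded sensors, large deviation
```

Even this toy makes the hard questions visible: every constant in it is a requirement someone must set, justify, and defend when the vehicle has no steering wheel to fall back on.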

The question of when a reported deviation along a segment of a route becomes significant can be a complex issue. If the sensors on each vehicle encountering an actual change in a path are measuring the change compared with a proprietary spatial database, how will specific differences in these sources be reconciled? When do these differences become critical? Do vehicle speed, sway, and stance alter these evaluations? If differences in the reconciliation recommendations at a specific geographic location vary between vehicles by manufacturer due to differences in the requirements set for their proprietary spatial data, as well as the on-board methods of sensing the environment, what happens? How will emergency command and control decisions be prioritized and implemented in a rational manner?

When we get to the stage where autonomous vehicle control systems can speak to each other, what happens when they are impacted by sensed current data that reflect different images of reality and by proprietary spatial databases whose geometry does not reconcile? For a variety of reasons, static reconciliations (edits) are a common part of using spatial data, but it is when you add in the concept of multiple representations of potentially non-homogeneous spatial databases acting in concert to solve a problem that must be dynamically reconciled that your headache may suddenly become a migraine.

Then, add in consideration of the potential differences in the content, coverage and comprehensiveness of the various spatial databases that someone might be using to support the spatial data requirements for a typical autonomous vehicle. Another unfortunate fact dogging multiple representations of a spatial object is related to issues of data quality. Of course, the situation is complicated by the fact that few people understand how you might critically measure the data quality issue in potentially non-homogeneous spatial databases that will be used for purposes of autonomous vehicle operation.

Speaking of People

Where will the experience in spatial data collection and handling come from? Well, yes there are a lot of people who have always liked maps and even more who have used them. But the number of professionals steeped in the theory and practice of the use of spatial data handling for navigation and related issues is extremely small. My concern here is that the appropriate sensitivity to understanding the problems of integrating spatial data into on-demand, real-time systems may not be well understood by many of the developers of these systems.

Make no mistake – there are a number of people who are vaguely familiar with the notion of spatial data, yet appear markedly unfamiliar with the myriad problems of compiling, creating, handling and using spatial data in a manner that accurately reflects the limitations and advantages of the spatial databases they are building. If these people do not understand the problems or fitness-for-use posed by merged, multiple-sourced databases containing multiple representations of the same data, it is unlikely that the software they create to interrogate and use these spatial data is going to consistently and reliably perform its intended function in the support of autonomous navigation – or even in navigation efforts aimed at performing a Guardian Angel type of support.

The Problem of Hubris

Yes, I know. All of the problems that I mentioned here have already been solved by brainiacs of untold wisdom. Yet I persist. Why? Well, I persist because the brainiacs have actually not yet solved these problems. Ask your development team what they are going to do about the multiple representations of spatial data. They will probably ask you what that means and then tell you they have already solved the problem.

Those of you who follow my blog know that I sounded an alarm when Apple announced it had started building a spatial database to support its need for an online mapping product to be released within the same year. In my blog I suggested that Apple was in for a surprise, as they clearly did not appreciate the difficulties of compiling a database to be used for mapping and routing. Shortly after the Company released Apple Maps, I critiqued Apple for the product’s numerous failures, all of which were avoidable. As a matter of fact, the whole world seemed to criticize Apple’s inept attempt at creating a mapping product. In a recent article Apple executives admitted they originally failed in mapping because they did not understand the scope of the challenge. And so it goes.

I’m getting too old for this kind of stuff, so today’s blog marks a new focus on issues in spatial data handling that more people ought to think about before their efforts to build “functional autonomous vehicles” prove disastrous for the transportation industry. More next month.

Until next time.

Dr. Mike

Footnote 1. I used the same Sylvie and Bruno example in, “Silicon Valley Mapenings” an article focused on the interest in autonomous vehicles that was occurring in early 2015. The quote is an example of the gift that just keeps on giving.

Footnote 2. For obvious reasons Tesla has been reluctant to publicly comment in detail on the accidents experienced by its users. As a consequence, my statement is speculative and may not be borne out by the facts, if they are ever revealed. It may well be that some factor or factors unknown to me and unavailable in the public literature (as of this date – 8/29/2016) will reveal that the company bore no fault in any manner in any of these incidents. I do note that today (8/30/2016) Tesla announced an update to Autopilot that will significantly enhance the advanced processing of its radar signals.

Footnote 3. I briefly discussed multiple representations in several blogs, but most recently in “Comments on the Development of Spatial Databases to Support Autonomous Vehicles”.


Posted in Apple, autonomous vehicles, Categorization, Data Sources, Mapping, Mike Dobson, multiple representations of spatial data, routing and navigation, spatial databases for autonomous vehicles | 1 Comment »

Measuring the Cost of Uber’s Push Into Spatial Data

August 4th, 2016 by admin

The recent “news” stories that Uber was doubling-down and developing a spatial database to support the company’s mission seemed anti-climactic to me. After all, over the last two years Uber has undertaken numerous (perhaps that should be “nuberous”) activities to support its strategic initiative in mapping. What was newsworthy, apparently, is the fact that Brian McClendon (formerly head of Google Maps, now head of Uber Maps) wrote a PR piece in the form of a blog focused on the importance of mapping to Uber.

Perhaps of more interest was another byline on the topic indicating that Uber was prepared to spend $500 million pursuing just the right set of geographic data that would be needed to meet the operational needs of the business. Of course, no one at the company would comment on the figure, where the number came from, or whether it was in the ballpark. Hmmm. I suspect that the correct answer was, “No comment” and the truthful answer was, “More than you can say!”

(I note here that mapping may be of less interest to Uber than creating spatial databases, but I guess if you want to think of spatial databases as maps, that’s OK, as long as you don’t think that autonomous cars are going to “image” all the data they require in order to operate.)

Uber was rumored to be interested in acquiring the mapping company HERE from Nokia, but not interested enough to pay the approximately $3 billion a consortium of German car manufacturers paid for the company. In McClendon’s blog, titled “Mapping Uber’s Future”, he indicated that, “Existing maps are a good starting point, but some information isn’t that relevant to Uber, like ocean topography.” I agree.

Many data elements of interest to users of Google and Apple Maps would appear to be irrelevant to the needs of Uber, just as they are to the data needs of HERE and TomTom. However, I hope McClendon was not implying that he could build a database that would meet Uber’s need for less than the sums spent by either HERE or TomTom. Further, I presume McClendon knows that Google spent more than $500 million to create the data that supports routing using Google Maps to support its worldwide markets.

I suspect $500 million is just the first tranche of investment required for Uber to meet its mapping goals. I base this on a simple requirements analysis. Uber will need to define the types of spatial data required to support its corporate goals in, at least, the following areas:

1. What types of spatial data (elements, items and attributes) will be required to allow autonomous cars to safely and legally operate on all streets and roads in the geographies where Uber desires to provide such capabilities?

a. In addition to answering questions about “why these geographies”, the company will need to estimate the cost of identifying, acquiring and sourcing the required data for each unique geography and for each specific autonomous technology utilized by the vehicles they eventually choose to operate.

b. There is no shortage of existing spatial data and navigation standards, but Uber has indicated that it will become its own standard. The interplay between how it hopes to collect most of its spatial data (technology and sensors) and how it formulates spatial databases that allow the actual re-use of that data by its fleet of vehicles will be a challenge in a real-time environment.

2. What types of spatial data will the company need to provide their drivers in order to efficiently deploy this human resource?

a. In addition to the mapping, routing, navigation services, and software to support spatial queries, capturing traffic patterns and the precise pick-up and drop-off locations mentioned by McClendon will require a fiendish amount of work reporting data that changes with amazing rapidity (by day of week and time of day). These concerns, perhaps, would best be met by an acquisition, or several of them.

3. What types of spatial data will need to be supported in order to meet the needs of Uber’s riders?

a. While McClendon indicates that the company won’t need underwater topography, it will need data about users, where they travel, when they travel and where they go. It will need to develop location dictionaries that include the diverse names that users may call the same location and be prepared to translate foreign languages using foreign variant names for these locations. User preferences, user histories, travel trends, location biases and a host of other data might make this stew quite potent, but knowing which ingredients are worth the cost of collection is a complex undertaking.

4. What type of spatial data must be in the database to meet company needs?

a. For example, if Uber goes into the package delivery business it will need to develop an infrastructure that would meet these needs. It would also need a business listings database for the millions of small businesses that operate but are not retail establishments. What other geography-related goals does the company have that its spatial database will have to support?

5. What kind of spatial data handling infrastructure will Uber implement on a worldwide basis to meet the general needs described in the four categories of requirements described above?

a. Do you think you could build even this part of the equation for $500 million?
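As a small illustration of the location-dictionary problem raised in item 3a, matching rider text to a canonical place reduces to an alias table; the hard, expensive part is populating that table across languages, scripts and local usage. All names in this sketch are hypothetical examples, not Uber data.

```python
from typing import Optional

# Hypothetical alias table: variant and foreign-language names for one place.
ALIASES = {
    "sfo": "San Francisco International Airport",
    "san francisco airport": "San Francisco International Airport",
    "aeropuerto de san francisco": "San Francisco International Airport",
}

def canonical_place(user_text: str) -> Optional[str]:
    """Resolve free-form rider text to a canonical place name, if known."""
    return ALIASES.get(user_text.strip().lower())

print(canonical_place("SFO"))                # resolves a variant to the canonical name
print(canonical_place("the usual corner"))   # unknown alias: None, a lost booking
```

The lookup is trivial; collecting, disambiguating and maintaining the aliases for every market is the part that burns through a $500 million budget.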


If Uber has done a requirements analysis that is focused on these and other issues that need to be addressed in the creation of the complex spatial database they will need to operate their business, they will have concluded that they are going to gobble up a ton of cash before they are operational. While HERE may not have been a bargain, its selling price is a not irrelevant estimate of how much it will take to start Uber’s effort at building a functional, extensible spatial database.

Wow, this will be fun to watch!

And Something Else

On a completely different topic – earlier this year, I received The Distinguished Career Award of the Cartography and Geographic Information Society (CaGIS). If interested, you can read about the award at the society’s website. I am not sure if this is the organizational equivalent of the “Old Geezer” award, but I was thrilled to receive it. Adding to the fun, I received the award in San Francisco, my birthplace. Does this mean I am spatially autocorrelated? Who knows?


Posted in Apple, Authority and mapping, autonomous vehicles, Data Sources, Google maps, HERE Maps, routing and navigation, TomTom, Uber | 2 Comments »

Comments on the Development of Spatial Databases to Support Autonomous Vehicles

February 15th, 2016 by admin

Companies announcing new mapping initiatives seem to be crawling out of the woodwork. Why the sudden interest in mapping and what should we make of these new entrants in the mapping space?

The recent CES show was a hotbed for news releases on mapping initiatives related to the functioning of autonomous vehicles (AV). Toyota, for example, announced that, using automated cloud-based spatial information generation technology, it is developing a high-precision map generation system that, “…will use data from on-board cameras and GPS devices installed in production vehicles (around 2020).” In their press release on the topic Toyota noted that, in their opinion, this proposed constant collection of road data by multiple cars will offset the accuracy gained by using LiDAR (which they are not using) and will trump the infrequent, expensive road data collection process used by today’s mapping incumbents.

Ford, in turn, announced its own mapping initiative. Ford indicated that it will use a compact LiDAR sensor developed by Velodyne (at a mere $8,000 a unit) which it hopes will help it win the race to offer a fully AV. Ford appears to be confident that Velodyne’s Ultra Puck will enable its fleet of vehicles to create real-time, 3D maps surrounding their path through the environment.

Other companies making noise about the automotive market include Quanergy, which bills itself as the “Future of 3D Sensing and Perception – 3D LiDAR for ADAS, Autonomous Vehicles & 3D Mapping.” Panasonic is also developing a LiDAR capability for mapping, as well as enhancing its advanced camera imaging technology. In addition, let’s not forget GM and Mobileye’s exploration of advanced mapping with OnStar data, as well as GM’s collaboration with Lyft to build a network of AVs.

Why the interest in Mapping?

I am sure that many of the long-established automotive companies involved would indicate that they have always been interested in mapping. Of course, this type of statement would be broadly true, but most of their interest has been educational or licensing related – few automobile OEMs, Tier 1 or Tier 2 suppliers have spent any real money on developing mapping capabilities until now.

I suspect that the “move” to mapping is being driven by at least two fundamental issues. Data Integrity concerns are, I think, the prime mover behind the trend of in-house spatial data gathering. Building a service-related user community may be the second factor behind developing spatial data capabilities. We will discuss these two trends below, but first – a look at the profit motive.

It is my belief that the move into mapping by OEMs is not based on a direct-profit motive. Rather than making money on mapping data, or deferring costs by creating their own data, OEMs will use advanced mapping indirectly to promote brand allegiance and expand brand reputation during a period in which individual car ownership may transition to some form of collaborative use of vehicles and their data networks, rather than the purchase or lease of automobiles. Next, let’s acknowledge that developing and maintaining the spatial data that is critical for the operation of AVs may be a protective strategy to defend against potential injury lawsuits where it will be claimed that a specific AV did not perform as advertised.

Spatial Databases and AV

Let’s begin by acknowledging that the spatial databases supporting AV operation must fulfill at least two fundamental functions. On the one hand, the spatial database must be designed, populated and organized in a manner that supports the systems allowing the autonomous functioning of a vehicle within the surrounding environment. This means that the data needs to combine positional and contextual data about each vehicle and its surroundings, as well as information about other objects in its environment that are stationary or moving. On the other hand, there is a need for a spatial database that can support the navigational needs of the autonomous vehicle when moving between two or more locations.

Perhaps a good way to think of this distinction is the difference in spatial knowledge required to know when to turn from one street to another, versus the spatial data that an AV would need to consider to be able to execute a turn at that location. The majority of the press releases at CES were concerned with creating accurate, reliable and up-to-date spatial databases that are fundamentally concerned with the methodology of AV operations rather than determining the routing characteristics between origins and destinations.
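The when-to-turn versus how-to-turn distinction can be made concrete with two toy record types. The fields below are illustrative assumptions, not any vendor’s schema: one record carries graph-level routing data, the other the lane-level geometry an AV would need to actually execute a maneuver.

```python
from dataclasses import dataclass

@dataclass
class RoutingSegment:
    """Knows *when* to turn: graph-level data for planning a path."""
    from_node: str
    to_node: str
    length_m: float
    typical_travel_time_s: float

@dataclass
class ManeuverSegment:
    """Knows *how* to turn: lane-level data for executing the maneuver."""
    lane_count: int
    lane_width_m: float
    curvature_1_per_m: float     # horizontal curvature of the lane centerline
    stop_line_offset_m: float    # distance from segment start to the stop line

# A router needs only the first record; a vehicle-control stack needs both.
leg = RoutingSegment("Main & 1st", "Main & 2nd", 120.0, 14.0)
turn = ManeuverSegment(lane_count=2, lane_width_m=3.5,
                       curvature_1_per_m=0.02, stop_line_offset_m=112.5)
print(leg, turn)
```

Note that the two records can disagree about the same stretch of road (length versus stop-line offset, for instance), which is the multiple-representation problem restated at the schema level.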

Data integrity and Spatial Databases in AVs

AV systems will demand enhanced mapping capabilities to ensure the proper functioning of systems designed to manage the performance and maneuvers of self-regulating vehicles. The associated spatial databases will become a critical component in the ability to create fully functioning autonomous vehicles that are reliably safe to operate and whose operation consistently reflects local road and transportation environments. In my opinion, it seems unlikely that vehicle OEMs would be willing to cede the responsibility for setting the specifications for, and the collection of, these mission-critical spatial data to external suppliers of mapping data.

While collecting all of the spatial data required to support autonomous cars may be a task beyond the capabilities or interests of automobile OEMs, these companies may want to collect and control mission critical spatial data (e.g. road centerline data and other spatial data related to road and road environment characteristics) in a manner that fully supports their notion of how to best functionally and legally operate an AV. Conversely, routing related variables such as addresses and business listings would likely continue to be provided by third parties.

In today’s navigation and local search oriented markets companies are able to license mapping data from providers such as HERE and TomTom, but the specifications for these data are set by these two companies. Yes, the data format may reflect industry-based standards, but many of these standards reflect the influence of the data providers. In today’s era of rapid prototyping, innovative engineering and technological leapfrogging it may well be that the OEMs will generate data specification, data processing and data collection systems that are unique to the fundamental operation of their specific autonomous vehicle systems. In essence, these systems and the spatial data that help operate the AVs may be regarded by some companies as a distinct competency leading to a sustainable competitive advantage.

One major concern with the use of data from companies such as HERE and TomTom in AVs is that historically, as pointed out by Toyota in its press release cited above, their data collection, processing and distribution methods have been time-bound and not updated and disseminated at the speed that is expected to be required to operate AVs. Unannounced road closures, emergency road repairs, intermittent lane geometry changes, and the like need to be compiled with urgency and rapidly disseminated to a company’s AVs that may encounter these changes. It seems unlikely that TomTom and HERE can supply fleet-based, spatially comprehensive data in adequately small time slices to support AV functions using their current data collection systems. Unless TomTom and HERE can provide (or partner with fleet owners who can provide) continuous, spatially comprehensive updates to their entire range of automotive customers, it is unlikely that they will be the beneficiaries of the emerging market for AVs.

While I agree that TomTom and HERE have significantly improved their data collection and distribution methodologies over the past few years, let’s not forget that in 2010 TomTom, HERE, Google and others in the map data business took months to recognize and reliably reroute traffic after a section of interstate highway was deconstructed in Providence, Rhode Island. Yes, you read that right. Even using live GPS traces indicating where cars were actually driving, these companies continued to route users over a section of interstate that no longer existed (see here http://blog.telemapics.com/?p=241 and here http://blog.telemapics.com/?p=248 for information on this topic). This kind of problem is not one that automobile OEMs or their lawyers will be willing to accept when AVs become a reality.

A final observation of some interest to the notions expressed above is that HERE has recently been awarded a contract from the Geography Division of the United States Bureau of the Census (RFP CENSUS2015_GEO0227). The RFP stated that, in addition to other requirements, “The Census Bureau is seeking to obtain complete and accurate datasets containing housing unit addresses with associated attribute and geographic coordinate data, and spatially accurate street centerline data to supplement and/or validate content of the MAF/TIGER System.”

In Amendment A0021 to this RFP the Census Bureau clearly stated that the street centerline data developed as a result of this effort (but not the address information) would eventually be released to the public. HERE’s willingness to supply road centerline data under this contract may speak volumes about the intentions of OEMs to collect spatial data.

Community – The Advantage of Size

Google search engineers have often stated that they are not smarter than anybody else, but note that when thinking about improving existing products or creating new products they have the advantage of more data to analyze to properly define opportunities than most of their competitors. In a similar light, the larger automotive OEMs could make their vehicles “spatially smarter” by fielding a massive spatial data collection effort based on the number of vehicles deployed, either directly (research fleets) or indirectly (sales). Toyota, for example, should be able to produce a more comprehensive and accurate spatial database to manage its autonomous vehicle performance than, say, Subaru. Ford, with its success in light trucks, might be able to create more comprehensive data on rural routes than, say, Volvo or KIA.

Purpose-built spatial databases produced by an OEM, in turn, could be used to analyze, tune and optimize the performance of every autonomous vehicle they produce. For example, instructions could be customized for individual vehicles, adjusted for their carry-loads, profiled for specific geographic areas/environments, optimized for local weather and the like. Without the mission critical spatial data to support the customization of their vehicles the OEM would be at a competitive disadvantage.

Spatial data is certainly an area where more can be better, and the OEMs with the most sensor units/deployed vehicles could wind up producing market-leading, highly accurate, comprehensive spatial databases of road-related information. In turn this spatial data superiority could become a branding opportunity leading to more sales or more use of a company’s AVs. Although it would make complete sense to create an industry-wide spatial database to support AVs, I doubt that this will happen due to the need of players in the automotive industry to create sustainable competitive advantages. However, this may be an opportunity being explored by the new owners of HERE.

A Fly in the Ointment

The need for interoperability between autonomous vehicles suggests that one of the safest ways for these vehicles to commingle on the roads would be based on interpreting a common spatial database. As noted above this is unlikely to happen. It is possible that regulators will have the prescience and courage to demand it, although I do not see that happening in the next decade and, perhaps, not at all.

If the AVs are in motion and adjacent autonomous vehicles from different OEMs do not have the same representation of the world around them, how will they efficiently maneuver in such a manner as to avoid being a threat to each other?

If companies creating AVs, or the systems for AVs, decide they need to license additional data to build their recipe for the spatial database required to support these vehicles, how will they deal with the issue of multiple representations that will most certainly be found when trying to integrate licensed spatial representations with the spatial representations they have collected using different systems?

Yes, they are all measuring the same world, but they will not be measuring it in the same manner or even with similar equipment. How closely their data models match is an open question. Given this possibility, how will the OEMs deal with issues of metadata mismatches or metadata that is incorrect or simply does not exist? The potential complexity of the multiple representation problem, in the context of spatial databases designed to support the operation of AVs, is stunning. However, it is only one example of a series of problems that commonly haunt applications based on extremely large spatial databases. Multiple instances of potentially non-interoperable, extremely large spatial databases distributed across numerous, commingled AVs sounds like the makings of a ripping good disaster movie.


It appears that automobile manufacturers (including the consortium that purchased HERE) have realized that spatial data may become a powerful benefit in the future – if they can find a way to make it their own. While mapping is a term that flows off the tongue and into strategic plans with ease, actually collecting, compiling and unleashing the power of spatial data in a beneficial manner takes more skill and know-how than many would-be practitioners may understand. Indeed, harnessing its power may require skills in spatial data handling and knowledge about the intricacies of mapping that have slowly withered as interest in these disciplines has been eroded by the transformation of the spatial data world in its hot pursuit of GIS.

Major players interested in entering the spatial data/mapping market that is evolving to support AVs would do well to consider forming advisory boards where their strategists and data geeks can counsel with mapping and spatial data handling experts who might be able to give them an insight or two, or just maybe help prevent a catastrophic system failure.

Or, as a great sage once said, “you don’t know what you don’t know.”

Thinking about all this stuff has given me a headache – so I am heading to the shore to photograph birds – merrily driving my 1998 Corvette stick shift. I ordered the car purpose built and have been its sole owner. I finally managed to crack the 50,000 mile barrier this week. Do you know what is wearing out on this vehicle? The sensors (fuel pump sender sensor, air quality sensor, etc.). How about that, but no complaints from me.

Till next time.

Dr. Mike


Posted in Authority and mapping, autonomous vehicles, Google, HERE Maps, map compilation, Map Use, Mapping, Mike Dobson, Personal Navigation, routing and navigation, TomTom | 1 Comment »

Use Cases and Online Maps

January 4th, 2016 by admin

Hi, Everybody.

This topic started out innocently enough and wasn’t research for a blog. What I was trying to do (my use case) was to find driving directions to several wildlife sanctuaries that had been recommended by my photography buddies. I started the process by Googling the Cibola National Wildlife Refuge. Google pops up a side panel on the search results page that includes specific information on the refuge in question. See Figure 1.

Figure 1. Google Search Results for Cibola NWR

I clicked the “Directions” button at the top of the panel, presuming that this could be used to produce a route from my starting point to the refuge in question. Well, it did produce a route to the refuge in question, but not one that would be very useful for most visitors to this location (See Figure 2 below, if you want to skip ahead). Adding to the confusion, Google does not always employ the same use case to solve other examples of the same class of problem.

The vagueness of Google’s routing solution led me to examine the issue in greater detail. During the process, I concluded that Google, Apple and HERE* still do not seem to appreciate the nuances of the relationships between mapping and the types of use case employed by map users, although they do seem to understand the general notions. For example, Google represents wildlife refuges, national parks, national wilderness areas and places of this ilk that have a reasonable size by a shaded polygon. This is common GIS practice and can provide useful, though very general, location information. These polygons, on Google’s maps, in turn, are identified by a symbol that apparently represents a centroid within the specific polygonal boundaries for a location. This is a reasonable way to show the appropriate map name associated with a tinted polygon. However, the centroid is an inappropriate trip destination for a routing engine, yet it is what Google frequently uses in these cases.

Well, good luck with that! Autonomous vehicles are going to love this stuff (hope the autonomous car people at CES are reading this). In many cases, if not most, the location of a centroid, placed in this manner for the class of objects being examined here, will be located in an area of inaccessible wilderness (e.g. a location that does not have roads or trails leading to it). The locations denoted by the symbols used by Google do locate the respective property and should be reachable by helicopter, but seem mismatched when paired with the common notion of the type of directions presented by online routing engines and expected by users (the “common routing” use case).
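To illustrate the point, here is a minimal sketch (with made-up coordinates, not real refuge boundaries) of why a polygon’s centroid is a poor routing destination: for a concave boundary, the area centroid can fall entirely outside the property, let alone near a road.

```python
# Sketch: the area centroid of a simple polygon (shoelace formula) can land
# in inaccessible terrain -- or outside the polygon entirely for concave
# shapes. The coordinates below are hypothetical, not real park boundaries.

def polygon_centroid(pts):
    """Area centroid of a simple polygon given as [(x, y), ...] vertices."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

# A C-shaped "park": the full 4x4 square minus a 3x2 notch on the right.
c_shape = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (4, 3), (4, 4), (0, 4)]
cx, cy = polygon_centroid(c_shape)
print(cx, cy)  # -> 1.7 2.0, a point inside the open notch, not the "park"
```

A routing engine handed that centroid as a destination would dutifully aim for a spot with no road, trail, or land under it, which is essentially what the dashed-line routes above are doing.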

Let’s look at this problem by examining a series of maps. In all cases the origin of the route was the same and in all cases the map shown resulted from clicking the “Directions” button on the search panel that Google produced for each location. For purposes of presentation the images shown are crops of the maps Google generated to portray directions to each of the locations. (Yes, I know the maps are oversized. But they are pretty easy to read.)

Figure 2. Google Map showing a proposed route to the Cibola NWR.

Google seems to have gone out of its way to calculate paths to the destination that route you along actual roads as close to the centroid as they can take you by car and then draws a dashed line from that point of departure to the centroid. Above is an example of a route between my house and the Cibola National Wildlife Refuge.

When I sent this map to a friend he suggested that I buy a Range Rover with the snorkel option, since the route that starts when the road ends takes you through a river and into the wilderness. No, there is no bridge or other crossing where Google shows its “dashed-line route.” A useful alternative strategy would be to route visitors to the visitor center or the refuge headquarters of the relevant property. This example led me to wonder exactly which use-cases Google might have considered when creating their cartographic representation of a route to the Cibola NWR.

Let’s look at the Google Map to the nearby Havasu Wildlife refuge for comparative purposes.

Figure 3. Google Map showing a proposed route to the Havasu NWR.

For your information, the Office of the Superintendent for the Refuge is located in Needles, California. That might be a helpful place to send you for the camping, fishing and hunting permits needed to use this property, but that was not Google’s choice.

On the other hand, if you were considering visiting the Channel Islands National Park off the California coast, Google routes you directly to the park headquarters at the harbor in Ventura, California, and doesn’t even show you the islands or the direction to them on the map presented.

Figure 4. Google Map showing a proposed route to the Channel Islands NP.

Perhaps that is because you cannot drive to the islands? But why does Google show the park headquarters in this case and not in others where you cannot drive to the symbol Google uses to represent the entity, but can drive to the park headquarters or the visitor center? Maybe this is an example of Google considering a different use-case. Hmmmm?

Next, if you want to visit Haleakala National Park, Google not only shows you that you have to fly to Maui, but provides information on fares and flights.

Figure 5. Google Map showing a proposed route to Haleakala NP.

Unfortunately once you land in Maui, the route provided by Google ignores the main road to Haleakala and takes you to another of its concocted centroids that you can reach by helicopter, but not by driving. Hmm, yet another use case!

Now, more absurdity. Look at this Google route to the Grand Canyon National Park.

Figure 6. Google’s proposed route to Grand Canyon NP.

I hope nobody tries to drive this “route.” After all, remember what happened to Evel Knievel and his rocket-propelled motorcycle! Yes, that is the official Grand Canyon Visitor Center shown on the map, even though Google avoids routing you to this location. After all, why use the only paved road in the area?

How about this one for Yosemite National Park?


Figure 7. Google’s proposed route to Yosemite NP

After all, why would a visitor want to tour Yosemite Valley when they could destroy their car trying to traverse the Sierras without using known roads or trails?

I’ve decided what to get Google next Christmas: A sense of direction and a basic text on cartography. A primer on common use-cases for map use and when to employ them might be a nice addition. Note, I looked at a few maps from Apple and HERE for the same places and found similar errors.* Maybe I should send primers to each of them?

Or, perhaps, Google could “read” the “Visiting Cibola” page at the Cibola NWR website, as I did, and click the provided lat/lon, which uses Google Maps to generate a route directly to the Visitor Center at the Refuge. Guess those Refuge guys know their use-cases. Some of you may have noted that Google provides an address for the Cibola NWR and you can generate a route to that address as interpreted by Google. However, the spot identified by Google is not the Park headquarters/visitor center (although you would drive past it to reach this location).

My biggest concern here is that Google crawls the websites of the entities mentioned and could easily determine locations within each area that correspond to an informative use case or ones that could be tailored to general map-use case requirements. Perhaps that is the future of mapping – maps on demand tailored to specific use cases?

Many may read this comment and think, “Well, it’s something Google can fix.” True, but the question is more complex than it appears and requires some insights into cartography, map use, and human factors engineering. More specifically, someone in the mapping group at Google needs to think about the differences between routing and reference maps, as well as the influence of use cases on both.

If you think I am picking on Google – go back and take a look at my earlier blog “Google Maps and Search – Just what is that red line showing?”. Google gets the act of mapping data; it just does not seem to understand map use. As you might expect, I think this is something that needs to be fixed sooner rather than later.

By the way, Toyota is entering the mapping derby, or so they said in a press release for CES. I plan on blogging about Toyota’s announcement sometime soon.

My best wishes for a successful, healthy and happy 2016

Dr. Mike

* MapQuest and TomTom could not find Cibola NWR, even when I gave them a city name and Postal Code. Every time I look at Apple’s Maps I get a headache. Their design is terrible, but this can be overlooked. The data errors are more problematical. Apparently Apple has trouble understanding the concepts of boundaries. Maybe I should write about that sometime?


Posted in Apple, Authority and mapping, autonomous vehicles, Google, Google maps, HERE Maps, map compilation, Map Use, Mapping, MapQuest, Mike Dobson, Personal Navigation, routing and navigation, TomTom, use cases | 1 Comment »

Local Search – Local Data – Local Sources

August 26th, 2015 by admin

In my last blog I mentioned that some problems are best solved with a “global” approach, while others might be susceptible to a solution that is targeted locally. For some reason this thought has been on my mind since that time. Today, I expand on why a local approach might be the elixir that challenges and upsets the status quo in the business listings market and in the markets that use these types of listings.

Until the rise of the Internet there were few businesses based on providing a national, comprehensive directory of business listings. Until that time almost everyone who needed information for contacting or finding a specific business seemed to be able to make do with the city- or county-based Yellow Page directories that were delivered free of charge to businesses and residences in local markets. If you needed a directory for a distant city or urban area in another state, you could wander over to the public library and usually find a large collection of dog-eared telephone directories that could be used to solve your location problem. Individual national businesses, such as hotels, published their own purpose-built mini-directories/brochures to advertise and market their products and/or services.

At the time there were few national, comprehensive sources of business listing information available, and those that did exist were either inadequately comprehensive or unavailable for online use. For example, several companies had operating divisions that generated business listings as part of the financially-oriented services they offered, but in the early days of the Internet, most were unwilling to license these data based on specific intellectual property concerns typically held by publishers at that time. This lack of data sparked a number of companies to attempt new methodologies to create national business listings databases. In many cases, these early attempts involved the large-scale scraping (scanning or keyboarding) of telephone books and yellow page directories in order to create compendiums of business listings that covered the United States.

From the earliest days of online business listings directories there have been many changes in the methods and success of creating databases with relatively current business listings information. Some companies continue to scrape directories published by telephone companies, while newer players have emerged that scrape websites on the Internet, interrogating the web for the presence of valid, business listing information. Most companies, however, continue to approach the business listings compilation from a national or international perspective, although these companies often collect data at local levels using advanced technology, to compile and refine the data of interest.

One major difference in business listing data collection between today and pre-internet is that the market for Yellow Page/Yellow Book providers has been gutted by the success of national online distributors of business listings. Gone are the days when the Yellow Pages salesperson would visit a business, confirm the business listings details, and try to sell the company on upgrading to preferred listings or a graphic advertisement positioned to attract attention in next year’s book.

In today’s market the shop owner must take the initiative to represent their business with an accurate business listing. The problem is that most just haven’t caught up with the trend. They don’t know which sites will best represent their business, or how to maintain their listings once submitted. Nor do they know how to claim their listing, or why spending on any of this makes sense when somebody on Yelp can pan their business without ever having used the service. Yep, I know, “Woe is them.” Sounds like a business opportunity, and it may be for certain verticals, but many have tried and failed to provide this type of support.

One observation I’d like you to consider is that while the Internet has increased the opportunities to use national business listings databases, the content and accuracy levels of these aggregated business listings databases may primarily reflect the goals of the companies desiring to offer an Internet-based service that is perceived as national in scope. While relevant spatial data from these systems may be used by local users for local purposes, the level of data quality necessary to meet the provider’s goals may not reflect the needs of local users for accurate and up-to-date business listings information consistently useful to them as they carry out their daily tasks.

Creating a national business listings database and ensuring that it is of uniform high quality is a very difficult task that seems beyond the capability of most of today’s providers, including Google and Apple. The cost of fielding, compiling, publishing and updating a comprehensive, up-to-date and accurate inventory of businesses for an area the size of the United States is staggering in terms of expense and unexpectedly complicated in terms of execution. For example, Google has tried any number of methods of enticing business owners to claim and “own” their business listings, as well as to correct them when appropriate. In addition, Google’s Street View has imagery of storefronts across the country for use as collateral material to help evaluate the existence of specific businesses. Google’s own index of the web is another useful tool for finding business listings data. Finally, Google continues to license business listings data from companies that license these databases for use by others.

Even with this variety of sources of data, business listings remain the soft-underbelly of the Internet-based local search. My examination of local business listings reveals a preponderance of low quality data. The problems I find appear to be the result of a comedy of errors, many of which seem remediable if anyone would just go out and look. And perhaps that lack is the critical problem for all of today’s pseudo-local search websites. No service has yet proven that they have developed a reasonable method for fielding research aimed at discovering the critical, relevant information about a business that one would need to build a robust business listings database.

Perhaps if we continue to think of the problem as a national one, no one ever will. It may be time for a new take on the old approach of compiling data locally by employing community-based teams responsible for and sensitive to local markets. I suspect that compiling business listings could be better done by companies with local interests operating in local markets than by national or international companies interested in serving remote, diverse markets from central locations. Some thoughtful entrepreneur will likely take this thought and realize that franchised local search sites supplied with up-to-date, accurate local data could be used to create popular community-based websites capitalizing on the paradox of increasing tribalism in the age of globalism.

As you might imagine, I wrote pages and pages on this topic and tossed most of it. Although it was fun to write and to explore the concepts involved, the blog, like the last one, had grown too long. If interested, you might want to rummage through my trash, as I do edit a copy using …..pen and paper. Of course those who see my many typos may wonder what I was thinking about when editing. Why the next blog, of course!

Hope you are enjoying the “dog days” of summer. Speaking of local problems, our drought is worsening. My lawn is now an unhealthy shade of yellow – with no hope for recovery. And so it goes.

Dr. Mike


Posted in Apple, business listings, Data Sources, Geospatial, Geotargeting, Google, Local Search, mapping business listings, updating business listings, Yellow Pages | Comments Off on Local Search – Local Data – Local Sources

What3Words – Not.Quite.Right

August 3rd, 2015 by admin

Recently, just for fun, I have been examining innovative grid offerings from What3Words, MapCode (TomTom-link) and Open Location Code (Google). What3Words seems to have caught the most attention, and in this blog I will present my thoughts about this specific effort at creating a more useful map grid for addressing. This is a really long blog. If you don’t have the time to read it, skip to the bottom section titled And Now a Word From Monty Python – it skips the details, but will give you the gist of my evaluation.

Three notes to start this off. First, the commentary that follows is not focused on detailed aspects of geodesic discrete global grid systems or their function as data structures. We are concerned here with simple location encoding systems, often called “finding grids” that can be used to provide an indication of the position of something, somewhere. Second, I do not intend to rehash the grids that have survived the test of time, other than to comment that there are a number of very useful grids that can be used for purposes of “finding.” Third, in an attempt at brevity, I am going to cut a lot of corners involving map projections, geoids, tessellations and other interesting areas and avoid discussions of theory that would leave you begging me to stop. Instead, let’s look at some basic notions involved in geographic grids, and then examine What3Words and what it (and other recent grid development efforts) may be trying to accomplish.

Map Grids – What’s involved?
At its basic level, the effort involves computing a grid comprised of cells relatively uniform in size that are used to tile, with no overlaps and no gaps, the area of geography in which you are interested. The coordinates defining these grid cells might identify the corners of a cell or they might identify the center of a cell. The method of annotation aligns with the goals of the producer of the grid.
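As a toy illustration of the idea (the quarter-degree cell size and the lower-left origin are arbitrary choices for this sketch, not the parameters of any real grid), a uniform finding grid reduces to two small functions: one mapping a coordinate to its cell, and one mapping a cell back to its center.

```python
# Minimal sketch of a "finding" grid: tile a lon/lat extent with fixed-size
# cells and map any coordinate to its cell, and any cell to its center.
# Cell size and origin here are illustrative assumptions.

def cell_for(lon, lat, cell_deg=0.25, origin=(-180.0, -90.0)):
    """Return the (col, row) of the grid cell containing (lon, lat)."""
    col = int((lon - origin[0]) // cell_deg)
    row = int((lat - origin[1]) // cell_deg)
    return col, row

def cell_center(col, row, cell_deg=0.25, origin=(-180.0, -90.0)):
    """Return the (lon, lat) center of cell (col, row)."""
    return (origin[0] + (col + 0.5) * cell_deg,
            origin[1] + (row + 0.5) * cell_deg)

col, row = cell_for(-117.3, 33.47)
print(col, row)                 # -> 250 493
print(cell_center(col, row))    # -> (-117.375, 33.375)
```

Whether a grid’s coordinates name the cell corners or, as here, the cell center is exactly the design decision noted above, and it follows from the goals of the grid’s producer.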

Many grids that have been developed have been associated with efforts by militaries or other government agencies around the world interested in finding and naming locations in which they have or may field operations. Most of these efforts designate individual map grid cells by using short-codes that 1) avoid the need for users to be fluent with latitude and longitude, 2) eliminate the use of positive and negative grid values, and 3) do not require a detailed understanding of how the grid system was created.

“Finding” grids can be global or local
In order to create a map grid one needs to decide the scope and parameters of the problem being solved. For instance, if you create a city street map designed to operate independently of other maps (i.e. other geographic areas), you might be satisfied by creating a local grid that bounds and applies only to the area covered by the map. Often these types of grids create cells identified by coordinates called “bingo keys,” since reading a map index aloud using these local coordinates sounds like someone calling a bingo game: “A-29, I-32,” and so on. Local grids should not be taken as meaning limited in extent to small areas. For example, the Township and Range system that exists only in some areas of the United States is defined on the basis of numerous local baselines and principal meridians, but functions as an integrated land recording system across large swaths of the country.

Of course, another person might map the same area described above in the local street map example and decide that the geography involved should be represented as part of a global referencing system. In this case, the need for this map to integrate with the geography of the rest of the world is deemed of paramount importance to the developer of the grid.

Deciding whether a “finding” problem is local or global depends on your goals for the system, how you intend the grid to be used, and your plan for implementation and popularization of the grid. However, creating a global grid benefits from considerations related to how the new system could integrate with, or, possibly, replace existing grid systems. Unless the new grid provides a desirable functionality that existing grids do not, it is unlikely to be adopted by enough people to ensure its continued existence. Instead, it may be viewed as an unnecessary, duplicative addition in a field already crowded with worthy alternatives.

Grid Coordinates
As noted above, grid systems require a method for describing the location identified by the grid. In many cases these are reported in the form of linear or angular quantities that designate a position that a location occupies in a specific reference system. Coordinates from grid systems can be considered to serve as addresses. In its simplest form an address can be thought of as an abstract concept expressing a location on the Earth’s surface.

Two important questions follow. First, what does the creator of a grid mean when they use the term address to describe the locations in a new grid system? Second, how will commonly used existing addressing systems handle the form of address generated by the new grid? For example, from the perspective of a postal service an address might be defined as being: mailable, deliverable, locatable, and geocode-able. For some grid designers, locatable may be the only criterion of importance. For others, the address requirement might include the notions of it being hierarchical and topological. The notion of hierarchical can be seen in the address form used by go2 systems based on a long line of patents dating from 1996 (“Geographic location referencing system and method,” Patent number: 5839088) that, in one embodiment, provides a hierarchical address in the form US.CA.LA.14.15. The coordinates of other grid systems may allow one to discern useful information about the relative distance and direction between coordinate pairs, thus providing a useful relational context to the “finding” problem.
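A small sketch of why the hierarchical property matters (the addresses below are hypothetical, merely patterned after the go2 form cited above): the number of leading components two addresses share immediately tells you something about their administrative proximity, which a flat code cannot.

```python
# Sketch: with a hierarchical, dot-separated address, the shared leading
# components of two addresses hint at how "close" they are administratively.
# The example addresses are hypothetical, patterned after US.CA.LA.14.15.

def shared_levels(addr_a, addr_b):
    """Count the leading components two dot-separated addresses share."""
    n = 0
    for x, y in zip(addr_a.split("."), addr_b.split(".")):
        if x != y:
            break
        n += 1
    return n

print(shared_levels("US.CA.LA.14.15", "US.CA.LA.14.22"))  # -> 4, same block
print(shared_levels("US.CA.LA.14.15", "US.NY.NYC.3.7"))   # -> 1, same country
```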

So what is what3words?
On its website what3words (w3w) describes itself as, “… a universal addressing system based on the 3mx3m global grid. Each of the 57 trillion 3mx3m squares in the world has been pre-allocated a fixed & unique 3 word address.” On the same page of their website, the company indicates its opinion that the world is poorly addressed and that w3w provides a unique combination of just 3 words that identifies a 3m x 3m square anywhere on the planet. It claims that the grid cells are, “… far more accurate than a postal address, and much easier to remember, use and share than a set of coordinates.”
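The 57 trillion figure is easy to sanity-check: dividing the Earth’s surface area by 9 square metres per cell gives roughly that count.

```python
import math

# Back-of-the-envelope check of the "57 trillion 3m x 3m squares" claim:
# Earth's surface area divided by 9 square metres per cell.
R = 6_371_000                     # mean Earth radius in metres
surface = 4 * math.pi * R ** 2    # ~5.1e14 square metres
cells = surface / 9
print(round(cells / 1e12, 1))     # -> 56.7 (trillion cells)
```

Close enough to the advertised 57 trillion, given that a sphere is an approximation and the real cells cannot all be exactly 3 m x 3 m on a curved surface.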

The ability to remember three words, as opposed to remembering a long pair of spherical coordinates is at the heart of the w3w system. W3w appears to be trying to introduce a system of geographic coordinates into widespread “public use,” as opposed to the more limited scientific and technical user populations associated with the use of many other geographic grids.

Example forms of the w3w coordinates are as follows: “remote.sun.palms,” “feast.grab.bride,” or “madness.tags.curious.” As a further example, there are approximately 100 3m x 3m cells that fall within the boundaries of the property containing my home. If I enter my postal address using the w3w website, it appears to select a cell that is coincident with the center of the roof covering my abode. However, I could choose the coordinates representing any of the cells on my property as my w3w address. Presumably driveways or front doors might be a preferred choice for those presented with a large number of cells that could be used to identify the location of their home or business.

W3w is neither hierarchical nor topological. Any of the triplets used by w3w to identify a grid cell reveals nothing about the geographic relations between specific locations. In addition, w3w currently does not appear to have a vertical component or any other method of ensuring precise addressing for multi-unit locations. I guess that people living in the same corner of a multi-level building might have the same w3w address, and delivering anything to them might be a real puzzler. I suppose that’s part of why topology is so important in many addressing systems.

The approximately forty-thousand-word English-language vocabulary used to identify the cells has been designed to avoid words that might be considered impolite or upsetting when combined with others. For example, “dogs.tinned.cats” is shown to identify a location in Japan, but the combination of words “dogs.eat.cats” or any related variant does not appear in the system. Singular and plural forms of words are included. The algorithm employed was designed to ensure that similar three-word combinations do not occur in the same geographical area. A variant form of a three-word combination used in one location (e.g. the plural form of one of the words in the coordinate triplet) might be used to describe a location on another continent.
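A quick check shows why a vocabulary of roughly this size suffices: ordered triples drawn from 40,000 words comfortably outnumber the roughly 57 trillion cells.

```python
# Why ~40,000 words are enough: ordered three-word combinations (repeats
# allowed) outnumber the ~57 trillion 3m x 3m cells.
vocab = 40_000
combos = vocab ** 3        # ordered triples
print(combos)              # -> 64000000000000 (64 trillion)
print(combos > 57e12)      # -> True
```

The surplus of about 7 trillion combinations is presumably what allows the system to discard offensive pairings and to keep similar triplets out of the same geographic area, as described above.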

Next, there are multiple language versions of w3w, although it appears that English is used in all versions for representing locations in the oceans and seas of the world. The triplet of words used to describe a specific land-cell using English bears no relationship to the three-word coordinate for the same cell in any other language, although these multiple representations point to the same world coordinate when analyzed by the w3w software. If you compared your w3w destination coordinates with someone who had used another language version of the grid, you both might be headed to the same destination, but, lacking a software application, would have no idea that the two seemingly unrelated grid cell designations were describing the same exact location.
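The mechanics of this can be sketched in a few lines of Python (the wordlists and the encoding below are invented for illustration and are not w3w’s actual vocabulary or algorithm): each language version is an independent labeling of the same underlying cell index, which is why two unrelated-looking triplets can resolve to the same cell.

```python
# Toy illustration (invented wordlists, not w3w's real vocabulary or
# algorithm): each language labels the same underlying cell index
# independently, so triplets in different languages look unrelated
# but point to the same cell.
EN = ["tree", "rock", "wave", "sand"]
FR = ["pomme", "mer", "vent", "roche"]

def triplet(index, words):
    """Write an integer cell index as three base-N 'words'."""
    base = len(words)
    a, rest = divmod(index, base * base)
    b, c = divmod(rest, base)
    return f"{words[a]}.{words[b]}.{words[c]}"

shared_cell = 27                      # one cell, two unrelated-looking names
english = triplet(shared_cell, EN)    # "rock.wave.sand"
french = triplet(shared_cell, FR)     # "mer.vent.roche"
```

Without the lookup software, nothing about “rock.wave.sand” and “mer.vent.roche” reveals that they name the same spot.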

As an aside, note that there appears to be some size parameter at work in naming locations in the ocean. While blank sections of water are named in English in all language versions, modestly-sized islands, such as Reunion Island, currently in the news, are covered with grid cells using words from the language version being used (e.g. French words if you use the French language version of the product). However, smaller islands (such as Flat Island and Round Island to the northeast of Reunion) are named in English, even when using another language version of the product. In further examination of this issue, I note that the Spratly Islands, involved in a territorial dispute between China, Brunei, Malaysia, Vietnam, and the Philippines, are named using triplets of English words regardless of the language version of the product that is used. I guess there might not be a strong appetite for the use of the w3w grid by China unless the naming algorithm is altered a bit.

The three words chosen as a coordinate for a location normally represent the center of the cell. These points, at least theoretically, “…will be within 2.12 metres from any adjacent square with a w3w address.” (Robert Barr, What3Words Technical Appraisal*, available here). Barr further states that the w3w address is already a geocode (p. 16) and does not suffer from the problems associated with the geocoding and reverse geocoding process.
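The 2.12-metre figure appears to be nothing more exotic than the half-diagonal of a 3m x 3m cell, i.e. the farthest any point in a square can lie from the square’s center point (my reading of the quote, not Barr’s own derivation):

```python
import math

# Half the diagonal of a 3 m x 3 m square: the farthest any point in
# the cell can be from the cell's center point.
half_diagonal = math.hypot(1.5, 1.5)  # 1.5 * sqrt(2) metres
print(round(half_diagonal, 2))        # prints 2.12
```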

How about that? The w3w triplet is actually a pointer to the latitude/longitude grid that makes the system possible – but you must have already guessed that relationship.
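A toy version of that pointer relationship, using an invented wordlist, a much coarser grid than w3w’s 3m cells, and none of the index scrambling the real system uses, might look like this:

```python
# A toy sketch of the relationship (invented wordlist, coarser grid,
# and no index shuffling -- NOT what3words' actual algorithm): the
# triplet is a base-40,000 re-encoding of an integer cell index, and
# decoding always passes back through the lat/lon grid.
WORDS = [f"word{i}" for i in range(40_000)]  # stand-in vocabulary
BASE = len(WORDS)
CELL = 0.0001             # grid spacing in degrees (~11 m at the equator)
COLS = round(360 / CELL)  # cells per row of the grid

def encode(lat, lon):
    """lat/lon -> three-word name of the containing grid cell."""
    row = int((lat + 90) / CELL)
    col = int((lon + 180) / CELL)
    i = row * COLS + col
    a, rest = divmod(i, BASE * BASE)
    b, c = divmod(rest, BASE)
    return f"{WORDS[a]}.{WORDS[b]}.{WORDS[c]}"

def decode(triplet):
    """three-word name -> lat/lon of the cell's center."""
    a, b, c = (WORDS.index(w) for w in triplet.split("."))
    i = (a * BASE + b) * BASE + c
    row, col = divmod(i, COLS)
    return (row + 0.5) * CELL - 90, (col + 0.5) * CELL - 180
```

In this naive version adjacent cells receive nearly identical triplets; w3w deliberately scrambles the index-to-word mapping so that similar triplets land on different continents.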

In order to use w3w, a user needs access to the w3w website or an app that uses the system. That means that, to identify their location and find the relevant grid address, they need a computer or a smartphone (or access to one of these devices) and, at some point in the process, an Internet connection. The person hoping to find their w3w address needs to be able to point to their location on a map to select the grid cell that is going to be used to represent their location and whose coordinates will be used as their address.

If I had never seen an online map or an aerial image identifying my location on one, it might be a pretty hard task to accomplish. As a matter of fact, even people who have had access to digital maps and satellite imagery often perform very poorly when attempting to use these types of spatial displays for purposes of locating features in the real world. What this means is that the adoption of w3w may be slowed by its users’ ability to access the required technology, as well as by their ability to locate their homes and businesses using the w3w platform. In addition, intervening opportunity may take its toll, since the required technology can be used to solve the “finding problem” by alternative means.

In any event, after having identified the location of my home or business, I would need to remember the three-word combinations used to represent them. Of course, without access to the w3w software, no one else can determine if they are near me solely on the basis of the three-word coordinates. Nor can anyone help me out by referring to, say, a nearby address if I cannot quite remember my sequence, since w3w word-triplets are randomly connected to geographical space in the w3w system.

So, let’s recast the story. W3w grid cells are created based on lat/lon and then identified with unique three-word combinations. In order to use these “addresses,” the three-word combinations point to a lat/lon coordinate pair that can be used to tie into typical mapping and routing systems. Yikes! Just what benefit does w3w provide?

W3w seems to make a great fuss about the memorability of its three-word triplets triumphing over the difficulty of using lat/lon coordinates. In other words, the w3w coordinates could be considered a simple mnemonic for representing a location in a table that contains lat/lon.

Although I have never tried to memorize coordinate pairs, I agree that lat/lon coordinates might be hard to remember. Of course, so is memorizing and retaining the correct form of a random concatenation of three words from a forty-thousand-word dictionary that creates approximately 57 trillion unique coordinate triplets.
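As a back-of-envelope check, the ~57 trillion figure is roughly the number of 3m x 3m cells needed to tile Earth’s surface, and a 40,000-word vocabulary supplies just enough three-word combinations to cover them:

```python
import math

# Cells needed to tile Earth's surface at 3 m x 3 m, versus the
# combinations available from a 40,000-word vocabulary.
earth_surface_m2 = 4 * math.pi * 6_371_000 ** 2  # ~5.1e14 m^2 (mean radius)
cells = earth_surface_m2 / 9                     # ~5.7e13 -- about 57 trillion
combinations = 40_000 ** 3                       # 6.4e13 -- 64 trillion
assert combinations > cells
```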

Perhaps more to the point, I cannot remember the last time I focused on remembering a specific lat/lon coordinate. I use lat/lon almost daily, but the action has been made opaque by mapping and finding technology. In my daily life, I no longer need an address for others to find me. I can call up a Google map and, by tapping into my GPS chip, it can calculate my location and tell others how to find me.

Indeed, if I point at a location on a map in Google Maps, right-click and query “What’s Here,” I receive the lat/lon of that location. If I put that lat/lon in a signature block, it would allow people who did not know my postal address to find me. In fact, the finding action in the above example seems to roughly approximate the same procedure people have to use to find the three-word coordinates in w3w that define a lat/lon coordinate.

While the concepts of “finding” and “finding grids” might be considered a global problem, providing addresses for individuals and their businesses may, in fact, be an opportunity that is best considered a local problem. Further, assigning global addresses using a global grid when the grid system contains no recognition of the political and administrative geography involved may be an insurmountable problem. While this may sound short-sighted, I can assure you that addresses, addressing and the “authority” to establish them, to standardize their form, and to mandate their use are political hot buttons everywhere in the world.

Finally, note that technology may be bypassing the need for its beneficiaries to understand the complexities of grid systems. Consider the mobile phone. You probably can’t remember the long sequences of digits that can be used to call your friends. Depending on the contents of your address book, your phone may also know your location and the locations of everyone you call. In addition, it records everywhere you go on the Internet and in real life. The phone doesn’t seem to need w3w to accomplish this feat and neither do you.

And Now a Word From Monty Python

Consider the fictional scenario presented below. I thought about scrapping the blog above and using this skit instead, but decided it might be better to discuss some of the issues with w3w in more depth. However, the scenario below is a pretty good summary.

It was a cold and dreary night. I had no idea where I was, so I called Rescue Services.
The operator asked, “What Three Words. Please?”

I replied, “I Am Lost.”

“No,” was the reply. “We couldn’t find any results for ‘I.AM.LOST’.”

I retorted, “But, I.AM.LOST.”

“No, sir. We require three words, not four words.”

I replied, “MY.CHOICES.ARE?”

“No, we did not get any result for those three words”

I responded, “HELP.ME.OUT?”

“Sir, you need to use a three word combination that contains three words from the forty-thousand or so recognized by what3words.”

I tried, “WHAT.THREE.WORDS?”


“No,” was the response followed by, “And, that’s WHAT3WORDS. Although if you choose to use French or Portuguese the dictionary is only twenty-five thousand words because they do not cover the oceans and seas. Are you asea?”

“No, but how do I get the correct what three words that locate my position?”

“Use a What3Words App to identify your location on a map and it will return the three words defining that position.”

“But if I gave you my What3Words, what would you do with them?”

“Convert them to lat/lon and run a route to you.”

I was incredulous – “WHY.DO.THAT? You can read my lat/lon directly from the GPS chip in my phone and you can JUST.FIND.ME!”


Encountering new map grids is always fun, and the thought that one might contain something productively innovative is always a big lure for me. I admire the team at w3w for attempting to solve a difficult problem. Unfortunately, convincing the world to use a new grid is a very difficult task, even when you might have created something better than that which already exists. While w3w is being effectively marketed, it is my opinion that it is unlikely to be widely adopted. It lacks what I consider to be a fundamental innovation. Further, its utility as a map grid is constrained by the simplicity that makes its use appealing to many.

Finally, I am no more enamored of the new grids Mapcode and Open Location Code than of w3w, but for entirely different reasons. But this blog is already entirely too long.

Letters, we’ll get letters…..


Dr. Mike

*) Dr. Barr is an acquaintance and a professional of the highest caliber. His analysis of w3w is good reading and I recommend it to you. He appears to view w3w favorably.
**) It is my opinion that the w3w website software is not particularly well-disciplined. Its various language options appeared to me to be unstable when examined over several days using Firefox v39. I did not interrogate the website using any other browser.


Posted in Authority and mapping, geocoding, geographical gazetteer, map coordinates, map grids, routing and navigation, Technology, what3words | Comments Off on What3Words – Not.Quite.Right

Can Anyone Stay on Top of the Online Mapping Hill?

July 19th, 2015 by admin

Recently a colleague contacted me to ask my thoughts about a report indicating that Microsoft was selling some map-related assets to Uber. He noted his disappointment, as he had hoped that Microsoft would reinvigorate its mapping activities and, once again, become a notable player in the mapping market. The brief conversation led me to contemplate the world of online maps both past and future.

Microsoft has had a long and storied role in desktop mapping software. For a while it was the leading provider of consumer-oriented mapping software, but that role relied on the company’s success in controlling the physical distribution channels for its products. In the age of packaged mapping software aimed at the desktop computer, Microsoft was able to influence the popularity of its products by controlling the channels that determined which products were available for purchase.

Microsoft could afford to buy as much shelf space and as many end caps or stand-alone displays as it desired. Since physical space in stores was limited, Microsoft’s presence could restrict the competitive products that were available. In other cases, when competing products seemed to offer more and better functionality, Microsoft often reduced the price of its software to “free,” or to a cost level that was not sustainable for most competitors. Due to its ability to leverage its mapping brand across distribution channels and measure its mapping products’ profitability across all software product lines, Microsoft’s mapping software became a dominant force in the industry. This is not to say that Microsoft’s mapping software was uncompetitive, as it was often of better quality than the products of many other players in the mapping industry.

MapQuest’s launch of a free, online mapping product quickly changed the distribution paradigm. In what should have been a case study for the Innovator’s Dilemma, Barry Glick and Company offered Internet-based routing between addresses across the United States, even though no one, at the time, had asked for it. While not quite as fully functional as some of the desktop mapping/routing software, it was often more up-to-date and offered none of the cock-ups that frequently accompanied the use of CD-ROM software, and the related idiosyncrasies of the operating systems of the time.

Microsoft’s response to the development of online mapping systems was quite timid, and, perhaps, more confused than anything else. Unfortunately for the company, MapQuest was the people’s choice, although Microsoft’s online map product was competitive. The more important point is that it was at this stage that Microsoft’s inability to influence online distribution doomed its mapping efforts, as the company now would have to depend on functionality and innovation in its effort to lead the market, without any of the revenue that had accrued to it from desktop mapping. But like MapQuest and Yahoo, Microsoft had no idea how to make money from online mapping.

Google’s development of mapping as an infrastructure play designed to enhance its advertising business marked a turning point in the sophistication of online mapping functionality. Google had a financial reason to spend a great deal of money promoting innovative mapping and routing features. It was able to out-spend and out-innovate Microsoft and all other players in the mapping universe as a result. In turn, the threat of the position achieved by Google as a partial result of the global success of its mapping programs led Apple to develop its own capabilities in mapping. Apple realized that users of the iPhone expected quality mapping and the company was not interested in its customers being users of Google Maps. Apple’s spend on mapping has been to protect its brand.

In today’s online world of mapping Google and Apple, two companies with strategic incentives requiring mapping, rule the roost. Will this leadership continue?

I have previously noted in this blog my interest in how long Google might be able to sustain its “spend” on mapping software. I think we now have an answer. It is my impression that the heady days of map development at Google are over and that its map products will be maintained at or near their existing levels, with little innovation, other than in regard to autonomous navigation systems, as we proceed into the future. Google, unfortunately, is approaching middle age and is developing the concerns that accompany fiscal responsibility. Over the last year or so, Google Maps has experienced senior management departures and market abandonment (GIS). Now the company has new financial leadership, and the resulting spending limitations and lack of innovation will certainly limit Google’s future in the world of mapping.

Although it is early in the game for Apple, I doubt it will fare much better. In its favor, the company has been more circumspect about spending. It appears to have out-thought Google’s mapping innovations and found a way to reach near-parity without spending as much as Google. However, in the long run, Apple’s market is limited to its own customer base, and quality mapping will be too expensive to support without some fundamental change in Apple’s business model. I suspect that eventually Apple will find that it, too, cannot afford to support its mapping programs at the desired level of accuracy and functionality.

The problem for all companies involved in mapping is that supporting quality in spatial data is an effort that historically has nickel-and-dimed profitability. While some of the basic map “facts” may remain unchanged for decades, other map features change with amazing rapidity. It is an unfortunate rule of mapping that you cannot just compile and build your spatial database, then stop. In order to be competitive, companies need to update their map data in a cyclical and spatially comprehensive manner. In addition, technology is constantly changing, and the spatial support systems that spin these databases must be upgraded, updated and rethought every two to three years. Most organizations simply cannot afford to maintain these types of efforts on a worldwide basis.

Few CFOs want to hear the answer to this question: “When will you be finished with the mapping database?” The answer, of course, is “Never!” The answer to “Can you spend less?” is “Of course, but the data won’t be as good and the functionality will suffer.” (While “active” crowdsourcing may be considered an alternative here, I think that it is, for several reasons, not a sustainable choice for major commercial map providers. However, crowdsourcing, either active or passive, is not the topic of today’s blog.)

That brings us to Uber. Obviously Uber is interested in mapping. It has hired key players from Google, made an asset-deal with Microsoft and submitted a bid for HERE. However, the HERE bid appears dead, which leads me to presume that new Uber employee Brian McClendon (ex-Google, once a mapping exec for the company) may be planning on recreating the Google Map Machine at Uber. I do not doubt that Uber could spend some of its money to build a great street-level spatial database for the world. Conversely, I hope someone besides me begins asking, “How many of these worldwide street-level databases are we going to build? Isn’t there a better way?” Maybe!

Although it may not make your day, next time I am going to write about map grids, an ever popular topic for dreamers. It might be fun – and, hopefully, informative.


Dr. Mike


Posted in Apple, Google, google map updates, Google maps, HERE Maps, map compilation, map updating, Mapping, MapQuest, Microsoft, MindCommerce, OSM | 5 Comments »

Google Maps Stumbles Badly – Crowdsourcing is the Problem*

May 25th, 2015 by admin

Google Maps has had a rough go of it lately. Public relations problems generated by crowdsourced data are at the heart of the conundrum, but the problems are related to two different systems used to support Google’s mapping efforts.

Google Maps’ current public-oriented problems are these:

1) The editing system the company employs for using crowdsourced data that may eventually appear on Google Maps is not authoritative.

2) The local search “folksonomy-oriented” matching algorithm used to match names users enter to find locations on Google Maps was poorly designed.

Both of these “gotchas” are unfortunate and could have been avoided. I, and others, have offered plentiful free advice to Google about the company’s need to tune its spatial data capture to enhance its map database, not to detract from it. Let’s look at the specifics of Google’s latest mapping problems.

Map Maker

In regards to Map Maker, the public relations fiasco focused on Google Maps apparently ingesting a curated edit of a road network whose content outlined an Android-like figure urinating on the logo of its competitor Apple. Indeed, it was reported that just to the east of the peeing Android was a sad face emoticon with the text, “Google Review Policy is Crap.” (See this source to view both images.) As an aside, it is good to see that Google has kept its sense of humor. While I was searching for sources on the “peeing Android,” the ad on Google’s search page was titled, “Manage Overactive Bladder.”

Of course, these have not been the only errors discovered in the company’s use of crowdsourced data – I am sure many of you remember the listing for Edward’s Snow Den located in the White House. How could these types of map spam have been unexpected by Google? And yet, according to the Venture Beat news source cited above, Google’s response included this interesting comment: “We also learn from these issues, and we’re constantly improving how we detect, prevent and handle bad edits.” Hmmm. I never would have classified the Google Maps Team as slow learners. They are world-class brainiacs. Maybe they lack an appreciation of or familiarity with the nuances of cartographic practice?

In any event, in 2011, for example, I wrote a series of blogs analyzing Google’s Map Maker system and the company’s handling of crowdsourced data (e.g. here and here, among other articles on the topic). To save you from having to read the original articles, here is a concise summary: I examined Map Maker and its editing system and found that, due to flaws in the system as it existed at that time, the edited and “validated” information in Map Maker resulting from user-generated data should not be considered “authoritative.”

Currently, Google has suspended Map Maker edits and is working on a solution to the “problem” of users contributing invalid, inappropriate, or otherwise erroneous spatial data for use in Google Maps. Let’s talk about what Google might consider doing to solve this problem near the end of the blog.

Google Map Search

Google Maps’ latest problem, highly documented in the press here, here, here, and here, is that it attempted to match unconstrained location identifiers (an uncontrolled vocabulary) entered by users during map search with actual locations on Google Maps (a controlled vocabulary). More specifically, the company chose to employ a purpose-built approach based on an unconstrained folksonomy to match possible surrogate names entered by users during map search queries to the actual names and locations of the POIs (points-of-interest) symbolized on Google Maps.

I fully support the notion of a folksonomy-based approach to local search. As a matter of fact, in 2007, before Google or anyone else in mapping or location search was using the concept, I wrote a blog titled “Controlled Vocabularies, Why local search needs folksonomies.”

Google apparently understood the concept, but was not thorough enough in its implementation.

According to Google Maps’ own blog, the Google Team culled spatial terms from online discussion forums and related these names to known geographical locations. In some cases, the terms they gathered were found to be “offensive.” Really? How unexpected was this obvious, method-induced error? Did they think that they might not find associations between names and places that might be offensive? Have they never read about the riled-up public opinions on naming decisions made by the U.S. Board on Geographic Names? Nevertheless, Google authorities stated that they, “…were deeply upset by this issue, and we are fixing it now.” Hmmm. What other time bombs are yet to be found? Has Google not yet learned that maps and spatial information cannot be handled or considered “…just another information system?”

Maps and the information that they contain will bite you in the ass when you least expect it. My experience comes from years of teaching map making and over a decade spent as the person in charge of all mapping operations at a company that was, at the time, the world’s leading print publisher of maps and atlases. My mantra each morning was, “What’s it going to be today?” Google may be beginning to appreciate the problems of compiling accurate maps, evaluating map data for timeliness and appropriateness, calibrating authoritative editing systems, all while keeping your product up-to-date and editorially acceptable to your user base (it’s that old geographic names thing again).


Problems with their approach to crowdsourcing are at the heart of Google’s current, public, mapping blunders.

Surowiecki in his important work “The Wisdom of Crowds” provided a comprehensive look at user generated content and I urge you to read his book. Surowiecki postulated that taking advantage of the wisdom of crowds depends on the diversity of opinion, independence and decentralization in the crowdsourcing population, as well as the influence of the method used for soliciting contributions. Surowiecki felt that if the crowd contributing data cannot satisfy these conditions, then its judgements are unlikely to be accurate. If he is right, then Google may need to rethink its approach to crowdsourcing data for use in Google Maps, as it appears to me that its current procedures violate almost every aspect of these cautions.

In part, Google’s use of crowdsourced data seems to reflect a belief that the company would have been unable to create as comprehensive a map database on its own as it has been able to create using crowdsourcing. Google rightly reasoned that contributors to its spatial database might not have the same goals as Google in regards to map accuracy and authority. Presumably, it is for that reason that Google evolved a hybrid-edit practice, but then negated the efficacy of the system.

First, it employed internal editors who did not possess the specific local geographic knowledge to assess crowdsourced contributions supposedly describing local geography. Second, it further diluted its goals for the system by the manner in which it allowed its contributors to become one of the components of the authoritativeness of its edit system. In the long run, Google needs to find a way to exert control and authority over its edit system. Until it does, blunders like those described above, and ones that are even worse, will plague their map database.

Google’s goals for crowdsourced data often appear contradictory. While the company wants to harness local knowledge from users, its system allows users to contribute even when they do not have local knowledge, nor are they located in the region for which changes are being contributed. Map Maker is a prime example of this mismatch. In turn, some review editors also appear not to have the local knowledge that one would think was required to analyze a contributed change made to some aspect of a “local” geography. Using imagery is an understandable, but poor, substitute for local knowledge.

In other crowdsourced mapping systems, edited data are pushed to a live site and then curated until they are “considered” correct (kind of like a ping pong match) by meeting the commonly held notion of what is correct in the community that evaluates them. Data in crowdsourced systems are supposed to be “self-healing” over time. Google apparently instituted its editorial review measures because it could not afford for live data to be batted back and forth until judged to be “healed.” For example, it is difficult to design a mapping system or a routing system whose features might be in a constant state of flux. Not only could this create incorrect maps, but also non-navigable routes.
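One middle ground between live “ping pong” curation and internal review, sketched here as a minimal illustration (the class and its fields are invented, not a description of any actual system), is to hold each contributed edit out of the live map until several distinct reviewers, ideally local ones, have confirmed it:

```python
from collections import defaultdict

# Minimal sketch: a contributed edit goes live only after enough
# distinct reviewers confirm it, so data "heal" before publication
# rather than on the live map.
class EditQueue:
    def __init__(self, required_confirmations=3):
        self.required = required_confirmations
        self.confirmations = defaultdict(set)  # edit_id -> reviewer ids

    def confirm(self, edit_id, reviewer_id):
        """Record one confirmation; return True once the edit has
        enough independent support to be published."""
        self.confirmations[edit_id].add(reviewer_id)  # a set ignores repeat reviewers
        return len(self.confirmations[edit_id]) >= self.required
```

Because confirmations are kept in a set, a single contributor cannot push an edit live by confirming it repeatedly — a small nod to the independence condition discussed above.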

Google seems to have designed a system that did not take the “extended” healing path, but one that was just good enough when its product had a lower profile. Unfortunately, the system is no longer appropriate for the uses to which it is being put. Could these active sources of user-generated content be used to navigate autonomous cars? We had better hope Google figures out a fix before that happens.

In regards to the map search problem, Google apparently was aggregating input from people who, presumably, were unaware of Google’s use of these data. While Google seems free to aggregate any information it wants, it boggles the mind that it would do so based on chat room conversations, which were certainly not authoritative sources of information on local geography. Creating a folksonomy without consideration of source authority, or without a filter for “appropriateness,” was a major, bush-league blunder. In addition, gathering crowdsourced data is influenced, see above, by the method used to solicit information from the targeted population. Google now knows that its method is in error, but will it be able to concoct a user-focused paradigm that elicits data accurate and useful enough for the purposes of Google Maps?
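The two missing safeguards could be as simple as the following sketch (all names, weights, and terms are invented for illustration; this is not Google’s pipeline): weight each crowd-derived nickname by the authority of its source and screen it against a blocklist before linking it to a real place.

```python
# Hypothetical safeguards for a folksonomy pipeline: a source-
# authority weight and an appropriateness blocklist, both invented
# here for illustration.
BLOCKLIST = {"offensiveword"}                # appropriateness filter
SOURCE_WEIGHT = {"official_gazetteer": 1.0,  # source-authority weights
                 "user_report": 0.5,
                 "scraped_forum": 0.1}       # chat rooms rank lowest

def accept_alias(alias, source, votes, threshold=1.0):
    """Link a nickname to a place only if its weighted support clears
    the threshold and no token is blocklisted."""
    if any(token in BLOCKLIST for token in alias.lower().split()):
        return False
    return votes * SOURCE_WEIGHT.get(source, 0.0) >= threshold
```

Under this scheme, five forum mentions (5 x 0.1 = 0.5) carry less weight than two direct user reports (2 x 0.5 = 1.0), so names harvested only from chat rooms never reach the map on their own.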

Whether or not Google can find a way to effectively engineer and police crowd-sourced systems is a topic of interest for them (and for me). My own opinion is that active and passive crowdsourced systems will be critical components in all future mapping systems. Google has the resources to monitor, evaluate, rank and adjust or regulate its crowdsourced geographical data to achieve its goals in mapping, but seems reluctant, or unable to mount the specific effort required to confront the problem.

As I have noted in past blogs, Google engineers don’t necessarily think they are smarter than everyone else, just that they have more and better data with which to examine a problem. Google should have the smarts, resources and the required data in their data lake/reservoir/swamp to analyze the likely validity and usability of crowdsourced map data by creating a consistent, authoritative vetting process. But, maybe not. Or maybe the effort would make it uneconomical.

Well, maybe Verizon will come up with something now that it owns the almost moribund MapQuest. Apple Maps? Well, they could certainly take better advantage of crowdsourced map data, but that does not seem to be of particular interest to them at this time, although it is a technique that could really help them improve the quality of their maps and, especially, their business listing data.

And now for something completely different

While I spend most of my time on assignment for my consulting business or thinking about the problems of mapping and spatial data handling when not on assignment, I do find time for one hobby in particular. Don’t laugh – it’s bird photography. If you are interested in the world of shore birds you might want to take a look at some of my photos – DobsonPhotoArts.com . (While prints are for sale, I do not expect you, the audience of this blog, to buy any images; their purchase is part of a more complex strategy – and fun research in its own right. So don’t alter my sample.)

I hope you had a great Memorial Day Weekend.

Dr. Mike

The reference to the Surowiecki work is as follows:

Surowiecki, James (2005). The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. New York, NY: Anchor Books, 306 pp.

*Blog edited on 2015_05_26 to improve readability.


Posted in Apple, Authority and mapping, business listings, crowdsourced map data, Data Sources, folksonomy, Google, Google Map Maker, google map updates, Google maps, map compilation, map updating, Mapping, MapQuest, routing and navigation, User Generated Content, Volunteered Geographic Information | 2 Comments »

Silicon Valley – the New Motor City?

May 11th, 2015 by admin

As you may have noticed, my last few blogs either directly or indirectly, have been nipping at the issues surrounding autonomous vehicles (AV) and the spatial data that might be needed to operate them. Today’s post offers a look at how Google, Apple, Uber and others might choose to compete in this market.

While the current automobile manufacturers and their suppliers obviously will attempt to compete in the AV market, it remains an open question whether these industry stalwarts will be able to effectively transition to a future AV market, in which vehicles being “driverless” and manufacturers being “dealer-less” may not be the most significant changes. The complex infrastructure that supports the motor vehicle industry (e.g. suppliers, dealers, repair shops, parking facilities, aftermarket services, etc.) will likely experience disruptive change as AVs emerge. In turn, this transition will generate enormous money-making opportunities related to the new and potentially unique infrastructure requirements needed to support the AV market. (See this article from Forbes for some interesting insights.)

The implications of AV market development are incredibly complex and most of us have not thought through the changes that will accompany this market. Will cars that cannot crash need to be insured? How will local municipalities replace the income generated by motor vehicle violations when traffic fines disappear due to vehicles being programmed not to violate local transportation and parking ordinances? Will it be more economical to run fleets of AVs around the clock (as delivery vehicles in off hours), rather than park them overnight? Will houses need garages for vehicles? How will the transition be managed when SUVs and AVs share the highway before AVs replace old fashioned drive-it-yourself vehicles? Who is going to employ those taxi, truck, bus and other commercial vehicle drivers made redundant when AVs become commonplace? Obviously the questions are endless – as are the opportunities.

It is my position that what happens to the existing market for vehicles, its infrastructure in general, and current OEMs in particular, will depend, to a considerable degree, on how Google, Apple and Uber attempt to monetize and leverage the market for autonomous vehicles. I realize that some of you think that this scenario is implausible. Others might respond that this is the same mindset held by mobile phone manufacturers (e.g. Nokia and Motorola) when contemplating the news of iOS and Android-powered mobile phones. Note that disruptive innovation rarely comes from existing players in a market.

In this blog I do not intend to include a long background discussion on AVs, as any search engine query will return tons of information on the technology. Instead, the paragraphs that follow outline how I see the future AV market unfolding, mainly for the three key players that interest me today. Let’s start with market timing and then move on to Google, Apple and their approach to the AV marketplace. Comments on Uber’s strategy will be based on the potential strategies of Apple and Google.

While there is an amazing amount of ongoing work aimed at producing saleable AVs, it is likely that mass-produced autonomous vehicles are over a decade away. Before then, we will see semi-autonomous vehicles that require varying degrees of driver intervention. AV prototypes will become abundant during the next five years, but the market will remain minuscule over the next decade. During this market development period the existing manufacturers will attempt to show their ability to innovate, as well as exert their influence by lobbying for legislation that might tilt the table in their favor against challengers from the software/technology world.

The possible strategies of Google and Apple

Google makes the Android OS, billed as the world’s most popular mobile OS, available to everyone, but produces and markets its own version of an Android phone to show mobile companies and software developers how the system should work. The original Android, aimed at creating an Open Source OS for mobile phones, was acquired and managed by Google to further its corporate goals, while sparking a communications revolution. In respect to those “corporate goals,” Android is advertised (https://www.android.com/intl/en_us/) as having, “The best of Google built in,” which means that “Android works perfectly with your favorite apps like Google Maps, Calendar, Gmail and YouTube.” What that really means is that Google has you in the clutches of its massive and highly profitable advertising business.

Apple, on the other hand, developed its iPhone operating system (iOS) with the intent of delivering it exclusively for use in a handset developed by Apple and manufactured to its specifications. Apple’s amazing financial success is based on the fact that the company’s products are designed to provide a comprehensive user experience for its customers. In turn, Apple’s customers rely on the company’s integrated approach to product development, equipment manufacturing, support and device education, as it provides them an “upscale” product experience.

The approaches of Google and Apple to the mobile market speak to two potentially different strategies for competing in the AV market. It seems to me that the goals of Apple and Google in respect to the AV market should be the same – they should not care about the car, they should care about controlling what goes on in it when cars become autonomous. It is at this point that, for the operator, the vehicle becomes a floating office/living room/den/bar/restaurant, etc. Relieved of the “duty to drive,” people will want to use the car cabin for recreation/communication/lifestyle experiences or to conduct business. I am not sure that either Apple or Google needs to own the car to own the cabin, but Apple may well want to own the “car” and the entire vehicle experience.

Google may conceptualize the vehicle cabin as a “local search petri dish” – abounding with germs – each of which is a new thread opening a unique path to sell local advertising. Apple may see the car as another device whose sales will dramatically increase their income and include a cabin-based market for services that a captive audience will be willing to buy/lease/rent.

Google and its Johnny Cab

It is my opinion that Google is looking to kick-start their concept of how the AV World should work through the development of what will become a low cost, fleet-oriented solution to navigation that encompasses the concept of cars on demand and no need for individual ownership of the vehicle. While these are laudable goals, they may be so utopian in scope as to preclude success. The problems with implementing such fleets are enormous, but, then again, those who think of ways to do so will make fortunes.

Of course, Google’s goals are flexible and the company will likely produce a purpose-built automotive-oriented OS running its Johnny Cab AVs to show other manufacturers how a Google-based system could benefit the development of AVs. Specifications for its Johnny Cab OS, including the details of the hardware, sensors and data (e.g. a map database, traffic database, vehicle restriction database) required to make the vehicle truly autonomous, will also be available. However, it will allow its licensees to adapt the OS to fit their own vision of AV production and style, just as it does today with Android in the mobile phone market. Google will, however, control and tightly license use of the databases key to operating the system. In this strategy, Google may become the best friend of existing automobile manufacturers and their suppliers, although providing the AV OS and support infrastructure will put Google, not the vehicle manufacturers, in the catbird seat. See Google’s pre-marketing for wireless cars here.

It is my opinion that Google already possesses the majority of tools, know-how, and databases required to create fully functioning AVs. Tuning their OS and optimizing the vehicle to meet their view of the future will take some time, but certainly they are the lead horse in the AV race. Their key concept of ride availability based on vehicle fleets rather than individual ownership may be further in the future, but the notion does solve a number of difficult mobility-related problems that society faces today.

Google’s strategic desire in their AV development effort is to ensure the production by others of a large population of AVs powered by Google as a method of extending the sphere of influence of its advertising business. I doubt that Google is interested in becoming a manufacturer of automobiles, just as it has not shown much interest in being a manufacturer of mobile phones, tablets, or computers. Apple, on the other hand, well, Apple will likely take an entirely different approach.

Apple and its iCar.

In some ways, Apple appears to be more realistic about exploiting the market, sensing that people want better designed products – an iCar OS, for instance. Apple helped transform the world of telephony into a world of social contact, software apps and declining voice communications. The company has similar dreams for transforming the AV world into a social experience that transcends the friction of distance by focusing each vehicle trip on you and your wants during the journey, not about the vehicle, or how to get it to your destination, or how to avoid other maniac drivers.

Apple’s presumed approach, however, requires more than just an OS. The iCar must be a complete user experience that Apple can control from design through production. Apple will likely want to develop, market and support its own branded vehicle and is currently reported to be executing this concept at a skunkworks in Silicon Valley.

In essence, Apple may become the real threat to the current automotive manufacturers and their suppliers. Fortunately for them, Apple’s penchant for high-end, high-margin products makes it likely that Google or a Google-Like Company (GLC) will partner with the existing industry, bringing the iOS vs. Android wars to the segment of the new marketplace that does not require “luxury” vehicles.

While hardware for AVs is not a crucial question (for an overview of the required hardware, see this from Wired), I have some doubts that Apple is up to the “whole car” challenge. Apple seems to be shy about working with anything mechanical in nature. Think of the evolution of all of Apple’s devices and you will notice that over time they have become smaller and less mechanical (dials replaced by touch screens, etc.). While I think Apple understands the world of “mechanical things,” I believe that the company considers them the weak link in all use-scenarios. I wonder whether Apple is prepared to manage the life cycle of a mechanical automobile and deal with the problems that such systems might present to its customers and to Apple as a product/service provider. Today, if you are having trouble with an iPad that is still under warranty, you go to an Apple store and, once they confirm the fault, they simply swap out your unit. This type of service might be more difficult in the future AV market, but a modular car and component system could be maintained by remote diagnostics and serviced by “hot-swapping” equipment centers. Time will tell.

Note that Apple’s map database might need some serious augmentation in order to support the company’s AVs. Today its database lacks the “map” details needed for autonomous navigation and also lacks the roadway/roadside imagery and lidar data that could provide a significant benefit in the AV market, items that Google has been industriously collecting for some time. On the other hand, everyone involved in the potential AV market needs to ask whether “map” database systems architected and designed for high-volume, Internet-based map serving will meet the needs of simultaneously and safely navigating millions of AVs. I suspect there will be significant performance difficulties when using these map databases in AV applications – but that is not a topic to be covered here – at least not today.
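To see why serving latency matters for navigation, consider a back-of-the-envelope sketch. All numbers here are hypothetical, chosen only to illustrate the gap between web-style map serving and an onboard lookup, not measured figures for any real system:

```python
# Hypothetical latency-budget sketch: how far does an AV travel while a
# single map-database query is in flight? Illustrative numbers only.

def distance_during_query(speed_kmh: float, latency_ms: float) -> float:
    """Meters traveled while one map query is outstanding."""
    speed_ms = speed_kmh * 1000 / 3600  # km/h -> m/s
    return speed_ms * (latency_ms / 1000)

# An Internet map-tile fetch (assumed ~250 ms round trip) vs. an onboard
# lookup (assumed ~5 ms), at highway speed (~110 km/h):
web = distance_during_query(110, 250)
onboard = distance_during_query(110, 5)
print(f"web-serving fetch: {web:.1f} m of travel")
print(f"onboard lookup:    {onboard:.2f} m of travel")
```

At these assumed numbers the vehicle covers several meters of roadway per remote query, which is one way to frame the performance concern raised above.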

And Now – Uber

Today Uber does not need to manufacture or otherwise own a communications device or OS to support its business, since the phone and related apps are simply regarded as connecting the user to the Uber infrastructure. However, in the future, that infrastructure will be significantly transformed by AVs, their adoption rate and the nature of their ownership (personal or commercial). If my guesses about the strategies of Apple and Google are correct, then Uber may have a difficult time competing, at least in some geographic markets, unless it reconfigures how its business operates.

Uber connects riders with vehicles and drivers. At its heart, Uber depends on driver/vehicle availability and matching these with the demand for trips generated by passengers. It may be that this scenario will become more difficult to manage if the scenarios for Apple and Google actually occur as outlined above. Owners of an iOS AV would, I think, be unlikely to make it available for use by Uber. Fleets of Google’s Johnny Cabs would already be programmed for maximum availability and, likely, not be available for use by Uber. While this issue could be considered a matter of logistics, Uber could take the uncertainty out of the equation by producing and owning a dedicated fleet of AVs. In this scenario Uber would continue to support its present business, but through its own fleet of AVs. The company could also expand its offering on a contract basis to provide individuals with transportation services that eliminate their need to own a car at all. Finally, Uber could add to its street visibility and market strength by designing and fielding a fleet of custom AVs, in much the same way that the UPS brown trucks are unique and identifiable anywhere in the world.
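The matching problem at the heart of Uber’s business can be sketched in a few lines. This is an illustrative toy (greedy nearest-vehicle assignment on made-up coordinates), not Uber’s actual dispatch algorithm:

```python
# Toy sketch of rider/vehicle matching: assign each rider the nearest
# still-available vehicle. Greedy and simplistic by design; real dispatch
# systems weigh ETAs, traffic, pooling and fairness.
import math

def dist(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dispatch(riders, vehicles):
    """riders, vehicles: dicts of id -> (x, y). Returns rider -> vehicle."""
    available = dict(vehicles)
    assignments = {}
    for rider_id, loc in riders.items():
        if not available:
            break  # demand exceeds supply
        best = min(available, key=lambda v: dist(available[v], loc))
        assignments[rider_id] = best
        del available[best]  # each vehicle serves one rider at a time
    return assignments

riders = {"r1": (0, 0), "r2": (5, 5)}
vehicles = {"v1": (1, 1), "v2": (4, 4)}
print(dispatch(riders, vehicles))  # {'r1': 'v1', 'r2': 'v2'}
```

The point of the sketch is the dependency it makes explicit: if the `vehicles` pool shrinks because AV owners and Google fleets opt out, the matching logic has nothing to work with, regardless of how clever it is.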

It is likely that the notions described above are what has caused Uber to bid for HERE, as well as to partner with Carnegie Mellon on the Uber Advanced Technologies Center. From my perspective Uber could save a lot of money by being a licensee of Google’s forthcoming AV products, but, apparently, the company’s strategic interests in new markets make it reluctant to partner with a likely competitor. At this point, however, Uber’s investment choices do not convince me that it is on track to play a winning hand in this game of chance.


I think the key concept behind Google’s and Apple’s participation in the development of autonomous vehicles involves each company capitalizing on the freedoms that will result when automobile users are finally relieved of the duty to drive. Driving is a demanding, tiring task that provides few rewards other than to transform space by spanning the distance between an origin and destination faster than is possible using other personal transportation devices. Those finding exceptional ways to fill the time that people will no longer spend driving vehicles will find a number of incredible new markets. Aftermarket modifications will become an even larger business in the world of AVs. Privacy? Don’t even ask.

For Uber the future of the cabin is largely a continuation of the past. Its current users are those who have already chosen to be relieved of the duty of driving. What will change for Uber is the potential need to own a fleet of vehicles rather than contracting this availability from independent owner-operators in order to sustain its business. The choice of whether or not to develop its own AV will profoundly influence the future development of this dynamic company. Total cost of ownership (TCO) will become a fundamental part of its business model, and one that it may wish it had avoided. Uber will find that paying drivers to use their own cars provided a much better return.
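The TCO shift described above can be framed as simple per-trip arithmetic. Every figure below is hypothetical, chosen only to show the structure of the comparison (capital cost amortized over trips plus operating cost versus a per-trip driver payout), not to estimate Uber’s actual economics:

```python
# Back-of-the-envelope sketch of the contractor-vs-owned-fleet TCO
# comparison. All dollar figures and trip counts are made up.

def cost_per_trip_contractor(driver_payout: float) -> float:
    """Owner-operator model: Uber pays per trip; vehicle TCO stays with the driver."""
    return driver_payout

def cost_per_trip_fleet(vehicle_cost: float, lifetime_trips: int,
                        per_trip_opex: float) -> float:
    """Owned-fleet model: capital cost amortized over trips, plus operating cost."""
    return vehicle_cost / lifetime_trips + per_trip_opex

contractor = cost_per_trip_contractor(driver_payout=8.00)
fleet = cost_per_trip_fleet(vehicle_cost=60_000, lifetime_trips=20_000,
                            per_trip_opex=3.50)
print(f"contractor model: ${contractor:.2f}/trip")  # $8.00
print(f"owned fleet:      ${fleet:.2f}/trip")       # $6.50
```

Whether the owned-fleet number actually comes in lower depends entirely on utilization: the amortization term balloons if the fleet sits idle, which is exactly the risk the paragraph above points to.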

And one final note. How are all these quasi-sentient AVs going to find their destinations when they are not residential addresses? No company that I know of has either comprehensive or accurate business listings data, a theme that we have long hammered on in this blog. Add in the growing mold of link rot and you have a toxic problem for the AV industry. Let’s face it, the AV will not know where the rider actually wants to go; it just knows which item on the list presented to the user was selected as the destination. In fact, the AV rider may not know this is the wrong business name/address pair until they arrive. What fun – a vehicle that finds destinations using a spatially indexed random numbers table. So what else is new?
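The destination-selection problem above can be made concrete with a small sketch. The listings, dates and staleness threshold here are all invented for illustration; the point is that the vehicle only knows which candidate the rider picked, not whether that listing is still correct:

```python
# Toy sketch of destination lookup against a business-listings table.
# A stale, unverified listing looks just as selectable as a fresh one.
from datetime import date

LISTINGS = [
    {"name": "Joe's Diner", "address": "12 Main St", "last_verified": date(2016, 6, 1)},
    {"name": "Joe's Diner", "address": "98 Elm Ave", "last_verified": date(2011, 3, 15)},
]

def candidates(query: str, today: date, max_age_days: int = 730):
    """Return matching listings, flagging any not verified within max_age_days."""
    out = []
    for item in LISTINGS:
        if query.lower() in item["name"].lower():
            age = (today - item["last_verified"]).days
            out.append({**item, "possibly_stale": age > max_age_days})
    return out

for c in candidates("joe's", date(2016, 8, 31)):
    flag = "possibly stale" if c["possibly_stale"] else "verified recently"
    print(c["address"], "-", flag)
```

If the rider happens to tap the Elm Ave entry, the AV will dutifully navigate there, and nobody discovers the listing was dead until arrival.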


Dr. Mike


Posted in Apple, autonomous vehicles, Google, HERE Maps, Mapping, mapping business listings, Nokia, Personal Navigation, Uber | 2 Comments »

HERE Maps on Sale – The mapping derby begins

April 15th, 2015 by admin

As you may have read in the WSJ, Forbes, or other sources, Nokia’s mapping unit HERE is in play. While I do not find this item to be “news,” it has attracted a great deal of publicity and speculation on the estimated value of the company, the potential strategic benefits for the winner of the auction, and the companies that might be interested in acquiring the property. (Read my August 2014 blog on HERE for more information on the company’s problems and its future – “here,” so to speak. The comments in my 2014 article seem especially pertinent given today’s news.)


Let’s discuss some background items that I consider relevant to the discussion on the potential acquisition of HERE.

First, every company that has any association with navigation has known for a long time that HERE could be acquired for a “very reasonable price.” It is not as if owning HERE has provided any strategic or financial advantages for Nokia, especially after the sale of its handset unit. However, the diehards at Nokia and HERE will express indignation at this statement and respond that selling HERE was never considered before now. Yeah, right – if you believe that, well, wait until you get a job in senior management!

Second, HERE is not now nor has it ever been a consumer-facing business. Re-architecting it to function as a “visible” and valuable consumer brand, while maintaining the company’s role as automobile industry supplier, would likely not be an easy task for any potential acquirer.

Third, HERE revenue for 2014 came in at EUR 969M. Another evaluation had HERE’s EBITDA at $168M. Then, as now, the company’s operations ran a modest loss.

Insightful due diligence might reveal the reason behind the loss, but these problems might not be of significant interest to a strategic buyer determined to be a player in the navigation space. After all, Google Maps is the best game in town and you are not going to catch them by standing still or wringing your hands over something that needs improvement.

Although I do not have access to any “insider” information, I suspect the lack of significant growth in HERE’s financials is related to a) a lack of strategic leadership by Nokia, b) inefficiencies related to the loss of the “Navteq Corporate Memory” caused by the departure of numerous senior personnel from the company who, for one reason or another, did not continue with the company after its acquisition by Nokia (2008), and c) non-optimal revenue generation (i.e. below what was in the plan). My take is that sales have been difficult to conclude for, at least, two reasons. One weakness that I believe is providing inroads for competitors is the perception of declining data quality in HERE’s mapping and support databases. Second, HERE’s owner has “Nokia-ized” the sales process and its “telecom-based approach” to the automobile market has alienated current and potential customers.

Fourth, Nokia acquired Navteq for EUR 5.7 billion in 2008 and the value of the asset has declined since that time. In 2014 Nokia took a EUR 1.2 billion impairment charge, as it revalued HERE at EUR 2 billion. As noted in the articles linked to at the start of this one, preliminary current estimates of the “bid” value for HERE may range from EUR 1 billion to EUR 4 billion.

The fifth background piece for the acquisition puzzle deals with the “potential” rights (if any) related to HERE that Microsoft may have received when it acquired Nokia’s handset division. Yes, there was a side deal for the use of HERE maps for an extended period of time, but there was also the issue of warrants and rights that were not definitively described in the press release on the deal.

Who might be interested in acquiring HERE?

A wide-range of companies could be interested in acquiring HERE. Let’s look at some of the most frequently mentioned (but not necessarily the likeliest).

Uber is at the top of everyone’s list, but I am not in that camp. However, when companies are not required to be economically rational and have a high valuation, anything is possible. From my perspective, Uber is a user of map data, but does not need to own the map database (unless Google were going to acquire HERE and Apple were to acquire TomTom). Google? – Well, they didn’t want anyone else to own WAZE, but general antitrust considerations (along with their current problem in Europe) make it unlikely they will play this time around.

Back to Uber – Rather than buying HERE, it should be hard at work investigating or developing methods to utilize its drivers’ maneuvers while on duty to map the paths used for transportation throughout the areas where the company operates. In other words, Uber could adapt the WAZE model to map its paths if it wants to transition away from commercial map data providers. Uber does not need to own a global database of streets, as there are limited benefits to maintaining data on streets and roads on which its drivers will never pilot an Uber-mobile.
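The “map only what your fleet drives” idea can be sketched simply: aggregate trip traces and keep statistics only for segments the fleet actually traverses. The segment IDs and traces below are made up; a real pipeline would work from GPS points map-matched to a road network:

```python
# Toy sketch of fleet-trace aggregation: count traversals per road
# segment so coverage and freshness concentrate where drivers actually go.
from collections import Counter

def segments_from_traces(traces):
    """traces: list of trips, each an ordered list of road-segment IDs.
    Returns a Counter of segment -> traversal count."""
    usage = Counter()
    for trip in traces:
        usage.update(trip)
    return usage

traces = [
    ["s1", "s2", "s3"],
    ["s2", "s3", "s4"],
    ["s2", "s5"],
]
usage = segments_from_traces(traces)
print(usage.most_common(2))  # [('s2', 3), ('s3', 2)]
```

Segments with zero traversals simply never enter the database, which is the whole argument against Uber paying for a global street map it will mostly never drive.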

Automobile Manufacturers/Suppliers Consortium
I can see HERE being an attractive opportunity for a consortium of automobile manufacturers, as these companies fear the strategies of Google and Apple for invading the car. I suspect this would be the worst thing that could happen to HERE, as automobile manufacturing companies are notoriously fickle and slow-footed. Indeed, it is hard to see a group of automobile manufacturers agreeing on anything over an extended ownership period. It would be extremely difficult for such a consortium to agree on a map compilation program that did not favor their most successful markets (what a battle that would be).
While owners can focus the resources of a company wherever they prefer, focusing map compilation on popular markets for in-car navigation could reduce the possibility of leveraging the data to other commercial markets and applications that need relatively uniform coverage everywhere. A limited map compilation strategy might result in decreasing profits and decreased map update frequency. Oh, here we go again. Isn’t this type of problem what happened to HERE over the last seven years?

One of the articles I read on this topic suggested there may be some discussions between Nokia and a consortium of German car makers. Hmmm. This could be the answer to the question I posed in last year’s blog when discussing why Halbherr would leave HERE before it was sold – but maybe not.

Microsoft could have done this deal numerous times before now if it were really interested. Nokia needed a lifeline when it sold its handset division to Microsoft, but the big Softy appears to have been satisfied with a long-term contract for map data. I do not see maps as a large part of Microsoft’s future, but the acquisition of HERE would depend on how much of an advantage owning HERE would be in helping Microsoft to become a significant player in automobile information and navigation systems. Microsoft’s map offerings are currently at a relative disadvantage to Google’s regarding what it can offer the automobile manufacturers and may soon be behind Apple’s, as well. Whether Microsoft moves to acquire HERE may depend on how comfortable it might be in partnering with a potential new owner of HERE. I expect Microsoft may be able to exert some persuasion on the acquisition of HERE as a result of codicils in its previous deal with Nokia, but that is a supposition on my part and may be entirely wrong.

Apple certainly needs help if it ever wants to grow its mapping capabilities beyond the current iOS-based handset market. However, with the exception of Beats Audio, Apple seems more willing to develop technology on its own by acquiring small shops that exhibit abilities that Apple believes would help it tune its existing efforts. Its mapping acquisitions have been modest and its partnership with TomTom may preclude it from feeling any need to participate in the auction.

While it may not happen, one of the most interesting options is for a technology infrastructure company to acquire HERE. Map data and navigation have become a utility and should be produced and distributed by a company that understands this type of business. Although map data comes with inherent peculiarities and difficulties in collection, processing and use, it is still data of various flavors that needs to be delivered to a specific customer-type at the point of use. Networking companies, Intel, or other hardware/services/systems providers who understand the necessary model could be well poised to make a run at HERE. By the way, Intel is used only as an example. They have already been burned in the map market.

Handset Manufacturer Consortium
Samsung is one of the possibilities in this category, as it is unhappy with the strategic positions of Google and Apple in regard to the smartphone market. Industry rumors abound that Samsung considered joining a coalition of companies organized to come up with an alternative to Google Maps and Apple Maps. Whether Samsung should seriously think that owning a map company can improve its strategic position in the handset market, or as a technology provider in general, remains an interesting issue for me. While Samsung could play the role of a “utility,” it is unlikely that it would have the expertise or management skill to develop HERE in an advantageous manner. It is for this reason that Samsung, or a similar company, might partner with other handset providers, as well as interested automobile manufacturers or suppliers, to bid on HERE. Note: somewhere in the mix you will undoubtedly find a map company or possibly a traffic-reporting company.

There are tons of possible suitors, but this blog is already too long, so let’s cut to the chase and talk price.

Nokia’s hope is that a bidding war erupts between companies that have no need to think of their valuation of HERE as “real” money. Traditional companies, such as those associated with the automobile industry, will be unlikely to drop a gazillion to purchase HERE. Internet services companies might be inclined to pay a higher price, if they consider the time value of money to be irrelevant in the face of acquiring what might become a sustainable competitive advantage in future markets important to their strategy.

If one were aiming to duplicate the range of attributes, data quality and data specifications that were engineered into the original process designed by Navteq, then building an asset base comparable to the current quality and functionality of the database owned by HERE would take a considerable amount of time and money (although less than was required by Navteq). The unknown is how well HERE has maintained, enhanced and or expanded its database since the acquisition of Navteq by Nokia and this question can only be answered by the requisite due diligence.

My belief is that significantly more than half the value of the company is based on the value of the data. Unfortunately, without testing the data and examining the current data collection and processing infrastructure, it would be difficult to know: 1) the value of the database, and 2) the potential degradation of the database’s quality, if any, during Nokia’s ownership. Valuing the enterprise (infrastructure, brand, etc.) sans database would be an easier task, but no one would be interested in acquiring the company and not the data.

If pressed, on an intellectual basis I could agree with Nokia that a value of EUR 2 billion for HERE could be justified. Unfortunately, I doubt that Nokia is willing to spend the money to maintain the data in a state equivalent to or exceeding its current quality. If this is the case, then the company is between a rock and a hard place, since time will diminish the value of its assets. My conclusion is that the sale of HERE is tilted to the buyer’s advantage. In addition, if the quality of the HERE database has slipped, then the multiple will decrease and the price should fall closer to EUR 1 billion.
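The valuation arithmetic implicit in the bid range can be sketched from the figures cited earlier in this post (roughly $168M EBITDA against EUR 1-4 billion bids). The exchange rate below is an assumed approximation, and mixing euro bids with dollar EBITDA is a simplification for illustration only:

```python
# Rough sketch of implied EBITDA multiples for the HERE bid range.
# ebitda_musd is the figure cited above; the exchange rate is assumed.
ebitda_musd = 168      # reported EBITDA, $M
eur_usd = 1.06         # assumed EUR/USD rate, approximate for 2015

def implied_multiple(bid_beur: float) -> float:
    """EBITDA multiple implied by a bid expressed in EUR billions."""
    return bid_beur * 1000 * eur_usd / ebitda_musd

for bid in (1, 2, 4):
    print(f"EUR {bid}B bid -> ~{implied_multiple(bid):.1f}x EBITDA")
```

A EUR 2 billion price works out to a low-teens multiple under these assumptions; if due diligence reveals slipped data quality, buyers will argue for the single-digit end of that range, which is the EUR 1 billion scenario above.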

Time will tell.

By the way, we have retained Mr. Tudball and….ahhhh…. Misses Wiggins to represent TeleMapics in the bidding for HERE. Well, somebody has to interject comedy into this deal!

Best wishes to all.

Dr. Mike

Minor updates on 4/16/15 to correct one typo and the wording of one sentence.


Posted in Apple, Google, Google maps, HERE Maps, map compilation, map updating, Microsoft, Mike Dobson, Navteq, Nokia, routing and navigation, TomTom, Waze | 2 Comments »
