Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

Where Would You Like to Place the Crimea on Your World Map?

December 19th, 2019 by admin

Apple and its Maps product popped onto my radar last month. First, as part of a response to a probe by the House Judiciary Committee into whether or not Apple had engaged in anti-competitive practices, the company revealed that it has spent several billion dollars creating and improving Apple Maps. Wow, the price of hubris has gone up.

Maybe inflation has run rampant since I last looked, but it is difficult for me to imagine Apple Maps being able to efficiently and effectively spend billions of dollars to create, maintain, update and improve its map database. It is my opinion that if the company did spend that much money on the effort, then it, presumably, made numerous licensees and contractors very happy and unreasonably profitable. However, from a cartographic perspective, I admire a company that realized its original map database was in need of improvement and systematically set out on a course intended to improve it.

Unfortunately, a couple of weeks ago, after investing all those billions of dollars on procedures, research and technological innovations to improve the “ground truth” of Apple Maps, the Company revealed that it was willing to ignore its expensive, industrious endeavors and accept the Russian Government’s opinion that the Crimea was legally part of Russia and not part of the Ukraine. This was, apparently, news to the Ukrainians, as well as to the editors at Apple Maps who changed their maps to accommodate the demands of the Russian government.

Similarly, Google Maps has, also, capitulated to Russia on the depiction of the Crimea on its map product. Of course, this is nothing new as both Apple and Google seem to have tied themselves in knots while finding ways to accommodate China’s claim to Taiwan and its wishes regarding its representation on maps, as well as in their product marketing materials.

One of the solutions, used by both Apple and Google, to resolve the border representation issue is profoundly troubling. In this essay I call the problem “multiple representations.”

On the Apple map of the area served up to my iPad, the Crimea is demarcated as an internal border of the Ukraine, although no international border is shown between the Crimea’s eastern edge and Russia. The political border of the Crimea that you and I see on an Apple map apparently is not what someone in Russia sees when viewing the same mapped area served by Apple. Apple Maps reportedly displays the Crimea as part of Russia only on the maps that are served to people viewing the product in Russia. Indeed, Apple has corroborated this claim.

Similarly, according to the BBC, Google, too, portrays the Crimea as part of Russia on the maps it shows to viewers in Russia. On the Google Maps shown to the rest of the world, the Crimea’s border appears as disputed. However, Google has chosen to show the location of the disputed border between the northern edge of the Crimea and the Ukraine, rather than between the eastern edge of the Crimea (the Ukraine) and Russia. Choosing to locate the border in this position seems to “imply” that Google Maps acknowledges Russian hegemony over the land area involved, even though the company does not use its established symbol for a “de facto” border on the mapped area.

I suspect more countries around the world may attempt to capitalize on the value of their purchasing power by enacting laws with punitive penalties that require world geography to be represented on maps viewed by their citizens (and possibly all others) as the country prefers it to be interpreted. Multiple representations of common political borders seem to be in our future; it seems that the cartographic equivalent of “fake news” is “fake maps.”

If decisions about the representation of geographical content are not handled carefully, conscientiously and respectfully, Apple Maps and Google Maps may be heading down a path that leads to the end of meaningful reference map publishing. What is an appropriate response to legislated cartographic representations that are clearly fabrications, even though desired by a specific government? Will Apple and Google eventually value market penetration over the representation of geographical and political reality? How will future map users measure “spatial integrity?” Ouch, my head hurts already.

I am concerned that Apple and Google are giving an entirely new meaning to the usage of the term “Reference Atlas.” And make no mistake; both Apple Maps and Google Maps are the new reference atlases for most of the people in the world. If you knew little about world geography (most people in the modern world) and were referring to Google Maps or Apple Maps for purposes of reference, how would you know if you could trust that what you saw on a map was a “faithful” or perhaps “reasonable” representation of the conditions on the ground? How would you know if what you were seeing was “fake cartography?” And if you could determine that it was a false representation, what could you do about it that would effect a change?

One issue to focus on here is that border manipulation on maps is not new, although the nature of the current distribution method makes this strategy a more concerning move than it was in the print world. Political borders are an excellent example of the “squishy” side of cartography. Borders between countries most often are not visible on the ground, so the general compilation goal is to gather authoritative data from the sources that have it in order to understand how to represent borders as correctly as possible. Normally, governments are thought to be authoritative. They define their borders and provide this information to cartographers in the form of the country’s official maps or as cartographic data distributed by its official mapping agency.

Of course, it is often the case that neighboring countries (or even neighboring states in the United States) define the same border and differ in terms of the representation of its location – and they too are considered “authoritative” sources for map data.

The fact that the “authoritative” border data from one official source often does not match that provided by a neighboring country means that those who compile these data into world reference map formats need to “interpret” border data to reflect a “balanced” editorial opinion – or what might be described as a “harmonized” world-reference view of the data. It is for that reason that numerous types of borders (demarcated, disputed (de facto), legal (de jure), indefinite, undefined, etc.) are usually shown by unique symbology on the maps provided by those who try to publish “reference atlases”. Or, at least we once used terms such as those above to define borders in map products.

So, it seems that the border information on world reference maps should actually come from the countries involved. Well, in theory, maybe, sort of – but in practice, often not directly. No, that is not waffling. Many of the internal and external country boundaries in today’s digital world databases originated in digital cartographic databases that date back to the last century. In fact, many of these historical, linearly-related precursors to today’s geospatial world reference databases were produced by or for military agencies of the United States government and donated to the public domain by them. How about that?

Further, the dirty truth here is that the border information in these databases was compiled and then captured by digitizing paper maps at the best scales that specific countries provided to the compilers, or from the best maps that could be found when countries were not cooperative or were not allies. Oh, and some of the maps were so old that they had not been revised in decades – or worse.

The really bad news is that, in many cases, when the quality of the available reference material was contradictory (a common border was shown in a different location on the maps of the two countries sharing the border) the tie-breaker (actual border position) was frequently “captured” from a small-scale Rand McNally, National Geographic or Bartholomew print atlas of the associated area. (Two brief explanatory issues here: first, map companies cannot successfully sue the government for copying most categories of their map data; second, copying map data is not necessarily copyright infringement. (Copying a protected digital map database is another issue, usually related to licensing.))

The ramblings above are a short way of telling you that most of the borders on the world reference maps published by Apple, Google and other modern map makers are probably relatively inaccurate at their published scales, based on the suspected compilation trails for these data. And in any event, since, most often, you cannot see political borders on the ground, how can you know if their depictions on maps (digital or otherwise) are good or bad representations? And how do you measure the importance, or lack of it, of these errors? I suspect that Apple and Google probably have little understanding of the detailed provenance of the majority of the border information they have depicted on their maps. These two companies are effectively street mappers who decided that, with all the data they have, they might as well provide world reference maps and become atlas publishers. After all, what could go wrong?

When I was Chief Cartographer at Rand McNally & Company (last century – the dark ages, so to speak), in our reference atlas publishing business we spent an inordinate amount of time on issues of world political geography and how to represent borders fairly and informatively. We had an editorial committee that approved major map changes and worked with representatives of governments around the world to source their data and to better understand their concerns and claims. Even with these efforts, we often got into hot water with one border blunder after another, and so will Apple and Google as long as they continue providing us the fine, but necessarily flawed, maps that they produce. After all, maps (online or offline) are not reality. Rather, they are selective generalizations of geography using representational mediums not well suited to the task of presenting complex spatial relationships.

Creating multiple representations of the world’s borders to suit claims by various countries creates an inherently complex and potentially no-win situation for the companies involved. I urge all digital map publishers to consider that serving multiple representations of the same geography could eventually harm a company’s reputation as an authoritative reference publisher. While some border representations may be debatable yet required in order to remain in a market, making these decisions is a slippery slope that will not end well unless each is appropriately examined using what might be termed a reliable, reusable decision-making framework for spatial data usage.

Potential Actions

1. If companies involved in publishing digital map databases do not currently have advisory editorial boards in place, I would urge them to consider establishing an “independent” one and to thoughtfully consider the recommendations that are made. This does not mean that every recommendation of the board is followed, but “listening” to the recommendations should provide a better understanding of the “tentacles” attached to all spatial databases and geographic information publishing.

2. For quite some time I have wondered why Apple Maps and Google Maps do not make better use of spatial queries to improve the ability of the map products to communicate spatial information. How difficult would it be to make world borders queryable when there are issues of jurisdiction? For example, the Crimea could be shown using whatever symbology is required by law, but interrogating the border would invoke an information block that explained the nature of the controversy and the claims that impact the symbolized area. Sometime in the future, both Apple and Google will need to become GIS-based publishers, but I suspect that move will be slow in coming.

3. Finally, it seems humorous to me that neither Google Maps nor Apple Maps appears to offer a graphical legend that explains the meaning of lines, marks, signs and colors used on their map representations. This could be a case of my not being able to find an official, graphical legend for either Apple Maps or Google Maps, but plenty of searching failed to reveal where they might be found. (If you know of an official, authoritative legend for these map databases, please let me know.) Google, at least, provides a written (non-graphical) description of the borders on its maps. However, in the case of both companies the interpretation of symbols and graphic items on these maps seems left to the imagination. Perhaps we are headed towards more imagination-based map publishing, but, clearly, that would not be reference map publishing – and, without great caution, it could become known as propaganda, a direction that reference map publishing should seek to avoid. And so it goes.
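The queryable-border idea in point 2 above could be sketched roughly as follows. This is purely a hypothetical data model – the field names, symbology codes and market logic are all invented for illustration, not drawn from any actual Apple or Google schema:

```python
# Hypothetical sketch: a border feature carries both the symbology a given
# market legally requires and an information block that any viewer,
# anywhere, could interrogate.
CRIMEA_BORDER = {
    "feature": "Crimea northern boundary",
    "symbology_by_market": {
        "RU": "internal_administrative",   # symbology required by Russian law
        "default": "disputed_de_facto",
    },
    "info_block": (
        "Sovereignty over the Crimea is disputed between Ukraine and "
        "Russia; most UN member states recognize it as part of Ukraine."
    ),
}

def render_symbology(feature: dict, market: str) -> str:
    """Pick the border symbol served to a given market."""
    rules = feature["symbology_by_market"]
    return rules.get(market, rules["default"])

def interrogate(feature: dict) -> str:
    """Return the explanatory text shown when a user queries the border."""
    return feature["info_block"]

print(render_symbology(CRIMEA_BORDER, "RU"))  # internal_administrative
print(render_symbology(CRIMEA_BORDER, "US"))  # disputed_de_facto
print(interrogate(CRIMEA_BORDER))
```

The point of the sketch is that the legally mandated symbol and the explanatory record travel together, so even a map forced into a particular representation could still disclose the controversy on demand.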

Happy Holiday Season

Dr. Mike

You can find my other passion at Dobson PhotoArts



A New Look at Apple Maps

July 24th, 2018 by admin

Recently I was pleased to read a review in TechCrunch detailing how Apple Maps is attempting to change directions by owning its mapping data and developing complex data compilation and handling procedures and processes. The Company reportedly believes that doing so will help it to create an accurate and up-to-date spatial database for use in Apple Maps and, potentially, other products.

As many of you know, in June of 2012 I wrote a blog titled “Apple and Mapping?” in which I detailed the many practical reasons that the then-imminent rollout of Apple Maps might be a disappointment. Subsequently, in September 2012, I wrote a blog I acerbically titled “Google Maps announces a 400 year advantage over Apple Maps,” in which I dissected why the rollout of Apple Maps was a massive failure. Although many factors were detailed in the blog, the most significant issue was that Apple Maps exhibited major shortcomings in terms of understanding and managing spatial data quality. The blog attracted a lot of attention, and when I was asked by the press how long I thought that Apple Maps would take to catch Google Maps, I replied that it would likely take more than 5 years, with 7 to 10 years being a reasonable estimate.

The good news from the TechCrunch article is that the Apple Maps team appears to have used the last six years to reassess its approach to mapping by addressing the many problems that plagued the initial release of the Maps product. Based on the Panzarino article, Apple claims to have spent a considerable sum in “righting the ship.” The sources of improvement the article focused on included hiring staff, developing new data compilation/analysis methods and adopting various advanced technologies aimed at improving the spatial data quality of its Maps product. Note that, according to the article, this development began two years after the initial release of the Maps product, or about four years ago.

The gist of the Panzarino article is that he was informed by the management of Apple Maps that the company has been busily developing processes and procedures that will thoroughly revise and improve the quality of Apple Maps!

I hope so, but my experience tells me that changes of this type are evolutionary and, rarely, revolutionary. Taking four years to get to the point where you are confident enough about your methods that you can tell a naive outsider what you are attempting to accomplish in terms of spatial database building may reflect a lack of confidence on the part of Apple Maps regarding the path they have chosen. Further, in today’s technical environment four years can be multiple product lifetimes in respect to the development of spatial data handling technology. Spatial data decay is, also, a recurring and formidable problem for long-term development efforts such as that being attempted at Apple Maps.

One of the more interesting changes reported in the Apple article is that the Company is moving to rid itself of data suppliers. I think it can successfully move in this direction in respect to road and street data, but I find myself wondering how it will compile data elements such as business listings without the advantage of being a major search engine or a volume player in online advertising. However, compiling, controlling and owning your own spatial database can provide huge benefits in terms of data accuracy and consistency. For example, reliable, accurate and controlled metadata describing the spatial data you have collected and own can be used to build robust data gathering procedures, as well as potentially revealing steps that could or should be taken to enhance those data.

Unfortunately, many of the benefits of owning the information in your database are true only if you have an organization that has cultivated an understanding of the nature of spatial data handling and how to build and manage an extremely large spatial database designed to meet known requirements for deployment in mapping and other corporate applications. We will not know for quite some time how well Apple has been able to do this on a regional basis, and will have to wait even longer to evaluate the success of Apple Maps improvement program in terms of the global market it serves.

I am concerned that Apple Maps is rolling out its “new and enhanced” data in the Bay area, followed by Northern California, before moving on to the rest of the United States. I suspect that the Bay area is at the top of the list because this is the area whose geography is most familiar to them and where they have the best ability to field check their data, due to a large number of feet and vehicles on the street.

One would think that, in terms of data quality, the San Francisco area is the least likely geography to embarrass them and the second least likely area is Northern California. So those are probably the safest areas for a release, but these “local” geographies are not an honest test of the data quality of the “new” Apple Maps product. The comprehensiveness and homogeneity of spatial data quality needs to be equivalent throughout the geographical extent of a spatial database. The controlled rollout procedure that Apple is using to release its new data suggests to me that they may not quite be sure how well their new production system will work across large spatial extents – such as the United States, and eventually the world (or at least as much of it as is shown by the projections most frequently used for online mapping).

As a side note, it is apparent that Apple Maps is relying heavily on path segments snipped from the travels of owners of iPhones who have enabled location services (passive crowdsourcing). The notion of spatial auto-correlation raises its head here. iPhone distribution obviously auto-correlates with population distribution and density, at least in the developed world. But does their ownership distribution, also, reflect local spatial patterns of wealth and poverty, leading to uneven coverage across larger areas? Does this mean that Apple Maps will be of good quality only where they have dense enough phone coverage (and movement of the phones) to adequately reflect the pattern of transportation modes in a local geographic area? Do they have enough additional data and sources (for example, from their instrumented vans) that can be used to compensate for this potential unevenness?
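To make the coverage worry concrete, here is a minimal sketch – not anything Apple has described, and with an invented cell size and threshold – of how one might flag areas where passively crowdsourced probe traces are too sparse to be trusted:

```python
import math
from collections import Counter

# Illustrative only: bin GPS probe fixes into grid cells roughly 1 km on a
# side and flag cells whose trace count is too low to infer the local
# pattern of transportation modes.
def coverage_gaps(points, cell_deg=0.01, min_traces=50):
    """points: iterable of (lat, lon) probe fixes; returns sparse cells.

    Note: cells containing no points at all never appear in the counts, so
    a full analysis would also enumerate the empty cells over the whole
    area of interest.
    """
    counts = Counter(
        (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        for lat, lon in points
    )
    return [cell for cell, n in counts.items() if n < min_traces]

# A dense cell (60 traces) passes; a sparse cell (5 traces) is flagged.
dense = [(37.705, -122.405)] * 60
sparse = [(37.815, -122.405)] * 5
print(coverage_gaps(dense + sparse))
```

Even this toy version shows the asymmetry: dense urban cells sail past any threshold, while the flagged (or silently empty) cells are exactly the places where some other data source would have to fill in.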

While there are an enormous number of iPhones deployed, not all of them are in areas where GPS traces are of much use, due to a variety of environmental issues. Other users may not have location services enabled. In addition, depending on the length of the segments that Apple says it will discard at the start and end of journeys (to enhance user privacy), it may be that wormholes will exist in the data. Undoubtedly its large user base will be of benefit, but I think we will have to wait to see if the distribution of “probe” data is of universal benefit to Apple Maps data quality.

As another “side note” – it is my opinion that Apple’s relative lack of expertise in active crowdsourcing may hinder its efficiency in correcting map data that are not easily imaged or “recorded” by sensors of one sort or another. For example, the head of Apple Maps indicated that the “new” product would shine in areas such as directing people to the entrance of a building when the building’s address is on one street but the door is on another. Maybe in some areas, but what about when business locations cannot be sensed from the street? Consider the case where a small or medium enterprise is one of many businesses located inside a building that has several entrances. While a person could quite easily tell you which door to use (via crowdsourcing), interrogating sensor data to reach the same level of solution would likely be a waste of time. But let’s wait and see how Apple Maps actually performs.

On the whole, I think that Apple’s apparent attention to improving its maps is good news, as it seems to indicate that the Company now understands the complexity of its undertaking in respect to building a spatial database that can support mapping, navigation and other potential uses for its spatial data.

I am assuming, for now, that Apple has spent enough time to clearly understand quality assurance as it applies to compiling accurate, extremely large spatial databases. As a matter of fact, thinking about that topic in respect to this blog has led me to my next article on what I call the “Eternal Pillars of Map Quality.” It might be a useful primer to those interested in knowing more about how to approach building large spatial databases focused on mapping and various forms of navigation. It will be out in a week or so.

As a final note, I cruised through people on LinkedIn associated with the term “Apple Maps” to see if anyone I knew (or knew of) worked there (a few). While I am sure that many Apple Maps employees do not list on LinkedIn, I was able to find between 250 and 300 people who appeared to be currently associated with Apple Maps. Not as many as I hoped I might find, but, of course, this was a completely unscientific sample – so quote it carefully if you repeat it at all. What was more interesting to me were several unusual job titles. The one I liked the most was – (Apple Maps) Engineering Program Manager, Urban Experiences. Cool – maybe there is hope for Apple Maps after all.


Dr. Mike



Addresses, Technology and the Map Wars

May 15th, 2017 by admin

In my daily reading it is rare that I do not discover several new articles about Autonomous Vehicles (AV), Highly Automated Vehicles (HAV) and who is winning the battle for market domination. While I find the competition fascinating, I, also, find myself wondering more and more about the state of the infrastructure that may be necessary to support these ventures.

Because of my background and interests, these considerations often lead me to ponder how spatial databases will inform the Operational Design Domains within this brave new world. My belief is that we are early in the development of the technologies that will help move us to an extensive rollout of AVs, but other pundits indicate that fleets of these vehicles will be deployed and fully operational in various markets within a three to five year range. I agree that such a rollout could happen, but have found myself repeatedly considering how the limitations of today’s spatial databases might impede both the geographical footprint and the successful implementation of this initiative.

While the identities of the presumed major suppliers of map/spatial databases intended to support the requirements of AVs are well known, it seems as if new players are jumping into the game with great haste. For example, Reuters recently reported that “Mobileye sees income from maps before self-driving cars launch.” While Mobileye has shown that it has top-notch engineers, I am not sure that its team or the management of Intel, its new owner, understands the implications of image-based crowdsourcing for data currentness, coverage and comprehensiveness, specifically as they apply to the creation and maintenance of spatial databases designed to help manage the operation of AVs/HAVs.

When reading articles on how modern techniques, such as deep learning, neural nets and other AI-related research, are being applied to the building of the attributes in spatial databases I, often, wonder how effective these developments will be in terms of collecting and assessing common geographic attributes, especially those that cannot be sensed or imaged from the roadways that limit the exploration of vehicles. While it is easy to conceptualize a spatial database as a map showing the precise geometry of a transportation network, building a spatial database that reflects the total spatial environment within which an AV/HAV must be controlled as it operates is an extremely difficult task. Consider the simple example of addresses and addressing systems. Using the United States as an example, we note that the principal address currently used to direct navigation systems has been the mailing or postal address. For decades the navigation industry has been trying to use mailing addresses as a form of location-address in mapping and navigation systems, often with unfortunate results.

Of the approximately 150 million mail delivery points (addresses) in the U.S., nearly 40 million are rural deliveries, and the majority of these have mail delivered to a box in a row of mailboxes not co-located with the residences involved. Imaging the mailbox likely would reveal little information about the location-address of the recipient’s home or business. Postal addresses for businesses nestled inside a building are another headache for imaging, and one whose error factor increases with the volume and geometry of the building, as well as the number and location of entrances, courtyards and internal island-buildings.

Another interesting addressing problem is how the postal address, once correctly associated with a property, is converted to and assigned to a specific geographic location. Some prefer to linearly interpolate a mailing address’s location based on the address ranges associated with a block face, which often results in a vague approximation of the location. Another approach uses the centroid of the property parcel linked to the address to identify its location. An alternative method is to geocode the center of the rooftop of the selected main building using a rectified ortho-image of the relevant property. Another method geocodes the address of the property to the street centerline, generally perpendicular to the building. In other words, the location of an address seems to be a matter of perspective, gated by the amount and types of data describing the location that are available to the decision maker. While each type of process has benefits for specific applications (e.g. parcels for real estate, street centerlines for navigation), each method potentially creates a unique location address for the same element.
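A toy example may make the point clearer. Assuming invented coordinates for a single hypothetical address, the three methods below return three different “locations” for it:

```python
# Toy illustration: the same postal address geocoded three different ways
# yields three different coordinate pairs. All numbers are invented.

def interpolate_range(house_no, lo_no, hi_no, lo_pt, hi_pt):
    """Linear interpolation along a block face from its address range."""
    t = (house_no - lo_no) / (hi_no - lo_no)
    return (lo_pt[0] + t * (hi_pt[0] - lo_pt[0]),
            lo_pt[1] + t * (hi_pt[1] - lo_pt[1]))

# 1. Address-range interpolation for "150 Main St" on a 100-198 block face
interpolated = interpolate_range(150, 100, 198,
                                 (40.75000, -73.99000),
                                 (40.75100, -73.98900))

# 2. Centroid of the parcel linked to the address (cadastral data)
parcel_centroid = (40.75055, -73.98935)

# 3. Rooftop point digitized from a rectified ortho-image
rooftop = (40.75060, -73.98950)

for label, pt in (("interpolated", interpolated),
                  ("parcel centroid", parcel_centroid),
                  ("rooftop", rooftop)):
    print(f"{label}: ({pt[0]:.5f}, {pt[1]:.5f})")
```

Three plausible answers, tens of meters apart, for one address – which is exactly why mixing geocoding methods across a database quietly degrades its consistency.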

Of equal importance, in today’s world there is a need for numerous location addresses to represent various aspects of an individual property (e.g. driveway, mailbox, parking spaces, drone delivery, E911 access). And “No,” What3Words is not the solution. How to solve this issue seems to be generating some interesting work and complex patents.

The reason for this brief digression on the creation of location addresses is that if you map the same address using a variety of data sources you will find its location varies, sometimes by major distances. Of course, other mapping transgressions, such as the use of inappropriate reference ellipsoids/systems, also contribute to the incorrect location of the same coordinate across the map space. Ouch, those method errors really mount up – as does confounding.

Addresses are used in navigation for both farcasting and nearcasting. When a driver starts a journey and desires to create a route to a destination (farcasting), a spatial database containing the specifics of the distant destination must exist in order for the system to look ahead (farther than the sensing system can “see”) and calculate a path to that location. The nearcasting task is concerned with the details of “What’s around me?” It is here that the terms “coverage” and “comprehensiveness,” as applied to spatial databases, raise their ugly heads. If the map base does not include the required specific details on the destination or my locale (coverage), or it includes the area but not data at the desired level of detail (comprehensiveness), then the system cannot navigate the user with any certainty. I suspect that Mobileye will soon become familiar with these potential thorns of map coverage.
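The coverage/comprehensiveness distinction can be sketched with a hypothetical (entirely invented) database schema: farcasting needs the destination area to be present at all, while nearcasting needs the right layers of detail within it:

```python
# Minimal sketch with an invented schema: each area in the database lists
# the attribute layers that have been compiled for it.
SPATIAL_DB = {
    "downtown": {"centerlines", "lane_models", "addresses"},
    "suburb":   {"centerlines"},
}

def routing_status(area, needed_layers):
    """Classify why routing to/within an area may fail."""
    if area not in SPATIAL_DB:
        return "no coverage"          # cannot farcast to this area at all
    if not needed_layers <= SPATIAL_DB[area]:
        return "insufficient comprehensiveness"
    return "ok"

print(routing_status("exurb", {"centerlines"}))                  # no coverage
print(routing_status("suburb", {"centerlines", "lane_models"}))  # insufficient comprehensiveness
print(routing_status("downtown", {"centerlines", "addresses"}))  # ok
```

The two failure modes feel identical to the user – the system simply cannot get them there – but they demand very different remedies from the database builder.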

If Mobileye (and companies like them) intend to build map coverages based on vehicular crowdsourcing (floating car data) they will need to have some idea of what data elements and coverages can be created by the vehicles equipped with their sensing systems. In addition, they will need to understand what geographies will likely not be recorded by their sensors, due to spatially limited deployment. Within the operational geographies the Company will need to know what spatial data elements can be reliably sensed, as well as the data that their sensors might be completely unable to capture. Once this is done, they can focus on where they might find alternative sources for the data unavailable to them from their platform. If this is the case, the same navigation problems that we have today that result from mixing uncontrolled data sources will continue in the future, regardless of the sophistication of the sensors, support software and further processing.

It appears that Mobileye is planning on supplying the data they collect to existing mapping companies, such as HERE. It may be that this “cooperation” is a sign that Mobileye has considered the mapping problem and decided that it does not want to try and provide a comprehensive spatial database for AV/HAV purposes. Mobileye may be surprised on how little value the navigation industry will assign the generation/transfer of these data. (As an aside, I purposefully used the term “these” and not “their” in the previous sentence.)

On the other hand, one would assume that, in respect to problems such as addresses and addressing, existing mapping companies should have significant advantages over most of the newcomers to the mapping arena. The existing companies have been forced to learn what data compilation really means in terms of building a spatial database, and it is likely that the newer companies have not yet learned all of the difficulties involved in compiling and maintaining spatial databases. Building spatial databases over a number of years usually sensitizes one to development issues that are unique to spatial data sets, such as data that are difficult to classify, map, augment, and update. Unfortunately, some of the existing mapping companies seem not to have learned the lessons that might have given them a strategic lead in the future map wars.


I think that the future is amazingly bright for those interested in mapping and that we are witnessing the dawn of a new era in mapping research. Although there are numerous, difficult problems facing those who want to create a viable, market-leading, map (spatial) database for AV/HAV use, it will happen. Of course, there is a potential “fly” in the ointment. In a future where vehicles are spatially aware, it seems to me that all vehicles will need to rely on the same, unified spatial database. Vehicles communicating with each other that rely on different spatial databases will likely be incompatible and unsafe due to contradictory instructions based on differing views of geography. Remember, maps are representations of reality – not reality itself. This is my way of agreeing with Lewis Carroll that we must eventually, in terms of AV/HAV, use the world as its own “map.” What fun that will be – and what an interesting challenge.

Or, as Tolkien might have put it:
“One Map to Rule them all,
One Map to Find them
One Map to Bring them all and in Geography Bind them.”

Until next time.


Posted in Authority and mapping, autonomous vehicles, crowdsourced map data, Data Sources, Geo-patents, HERE Maps, map compilation, map updating, Mapping, multiple representations of spatial data, spatial databases for autonomous vehicles, User Generated Content, Volunteered Geographic Information | 1 Comment »

More on Spatial Databases for Autonomous Vehicles

August 31st, 2016 by admin

Many of the companies now collecting spatial data to support the development of autonomous and semi-autonomous vehicles appear to be evolving their approach to match sentiments espoused in nineteenth-century literature, where Mein Herr, a character in Lewis Carroll’s story “Sylvie and Bruno Concluded” (1893), said, “And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”

“Have you used it much?” I enquired.

“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.” (Footnote 1)

In today’s world of proprietary database building, numerous companies are imaging the environment in both archival and real-time modes. The purpose is to create highly accurate spatial databases of the environment that can be used to support the operation of autonomous vehicles. Alternatively, the industry might be better off building one common spatial database, or instrumenting our roads, streets and associated “road furniture,” rather than creating multiple representations of the same thing. Unfortunately, the decision by major players to make spatial data one of their monetizable, competitive advantages bodes poorly for a common spatial database or a smart transportation infrastructure to support the operation of autonomous vehicles.

Is Multiple Representation Really a Problem?

Yes! Many companies are building proprietary master spatial databases to support their interests in autonomous vehicles. In addition, their vehicles are creating real-time positional databases. If one could merge the data from these efforts and map/overlay them, do you suppose the data for geographical positions would precisely align? How about comparing spatial databases created by different parties? The sad state of affairs is that it might be impossible to meaningfully integrate these multiple representations of the same data due to differences in collection technologies, methods, measurement, classification, design, and a host of other variables. Comparing the “rightness” of one representation of reality to another can be a very difficult task. Let’s take a look.
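To make the alignment problem concrete, here is a minimal sketch, with invented coordinates, of comparing two providers’ representations of the same road centerline. Real conflation is far harder (different vertex counts, classifications, digitizing directions), but even this toy comparison shows how two “correct” databases can disagree by more than lane-level tolerances allow:

```python
from math import hypot

# Hypothetical centerlines for the SAME road segment, compiled by two
# different providers (coordinates in metres on a shared local grid).
provider_a = [(0.0, 0.0), (50.0, 0.4), (100.0, 1.1)]
provider_b = [(0.3, 1.2), (50.1, 1.9), (99.6, 2.4)]

def max_vertex_offset(line_a, line_b):
    """Crude comparison: distance between corresponding vertices.
    Real conflation must also handle differing vertex counts and
    measurement methods, not just positional offsets."""
    return max(hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(line_a, line_b))

offset = max_vertex_offset(provider_a, provider_b)
# Lane-level guidance is usually said to need decimetre-scale accuracy,
# so an offset above a metre leaves the two representations
# irreconcilable for AV use without further processing.
print(f"max offset: {offset:.2f} m")
```

Here the worst vertex disagrees by roughly a metre and a half, and nothing in either database tells you which provider, if either, is “right.”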

Guess we should take a step back for some perspective

Compiling spatial data to support vehicle navigation is based on a long and illustrious history of innovating techniques and methods to deal with the issues of capturing, storing, handling, and using attributes critical for representing identified transportation arteries and their surroundings. Although we might now smile at some of the resulting products, such as pocket globes, wall maps, folded maps, road atlases, auto-photo-guides that showed a photo at every intersection between an origin and destination, and TripTiks, these and other displays were based on spatial data compilation systems that once helped solve the navigation problems of the traveling public.

When digital spatial databases were developed to create “computerized” navigation, routing and presentation systems, established and mature methods of spatial data handling served to help create the modern tools required to build the database and rendering methods supporting these efforts. Many of these efforts were essentially data-driven systems replacing the previous conceptually-driven drawing methods that were focused on creating a map or some related form of spatial display. During these developments cartographers and spatial scientists met and began wrestling with the problems generated by multiple representations of the same geographic locations.

Tomorrow’s autonomous vehicles may provide, but will not require, visual map displays for their operation. Rather they will implement geographical data in a manner that can serve to provide the spatial information required for the autonomous vehicles to safely and reliably manage a vehicle’s operation without the intervention of a human pilot/driver.

An important consideration in this pursuit is that many companies seem prepared to rely on sensing or imaging the transportation networks and appear to imagine that by doing so on a global or regional basis, they may be able to solve the operational problems involved in supporting the emergence of autonomous cars that will actually be “fit” for some intended use. It is important to note that not all data about the environment required for controlling and navigating autonomous vehicles are visible. For example, sirens that provide drivers information about not-yet-visible approaching service vehicles that legally have the right of way cannot be sensed by most imaging systems. Similarly, postal codes, addresses in high-rise office complexes, jurisdictional boundaries, some toponyms (place names), etc. often require diverse collection methods other than imaging from a road surface. The existence of these “escapees” from current sensor capabilities means that multiple spatial databases (multiple representations) will need to be merged in order to provide the comprehensive view of the real world required to operate autonomous vehicles.

There are a host of spatial data issues that need to be addressed by those developing systems to guide the performance of autonomous vehicles. Today, I will cover three of these topics. We will get back to multiple representations, after we take a look at requirements.

The Importance of Setting Requirements for Spatial Data

Starting your development for an autonomous vehicle with, “We’ve got the know-how to make A and B and can create some of the spatial data for this system using the technologies C and D,” is not a useful approach to solving the problems the system might need to negotiate. For example, and in my opinion (see footnote 2), if Tesla’s Autopilot system could not: 1) sense a white semi-trailer with black tires sitting astride a road, 2) decide the object was not in its spatial database, 3) process it as a dangerous spatial exception, and 4) take evasive action, then specific performance and safety requirements may have been missed when planning the system. If the ranging capabilities of Tesla’s cameras and radar were not sufficient in respect to the distance over which they could observe navigation-critical variables, or if its threat processing was not capable of informing its operating system that an action must be taken due to this fault (if sensed), then requirements were either missing or not met. Designing a system whose components do not solve common spatial problems or spatial threat situations even infrequently encountered by ordinary drivers is not a positive option for an industry that needs to convince its user base to have confidence in its products.
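The four-step chain above (sense, check the database, classify the exception, act) can be sketched as a toy decision function. Everything here is invented for illustration: the types, the field names, and the thresholds bear no relation to any real AV stack, which would be vastly more involved:

```python
from dataclasses import dataclass

@dataclass
class SensedObject:
    distance_m: float      # range from the vehicle to the object
    in_map_database: bool  # did the master spatial database predict it?
    blocks_lane: bool      # does it sit astride the travel lane?

def classify(obj: SensedObject, stopping_distance_m: float) -> str:
    """Toy classifier for the decision chain described in the text."""
    if obj.in_map_database and not obj.blocks_lane:
        return "expected"   # e.g. a known overpass or sign gantry
    if obj.blocks_lane and obj.distance_m <= stopping_distance_m:
        return "evade"      # dangerous spatial exception: act now
    if obj.blocks_lane:
        return "monitor"    # a threat, but time remains to observe it
    return "ignore"

# A trailer astride the road, 40 m ahead, absent from the map:
print(classify(SensedObject(40.0, False, True), stopping_distance_m=60.0))
```

The point of the sketch is the requirement, not the code: each branch corresponds to a performance requirement that must be written down and tested before the vehicle ships, not discovered on the road.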

Purposeful, disciplined, comprehensive design documents for spatial data and its use will be required to build vehicles fit for autonomous functionality. I hope the industry is thinking about this challenge, as much as it is thinking about the race to market. Spatial data has not been something that the automotive industry has traditionally incorporated in its vehicle build process and, as a result, new challenges face those intent on competing in this market. I suggest the players who are considering building either autonomous vehicles or guardian angel types of applications for vehicles pay some attention to the following topic when considering the use of spatial data.

Multiple Representations – or – this data item is another representation of this geographical descriptor, except maybe when it doesn’t quite match or you can’t even decide if they are representing the same thing (Footnote 3)

At some point, there may only be one spatial database operating in or provided to an autonomous vehicle, but that time is not now. Today, we have companies using LIDAR, radar and optical sensing techniques to build highly precise master positional spatial databases for navigation that are attributed with additional information. The moving vehicle platform, itself, is creating a GPS path (perhaps INS based) of where the vehicle is being guided for purposes of matching its location with the master spatial database. When the highly precise positional spatial database is not spatially comprehensive, other spatial databases are used to calculate paths, travel time, congestion and similar information. Concurrently, the sensors with which a vehicle may be equipped are also mapping a path along which the car is moving, recording such things as position, distance between vehicles, lane marker position, speed, and other variables that can be used to safely control the operation of the vehicle. It is in this sense that the spatial databases that are used in autonomous cars provide an example of the simultaneous use of multiple representations of the same geographical space – specifically along the path that the vehicle is currently traversing.

How dissimilar must the spatial data in the onboard/in-cloud master spatial database and that collected by the on-board sensors be before a fault is called? Which source is regarded as the failsafe and is it always considered the failsafe? What are the results of a fault being identified? While the sensors would seem to have the upper hand, what if they are not operating within required tolerances? How are these distinctions to be measured and prioritized during vehicle operation? If neither data source is regarded as optimal or reliable at a brief instant in time, how is the issue resolved to the benefit of the vehicle and its occupants? What is the fallback when a vehicle does not provide a steering wheel, brakes, or gearshift, but its autonomous system can no longer navigate the vehicle due to discrepancies in spatial data?
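One of the questions above, when does sensor/database disagreement become a fault, can be sketched as a tolerance check. The tolerance and confidence values here are invented for illustration; choosing and defending real values is exactly the requirements problem this post is about:

```python
# Invented thresholds; a real system would derive these from
# safety requirements and degrade gracefully rather than just
# return a label.
LANE_TOLERANCE_M = 0.3       # assumed acceptable lane-marker disagreement
SENSOR_MIN_CONFIDENCE = 0.8  # assumed minimum sensor self-assessment

def reconcile(map_lane_offset_m, sensed_lane_offset_m, sensor_confidence):
    """Decide which representation to trust at this instant."""
    if sensor_confidence < SENSOR_MIN_CONFIDENCE:
        return "map"      # sensors outside tolerance: fall back to the map
    disagreement = abs(map_lane_offset_m - sensed_lane_offset_m)
    if disagreement <= LANE_TOLERANCE_M:
        return "sensors"  # sources agree closely; prefer the live data
    return "fault"        # neither source can be trusted blindly

print(reconcile(1.80, 1.95, 0.95))  # small disagreement: trust sensors
print(reconcile(1.80, 2.60, 0.95))  # 0.8 m apart: declare a fault
```

Even this toy version exposes the hard part: the “fault” branch has no good answer in a vehicle that lacks a steering wheel for a human to grab.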

The question of when a reported deviation along a segment of a route becomes significant can be a complex issue. If the sensors on each vehicle encountering an actual change in a path are measuring the change compared with a proprietary spatial database, how will specific differences in these sources be reconciled? When do these differences become critical? Do vehicle speed, sway and stance alter these evaluations? If the reconciliation recommendations at a specific geographic location vary between vehicles by manufacturer, due to differences in the requirements set for their proprietary spatial data, as well as the on-board methods of sensing the environment, what happens? How will emergency command and control decisions be prioritized and implemented in a rational manner?

When we get to the stage where autonomous vehicle control systems can speak to each other, what happens when they are impacted by sensed current data that reflect different images of reality, and by the geometries of proprietary spatial databases that do not reconcile? For a variety of reasons static reconciliations (edits) are a common part of using spatial data, but it is when you add in the concept of multiple representations of potentially non-homogeneous spatial databases acting in concert to solve a problem that must be dynamically reconciled that your headache may suddenly become a migraine.

Then, add in consideration of the potential differences in the content, coverage and comprehensiveness of the various spatial databases that someone might be using to support the spatial data requirements for a typical autonomous vehicle. Another unfortunate fact dogging multiple representations of a spatial object is related to issues of data quality. Of course, the situation is complicated by the fact that few people understand how you might critically measure the data quality issue in potentially non-homogeneous spatial databases that will be used for purposes of autonomous vehicle operation.

Speaking of People

Where will the experience in spatial data collection and handling come from? Well, yes, there are a lot of people who have always liked maps and even more who have used them. But the number of professionals steeped in the theory and practice of spatial data handling for navigation and related issues is extremely small. My concern here is that many of the developers of these systems may lack the appropriate sensitivity to the problems of integrating spatial data into on-demand, real-time systems.

Make no mistake – there are a number of people who are vaguely familiar with the notion of spatial data, yet appear markedly unfamiliar with the myriad problems of compiling, creating, handling and using spatial data in a manner that accurately reflects the limitations and advantages of the spatial databases they are building. If these people do not understand the problems of fitness-for-use posed by merged, multiple-sourced databases containing multiple representations of the same data, it is unlikely that the software they create to interrogate and use these spatial data is going to consistently and reliably perform its intended function in the support of autonomous navigation – or even in navigation efforts aimed at performing a Guardian Angel type of support.

The Problem of Hubris

Yes, I know. All of the problems that I mentioned here have already been solved by brainiacs of untold wisdom. Yet I persist. Why? Well, I persist because the brainiacs have actually not yet solved these problems. Ask your development team what they are going to do about the multiple representations of spatial data. They will probably ask you what that means and then tell you they have already solved the problem.

Those of you who follow my blog know that I sounded an alarm when Apple announced it had started building a spatial database to support its need for an online mapping product to be released within the same year. In my blog I suggested that Apple was in for a surprise, as they clearly did not appreciate the difficulties of compiling a database to be used for mapping and routing. Shortly after the Company released Apple Maps, I critiqued Apple for the product’s numerous failures, all of which were avoidable. As a matter of fact, the whole world seemed to criticize Apple’s inept attempt at creating a mapping product. In a recent article Apple executives admitted they originally failed in mapping because they did not understand the scope of the challenge. And so it goes.

I’m getting too old for this kind of stuff, so today’s blog marks a new focus on issues in spatial data handling that more people ought to think about before their efforts to build “functional autonomous vehicles” prove disastrous for the transportation industry. More next month.

Until next time.

Dr. Mike

Footnote 1. I used the same Sylvie and Bruno example in “Silicon Valley Mapenings,” an article focused on the interest in autonomous vehicles that was occurring in early 2015. The quote is an example of the gift that just keeps on giving.

Footnote 2. For obvious reasons Tesla has been reluctant to publicly comment in detail on the accidents experienced by its users. As a consequence, my statement is speculative and may not be borne out by the facts, if they are ever revealed. It may well be that some factor or factors unknown to me and unavailable in public literature (as of this date, 8/29/2016) will reveal that the company bore no fault in any manner in any of these incidents. I do note that today (8/30/2016) Tesla announced an update to Autopilot that will significantly enhance the advanced processing of its radar signals.

Footnote 3. I briefly discussed multiple representations in several blogs, but most recently in “Comments on the Development of Spatial Databases to Support Autonomous Vehicles.”


Posted in Apple, autonomous vehicles, Categorization, Data Sources, Mapping, Mike Dobson, multiple representations of spatial data, routing and navigation, spatial databases for autonomous vehicles | 1 Comment »

Measuring the Cost of Uber’s Push Into Spatial Data

August 4th, 2016 by admin

The recent “news” stories that Uber was doubling down and developing a spatial database to support the company’s mission seemed anti-climactic to me. After all, over the last two years Uber has undertaken numerous (perhaps that should be “nuberous”) activities to support its strategic initiative in mapping. What was newsworthy, apparently, is the fact that Brian McClendon (formerly head of Google Maps, now head of Uber Maps) wrote a PR piece in the form of a blog focused on the importance of mapping to Uber.

Perhaps of more interest was another byline on the topic indicating that Uber was prepared to spend $500 million pursuing just the right set of geographic data that would be needed to meet the operational needs of the business. Of course, no one at the company would comment on the figure, where the number came from, or if it was in the ballpark. Hmmm. I suspect that the correct answer was, “No comment” and the truthful answer was, “More than you can say!”

(I note here that mapping may be of less interest to Uber than creating spatial databases, but I guess if you want to think of spatial databases as maps, that’s OK, as long as you don’t think that autonomous cars are going to “image” the data they require in order to operate.)

Uber was rumored to be interested in acquiring the mapping company HERE from Nokia, but not interested enough to pay the approximately $3 billion a consortium of German car manufacturers paid for the company. In McClendon’s blog titled “Mapping Uber’s Future,” he indicated that, “Existing maps are a good starting point, but some information isn’t that relevant to Uber, like ocean topography.” I agree.

Many data elements of interest to users of Google and Apple Maps would appear to be irrelevant to the needs of Uber, just as they are to the data needs of HERE and TomTom. However, I hope McClendon was not implying that he could build a database that would meet Uber’s need for less than the sums spent by either HERE or TomTom. Further, I presume McClendon knows that Google spent more than $500 million to create the data that supports routing using Google Maps to support its worldwide markets.

I suspect $500 million is just the first tranche of investment required for Uber to meet its mapping goals. I base this on a simple requirements analysis. Uber will need to define the types of spatial data required to support its corporate goals in, at least, the following areas:

1. What types of spatial data (elements, items and attributes) will be required to allow autonomous cars to safely and legally operate on all streets and roads in the geographies where Uber desires to provide such capabilities?

a. In addition to answering questions about “why these geographies”, the company will need to estimate the cost of identifying, acquiring and sourcing the required data for each unique geography and for each specific autonomous technology utilized by the vehicles they eventually choose to operate.

b. There is no shortage of existing spatial data and navigation standards, but Uber has indicated that it will become its own standard, and the interplay between how it hopes to collect most of its spatial data (technology and sensors) and formulating spatial databases that allow the actual re-use of those data by its fleet of vehicles will be a challenge in a real-time environment.

2. What types of spatial data will the company need to provide their drivers in order to efficiently deploy this human resource?

a. In addition to the mapping, routing, navigation services, and software to support spatial queries, capturing traffic patterns and the precise pick-up and drop-off locations mentioned by McClendon will require a fiendish amount of work reporting data that changes with amazing rapidity (by day of week and time of day). These concerns, perhaps, would best be met by an acquisition, or several of them.

3. What types of spatial data will need to be supported in order to meet the needs of Uber’s riders?

a. While McClendon indicates that the company won’t need underwater topography, it will need data about users, where they travel, when they travel and where they go. It will need to develop location dictionaries that include the diverse names that users may call the same location and be prepared to translate foreign-language variant names for these locations. User preferences, user histories, travel trends, location biases and a host of other data might make this stew quite potent, but knowing which ingredients are worth the cost of collection is a complex undertaking.

4. What type of spatial data must be in the database to meet company needs?

a. For example, if Uber goes into the package delivery business, it will need to develop an infrastructure that would meet those needs. It would also need a business listings database covering the millions of small businesses that are operating but are not retail establishments. What other geography-related goals does the company have that its spatial database will have to support?

5. What kind of spatial data handling infrastructure will Uber implement on a worldwide basis to meet the general needs described in the four categories of requirements described above?

a. Do you think you could build even this part of the equation for $500 million?
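The “location dictionary” mentioned in item 3a above can be pictured as an alias table mapping many user-supplied names, including foreign-language variants, to one canonical place. The entries below are invented for illustration and come from no real Uber dataset:

```python
# Toy alias table: many names, one canonical place. A production
# dictionary would also need fuzzy matching, geographic scoping
# (which "Springfield"?), and constant curation as names change.
ALIASES = {
    "jfk": "John F. Kennedy International Airport",
    "kennedy airport": "John F. Kennedy International Airport",
    "aeropuerto jfk": "John F. Kennedy International Airport",
    "msg": "Madison Square Garden",
    "the garden": "Madison Square Garden",
}

def resolve(user_text):
    """Normalize the user's text and look up the canonical name."""
    return ALIASES.get(user_text.strip().lower())

print(resolve("  JFK "))      # resolves despite case and whitespace
print(resolve("The Garden"))  # a colloquial variant
```

The hard (and expensive) part is not the lookup; it is compiling and maintaining the entries for every market, which is why I suspect $500 million is only the first tranche.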


If Uber has done a requirements analysis that is focused on these and other issues that need to be addressed in the creation of the complex spatial database they will need to operate their business, they will have concluded that they are going to gobble up a ton of cash before they are operational. While HERE may not have been a bargain, its selling price is a not irrelevant estimate of how much it will take to start Uber’s effort at building a functional, extensible spatial database.

Wow, this will be fun to watch!

And Something Else

On a completely different topic – earlier this year, I received The Distinguished Career Award of the Cartographic and Geographic Information Society (CaGIS). If interested, you can read about the award at the society’s website. I am not sure if this is the organizational equivalent of the “Old Geezer” award, but I was thrilled to receive it. Adding to the fun, I received the award in San Francisco, my birthplace. Does this mean I am spatially autocorrelated? Who knows?


Posted in Apple, Authority and mapping, autonomous vehicles, Data Sources, Google maps, HERE Maps, routing and navigation, TomTom, Uber | 2 Comments »

Comments on the Development of Spatial Databases to Support Autonomous Vehicles

February 15th, 2016 by admin

Companies announcing new mapping initiatives seem to be crawling out of the woodwork. Why the sudden interest in mapping and what should we make of these new entrants in the mapping space?

The recent CES show was a hotbed for news releases on mapping initiatives related to the functioning of autonomous vehicles (AV). Toyota, for example, announced that, using automated cloud-based spatial information generation technology, it is developing a high-precision map generation system that, “…will use data from on-board cameras and GPS devices installed in production vehicles (around 2020).” In their press release on the topic Toyota noted that, in their opinion, this proposed constant collection of road data by multiple cars will offset the accuracy gained by using LiDAR (which they are not using) and will trump the infrequent, expensive road data collection process used by today’s mapping incumbents.

Ford, in turn, announced its own mapping initiative. Ford indicated that it will use a compact LiDAR sensor developed by Velodyne (at a mere $8,000 a unit) which it hopes will help it win the race to offer a fully autonomous vehicle. Ford appears to be confident that Velodyne’s Ultra Puck will enable its fleet of vehicles to create real-time, 3D maps surrounding their path through the environment.

Other companies making noise about the automotive market include Quanergy, which bills itself as the “Future of 3D sensing and Perception – 3D LiDAR for ADAS, Autonomous Vehicles & 3D Mapping.” Panasonic is, also, developing a LiDAR capability for mapping, as well as enhancing its advanced camera imaging technology. In addition, let’s not forget GM and Mobileye’s exploration of advanced mapping with OnStar data, as well as GM’s collaboration with Lyft to build a network of AVs.

Why the interest in Mapping?

I am sure that many of the long-established automotive companies involved would indicate that they have always been interested in mapping. Of course, this type of statement would be broadly true, but most of their interest has been educational or licensing related – few automobile OEMs, Tier 1 or Tier 2 suppliers have spent any real money on developing mapping capabilities until now.

I suspect that the “move” to mapping is being driven by at least two fundamental issues. Data Integrity concerns are, I think, the prime mover behind the trend of in-house spatial data gathering. Building a service-related user community may be the second factor behind developing spatial data capabilities. We will discuss these two trends below, but first – a look at the profit motive.

It is my belief that the move into mapping by OEMs is not based on a direct-profit motive. Rather than making money on mapping data, or deferring costs by creating their own data, OEMs will use advanced mapping indirectly to promote brand allegiance and expand brand reputation during a period in which individual car ownership may transition to some form of collaborative use of vehicles and their data networks, rather than the purchase or lease of automobiles. Next, let’s acknowledge that developing and maintaining the spatial data that is critical for the operation of AVs may be a protective strategy to defend against potential injury lawsuits where it will be claimed that a specific AV did not perform as advertised.

Spatial Databases and AV

Let’s begin by acknowledging that the spatial databases supporting AV operation must fulfill at least two fundamental functions. On the one hand, the spatial database must be designed, populated and organized in a manner that supports the systems allowing the autonomous functioning of a vehicle within the surrounding environment. This means that the data needs to combine positional and contextual data about each vehicle and its surroundings, as well as information about other objects in its environment that are stationary or moving. Second, there is a need for a spatial database that can support the navigational needs of the autonomous vehicle when moving between two or more locations.

Perhaps a good way to think of this distinction is the difference in spatial knowledge required to know when to turn from one street to another, versus the spatial data that an AV would need to consider to be able to execute a turn at that location. The majority of the press releases at CES were concerned with creating accurate, reliable and up-to-date spatial databases that are fundamentally concerned with the methodology of AV operations rather than determining the routing characteristics between origins and destinations.

Data integrity and Spatial Databases in AVs

AV systems will demand enhanced mapping capabilities to ensure the proper functioning of systems designed to manage the performance and maneuvers of self-regulating vehicles. The associated spatial databases will become a critical component in the ability to create fully functioning autonomous vehicles that are reliably safe to operate and whose operation consistently reflects local road and transportation environments. In my opinion, it seems unlikely that vehicle OEMs would be willing to cede the responsibility for setting the specifications for and collection of these mission-critical spatial data to external suppliers of mapping data.

While collecting all of the spatial data required to support autonomous cars may be a task beyond the capabilities or interests of automobile OEMs, these companies may want to collect and control mission critical spatial data (e.g. road centerline data and other spatial data related to road and road environment characteristics) in a manner that fully supports their notion of how to best functionally and legally operate an AV. Conversely, routing related variables such as addresses and business listings would likely continue to be provided by third parties.

In today’s navigation and local search oriented markets companies are able to license mapping data from providers such as HERE and TomTom, but the specifications for these data are set by these two companies. Yes, the data format may reflect industry-based standards, but many of these standards reflect the influence of the data providers. In today’s era of rapid prototyping, innovative engineering and technological leapfrogging it may well be that the OEMs will generate data specification, data processing and data collection systems that are unique to the fundamental operation of their specific autonomous vehicle systems. In essence, these systems and the spatial data that help operate the AVs may be regarded by some companies as a distinct competency leading to a sustainable competitive advantage.

One major concern with the use of data from companies such as HERE and TomTom in AVs is that historically, as pointed out by Toyota in its press release cited above, their data collection, processing and distribution methods have been time-bound and not updated and disseminated at the speed that is expected to be required to operate AVs. Unannounced road closures, emergency road repairs, intermittent lane geometry changes, and the like need to be compiled with urgency and rapidly disseminated to a company’s AVs that may encounter these changes. It seems unlikely that TomTom and HERE can supply fleet-based, spatially comprehensive data in time slices short enough to support AV functions using their current data collection systems. Unless TomTom and HERE can provide (or partner with fleet owners who can provide) continuous, spatially comprehensive updates to their entire range of automotive customers, it is unlikely that they will be the beneficiaries of the emerging market for AVs.
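One way to picture the timeliness requirement is as a freshness gate: an AV refusing to rely on map content older than some maximum age. The five-minute threshold below is entirely invented; the real figure would come from safety analysis, and today’s quarterly or monthly update cycles fail any plausible version of it:

```python
from datetime import datetime, timedelta, timezone

# Assumed (invented) requirement: map content must be newer than this
# before the AV will rely on it for lane-level decisions.
MAX_TILE_AGE = timedelta(minutes=5)

def tile_is_fresh(compiled_at, now):
    """Return True if the map tile is recent enough to rely on."""
    return now - compiled_at <= MAX_TILE_AGE

now = datetime(2016, 2, 15, 12, 0, tzinfo=timezone.utc)
print(tile_is_fresh(now - timedelta(minutes=2), now))  # recent update
print(tile_is_fresh(now - timedelta(days=30), now))    # month-old data
```

A gate like this also forces the fallback question: what does the vehicle do when a tile fails the check and no fresher source exists?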

While I agree that TomTom and HERE have significantly improved their data collection and distribution methodologies over the past few years, let’s not forget that in 2010 TomTom, HERE, Google and others in the map data business took months to recognize and reliably reroute traffic after a section of interstate highway was deconstructed in Providence, Rhode Island. Yes, you read that right. Even using live GPS traces indicating where cars were actually driving, these companies continued to route users over a section of interstate that no longer existed (see here http://blog.telemapics.com/?p=241 and here http://blog.telemapics.com/?p=248 for information on this topic). This kind of problem is not one that automobile OEMs or their lawyers will be willing to accept when AVs become a reality.

A final observation of some interest given the notions expressed above is that HERE has recently been awarded a contract from the Geography Division of the United States Bureau of the Census (RFP CENSUS2015_GEO0227). The RFP stated that in addition to other requirements, “The Census Bureau is seeking to obtain complete and accurate datasets containing housing unit addresses with associated attribute and geographic coordinate data, and spatially accurate street centerline data to supplement and/or validate content of the MAF/TIGER System.”

In Amendment A0021 to this RFP the Census Bureau clearly stated that the street centerline data developed as a result of this effort (but not the address information) would eventually be released to the public. HERE’s willingness to supply road centerline data under this contract may speak volumes about the intentions of OEMs to collect spatial data.

Community – The advantage of size

Google search engineers have often stated that they are not smarter than anybody else, but note that when thinking about improving existing products or creating new ones, they have the advantage of more data to analyze in defining opportunities than most of their competitors. In a similar light, the larger automotive OEMs could make their vehicles “spatially smarter” by fielding a massive spatial data collection effort based on the number of vehicles deployed, directly (research fleets) or indirectly (sales). Toyota, for example, should be able to produce a more comprehensive and accurate spatial database to manage its autonomous vehicle performance than, say, Subaru. Ford, with its success in light trucks, might be able to create more comprehensive data on rural routes than, say, Volvo or Kia.

Purpose-built spatial databases produced by an OEM, in turn, could be used to analyze, tune and optimize the performance of every autonomous vehicle they produce. For example, instructions could be customized for individual vehicles, adjusted for their carry-loads, profiled for specific geographic areas/environments, optimized for local weather and the like. Without the mission critical spatial data to support the customization of their vehicles the OEM would be at a competitive disadvantage.

Spatial data is certainly an area where more can be better, and the OEMs with the most sensor units/deployed vehicles could wind up producing market-leading, highly accurate, comprehensive spatial databases of road-related information. In turn, this spatial data superiority could become a branding opportunity leading to more sales or more use of a company’s AVs. Although it would make complete sense to create an industry-wide spatial database to support AVs, I doubt that this will happen due to the need of players in the automotive industry to create sustainable competitive advantages. However, this may be an opportunity being explored by the new owners of HERE.

A Fly in the Ointment

The need for interoperability between autonomous vehicles suggests that one of the safest ways for these vehicles to commingle on the roads would be based on interpreting a common spatial database. As noted above, this is unlikely to happen. It is possible that regulators will have the prescience and courage to demand it, although I do not see that happening in the next decade and, perhaps, not at all.

If the AVs are in motion and adjacent autonomous vehicles from different OEMs do not have the same representation of the world around them, how will they efficiently maneuver in such a manner as to avoid being a threat to each other?

If companies creating AVs, or the systems for AVs, decide they need to license additional data to build their recipe for the spatial database required to support these vehicles, how will they deal with the issue of multiple representations that will most certainly be found when trying to integrate licensed spatial representations with the spatial representations they have collected using different systems?

Yes, they are all measuring the same world, but they will not be measuring it in the same manner or even with similar equipment. How closely their data models match is an open question. Given this possibility, how will the OEMs deal with issues of metadata mismatches or metadata that is incorrect or simply does not exist? The potential complexity of the multiple representation problem, in the context of spatial databases designed to support the operation of AVs, is stunning. However, it is only one example of a series of problems that commonly haunt applications based on extremely large spatial databases. Multiple instances of potentially non-interoperable, extremely large spatial databases distributed across numerous, commingled AVs sounds like the makings for a ripping good disaster movie.


It appears that automobile manufacturers (including the consortium that purchased HERE) have realized that spatial data may become a powerful benefit in the future – if they can find a way to make it their own. While mapping is a term that flows off the tongue and into strategic plans with ease, actually collecting, compiling and unleashing the power of spatial data in a beneficial manner takes more skill and know-how than many would-be practitioners may understand. Indeed, harnessing its power may require skills in spatial data handling and knowledge about the intricacies of mapping that have slowly withered as interest in these disciplines has been eroded by the transformation of the spatial data world in its hot pursuit of GIS.

Major players interested in entering the spatial data/mapping market that is evolving to support AVs would do well to consider forming advisory boards where their strategists and data geeks can counsel with mapping and spatial data handling experts who might be able to give them an insight or two, or just maybe help prevent a catastrophic system failure.

Or, as a great sage once said, “you don’t know what you don’t know.”

Thinking about all this stuff has given me a headache – so I am heading to the shore to photograph birds – merrily driving my 1998 Corvette stick shift. I ordered the car purpose built and have been its sole owner. I finally managed to crack the 50,000 mile barrier this week. Do you know what is wearing out on this vehicle? The sensors (fuel pump sender sensor, air quality sensor, etc.). How about that, but no complaints from me.

Till next time.

Dr. Mike


Posted in Authority and mapping, autonomous vehicles, Google, HERE Maps, map compilation, Map Use, Mapping, Mike Dobson, Personal Navigation, routing and navigation, TomTom | 1 Comment »

Use Cases and Online Maps

January 4th, 2016 by admin

Hi, Everybody.

This topic started out innocently enough and wasn’t research for a blog. What I was trying to do (my use case) was to find driving directions to several wildlife sanctuaries that had been recommended by my photography buddies. I started the process by Googling the Cibola National Wildlife Refuge. Google pops up a side panel on the search results page that includes specific information on the refuge in question. See Figure 1.

Figure 1. Google Search Results for Cibola NWR

I clicked the “Directions” button at the top of the panel, presuming that this could be used to produce a route from my starting point to the refuge in question. Well, it did produce a route to the refuge in question, but not one that would be very useful for most visitors to this location (See Figure 2 below, if you want to skip ahead). Adding to the confusion, Google does not always employ the same use case to solve other examples of the same class of problem.

The vagueness of Google’s routing solution led me to examine the issue in greater detail. During the process, I concluded that Google, Apple and HERE* still do not seem to appreciate the nuances of the relationships between mapping and the types of use case employed by map users, although they do seem to understand the general notions. For example, Google represents wildlife refuges, national parks, national wilderness areas and places of this ilk that are of reasonable size by a shaded polygon. This is common GIS practice and can provide useful, though very general, location information. These polygons, on Google’s maps, are in turn identified by a symbol that apparently represents a centroid within the specific polygonal boundaries of a location. This is a reasonable way to show the appropriate map name associated with a tinted polygon. However, such a centroid is an inappropriate trip destination for a routing engine, yet it is what Google frequently uses in these cases.

Well, good luck with that! Autonomous vehicles are going to love this stuff (hope the autonomous car people at CES are reading this). In many cases, if not most, the location of a centroid, placed in this manner for the class of objects being examined here, will be located in an area of inaccessible wilderness (e.g. a location that does not have roads or trails leading to it). The locations denoted by the symbols used by Google do locate the respective property and should be reachable by helicopter, but seem mismatched when paired with the common notion of the type of directions presented by online routing engines and expected by users (the “common routing” use case).
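The geometric point is easy to demonstrate with a few lines of code. For concave boundaries, and most refuges and parks are concave, the area centroid can fall entirely outside the polygon, never mind near a road. Below is a minimal sketch; the C-shaped polygon is invented for illustration, not real refuge geometry:

```python
# Demonstrate that a polygon's area centroid can lie outside the polygon
# itself -- a poor choice of routing destination for concave boundaries.

def centroid(poly):
    """Area centroid of a simple polygon via the shoelace formula."""
    a = cx = cy = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        cross = x1 * y2 - x2 * y1
        a += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a /= 2.0
    return cx / (6.0 * a), cy / (6.0 * a)

def contains(poly, pt):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# A C-shaped (concave) boundary, like a refuge wrapped around a river bend.
c_shape = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (4, 3), (4, 4), (0, 4)]
c = centroid(c_shape)
print(c)                      # (1.7, 2.0) -- lands in the notch
print(contains(c_shape, c))   # False: the "destination" is not even inside
```

A label point chosen this way is fine for placing a name on a tinted polygon, but handing it to a routing engine as a trip destination produces exactly the snorkel-required routes discussed here.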

Let’s look at this problem by examining a series of maps. In all cases the origin of the route was the same and in all cases the map shown resulted from clicking the “Directions” button on the search panel that Google produced for each location. For purposes of presentation the images shown are crops of the maps Google generated to portray directions to each of the locations. (Yes, I know the maps are oversized. But they are pretty easy to read.)

Figure 2. Google Map showing a proposed route to the Cibola NWR.

Google seems to have gone out of its way to calculate paths to the destination that route you along actual roads as close to the centroid as they can take you by car, and then draws a dashed line from that point of departure to the centroid. Above is an example of a route between my house and the Cibola National Wildlife Refuge.

When I sent this map to a friend he suggested that I buy a Range Rover with the snorkel option, since the route that starts when the road ends takes you through a river and into the wilderness. No, there is no bridge or other crossing where Google shows its “dashed-line route.” A useful alternative strategy would be to route visitors to the visitor center or the refuge headquarters of the relevant property. This example led me to wonder exactly which use-cases Google might have considered when creating their cartographic representation of a route to the Cibola NWR.

Let’s look at the Google Map to the nearby Havasu Wildlife refuge for comparative purposes.

Figure 3. Google Map showing a proposed route to the Havasu NWR.

For your information, the Office of the Superintendent for the Refuge is located in Needles, California. That might be a helpful place to send you for the camping, fishing and hunting permits needed to use this property, but that was not Google’s choice.

On the other hand, if you were considering visiting the Channel Islands National Park off the California coast, Google routes you directly to the park headquarters at the harbor in Ventura, California, and doesn’t even show you the islands or the direction to them on the map presented.

Figure 4. Google Map showing a proposed route to the Channel Islands NP.

Perhaps that is because you cannot drive to the islands? But why does Google show the park headquarters in this case and not in others where you cannot drive to the symbol Google uses to represent the entity, but can drive to the park headquarters or the visitor center? Maybe this is an example of Google considering a different use-case. Hmmmm?

Next, if you want to visit Haleakala National Park, Google not only shows you that you have to fly to Maui, but provides information on fares and flights.

Figure 5. Google Map showing a proposed route to Haleakala NP.

Unfortunately, once you land in Maui, the route provided by Google ignores the main road to Haleakala and takes you to another of its concocted centroids, one that you can reach by helicopter, but not by driving. Hmm, yet another use case!

Now, more absurdity. Look at this Google route to the Grand Canyon National Park.

Figure 6. Google’s proposed route to Grand Canyon NP.

I hope nobody tries to drive this “route.” After all, remember what happened to Evel Knievel and his rocket-propelled motorcycle! Yes, that is the official Grand Canyon Visitor Center shown on the map, even though Google avoids routing you to this location. After all, why use the only paved road in the area?

How about this one for Yosemite National Park?

Figure 7. Google’s proposed route to Yosemite NP.

After all, why would a visitor want to tour Yosemite Valley when they could destroy their car trying to traverse the Sierras without using known roads or trails?

I’ve decided what to get Google next Christmas: A sense of direction and a basic text on cartography. A primer on common use-cases for map use and when to employ them might be a nice addition. Note, I looked at a few maps from Apple and HERE for the same places and found similar errors.* Maybe I should send primers to each of them?

Or, perhaps, Google could “read” the “Visiting Cibola” page at the Cibola NWR website, as I did, and click the provided lat/lon, which uses Google Maps to generate a route directly to the Visitor Center at the Refuge. Guess those Refuge guys know their use-cases. Some of you may have noted that Google provides an address for the Cibola NWR and you can generate a route to that address as interpreted by Google. However, the spot identified by Google is not the Park headquarters/visitor center (although you would drive past it to reach this location).

My biggest concern here is that Google crawls the websites of the entities mentioned and could easily determine locations within each area that correspond to an informative use case, or ones that could be tailored to general map-use-case requirements. Perhaps that is the future of mapping – maps on demand, tailored to specific use cases?

Many may read this comment and think, “Well, it’s something Google can fix.” True, but the question is more complex than it appears and requires some insights into cartography, map use, and human factors engineering. More specifically, someone in the mapping group at Google needs to think about the differences between routing and reference maps, as well as the influence of use cases on both.

If you think I am picking on Google – go back and take a look at my earlier blog “Google Maps and Search – Just what is that red line showing?”. Google gets the act of mapping data; it just does not seem to understand map use. As you might expect, I think this is something that needs to be fixed sooner rather than later.

By the way, Toyota is entering the mapping derby, or so they said in a press release for CES. I plan on blogging about Toyota’s announcement sometime soon.

My best wishes for a successful, healthy and happy 2016

Dr. Mike

* MapQuest and TomTom could not find Cibola NWR, even when I gave them a city name and Postal Code. Every time I look at Apple’s Maps I get a headache. Their design is terrible, but this can be overlooked. The data errors are more problematic. Apparently Apple has trouble understanding the concept of boundaries. Maybe I should write about that sometime?


Posted in Apple, Authority and mapping, autonomous vehicles, Google, Google maps, HERE Maps, map compilation, Map Use, Mapping, MapQuest, Mike Dobson, Personal Navigation, routing and navigation, TomTom, use cases | 1 Comment »

Local Search – Local Data – Local Sources

August 26th, 2015 by admin

In my last blog I mentioned that some problems are best solved with a “global” approach, while others might be susceptible to a solution that is targeted locally. For some reason this thought has been on my mind since that time. Today, I expand on why a local approach might be the elixir that challenges and upsets the status quo in the business listings market and in the markets that use these types of listings.

Until the rise of the Internet there were few businesses based on providing a national, comprehensive directory of business listings. Until that time almost everyone who needed information for contacting or finding a specific business seemed able to make do with the city- or county-based Yellow Page directories that were delivered free of charge to businesses and residences in local markets. If you needed a directory for a distant city or urban area in another state, you could wander over to the public library and usually find a large collection of dog-eared telephone directories that could be used to solve your location problem. Individual national businesses, such as hotels, published their own purpose-built mini-directories/brochures to advertise and market their products and/or services.

At the time there were few national, comprehensive sources of business listing information available, and those that did exist were either inadequately comprehensive or unavailable for online use. For example, several companies had operating divisions that generated business listings as part of the financially-oriented services they offered, but in the early days of the Internet, most were unwilling to license these data based on specific intellectual property concerns typically held by publishers at that time. This lack of data sparked a number of companies to attempt new methodologies to create national business listings databases. In many cases, these early attempts involved the large-scale scraping (scanning or keyboarding) of telephone books and yellow page directories in order to create compendiums of business listings that covered the United States.

From the earliest days of online business listings directories there have been many changes in the methods and success of creating databases with relatively current business listings information. Some companies continue to scrape directories published by telephone companies, while newer players have emerged that scrape websites on the Internet, interrogating the web for the presence of valid, business listing information. Most companies, however, continue to approach the business listings compilation from a national or international perspective, although these companies often collect data at local levels using advanced technology, to compile and refine the data of interest.

One major difference in business listing data collection between today and pre-internet is that the market for Yellow Page/Yellow Book providers has been gutted by the success of national online distributors of business listings. Gone are the days when the Yellow Pages salesperson would visit a business, confirm the business listings details, and try to sell the company on upgrading to preferred listings or a graphic advertisement positioned to attract attention in next year’s book.

In today’s market the shop owner must take the initiative to represent their business with an accurate business listing. The problem is that they just haven’t caught up with the trend. They don’t know which sites will best represent their business, or how to maintain listings once submitted. Nor do they know how to claim their listing, or why spending on any of this makes sense when somebody on Yelp can pan their business without ever having used the service. Yep, I know, “Woe is them.” Sounds like a business opportunity, and it may be for certain verticals, but many have tried and failed to provide this type of support.

One observation I’d like you to consider is that while the Internet has increased the opportunities to use national business listings databases, the content and accuracy levels of these aggregated business listings databases may primarily reflect the goals of the companies desiring to offer an Internet-based service that is perceived as national in scope. While relevant spatial data from these systems may be used by local users for local purposes, the level of data quality necessary to meet the provider’s goals may not reflect the needs of local users for accurate and up-to-date business listings information consistently useful to them as they carry out their daily tasks.

Creating a national business listings database and ensuring that it is of uniform high quality is a very difficult task that seems beyond the capability of most of today’s providers, including Google and Apple. The cost of fielding, compiling, publishing and updating a comprehensive, up-to-date and accurate inventory of businesses for an area the size of the United States is staggering in terms of expense and unexpectedly complicated in terms of execution. For example, Google has tried any number of methods of enticing business owners to claim and “own” their business listings, as well as to correct them when appropriate. In addition, Google’s Street View has imagery of storefronts across the country for use as collateral material to help evaluate the existence of specific businesses. Google’s own index of the web is another useful tool for finding business listings data. Finally, Google continues to license business listings data from companies that license these databases for use by others.

Even with this variety of sources of data, business listings remain the soft-underbelly of the Internet-based local search. My examination of local business listings reveals a preponderance of low quality data. The problems I find appear to be the result of a comedy of errors, many of which seem remediable if anyone would just go out and look. And perhaps that lack is the critical problem for all of today’s pseudo-local search websites. No service has yet proven that they have developed a reasonable method for fielding research aimed at discovering the critical, relevant information about a business that one would need to build a robust business listings database.

Perhaps if we continue to think of the problem as a national one, no one ever will. It may be time for a new take on the old approach of compiling data locally by employing community-based teams responsible for and sensitive to local markets. I suspect that compiling business listings could be better done by companies with local interests operating in local markets than by national or international companies interested in serving remote, diverse markets from central locations. Some thoughtful entrepreneur will likely take this thought and realize that franchised local search sites supplied with up-to-date, accurate local data could be used to create popular community-based websites capitalizing on the paradox of increasing tribalism in the age of globalism.

As you might imagine, I wrote pages and pages on this topic and tossed most of it. Although it was fun to write and to explore the concepts involved, the blog, like the last one, had grown too long. If interested, you might want to rummage through my trash, as I do edit a copy using …..pen and paper. Of course those who see my many typos may wonder what I was thinking about when editing. Why the next blog, of course!

Hope you are enjoying the “dog days” of summer. Speaking of local problems, our drought is worsening. My lawn is now an unhealthy shade of yellow – with no hope for recovery. And so it goes.

Dr. Mike


Posted in Apple, business listings, Data Sources, Geospatial, Geotargeting, Google, Local Search, mapping business listings, updating business listings, Yellow Pages | Comments Off on Local Search – Local Data – Local Sources

What3Words – Not.Quite.Right

August 3rd, 2015 by admin

Recently, just for fun, I have been examining innovative grid offerings from What3Words, MapCode (TomTom-link) and Open Location Code (Google). What3Words seems to have caught the most attention, and in this blog I will present my thoughts about this specific effort at creating a more useful map grid for addressing. This is a really long blog. If you don’t have the time to read it, skip to the bottom section titled And Now a Word From Monty Python – it skips the details, but will give you the gist of my evaluation.

Three notes to start this off. First, the commentary that follows is not focused on detailed aspects of geodesic discrete global grid systems or their function as data structures. We are concerned here with simple location encoding systems, often called “finding grids” that can be used to provide an indication of the position of something, somewhere. Second, I do not intend to rehash the grids that have survived the test of time, other than to comment that there are a number of very useful grids that can be used for purposes of “finding.” Third, in an attempt at brevity, I am going to cut a lot of corners involving map projections, geoids, tessellations and other interesting areas and avoid discussions of theory that would leave you begging me to stop. Instead, let’s look at some basic notions involved in geographic grids, and then examine What3Words and what it (and other recent grid development efforts) may be trying to accomplish.

Map Grids – What’s involved?
At its most basic level, the effort involves computing a grid of relatively uniform cells used to tile, with no overlaps and no gaps, the geographic area in which you are interested. The coordinates defining these grid cells might identify the corners of a cell or they might identify its center. The method of annotation aligns with the goals of the producer of the grid.

Many grids that have been developed have been associated with efforts by militaries or other government agencies around the world interested in finding and naming locations in which they have or may field operations. Most of these efforts designate individual map grid cells by using short-codes that 1) avoid the need for users to be fluent with latitude and longitude, 2) eliminate the use of positive and negative grid values, and 3) do not require a detailed understanding of how the grid system was created.

“Finding” grids can be global or local
In order to create a map grid one needs to decide the scope and parameters of the problem being solved. For instance, if you create a city street map designed to operate independently of other maps (i.e. other geographic areas), you might be satisfied by creating a local grid that bounds and applies only to the area covered by the map. The cells in these types of grids are often identified by coordinates called “bingo keys,” since reading a map index built on local coordinate references sounds like someone calling a bingo game: “A-29, I-32,” etc. Local grids should not be taken as meaning limited in extent to small areas. For example, the Township and Range system that exists only in some areas of the United States is defined on the basis of numerous local baselines and principal meridians, but functions as an integrated land recording system across large swaths of the country.
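A bingo-key grid is simple enough to sketch in code: divide the map’s bounding box into lettered columns and numbered rows, and the cell label falls out of integer division. The bounding box and grid dimensions below are invented for illustration:

```python
import string

def bingo_key(lon, lat, west, south, east, north, cols=26, rows=40):
    """Map a coordinate to a local 'bingo key' such as 'C-12'.

    The grid exists only inside the map's own bounding box -- it says
    nothing about the world beyond it, which is what makes it 'local'.
    """
    col = int((lon - west) / (east - west) * cols)
    row = int((north - lat) / (north - south) * rows)  # rows count downward
    col = min(max(col, 0), cols - 1)                   # clamp to the sheet
    row = min(max(row, 0), rows - 1)
    return f"{string.ascii_uppercase[col]}-{row + 1}"

# A hypothetical city sheet covering lon -118.5..-118.1, lat 33.9..34.2
print(bingo_key(-118.33, 34.07, -118.5, 33.9, -118.1, 34.2))  # -> 'L-18'
```

Note that the same point on a different sheet, or on a sheet with a different bounding box, gets a different key entirely; that independence is the defining trade-off of a local grid.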

Of course, another person might map the same area described above in the local street map example and decide that the geography involved should be represented as part of a global referencing system. In this case, the need for this map to integrate with the geography of the rest of the world is deemed of paramount importance to the developer of the grid.

Deciding whether a “finding” problem is local or global depends on your goals for the system, how you intend the grid to be used, and your plan for implementation and popularization of the grid. However, creating a global grid benefits from considerations related to how the new system could integrate with, or, possibly, replace existing grid systems. Unless the new grid provides a desirable functionality that existing grids do not, it is unlikely to be adopted by enough people to ensure its continued existence. Instead, it may be viewed as an unnecessary, duplicative addition in a field already crowded with worthy alternatives.

Grid Coordinates
As noted above, grid systems require a method for describing the location identified by the grid. In many cases these are reported in the form of linear or angular quantities that designate a position that a location occupies in a specific reference system. Coordinates from grid systems can be considered to serve as addresses. In its simplest form an address can be thought of as an abstract concept expressing a location on the Earth’s surface.

Two important questions follow. First, what does the creator of a grid mean when they use the term address to describe the locations in a new grid system? Second, how will commonly used existing addressing systems handle the form of address generated by the new grid? For example, from the perspective of a postal service an address might be defined as being: mailable, deliverable, locatable, and geocode-able. For some grid designers, locatable may be the only criterion of importance. For others, the address requirement might include the notions of it being hierarchical and topological. The notion of hierarchical can be seen in the address form used by go2 systems, based on a long line of patents dating from 1996 (“Geographic location referencing system and method,” Patent number: 5839088), which in one embodiment provides a hierarchical address in the form US.CA.LA.14.15. Other grid systems’ coordinates may allow one to discern useful information about the relative distance and direction between coordinate pairs, thus providing a useful relational context to the “finding” problem.
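The relational property described above can be illustrated with a toy hierarchical code. In the quadtree-style sketch below (a hypothetical scheme of my own, not go2’s patented method or any production grid), each successive digit quarters the current cell, so two codes that share a long prefix necessarily describe nearby locations – exactly the context an opaque word triplet cannot supply:

```python
def hier_code(lat, lon, depth=6):
    """Interleaved quadtree code: each digit (0-3) quarters the current
    cell, so a shared prefix implies spatial proximity (toy scheme)."""
    code = ""
    s, w, n, e = -90.0, -180.0, 90.0, 180.0
    for _ in range(depth):
        midlat, midlon = (s + n) / 2, (w + e) / 2
        q = (2 if lat >= midlat else 0) + (1 if lon >= midlon else 0)
        code += str(q)
        s, n = (midlat, n) if lat >= midlat else (s, midlat)
        w, e = (midlon, e) if lon >= midlon else (w, midlon)
    return code

# Two points a few km apart share their prefix; a distant point does not.
la      = hier_code(34.05, -118.24)
la_near = hier_code(34.10, -118.30)
tokyo   = hier_code(35.68, 139.69)
print(la, la_near, tokyo)
```

Comparing the codes character by character gives a crude but useful distance estimate for free; a flat coordinate scheme, by design, gives none.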

So what is what3words?
On its website what3words (w3w) describes itself as, “… a universal addressing system based on the 3mx3m global grid. Each of the 57 trillion 3mx3m squares in the world has been pre-allocated a fixed & unique 3 word address.” On the same page of their website, the company indicates its opinion that the world is poorly addressed and that w3w provides a unique combination of just 3 words that identifies a 3m x 3m square anywhere on the planet. It claims that the grid cells are, “… far more accurate than a postal address, and much easier to remember, use and share than a set of coordinates.”

The ability to remember three words, as opposed to remembering a long pair of spherical coordinates is at the heart of the w3w system. W3w appears to be trying to introduce a system of geographic coordinates into widespread “public use,” as opposed to the more limited scientific and technical user populations associated with the use of many other geographic grids.

Example forms of the w3w coordinates are as follows: “remote.sun.palms,” ” feast.grab.bride,” or “madness.tags.curious.” As a further example, there are approximately 100 3m x 3m cells that fall within the boundaries of the property containing my home. If I enter my postal address using the w3w website, it appears to select a cell that is coincident with the center of the roof covering my abode. However, I could choose the coordinates representing any of the cells on my property as my w3w address. Presumably driveways or front doors might be a preferred choice for those presented with a large number of cells that could be used to identify the location of their home or business.

W3w is neither hierarchical nor topological. Any of the triplets used by w3w to identify a grid cell reveals nothing about the geographic relations between specific locations. In addition, w3w currently does not appear to have a vertical component or any other method of ensuring precise addressing for multi-unit locations. I guess that people living in the same corner of a multi-level building might have the same w3w address, and delivering anything to them might be a real puzzler. I suppose that’s part of why topology is so important in many addressing systems.

The approximately forty-thousand-word English-language vocabulary used to identify the cells has been designed to avoid words that might be considered impolite or upsetting when combined with others. For example, “dogs.tinned.cats” is shown to identify a location in Japan, but the combination of words “dogs.eat.cats” or any related variant does not appear in the system. Singular and plural forms of words are included. The algorithm employed was designed to ensure that similar three-word combinations do not occur in the same geographical area. A variant form of a three-word combination used in one location (e.g. the plural form of one of the words in the coordinate triplet) might be used to describe a location on another continent.

Next, there are multiple language versions of w3w, although it appears that English is used in all versions for representing locations in the oceans and seas of the world. The triplet of words used to describe a specific land-cell using English bears no relationship to the three-word coordinate for the same cell in any other language, although these multiple representations point to the same world coordinate when analyzed by the w3w software. If you compared your w3w destination coordinates with someone who had used another language version of the grid, you both might be headed to the same destination, but, lacking a software application, would have no idea that the two seemingly unrelated grid cell designations were describing the same exact location.

As an aside, note that there appears to be some size parameter at work in naming locations in the ocean. While blank sections of water are named in English in all language versions, modestly-sized islands, such as Reunion Island, currently in the news, are covered with grid cells using words from the language version being used (e.g. French words if you use the French language version of the product). However, smaller islands (such as Flat Island and Round Island to the northeast of Reunion) are named in English, even when using another language version of the product. In further examination of this issue, I note that the Spratly Islands, involved in a territorial dispute between China, Brunei, Malaysia, Vietnam, and the Philippines, are named using triplets of English words regardless of the language version of the product that is used. I guess there might not be a strong appetite for the use of the w3w grid by China unless the naming algorithm is altered a bit.

The three words chosen as a coordinate for a location normally represent the center of the cell. These points, at least theoretically, “…will be within 2.12 metres from any adjacent square with a w3w address.” (Robert Barr, What3Words Technical Appraisal,* available here). Barr further states that the w3w address is already a geocode (p. 16) and does not suffer from the problems associated with the geocoding and reverse geocoding process.
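Barr's 2.12-metre figure appears to be simple geometry (my reading, not his stated derivation): the centre of a 3 m square sits 1.5 m from each edge, so the farthest an adjacent cell can be from that centre is the half-diagonal of the cell. A quick check:

```python
import math

half = 3.0 / 2                             # half the cell width, in metres
edge_neighbour = half                      # 1.5 m to an edge-adjacent cell
corner_neighbour = math.hypot(half, half)  # distance to a corner-adjacent cell

print(f"{corner_neighbour:.2f} m")         # -> 2.12 m
```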

How about that? The w3w triplet is actually a pointer to the latitude/longitude grid that makes the system possible – but you must have already guessed that relationship.

In order to use w3w, a user needs access to the w3w website or an app that uses the system. That means that in order to identify their location and find the relevant grid address, they need a computer, a smartphone, or access to these types of devices and, at some point in the process, an Internet connection. The person hoping to find their w3w address needs to be able to point to their location on a map to select the grid cell that is going to represent their location and whose coordinates will be used as their address.

If I had never seen an online map or an aerial image identifying my location, selecting that cell might be a pretty hard task to accomplish. As a matter of fact, even people who have had access to digital maps and satellite imagery often perform very poorly when attempting to use these types of spatial displays to locate features in the real world. What this means is that the adoption of w3w may be slowed by its users’ ability to access the required technology, as well as by their ability to locate their homes and businesses using the w3w platform. In addition, intervening opportunity may take its toll, since the required technology can be used to solve the “finding problem” by alternative means.

In any event, after having identified the location of my home or business, I would need to remember the three-word combinations used to represent them. Of course, without access to the w3w software, no one else can determine if they are near me solely on the basis of the three-word coordinates. Nor can anyone help me out by referring to, say, a nearby address if I cannot quite remember my sequence, since w3w word-triplets are randomly connected to geographical space in the w3w system.

So, let’s recast the story. W3w grid cells are created based on lat/lon and then identified with unique three-word combinations. In order to use these “addresses,” the three-word combinations must be converted back into a lat/lon coordinate pair that can be used to tie into typical mapping and routing systems. Yikes! Just what benefit does w3w provide?
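The round trip just described can be sketched in a few lines. This is my own simplification, not w3w's implementation: real 3 m cells would have to shrink in longitudinal width toward the poles, and the word-assignment step is omitted. The point is only that the triplet is a key into a lat/lon grid and back.

```python
# Quantise lat/lon to a grid cell and recover the cell centre; a word
# triplet would simply be a label for the (row, col) pair produced here.
CELL_DEG = 3 / 111_320    # ~3 m expressed in degrees of latitude (approx.)

def latlon_to_cell(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to the (row, col) index of its grid cell."""
    return int((lat + 90) / CELL_DEG), int((lon + 180) / CELL_DEG)

def cell_to_latlon(row: int, col: int) -> tuple[float, float]:
    """Return the centre of a grid cell as a lat/lon pair."""
    return (row + 0.5) * CELL_DEG - 90, (col + 0.5) * CELL_DEG - 180

lat, lon = 51.5074, -0.1278                  # somewhere in London
lat2, lon2 = cell_to_latlon(*latlon_to_cell(lat, lon))
# The round trip lands within half a cell (~1.5 m) of the original point.
assert abs(lat2 - lat) < CELL_DEG and abs(lon2 - lon) < CELL_DEG
```

Everything useful happens in the lat/lon arithmetic; without software that knows the word-to-cell mapping, the words themselves are inert.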

W3w seems to make a great fuss about the memorability of its three-word triplets triumphing over the difficulties of using lat/lon coordinates. In other words, the w3w coordinates could be considered a simple mnemonic for representing a location in a table that contains lat/lon.

Although I have never tried to memorize coordinate pairs, I agree that lat/lon coordinates might be hard to remember. Of course, so is memorizing and retaining the correct form of a random concatenation of three words from a forty-thousand-word dictionary that creates approximately 57 trillion unique coordinate triplets.

Perhaps more to the point, I cannot remember the last time I focused on remembering a specific lat/lon coordinate. I use lat/lon almost daily, but the action has been made opaque by mapping and finding technology. In my daily life, I no longer need an address for others to find me. I can call up a Google map and, by tapping into my GPS chip, it can calculate my location and tell others how to find me.

Indeed, if I point at a location on a map in Google Maps, right click and query, “What’s Here,” I receive the lat/lon of that location. If I put that lat/lon in a signature block, it would allow people to find me who did not know my postal address. In fact, the finding action in the above example seems to roughly approximate the procedure people have to use to find the three-word coordinates in w3w that define a lat/lon coordinate.

While the concepts of “finding” and “finding grids” might be considered a global problem, providing addresses for individuals and their businesses may, in fact, be an opportunity that is best considered a local problem. Further, assigning global addresses using a global grid when the grid system contains no recognition of the political and administrative geography involved may be an insurmountable problem. While this may sound short-sighted, I can assure you that addresses, addressing and the “authority” to establish them, to standardize their form, and to mandate their use are political hot buttons everywhere in the world.

Finally, note that technology may be bypassing the need for its beneficiaries to understand the complexities of grid systems. Consider the mobile phone. You probably can’t remember the long sequences of digits that can be used to call your friends. Depending on the contents of your address book, your phone may also know your location and the locations of everyone you call. In addition, your phone records everywhere you go on the Internet and in real life. The phone doesn’t seem to need w3w to accomplish this feat, and neither do you.

And Now a Word From Monty Python

Consider the fictional scenario presented below. I thought about scrapping the blog above and using this skit instead, but decided it might be better to discuss some of the issues with w3w in more depth. However, the scenario below is a pretty good summary.

It was a cold and dreary night. I had no idea where I was, so I called Rescue Services.
The operator asked, “What Three Words. Please?”

I replied, “I Am Lost.”

“No,” was the reply. “We couldn’t find any results for ‘I.AM.LOST’.”

I retorted, “But, I.AM.LOST.”

“No, sir. We require three words, not four words.”

I replied, “MY.CHOICES.ARE?”

“No, we did not get any result for those three words.”

I responded, “HELP.ME.OUT?”

“Sir, you need to use a three-word combination drawn from the forty-thousand or so words recognized by what3words.”


“No,” was the response followed by, “And, that’s WHAT3WORDS. Although if you choose to use French or Portuguese the dictionary is only twenty-five thousand words because they do not cover the oceans and seas. Are you asea?”

“No, but how do I get the correct what three words that locate my position?”

“Use a What3Words App to identify your location on a map and it will return the three words defining that position.”

“But if I gave you my What3Words, what would you do with them?”

“Convert them to lat/lon and run a route to you.”

I was incredulous – “WHY.DO.THAT? You can read my lat/lon directly from the GPS chip in my phone and you can JUST.FIND.ME!”


Encountering new map grids is always fun, and the thought that one might contain something productively innovative is always a big lure for me. I admire the team at w3w for attempting to solve a difficult problem. Unfortunately, convincing the world to use a new grid is a very difficult task, even when you might have created something better than that which already exists. While w3w is being effectively marketed, it is my opinion that it is unlikely to be widely adopted. It lacks what I consider to be a fundamental innovation. Further, its utility as a map grid is constrained by the simplicity that makes its use appealing to many.

Finally, I am no more enamored of the new grids Map Code and Open Location Code than of w3w, but for entirely different reasons. But this blog is already entirely too long.

Letters, we’ll get letters…..


Dr. Mike

*) Dr. Barr is an acquaintance and a professional of the highest caliber. His analysis of w3w is good reading and I recommend it to you. He appears to view w3w favorably.
**) It is my opinion that the w3w website software is not particularly well-disciplined. Its various language options appeared to me to be unstable when examined over several days using Firefox v39. I did not interrogate the website using any other browser.


Posted in Authority and mapping, geocoding, geographical gazetteer, map coordinates, map grids, routing and navigation, Technology, what3words | Comments Off on What3Words – Not.Quite.Right

Can Anyone Stay on Top of the Online Mapping Hill?

July 19th, 2015 by admin

Recently a colleague contacted me to ask my thoughts about a report indicating that Microsoft was selling some map-related assets to Uber. He noted his disappointment, as he had hoped that Microsoft would reinvigorate its mapping activities and, once again, become a notable player in the mapping market. The brief conversation led me to contemplate the world of online maps both past and future.

Microsoft has had a long and storied role in desktop mapping software. For a while, it was the leading provider of consumer-oriented mapping software, but that role relied on the company’s success in controlling the physical distribution channels for its products. In the age of packaged mapping software aimed at the desktop computer, controlling those channels determined which products were available for purchase.

Microsoft could afford to buy as much shelf space and as many end caps or stand-alone displays as it desired. Since physical space in stores was limited, Microsoft’s presence could restrict the competitive products that were available. In other cases, when competing products seemed to offer more and better functionality, Microsoft often reduced the price of its software to “free,” or to a cost level that was not sustainable for most competitors. Due to its ability to leverage its mapping brand across distribution channels and measure its mapping products’ profitability across all software product lines, Microsoft’s mapping software became a dominant force in the industry. This is not to say that Microsoft’s mapping software was uncompetitive, as it was often of better quality than the products of many other players in the mapping industry.

MapQuest’s launch of a free, online mapping product quickly changed the distribution paradigm. In what should have been a case study for the Innovator’s Dilemma, Barry Glick and company offered Internet-based routing between addresses across the United States, even though no one, at the time, had ever asked for it. While not quite as fully functional as some of the desktop mapping/routing software, it was often more up-to-date and offered none of the cock-ups that frequently accompanied the use of CD-ROM software and the related idiosyncrasies of the operating systems of the time.

Microsoft’s response to the development of online mapping systems was quite timid and, perhaps, more confused than anything else. Unfortunately, MapQuest was the people’s choice, although Microsoft’s online map product was competitive. The more important point is that Microsoft’s inability to influence online distribution doomed its mapping efforts, as the company now had to depend on functionality and innovation to lead the market, without any of the revenue that had accrued from its desktop mapping. But like MapQuest and Yahoo, Microsoft had no idea how to make money from online mapping.

Google’s development of mapping as an infrastructure play designed to enhance its advertising business marked a turning point in the sophistication of online mapping functionality. Google had a financial reason to spend a great deal of money promoting innovative mapping and routing features. It was able to out-spend and out-innovate Microsoft and all other players in the mapping universe as a result. In turn, the threat of the position achieved by Google as a partial result of the global success of its mapping programs led Apple to develop its own capabilities in mapping. Apple realized that users of the iPhone expected quality mapping and the company was not interested in its customers being users of Google Maps. Apple’s spend on mapping has been to protect its brand.

In today’s online world of mapping Google and Apple, two companies with strategic incentives requiring mapping, rule the roost. Will this leadership continue?

I have previously noted in this blog my interest in how long Google might be able to sustain its “spend” on mapping software. I think we now have an answer. It is my impression that the heady days of map development at Google are over and that its map products will be maintained at or near their existing levels, but with little innovation, other than in regards to autonomous navigation systems, as we proceed into the future. Google, unfortunately, is approaching middle-age and is developing the concerns that accompany fiscal responsibility. Over the last year or so, Google Maps has experienced senior management departures and market abandonment (GIS). Now the company has new financial leadership and this will result in spending limitations leading to a lack of innovation that will certainly limit Google’s future in the world of mapping.

Although it is early in the game for Apple, I doubt it will fare much better. In its favor, the company has been more circumspect about spending. It appears to have out-thought Google’s mapping innovations and found a way to reach near-parity without spending as much as Google. However, in the long run, Apple’s market is limited to its own customer base, and quality mapping will be too expensive to support without some fundamental change in Apple’s business model. I suspect that eventually Apple will find that it, too, cannot afford to support its mapping programs at the desired level of accuracy and functionality.

The problem for all companies involved in mapping is that supporting quality in spatial data is an effort that historically has nickel-and-dimed profitability. While some of the basic map “facts” may remain unchanged for decades, other map features change with amazing rapidity. It is an unfortunate rule of mapping that you cannot just compile, build your spatial database, and stop. In order to be competitive, companies need to update their map data in a cyclical and spatially comprehensive manner. In addition, technology is constantly changing, and the spatial support systems that spin these databases must be upgraded, updated and rethought every two to three years. Most organizations simply cannot afford to maintain these types of efforts on a worldwide basis.

Few CFOs want to hear the answer to this question: “When will you be finished with the mapping database?” The answer, of course, is “Never!” The answer to “Can you spend less?” is “Of course, but the data won’t be as good and the functionality will suffer.” (While “active” crowdsourcing may be considered an alternative here, I think that it is, for several reasons, not a sustainable choice for major commercial map providers. However, crowdsourcing, either active or passive, is not the topic of today’s blog.)

That brings us to Uber. Obviously Uber is interested in mapping. It has hired key players from Google, made an asset deal with Microsoft, and submitted a bid for HERE. However, the HERE bid appears dead, which leads me to presume that new Uber employee Brian McClendon (ex-Google, once a mapping exec for the company) may be planning to recreate the Google Map Machine at Uber. I do not doubt that Uber could spend some of its money to build a great street-level spatial database for the world. Conversely, I hope someone besides me begins asking, “How many of these worldwide street-level databases are we going to build? Isn’t there a better way?” Maybe!

Although it may not make your day, next time I am going to write about map grids, an ever popular topic for dreamers. It might be fun – and, hopefully, informative.


Dr. Mike


Posted in Apple, Google, google map updates, Google maps, HERE Maps, map compilation, map updating, Mapping, MapQuest, Microsoft, MindCommerce, OSM | 5 Comments »
