Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

Musings – Or You Can’t See There from Here

January 28th, 2011 by MDob

Last week Duane Marble sent me two news items that you might find of interest. The first item included a link to a short piece on someone’s observation that the major online map websites did not show the correct location of the football stadium that is to host the Super Bowl. When, about an hour later, I got around to reading the article at Directions Magazine, I found a comment indicating that the stadium was correctly located on all major web mapping sites. Either the first story was incorrect, or the maps were updated once the error was made public. But the interesting issue is how you would determine which possibility was, in fact, true. As far as I know, no major online mapping site has installed the infamous, but highly useful, “time-machine” button.

The second item of interest described an activity involving the USGS scanning and georeferencing historical USGS quadrangles. The activity was described as follows:

“The USGS Historical Quadrangle Scanning Project (HQSP) is scanning all scales and all editions of approximately 250,000 topographic maps published by the U.S. Geological Survey (USGS) since the inception of the topographic mapping program in 1884. This scanning will provide a comprehensive digital repository of USGS topographic maps, available to the public at no cost. This project serves the dual purpose of creating a master catalog and digital archive copies of the irreplaceable collection of topographic maps in the USGS Reston Map Library as well as making the maps available for viewing and downloading from the USGS Store and The National Map Viewer.”

Receiving these notes resurrected a concern that I have batted about over the last decade involving the archiving of cartographic products in an online environment. Simply put, “Can online maps continue to fill the role served by paper maps as a historical resource?”

The “establishing a past state of a mapping database” problem first reared its head in my life when I was working in the world of commercial cartography. We had converted our cartographic renderings that had been prepared for printing based on manual map preparation technologies (scribing on negative film, interposed screens, camera work, etc.) to digital technology (digitizing, software manipulation of spatial data, and output on film written by laser scanners). However, we continued our long-held practice of storing printed copies of our products for purposes of copyright certification, as well as in response to a blend of other requirements. Among these ancillary concerns was the need to be able to document how our maps had represented some feature at a previous point in time. Usually these requests were associated with defending the company from lawsuits in which a party claimed that one of our products misrepresented some geographical feature crucial to its case.

The issue of interest in the last example was that our products (paper maps) were produced from product databases that were extracts of our Master Database. While we could and did archive copies of our master database based on legal requirements (e.g. assets guaranteed in financial transactions), recovery concerns, and general best practices, the number of product databases soon grew at a rate that outstripped our ability to finance the time, media and off-site storage expenses of archiving every edition of every product. Conversely, we were quite easily able to archive the paper products. In addition, our customers were quite easily able to archive their copies of our paper products, in case they wanted to examine what had changed in a particular location from one period of time to the next edition of the product. Do any of you have complete copies of Google’s U.S. map base from when it was first introduced?

A few years later, I was conducting some expert witness research with respect to a patent case and, as part of my efforts, I wanted to determine when a specific digital mapping application was first launched on the Internet. I managed to find the team responsible for the product, but not one of them could remember the date when the functionality was first “stood up” and pushed out to the organization’s “live” site on the Internet. On another patent case, I wanted to find out when a specific functionality was made available to the public on a website that provided mapping services. Unfortunately, no one on the team responsible for the website remembered when the functionality was introduced. I should point out that the people I hoped could answer my questions were not involved in the litigation, nor would they be impacted by the results of the action – they simply did not record a history that told them when they launched or revised their online products. This lack of a formal “corporate memory” related to spatial databases and mapping functionality will surely be regarded as a significant gap by historians looking back on this era and trying to figure out who did what, when, where and how.

Just this last week someone from Providence sent me an image of some interesting public artwork that was being destroyed, as it was on the face of one of the former I-195 overpasses that are in the process of being deconstructed. Yes, this is an overpass that NAVTEQ and Google currently show traffic flowing across, even though this Interstate route has not been open to traffic for over one year. However, in trying to recreate the varying geometry used by the online mapping services to represent the interstate road network in Providence over the last year, I had no place to turn. If a historical record of mapping database changes exists, it appears not to be publicly available from either the providers of the online mapping systems or from their navigation database providers.

My interest is that I was trying to conceptualize what methodological changes need to be introduced to help ensure that those managing map database systems could preclude these types of senseless errors. However, the process led me to consider that the focus of online mapping systems on the rapid, near-real-time presentation of map changes is causing us to lose the capability of tracking spatial changes across time, potentially impacting our ability to research some aspects of change detection. (After all, where would mystery novels be without the super sleuth suddenly realizing that Skyline Drive used to be called Beacon Road at the time of the kidnapping forty years ago?)
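One hedged sketch of what such a methodological change might look like: a temporally versioned feature store, in which an edit never overwrites a record but instead closes the old record’s validity interval and opens a new one, so that any past state can be reconstructed with an “as of” query. The class and field names below are my own invention for illustration, not a description of how any actual online mapping system works.

```python
from datetime import date

class TemporalFeatureStore:
    """Toy valid-time store: each edit closes the prior record and opens a new one."""

    def __init__(self):
        self._versions = {}  # feature_id -> list of (valid_from, valid_to, attributes)

    def update(self, feature_id, attributes, valid_from):
        versions = self._versions.setdefault(feature_id, [])
        if versions:
            old_from, _, old_attrs = versions[-1]
            versions[-1] = (old_from, valid_from, old_attrs)  # close the open interval
        versions.append((valid_from, None, attributes))       # None = still current

    def as_of(self, feature_id, when):
        """Return the attributes in force on a given date, or None if none were."""
        for valid_from, valid_to, attrs in self._versions.get(feature_id, []):
            if valid_from <= when and (valid_to is None or when < valid_to):
                return attrs
        return None

# The Providence example: the overpass carried traffic, then was closed.
# (The dates here are illustrative, not the actual closure date.)
store = TemporalFeatureStore()
store.update("i195_overpass", {"status": "open"}, date(1990, 1, 1))
store.update("i195_overpass", {"status": "closed"}, date(2009, 11, 1))

print(store.as_of("i195_overpass", date(2005, 6, 1)))   # {'status': 'open'}
print(store.as_of("i195_overpass", date(2011, 1, 1)))   # {'status': 'closed'}
```

With such a scheme in place, the question I could not answer for Providence – how was this road represented a year ago? – becomes a routine query rather than an impossibility.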

While imaging systems capture everything that can be resolved in the image scene, they do so at the expense of the attribution of the elements that are sensed. For example, although it is easy to spot the difference between a large road and a small road on imagery, it is not feasible to identify one linear structure as Interstate 95 and the other as Woodall Road, at least not on the basis of the imagery alone. Further, aerial imagery does not provide information on spatial features that have no visible expression (such as legal boundaries, zip codes, census tracts, etc.). Maps serve as a source of compiled information that is made sensible through the icons, marks, and other symbols assigned to features by cartographers. Maps are a medium well suited to help us assess real-world features, such as street names and boundaries, that may have changed across time, as well as allowing us to note where these changes occurred and how they may be related to other spatial data. However, to support this use, multiple editions of the map (or photo-map if that is your preference) must be available that provide coverage of the area of interest.

Maps serve more uses than location reference and it is my fear that many important uses of maps have no future in the rapid-update, image-oriented, online world of maps and mapping. While the USGS can digitally Xerox all of the editions of its quadrangles dating back to 1884, they are able to do so because they have retained the printed editions of their topographic maps. Conversely, how can we know how Community X was represented on Google Maps two years ago? Or how the road geometry of my neighborhood was represented twelve years ago on, say, MapQuest?

Google, for example, clearly understands the value of archival information and provides historical imagery in Google Earth in an attempt to respond to this need. When viewing London in Google Earth, the application provides imagery from several periods between 1945 and 2010, selectable on a slider bar and displayed as an overlay. Somewhat curiously, Google insists on showing modern 3D models in place of the historic buildings as they existed in 1945. However, the imagery from that period was of poor quality compared to what we expect today, and the truth is that the city could be better represented by scans of paper maps showing the geometry and names of London streets, icons of buildings, and the POIs as they were named at those points in time.

When I thought further about the problem, I realized that we were able to solve the map comparison problem in the past, in part, precisely because our maps were not integrated into a master database spanning the extent of all of the geographical areas we could research and compile. Indeed, in the past the problem was just the opposite. I remember various news articles on researchers who were leasing warehouses and using the floor space to fit together the topographic sheets for a region into a master map of sorts. Of course the common complaints were that the paper maps were not dimensionally stable, the map seams did not butt as smoothly as had been hoped, and the projection seemed off in the case of a curious quadrangle or two. Fancy that!

I guess time and the future provide new opportunities and new reasons to re-investigate the old problems we thought would be solved in the future. Could we put together a useful, workable archive of online maps that could be used for purposes of analyzing historical change detection across long periods of time? Something to think about, I guess. But when I contemplate how to organize and archive those 15 minute updates now used by some online map databases and make them accessible to the world, I get a headache.
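That headache may be slightly more tractable than it first appears, if the archive deduplicates its storage: keep each map tile under a hash of its content, and a 15-minute snapshot in which little changed costs only the few tiles that actually differ, plus a small manifest. The sketch below is a minimal illustration of that idea using invented names; it is not a description of how any mapping provider actually archives its data.

```python
import hashlib

class SnapshotArchive:
    """Content-addressed tile archive: identical tile content is stored only once."""

    def __init__(self):
        self.blobs = {}      # sha256 hex digest -> tile bytes
        self.manifests = []  # (timestamp, {tile_id: digest}), in time order

    def snapshot(self, timestamp, tiles):
        manifest = {}
        for tile_id, data in tiles.items():
            digest = hashlib.sha256(data).hexdigest()
            self.blobs.setdefault(digest, data)  # dedup: write once per unique content
            manifest[tile_id] = digest
        self.manifests.append((timestamp, manifest))

    def tile_at(self, timestamp, tile_id):
        """Recover a tile as it stood at the most recent snapshot <= timestamp."""
        for ts, manifest in reversed(self.manifests):
            if ts <= timestamp:
                digest = manifest.get(tile_id)
                return self.blobs[digest] if digest else None
        return None

archive = SnapshotArchive()
archive.snapshot(1, {"providence/12/1203/1510": b"road geometry v1"})
archive.snapshot(2, {"providence/12/1203/1510": b"road geometry v1"})  # unchanged
archive.snapshot(3, {"providence/12/1203/1510": b"road geometry v2"})  # road removed

print(len(archive.blobs))  # 2 -- three snapshots, but only two unique tiles stored
print(archive.tile_at(2, "providence/12/1203/1510"))  # b'road geometry v1'
```

This is essentially the trick used by version control systems and deduplicating backup tools: the manifests grow with every snapshot, but the bulky tile data grows only with actual change. Making such an archive public is, of course, a policy question as much as a technical one.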

Speaking of headaches – I have reservations for travel to Cairo on February 16th, followed by a Nile River cruise, a flight to Abu Simbel, a cruise across Lake Nasser to the Aswan Dam, and a visit to Petra, Jordan, returning home in early March. I guess I should call the UN and warn them whenever I plan international travel. Well, there go my plans for a trip of a lifetime. On the other hand, my loss pales in comparison to the plight of the heroic citizens of Egypt who are willing to risk their lives for an opportunity for a better life. My thoughts and prayers are with them.

I guess I will just have to focus on blogging a bit more often to fill in my new wealth of spare time. In any event, whenever the next blog is published, I am going to write about the concepts of the Mechanical Turk and User Generated Content in mapping and ask which is which.



Posted in Authority and mapping, Categorization, Geospatial, Google maps, MapQuest, Mike Dobson, Navteq, User Generated Content

One Response

  1. Rik Sheridan

    It’s great that the USGS is digitizing the topos but from what I’ve seen they’re doing it all in PDF format which seems pretty useless in most GIS software that I know of. And difficult to combine maps in mosaics.


    Thanks for your comment, Rik. There is more information about the format and what you can do with it at http://pubs.usgs.gov/fs/2011/3009/fs20113009_012611.pdf (including merging these images with the new generation of US Topo Maps). Although I suspect you already know this information, others may not.