Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

Navigation, Pedestrians and Landmarks (Part 4)

July 31st, 2009 by MDob

Last time I speculated that if I were to become lost, let’s say in an urban environment, perhaps I could simply take a photograph of my surroundings, submit it to a local search service, and have it identify the location of the photograph. Well, I suspect that this may have been met with hoots of derisive laughter, but today I want to explore the concept and help you understand why it will happen.

As noted previously in this series, landmarks seem to be the key to providing navigation cues to users attempting to travel between places. Providing routing advice is certainly easiest where there are well-marked street signs, identifiable intersections and sensible path directions. Unfortunately, many environments lack good street signage, forcing the use of other location indicators. It is in these environments that landmarks are the preferred alternative to street signs.

Curiously, some research has indicated that users trying to follow a described path between locations consider navigation by landmarks the preferred mode of path following, even over directions that include street signs and maps.

In many environments where street signs are lacking, it may also be difficult to identify a landmark that could be used for navigation. I think of past travels when I wandered through miles of uniform-looking bungalows in north Philadelphia and rustic three-story apartments in Rome, and would have been hard pressed to identify my specific location based on the concept of a landmark. But what if I could take a photo of my surroundings and send it to an application that could compare my photo with its database of photos and tell me where I was and what was around me?

Huh? How would that work? Well, let’s look at the figure below, in which a user with an iPhone 3GS works with a Google application to find his or her location. The phone has a camera, a GPS module, a digital compass, and the ability to send photos. (Click here for a PDF with a larger version of the illustration.)

The Google Localizer - a hypothetical system for using photograph-assisted navigation.

So, the application tries to snag the GPS coordinates of the phone, but the user is in an “occluded” area, so it falls back to cell tower triangulation, which yields only a large area and cannot locate the user precisely. The application then tries to locate the phone based on local Wi-Fi networks, but once again finds this is not precise enough to be of assistance. Next, the application requests that the user take a photo of the surrounding area. If the user agrees, the phone is queried for the direction it is facing in order to calculate the pose, which serves as an aid in comparing this photo with other photos of the potential location. All the location information gathered in the previous blocks of the diagram remains available to the application, helping it determine the general area in which it should search for matches.
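
To make this fallback chain concrete, here is a minimal sketch in Python of how such an application might cascade through the positioning methods in the diagram. To be clear, this is my guess at the logic, not any real Google or iPhone API: the phone methods, the Fix record, and the 50-meter threshold are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

ACCURACY_THRESHOLD_M = 50.0  # assumed precision needed for pedestrian guidance


@dataclass
class Fix:
    lat: float
    lon: float
    accuracy_m: float  # radius of the uncertainty circle, in meters


def match_photo_to_database(photo, heading_deg, search_hint):
    """Placeholder for the image-matching service discussed below."""
    raise NotImplementedError


def locate_user(phone) -> Optional[Fix]:
    """Cascade through positioning methods, ending with a photo match."""
    best: Optional[Fix] = None

    # 1. GPS, 2. cell tower triangulation, 3. Wi-Fi positioning -- each
    # hypothetical method returns a Fix or None. GPS fails in "occluded"
    # areas, and the radio methods typically yield only a large area.
    for locate in (phone.get_gps_fix,
                   phone.cell_tower_estimate,
                   phone.wifi_estimate):
        fix = locate()
        if fix is None:
            continue
        if fix.accuracy_m <= ACCURACY_THRESHOLD_M:
            return fix  # precise enough; no photo needed
        if best is None or fix.accuracy_m < best.accuracy_m:
            best = fix  # keep the tightest coarse estimate as a search hint

    # 4. Photo match: ask the user for a photo plus the compass heading
    # (pose), and search imagery only near the coarse estimate.
    photo = phone.request_photo()
    if photo is None:
        return best  # user declined; return the best coarse fix we have
    return match_photo_to_database(photo, phone.compass_heading_deg(),
                                   search_hint=best)
```

Note that even the “failed” coarse fixes are not wasted: they shrink the region of the image database that the final matching step has to consider.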

OK. But what would it be matching the photograph against? Well, how about all of that imagery that Google is gathering in Street View? If you have looked at the more recent Street View gantry, you will notice that Google has upgraded its cameras (eight aimed horizontally at the surrounding buildings, one vertically) and added three SICK laser scanners (two to the sides, one to the front) for measuring distances, dimensions, contours and volumes. Hidden inside the enclosure are a series of sensors monitoring cell tower signal strength, directionality and position, as well as recording similar data from local Wi-Fi networks (click this link to see a photo of the newer version of the camera and sensors). You may have noted that Google now has this ensemble of equipment mounted on tricycles, which it is using to access pedestrian-only areas that are not available to its fleet of vehicles (nor to those of Navteq and Tele Atlas). In other words, Street View is being used to create a massive database of details about spatial location, and we believe that these data will be used for navigation and local search.
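
As a rough sketch of why this collection effort doubles as a localization database, here is the kind of record each Street View capture point might contribute. The schema and every field name in it are my assumptions, inferred from the equipment described above, not Google’s actual data model.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class StreetViewCapture:
    """One capture point from the gantry described above (illustrative)."""
    lat: float
    lon: float
    heading_deg: float            # direction of travel when captured
    images: List[bytes]           # 8 horizontal views plus 1 vertical view
    laser_ranges_m: List[float]   # laser sweeps: distances to nearby facades
    cell_fingerprint: Dict[str, float]  # tower ID -> observed signal strength
    wifi_fingerprint: Dict[str, float]  # Wi-Fi BSSID -> observed signal strength
```

A phone that reports the same cell towers and Wi-Fi networks could be narrowed to the capture points with similar fingerprints before any image comparison begins, which is what would make matching at this scale tractable.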

Of course, if Street View does not provide a match, perhaps one can be found in Google Earth annotations or at Google-owned Panoramio. Yes, the matching would require prodigious amounts of bandwidth and compute power if carried out on a large scale, but we suspect that Google is capable of providing this type of system. Notice that the digital compass captures the pose of the phone at the moment the photo is taken, so the relative orientation of the camera (and the user) can be supplied to augment the matching of the user’s photograph against an image in the database, which would likely have a different pose.
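
Here is one way the compass reading could assist that match, sketched against the hypothetical StreetViewCapture record above: a stored capture covers the full horizon, while the user’s photo covers perhaps a 60-degree field of view, so the heading tells the matcher which slice of the stored panorama to compare. The geometry is illustrative only; a real matcher would also have to compensate for position offsets and lens differences.

```python
def panorama_window(capture: StreetViewCapture,
                    user_heading_deg: float,
                    fov_deg: float = 60.0) -> tuple:
    """Return the (start, end) angles, in degrees, of the slice of a stored
    360-degree capture that faces the same compass direction as the user's
    photo, so the image matcher compares like-for-like views."""
    # Rotation of the user's view relative to the capture's reference heading.
    rel = (user_heading_deg - capture.heading_deg) % 360.0
    start = (rel - fov_deg / 2.0) % 360.0
    end = (rel + fov_deg / 2.0) % 360.0
    return start, end
```

In other words, the compass does not locate the user by itself, but it removes most of the rotational ambiguity that would otherwise make photo-to-panorama comparison expensive.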

The system shown above, which I think of as the “Google Localizer,” or one like it, could be used to provide location information, as well as navigation information that would assist users in finding their way to a destination. While I have been describing this type of device as a navigator for street information, it could also become a powerful “business finder” for Google and its advertising engine AdWords. Imagine driving one of those Google tricycles through a mall, capturing location information and photographs of all of the stores, and then being able to guide someone from the parking lot directly to their destination, even if they had never been there before.

Next time, let’s briefly discuss the local search implications of the “Google Localizer” and the problem of updating a pictorial database, and then change gears to look at some interesting companies in the local search and navigation markets.


