The Google Map Maker Review and Authority System
Please read yesterday’s blog before reading this one. It will make a lot more sense if you do.
Based on my experience with the Map Maker described in yesterday’s blog, the edit system is deeply flawed, at least in its present incarnation. Just so you know, all of those edits really happened – see that MDob in the right corner?
Unfortunately my experience with the authority system in Google Map Maker was, perhaps, even more troubling than my exposure to the edit system. The “background” for this statement is described below. I point out here that I attempted only three edits and received reviews only on these three edits. As you will read later, those three reviews and an analysis of other edits by these reviewers opened the door to a broader view of the Map Maker and its “trusted reviewers.”
The first edit I attempted, concerning the parking lot lane that was erroneously displayed on the Google Map as connecting to an adjacent street, is a prime example of the structural weakness of the edit/authority system used in Map Maker.
While preparing to edit the map, I was quite certain that my recollection that the lane in question did not intersect the adjacent street was correct. I looked closely at Google’s satellite imagery and decided it was not clear enough to allow me to confirm my recollection.
Another way to resolve the problem was to use Google’s Street View imagery. While it provided clear evidence that the aisle in the lot did not connect to the adjacent street, neither Map Maker nor Google Maps exposes any metadata on when that imagery was captured, so I could not rely on it to settle the issue.
I concluded that the only way to determine whether the streets connected, at least in this case, was field examination. So I hopped in my car, drove to the location, and did a field inspection. The field inspection settled the question. I took a set of photos as positive proof that the lane and the road did not connect.
Yes, current satellite imagery or current Street View imagery might have been used to resolve the issue. However, since the Map Maker edit reviewers presumably do not have access to metadata on the dates these images were captured, they cannot definitively determine connectivity, or the lack of it, in the road involved in the edit I contributed. We can, also, presume that, unlike Jason Bourne, Google’s trusted reviewers do not have the ability to retask satellites or redeploy Street View vehicles to resolve the situation.
I suppose the “trusted reviewers” could look at the maps on other online services for this particular location. With respect to the edit in question, the Navteq and Bing websites agree with Google. On the other hand, OSM, Yahoo (Navteq data), and TomTom don’t show the parking area at all. Let’s see, that’s three in support and three against. What to do?
I suppose Google’s Map Maker reviewers might be tempted to refer to other sources, as I did for purposes of comparison, but in its Moderation Guidelines for Users of Map Maker, Google lays out its position quite clearly:
“While moderating, do not post any material that you know, or should reasonably know to violate any law, contractual obligation, confidential information, proprietary information including copyright or the privacy or publicity rights. Google Map Maker expects you to respect and abide by copyright laws and does not condone any violation of copyright law. Users, moderating User Submissions, are expected to be familiar with the applicable copyright laws in their jurisdiction and in case of a doubt the Google Map Maker team encourages you to consult an expert in the field of copyright law in your jurisdiction for guidance.”
Good for you, Google. How do you police that action? Do you really expect your volunteer users to consult an expert in the field of copyright law for their jurisdiction and all of the jurisdictions in which they review edits? Hmmm.
Well, for comparison purposes only, I examined the area using imagery available on Navteq’s website, but it did not provide enough detail to answer the question clearly. Bing’s imagery was superior and showed several trees blocking the access of the lane to the street, but there was no metadata about the age of this imagery, opening the possibility that the intersection could be a more recent construction than that shown in the imagery. Just so you know, I do not consider a copyright date to be a surrogate for the currentness of the imagery or the map data, since the date of the copyright may have no relationship to the age of the information in the work covered by the copyright.
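To make that distinction concrete, here is a minimal sketch, in Python, of how a review tool might decide whether imagery is current enough to adjudicate an edit – assuming, hypothetically, that both a capture date and a copyright date were exposed as metadata (today, neither is). The field names and the one-year freshness window are my own inventions for illustration.

```python
from datetime import date

# Hypothetical imagery metadata record. The copyright date reflects
# publication; capture_date reflects when the ground was actually
# photographed. Only the latter speaks to currentness.
imagery = {
    "source": "example-provider",
    "copyright_date": date(2011, 1, 1),  # when the tiles were published
    "capture_date": date(2008, 6, 15),   # when the ground was photographed
}

def fresh_enough(record, edit_date, max_age_days=365):
    """True if the imagery was captured recently enough to adjudicate
    an edit made on edit_date. Uses capture_date, never copyright_date."""
    age_days = (edit_date - record["capture_date"]).days
    return age_days <= max_age_days

# A 2011 copyright stamp tells us nothing: this imagery is nearly
# three years stale relative to a May 2011 edit.
print(fresh_enough(imagery, date(2011, 5, 16)))  # False
```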
Given this uncertainty about the currentness of sources and the inadequacies of the Google source data, it appears that local knowledge gathered through field observation is the most immediate and authoritative method that can be used to resolve this particularly knotty problem of road intersection, as well as the best way to solve a large class of problems similar in nature. I drove to the location in question because there was no other way available to remedy my uncertainty about the location I was attempting to edit.
So, what’s the Google Moderator team to do when they encounter my assertion that the parking lot lane does not intersect the adjacent street as shown on Google Maps? How could they remedy this situation without observing it? Yes, they could look at surrogate data beyond that which I used, but do you really think they are going to spend that much time ferreting out other sources? I doubt it. If you were a moderator, would you? And how long would it take you to find useful sources that did not violate the restrictions on intellectual property that Google asserts for its reviewers?
Even if the Map Maker trusted reviewers did collect other data for purposes of evaluation, how would they know with any degree of certainty that the source was current and reflected the status of an issue on the ground? The answer is that they would not have a clue what the correct call was in the case of my edit, and in thousands of cases like it. “Trusted Reviewer” Nigar was smart enough to reference Street View, but unless he had access to metadata describing the date the data was collected, Nigar could not have known that his decision was based on the reality of the situation on the ground.
The edit in which I marked the actual driveway into the medical office complex does not look like a difficult call, since the driveway clearly connects with the adjacent street in both the satellite and Street View imagery. Unfortunately, the imagery appears to lack metadata, so who, other than a local observer, really knows whether it reflects an access point that exists, or one that was reconfigured recently, or perhaps one that has a chain across it?
The decision on the underground horse tunnel edit is, also, troubling to me. The feature cannot be seen on the Google-supplied imagery or on Street View, and never will be unless they take a Street View bike down the horse trails. So how was the decision made to accept my edit?
You know, we could speculate for a long time about how the edits could have been made, but let’s skip that dance and see what Google wants its editors and reviewers to do.
In its Moderation Guidelines for Users of Map Maker Google tells editors and reviewers the following:
“You agree and undertake that the underlying intention behind moderating User Submissions is to remove any submission that you know through your personal local knowledge to be either inappropriate or factually incorrect and to approve such User Submissions that to your personal knowledge are accurate.”
Under a section of the Moderation Guidelines titled How To Moderate are these instructions:
“i. Approve- you can approve a User Submission, if from your personal local knowledge you are sure that the User Submission is accurate both in terms of location and its labeling and does not violate the Map Maker Terms of Service in any manner
Guidelines for Users desirous of moderating User Submissions
ii. The following rules shall govern any action taken by you to moderate any User Submission:
You shall moderate only through your personal knowledge of a local place.
Content that should be denied
Content that you know by your personal local knowledge to be factually incorrect, for e.g. a non-existent building or other landmark, a road going over a building or a water body etc.”
Ouch! So all of these attempts at moderating edits are supposed to be heavily weighted towards local knowledge. To me, that makes sense. After all, the benefits of a crowdsourced system are mainly based on the relevant information that local people can provide about the local situations with which they are familiar.
I guess, then, that the questions for the Google Team are 1) “Do the Map Maker Reviewers really have local knowledge?” 2) “How does Google measure that quality?” 3) “How does Google enforce that requirement?”
To get to those answers, I realized that I needed to know something about trusted reviewers and use the trusted reviewers who evaluated my edits as examples of how the process works. (Please note that the information I found is publicly available and resulted from searching information available from Google and Google Map Maker. Indeed, some of it was provided by the reviewers of my edits. My interest in revealing this information is to point out the weaknesses in Google’s review system and I use this information to make several suggestions for improvement at the end of this blog.)
Lalit Katragadda, one of the inventors of Map Maker, indicated during an interview about Map Maker that the hardest problem was structural: “The most difficult part was not the coding, but the structure,” he says. “After all, how do you know which users to trust?”
The article’s author continued, noting: “As anyone who has asked for directions on the street knows, not everyone can make maps.
So the Google India team invented a software solution that treats each new edit like a separate page. Over time, the machine learns which users are trustworthy. When a user has reliably labeled enough points, he graduates from the system and can moderate other users’ map making too.”
Reliably labeled? Who evaluates this measure? How do they know the edit is correct or incorrect? Gee, that’s a great approach, but what does it have to do with local knowledge?
It would seem that the “trusted reviewers” are those who have successfully edited maps using Map Maker and reviewed edits by other contributors to Map Maker. Apparently, by contributing edits that are approved and by reviewing edits by others, you gain “reputation in the system.” I presume that if your edits are always rejected, you get blackballed. If your review of edits by others is reversed on further review, I suppose you get debited and the “cred” that you have in the system is decreased by the ranking system.
While this “cred” system is interesting, it has little to do with measuring the local knowledge of the Google Map Maker Reviewers. In addition, the rating system is not a reasonable method to establish “authority”, although it is a way to establish popularity, or it could be regarded as a measure of how few actual map users critically view Google maps. Perhaps it could be interpreted as a measure of those reviewers who evaluate solely on the basis of satellite and Street View imagery?
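Google has not published the mechanics of its ranking, but a minimal sketch of the kind of “cred” scoring the interview describes might look something like the following. Every name, weight, and threshold here is my own invention, intended only to illustrate the structure of such a system:

```python
# Toy reputation model for a crowdsourced map editor. All weights and
# thresholds are invented for illustration; Google has not published
# the actual mechanics of Map Maker's ranking system.

GRADUATION_THRESHOLD = 100  # score at which a user may moderate others

class Contributor:
    def __init__(self, name):
        self.name = name
        self.score = 0

    def record_edit(self, approved):
        # Approved edits build cred; rejected edits erode it.
        self.score += 2 if approved else -3

    def record_review(self, upheld):
        # A review reversed on further review costs more than it earned.
        self.score += 1 if upheld else -4

    def can_moderate(self):
        return self.score >= GRADUATION_THRESHOLD
```

Note what is absent from any model of this shape: nothing in it measures whether the contributor has ever set foot near the places being edited. It rewards agreement with other reviewers, not familiarity with the ground.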
In essence, the major problem with the “trusted reviewer” concept is that the information available to the “trusted reviewer” to evaluate a contributed edit is at best comparable to that available to the contributors of edits, and usually less valuable, since it is not informed by local knowledge. Based on my limited examination of Google Map Maker, I have concluded that the trusted reviewers in the Google Map Maker system may have limited or no geographical knowledge of the locations that they edit, or for which they review contributed edits.
The “trusted reviewers” who edited my works were identified only by these names: Shalini, Abhilash, Hemant and Nigar. Edits from two of the reviewers (Shalini and Abhilash) provided links to their history of accomplishments using the Map Maker system. I was able to find details on trusted reviewer Nigar, but nothing for Hemant. What I was able to find, however, provided several useful insights.
For example, “trusted reviewer” Shalini has been editing and reviewing edits in Map Maker for 214 days. During that time, he produced 10,572 edits and 8,599 reviews, including edits of 3,245 features. In essence, Shalini has produced an average of 50 edits and 40 reviews of edits each day since he joined. You can review the edits by “trusted reviewer” Shalini here. You might notice that the Shalini reviews and edits include detailed street level and feature corrections in Vanuatu, Nigeria, Brazil, Guyana, the Dominican Republic, Iceland, Montenegro, Puerto Rico, Paraguay, Azerbaijan, Moldova, Macedonia, Romania, Iran, Viet Nam, India, Nepal, Pakistan, Dubai, the United States and many other countries.
Do you think “trusted reviewer” Shalini uses compilation sources outside of those provided to him and all other users by Google? Of course not! In addition, I think we can safely assume that “trusted reviewer” Shalini is not applying local knowledge to the majority of the edits he makes himself or the submitted edits that he reviews. In essence, the skills Shalini appears to have are familiarity with how Map Maker works and the ability to evaluate suggested edits based on a belief about whether they are supported by the imagery provided by Google (Street View and/or satellite imagery). If this is the case, then, where is the “personal local knowledge” that Google requires of its reviewers? More importantly, where is the “authority” in the Google Map Maker system?
During this research (fact checking earlier today), I went back to the link to Shalini on the page where he corrected my edit. I was shocked to see that instead of the information provided above (which you could still find by clicking that link as of tonight), it linked to a new page indicating that Shalini’s stats were now 47 days with 257 edits and 420 reviews. All of the reviews were now exclusively for the United States. How curious. How did this change? Same link location, different days, different results. Ya gotta love the Internet. It’s so authoritative. Well, let’s move on to the next reviewer.
According to Google, “trusted reviewer” Abhilash (details here) has been a member for 45 days, with 739 edits and 275 reviews of edits. The Abhilash reviews and edits appear to be scattered across the United States with a minor focus on paths and trails. After reviewing several of the Abhilash edits and reviews of edits, I concluded that the reference material used by this reviewer, also, appears limited to Google Maps, Street View and the satellite imagery that Google provides to all users. Local knowledge was not in evidence, given the number of states and localities within which Abhilash contributed and reviewed edits.
“Trusted reviewer” Nigar was not linked to on my edit page, but I was able to find him, and in his list of reviews was one of the edits I contributed that he reviewed – so he was the Nigar for whom I was searching. Nigar has, apparently, been contributing for 161 days, including 1,569 edits and 10,183 reviews of edits contributed by others, for an average of approximately 63 reviews per day.
It seemed as if it took me forever to page through the dates of his latest reviews of edits when I was trying to establish that he was the Nigar who reviewed one of my edits. I became so interested in his productivity that I decided to analyze his activities on Map Maker for a given day. I selected May 16, 2011 for no other reason than that the idea occurred to me while I was looking at some of the reviews he approved on that date.
Did you know that “trusted reviewer” Nigar reviewed edits contributed by others on May 16th for 12 hours and 29 minutes straight? During that span he reviewed 79 edits, or one every 9.5 minutes. The edits were spread across a number of localities in 21 of the states of the United States.
Stunning, isn’t it? But, as they say on television commercials, “there’s more.” Yes, “trusted reviewer” Nigar found time to perform 30 edits of his own in eight states while undertaking his Herculean work in reviewing 79 edits contributed by others. In total, Nigar was involved in reviewing or creating 109 corrections (30 edits, 79 reviews) in 21 states, popping one out every 6.9 minutes over a 12.5 hour period of non-stop map editing on May 16.
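For anyone who wants to check my arithmetic, the division is simple enough to script (the counts and the time span are what I tallied from Nigar’s public history):

```python
# Nigar's May 16 throughput, as tallied from his public Map Maker history.
span_minutes = 12 * 60 + 29   # 12 hours 29 minutes = 749 minutes

reviews = 79                  # reviews of edits contributed by others
own_edits = 30                # edits Nigar made himself
total_changes = reviews + own_edits  # 109 map changes in one sitting

print(f"one review every {span_minutes / reviews:.1f} minutes")        # ~9.5
print(f"one change every {span_minutes / total_changes:.1f} minutes")  # ~6.9
```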
Wow, I wish my employees (that’s me) worked that hard! Other pages in Nigar’s portfolio are also filled with large numbers of edits that occur during one day. For instance, on May 20, he edited and reviewed approximately 70 locations. Hmm. Who does this Nigar work for that he can spend so much time editing Map Maker? Or maybe he just has a lot of spare time. Whatever the case, it is highly improbable that “trusted reviewer” Nigar has a working knowledge of street level geography in localities scattered across 21 states that would allow him to review the edits of others based on his “personal local knowledge of that place” and the imagery information provided by Google.
One of the comments submitted on yesterday’s blog indicated that the Google editing system was complex because it needed to accommodate newbies and “power users”. I admit that I had not thought of people freely volunteering to be power users for a profit-based company like Google. I realize that there are a number of power users working on OSM, but I had always assumed that these contributors were either: a) dedicated to the notion of an open, world-wide street level database, or b) hoping to create an open database that they could eventually use to support a business or another opportunity that might make moolah.
I suppose volunteering all your spare time to improve Google Maps is a possible explanation for the work of “trusted reviewer” Nigar, and if so, Google should at least send him the bedding shown below, which can be purchased here (thanks to Duane Marble for pointing these out to me).
Having taken a look at the contributions of three of the four “trusted reviewers” who reviewed my edits, I think I can provide some preliminary answers to the questions I asked earlier in this blog.
1) “Do the Map Maker Reviewers really have local knowledge?” Although my experience with the review system is limited, I suspect that many of the reviewers of Map Maker edits in the United States do not have the requisite familiarity with local places to review edits “…only through your personal knowledge of a local place,” as required by Google. This lack could change over time as more locals become involved in the process, which was opened to them in the U.S. approximately one month ago. However, Google Map Maker is off to a poor start, at least if my experience is any measure of the process.
2) “How does Google measure that quality (local knowledge)?” I saw no evidence that Google directly measures or tries to measure the local knowledge of its reviewers. It may treat its rating system, based on how many of a person’s reviews of edits are accepted or overturned, as a surrogate measure of local knowledge, but if so, it is confounding the issues involved in the review process. Further, if few reviewers really have the local knowledge to review edits on the basis of familiarity with local geography, then how authoritative can this system be?
3) “How does Google enforce the local knowledge requirement?” Sorry to say this, but on the basis of my limited analysis, they don’t. I have a suspicion that they are casting a “blind eye” on the issue in an attempt to build critical mass, but I have no substantive evidence to support that thought, at least not yet.
Based on the admittedly modest amount of research I conducted, I have concluded that the “trusted reviewers” Shalini, Abhilash, and Nigar operate within the research boundaries of the information normally provided in Google Maps and mirrored in Map Maker. It does not look like they could possibly bring local knowledge to the majority of their edits or their reviews of edits. Yet Google’s own guidelines require that “your personal local knowledge” be used in making and reviewing edits. If this is true, how can these reviewers, who evaluate contributed edits submitted from a wide variety of geographic locations (and other reviewers who presumably operate in the same manner), qualify as “trusted reviewers”? Just to be sure you understand me, I am not suggesting that these reviewers are being dishonest, just that they likely do not have the required local knowledge to critically evaluate the edits they review.
If it took only a little digging on my part to unearth this problem, can Google be unaware of it? Hmmm. Wouldn’t it be something if some of these “trusted reviewers” were Google employees, or compensated by Google, or maybe not even located in the United States, or had never been in the United States? You know, just to get the crowdsourcing ball rolling in the U.S. and generate the critical buzz needed to attract enough contributors to keep it rolling? Now I get to use a phrase I hate, but can’t resist using here – it is “I’m just sayin…”
However, we need to remember that crowdsourced systems are considered self-healing over time. Edits are pushed out so that other people can see them and correct them, if necessary. The crucial issue is whether there are enough edits to provoke reactions/corrections in the population that uses the map base. More use and correction is thought to improve the quality of the data. Whether this self-healing actually occurs in a crowdsourced system remains unclear at this time and should be a fertile area for those interested in researching crowdsourcing and map compilation.
The use of crowdsourced data to solve the types of editing problems common in map compilation is one that might not work out well in the U.S., at least not based on Google’s current approach. It is my sense that Google does not want to create a field organization similar to that employed by Navteq, nor does it want to manage an operation that is not driven by algorithms. In other words, using a volunteer, self-organizing workforce is a big gamble for Google, and it is one that has a reasonable probability of failing if the volunteers are left to their own devices.
Recommendations for Improving Map Maker
So, what can Google do? I think you know better than I. However, here are some suggestions (including several contributed by my colleague Steve Guptill).
1. Add improved structure/taxonomy to the features and objects that Google is willing to crowdsource.
2. Provide alternative paths/solutions/tools when current methods do not allow the user to transfer the type of edit information they had hoped to contribute.
3. Smarten up the edit system. There are too many complex objects that cannot be edited or corrected and it is these data that are major flaws in the Google Map Base.
4. Make metadata available for the satellite and Street View imagery, but only if you really want it to be the crucial factor in edit decisions.
5. Track user IP addresses and use this as part of the process to evaluate whether an edit contributor or a reviewer of edits might have relevant local knowledge (a rough sketch of such a check follows this list).
6. Map Maker is buggy. Fix it.
7. Improve the boundary data files describing the location of edits. Users need to have confidence that you know what you are doing and Map Maker does not yet provide that confidence.
8. The decision process in reviews needs to be more transparent and helpful. A colleague had an edit rejected when he attempted to inform Google through Map Maker that a local chain of banks had changed its name. Google’s reviewer said “Nope.” My contact provided links to sites indicating the name had changed. No reason for the rejection was provided, nor any alternative solution. Good luck with that kind of thank you.
9. For alternative approaches, talk to people who have experience with map compilation relevant to the type of map database you need to support your other strategic initiatives.
10. Spend more time focused on human factors and interface design relevant to map compilation systems.
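On suggestion 5, here is a rough sketch of what an IP-based locality signal might look like – a weight, not proof, since IP geolocation is coarse and easily confounded. The function names and the 50 km radius are my own assumptions for illustration:

```python
import math

# A rough sketch of suggestion 5: use a reviewer's approximate location
# (e.g., from IP geolocation -- coarse and spoofable, so a signal, not
# proof) to weigh whether a review should count as "local." The names
# and the 50 km radius are assumptions for illustration only.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def plausibly_local(reviewer_latlon, edit_latlon, radius_km=50.0):
    """Weight -- never a hard gate -- for whether a reviewer is close
    enough to the edited feature to plausibly have local knowledge."""
    return haversine_km(*reviewer_latlon, *edit_latlon) <= radius_km

# A reviewer geolocated near Denver reviewing an edit in Baltimore:
print(plausibly_local((39.74, -104.99), (39.29, -76.61)))  # False
```

Even a crude signal like this would at least let Google flag the pattern I stumbled on: a single reviewer adjudicating street-level edits in 21 states in a single day.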
There are a host of other things that occurred to me, but this blog is already too long (especially when added to yesterday’s opus), so I stopped the list.
I offer two final insights.
1. Google Map Maker was developed to assist in the creation of maps in countries where maps were either lacking or so expensive and copyright-protected that they were unavailable to the ordinary user. Whether the specific crowdsourced system of map compilation created by Google in the form of Map Maker, which was born of the scarcity of public maps, will provide an advantage in the United States, a country with a rich heritage of publicly available free maps, low cost maps, and free and low cost map data, remains unclear to me.
In countries that lack a viable, public map infrastructure, it is likely that crowdsourcing is a viable method of creating detailed, street level databases. In these cases, the majority of the important contributed knowledge (street names, alignments, route numbers, addresses, points of interests, directionality, etc.) would be provided by local users, since there is no national source that can freely be used as a reference.
But will this model work in countries where national and local maps from authoritative sources are available online from a variety of sources? In these situations, will people be incentivized to contribute local knowledge that could improve the map, or will they choose to serve as remote librarians, providing recommendations based on what they can observe online at other map sites without reference to or observation of local circumstances? If the latter, this becomes a game potentially unattached to local knowledge and one that will not significantly benefit the quality and accuracy of Google Maps.
2. I was amused to find that those editing data in the Map Maker system can do so because they are “editing spam data”. Ain’t life grand in the world of crowdsourcing? How do you recognize spam data? I mean, I’d know it if I ate it, but I am not sure I could recognize it if I saw it. What’s the difference between spam data and a mistake? I suppose it’s intent, but what tool do you use to measure that?
I had intended to compare Map Maker and Google’s efforts at map compilation with those of Navteq as part of this series and will do so next time I put electrons to plasma. But before that, I am going to do something else. I got so interested in the last two blogs that I spent too much time on them and not enough on my consulting practice or on my other interests – like maybe playing that new 12-string guitar my wife Bonnie gifted me for my birthday last week.
On June 16, 2011 I will be speaking at the 2011 New York GeoSpatial Summit. It looks to be a very interesting day and I hope to see some of you there.