Exploring Local
Mike Dobson of TeleMapics on Local Search and All Things Geospatial

The Google Map Maker Review and Authority System

May 24th, 2011 by MDob

Please read yesterday’s blog before reading this one. It will make a lot more sense if you do.
Based on my experience with the Map Maker described in yesterday’s blog, the edit system is deeply flawed, at least in its present incarnation. Just so you know that all of those edits really happened – see that MDob in the right corner?

See. I really did the edits I wrote about.

Unfortunately my experience with the authority system in Google Map Maker was, perhaps, even more troubling than my exposure to the edit system. The “background” for this statement is described below. I point out here that I attempted only three edits and received reviews only on these three edits. As you will read later, those three reviews and an analysis of other edits by these reviewers opened the door to a broader view of the Map Maker and its “trusted reviewers.”

The first edit I attempted, concerning the lane in the parking lot that was erroneously displayed on the Google Map as connecting to an adjacent street, is a prime example of the structural weakness of the edit/authority system used in Map Maker.

While preparing to edit the map, I was quite certain that my recollection that the lane in question did not intersect the adjacent street was correct. I looked closely at Google’s satellite imagery and decided it was not clear enough to allow me to confirm my recollection.

Another alternative for resolving this problem was to use the Google Street View imagery. While it provided clear evidence that the aisle in the lot did not connect to the adjacent street, I could not find the date the imagery was captured; in the absence of that metadata in either Map Maker or Google Maps, I could not resolve the issue.

I concluded that the only way to determine whether the streets connected or not, at least in this case, was field examination. So I hopped in my car, drove to the location, and did a field inspection. The field inspection closed the question. I took a set of photos as positive proof that my assertion that the lane and the road did not connect was true.

Yes, current satellite imagery or current Street View imagery might have been used to resolve the issue. However, since the Map Maker edit reviewers presumably do not have access to the metadata on the date these images were captured, they cannot definitively determine connectivity, or the lack of it, in the road involved in the edit I contributed. We can, also, presume that, unlike Jason Bourne, Google’s trusted reviewers do not have the ability to retask satellites or redeploy Street View vehicles to resolve the situation.

I suppose the “trusted reviewers” could look at the maps on other online services for this particular location. In respect to the edit in question, the Navteq and Bing websites agree with Google. On the other hand, OSM, Yahoo (Navteq data), and TomTom don’t show the parking area at all. Let’s see, that’s three in support and three against. What to do?

I suppose Google’s Map Maker reviewers might be tempted to refer to other sources, like I did for purposes of comparison, but in its Moderation Guidelines for Users of Map Maker Google lays out its position quite clearly:

“While moderating, do not post any material that you know, or should reasonably know to violate any law, contractual obligation, confidential information, proprietary information including copyright or the privacy or publicity rights. Google Map Maker expects you to respect and abide by copyright laws and does not condone any violation of copyright law. Users, moderating User Submissions, are expected to be familiar with the applicable copyright laws in their jurisdiction and in case of a doubt the Google Map Maker team encourages you to consult an expert in the field of copyright law in your jurisdiction for guidance.”

Good for you, Google. How do you police that action? Do you really expect your volunteer users to consult an expert in the field of copyright law for their jurisdiction and all of the jurisdictions in which they review edits? Hmmm.

Well, for comparison purposes only, I examined the area using imagery available on Navteq’s website, but it did not provide enough detail to give a clear answer to the question. Bing’s imagery was superior and showed several trees blocking the access of the lane to the street, but the lack of metadata about the age of this imagery left open the possibility that the intersection could be a more recent construction than that shown in the imagery. Just so you know, I do not consider a copyright date to be a surrogate for the currentness of the imagery or the map data, since the date of the copyright may have no relationship to the age of the information in the work covered by the copyright.

Given this uncertainty about the currentness of sources and the inadequacies of the Google source data, it appears that local knowledge gathered through field observation is the most immediate and authoritative method that can be used to resolve this particularly knotty problem of road intersection, as well as the best way to solve a large class of problems similar in nature. I drove to the location in question because there was no other way available to remedy my uncertainty about the location I was attempting to edit.

So, what’s the Google Moderator team to do when they encounter my assertion that the parking lot lane does not intersect the adjacent street as shown on Google Maps? How could they remedy this situation without observing it? Yes, they could look at surrogate data beyond that which I used, but do you really think they are going to spend that much time ferreting out other sources? I doubt it. If you were a moderator, would you? And how long would it take you to find useful sources that did not violate the restrictions on intellectual property that Google asserts for its reviewers?

Even if the Map Maker trusted reviewers did collect other data for purposes of evaluation, how would they know with any degree of certainty that the source was current and reflected the status of an issue on the ground? The answer is that they would not have a clue what the correct call was in the case of my edit, and in thousands of cases like it. “Trusted Reviewer” Nigar was smart enough to reference Street View, but unless he had access to metadata describing the date the data was collected, Nigar could not have known that his decision was based on the reality of the situation on the ground.

The edit in which I marked the actual driveway into the medical office complex does not look like a difficult call, since it clearly connected with the adjacent street in both the satellite and Street View Imagery. Unfortunately, the imagery appears to lack metadata, so who, other than a local observer, really knows whether it reflects an access point that exists, or one that was reconfigured recently, or perhaps one that has a chain across it?

The decision on the underground horse tunnel edit is, also, troubling to me. The feature cannot be seen on the Google-supplied imagery or on Street View and never will be unless they take a Street View bike down the horse trails. So how was the decision made to accept my edit?

You know, we could speculate for a long time about how the edits could have been made, but let’s skip that dance and see what Google wants its editors and reviewers to do.

In its Moderation Guidelines for Users of Map Maker Google tells editors and reviewers the following:

“Moderation Guidelines

You agree and undertake that the underlying intention behind moderating User Submissions is to remove any submission that you know through your personal local knowledge to be either inappropriate or factually incorrect and to approve such User Submissions that to your personal knowledge are accurate.”

Under a section of the Moderation Guidelines titled “How To Moderate” are these instructions:

“i. Approve- you can approve a User Submission, if from your personal local knowledge you are sure that the User Submission is accurate both in terms of location and its labeling and does not violate the Map Maker Terms of Service in any manner

Guidelines for Users desirous of moderating User Submissions

ii. The following rules shall govern any action taken by you to moderate any User Submission:
You shall moderate only through your personal knowledge of a local place.

Content that should be denied

Content that you know by your personal local knowledge to be factually incorrect, for e.g. a non-existent building or other landmark, a road going over a building or a water body etc.”

Ouch! So all of these attempts at moderating edits are supposed to be heavily weighted towards local knowledge. To me, that makes sense. After all, the benefits of a crowdsourced system are mainly based on the relevant information that local people can provide about the local situations with which they are familiar.
I guess, then, that the questions for the Google Team are 1) “Do the Map Maker Reviewers really have local knowledge?” 2) “How does Google measure that quality?” 3) “How does Google enforce that requirement?”

To get to those answers, I realized that I needed to know something about trusted reviewers and use the trusted reviewers who evaluated my edits as examples of how the process works. (Please note that the information I found is publicly available and resulted from searching information available from Google and Google Map Maker. Indeed, some of it was provided by the reviewers of my edits. My interest in revealing this information is to point out the weaknesses in Google’s review system and I use this information to make several suggestions for improvement at the end of this blog.)

Starting Point

An interview article about Map Maker quotes Lalit Katragadda, one of its inventors: “The most difficult part was not the coding, but the structure,” he says. “After all, how do you know which users to trust?”
The article’s author continued, noting “As anyone who has asked for directions on the street knows, not everyone can make maps.
So the Google India team invented a software solution that treats each new edit like a separate page. Over time, the machine learns which users are trustworthy. When a user has reliably labeled enough points, he graduates from the system and can moderate other users’ map making too.”

Reliably labeled? Who evaluates this measure? How do they know the edit is correct or incorrect? Gee, that’s a great approach, but what does it have to do with local knowledge?
It would seem, then, that the “trusted reviewers” are those who have successfully edited maps using Map Maker and reviewed edits by other contributors to Map Maker. Apparently, by contributing edits that are approved and by reviewing edits by others, you gain “reputation in the system.” I presume that if your edits are always rejected, you get blackballed. If your review of edits by others is reversed on further review, I suppose you get debited and the “cred” that you have in the system is decreased by the ranking system.
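
Google has not published how this scoring actually works, so, purely as an illustration, here is a minimal sketch of a contribution-based reputation score of the sort described above. The event names, weights, and promotion threshold are hypothetical stand-ins of my own, not Google’s; notice that nothing in the inputs measures local knowledge.

# Hypothetical sketch of a contribution-based reputation ("cred") score.
# The events, weights, and threshold are invented for illustration only.
CRED_WEIGHTS = {
    "edit_approved": 2.0,     # your contributed edit is approved
    "edit_rejected": -3.0,    # your contributed edit is rejected
    "review_upheld": 1.0,     # a review you made survives later scrutiny
    "review_reversed": -2.0,  # a review you made is overturned later
}

PROMOTION_THRESHOLD = 50.0    # hypothetical score at which a user "graduates"

def reputation(history):
    """Sum the cred earned (or lost) over a user's edit/review history."""
    return sum(CRED_WEIGHTS[event] for event in history)

def is_trusted_reviewer(history):
    """A user becomes a 'trusted reviewer' once the score passes the bar."""
    return reputation(history) >= PROMOTION_THRESHOLD

# Example: 30 approved edits and 5 upheld reviews outweigh 2 rejections,
# yet none of these inputs says anything about local knowledge.
history = ["edit_approved"] * 30 + ["edit_rejected"] * 2 + ["review_upheld"] * 5
print(reputation(history), is_trusted_reviewer(history))   # 59.0 True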

While this “cred” system is interesting, it has little to do with measuring the local knowledge of the Google Map Maker Reviewers. In addition, the rating system is not a reasonable method to establish “authority”, although it is a way to establish popularity, or it could be regarded as a measure of how few actual map users critically view Google maps. Perhaps it could be interpreted as a measure of those reviewers who evaluate solely on the basis of satellite and Street View imagery?

In essence, the major problem with the “trusted reviewer” concept is that the information available to the “trusted reviewer” to evaluate a contributed edit is at best comparable to that available to the contributors of edits, and usually less valuable since it is not influenced by local knowledge. Based on my limited examination of Google Map Maker, I have concluded that the trusted reviewers in the Google Map Maker System may have limited or no geographical knowledge of the locations that they edit, or for which they review contributed edits.

The “trusted reviewers” who reviewed my edits were identified only by these names: Shalini, Abhilash, Hemant and Nigar. Edits from two of the reviewers (Shalini and Abhilash) provided links to their history of accomplishments using the Map Maker system. I was able to find details on trusted reviewer Nigar, but nothing for Hemant. What I was able to find, however, provided several useful insights.

For example, “trusted reviewer” Shalini has been editing and reviewing edits in Map Maker for 214 days. During that time, he produced 10,572 edits and 8,599 reviews, including edits of 3,245 features. In essence, Shalini has produced an average of 50 edits and 40 reviews of edits each day since he joined. You can review the edits by “trusted reviewer” Shalini here. You might notice that the Shalini reviews and edits include detailed street level and feature corrections in Vanuatu, Nigeria, Brazil, Guyana, the Dominican Republic, Iceland, Montenegro, Puerto Rico, Paraguay, Azerbaijan, Moldova, Macedonia, Romania, Iran, Viet Nam, India, Nepal, Pakistan, Dubai, the United States and many other countries.

Do you think “trusted reviewer” Shalini uses compilation sources outside of those provided to him and all other users by Google? Of course not! In addition, I think we can safely assume that “trusted reviewer” Shalini is not applying local knowledge to the majority of the edits he makes himself or the submitted edits that he reviews. In essence, the skills Shalini appears to have are familiarity with how Map Maker works and the ability to evaluate the suggested edits based on a belief about whether they are supported by the imagery provided by Google (Street View and/or satellite imagery). If this is the case, then, where is the “personal local knowledge” that Google requires of its reviewers? More importantly, where is the “authority” in the Google Map Maker System?

During this research (fact checking earlier today), I went back to the link to Shalini on the page where he corrected my edit. I was shocked to see that instead of the information provided above (which you can still find by clicking that link as of tonight), it linked to a new page that indicated that Shalini’s stats were now 47 days with 257 edits and 420 reviews. All of the reviews were now exclusively for the United States. How curious. How did this change? Same link location, different days, different results. Ya gotta love the Internet. It’s so authoritative. Well, let’s move on to the next reviewer.

According to Google, “trusted reviewer” Abhilash (details here) has been a member for 45 days, with 739 edits and 275 reviews of edits. The Abhilash reviews and edits appear to be scattered across the United States with a minor focus on paths and trails. After reviewing several of the Abhilash edits and reviews of edits, I concluded that the reference material used by this reviewer, also, appears limited to Google Maps, Street View and the satellite imagery that Google provides to all users. Local knowledge was not in evidence, given the number of states and localities within which Abhilash contributed and reviewed edits.

“Trusted reviewer” Nigar was not linked to on my edit page, but I was able to find him, and in his list of reviews was one of the edits I contributed that he reviewed – so he was the Nigar for whom I was searching. Nigar has, apparently, been contributing for 161 days, including 1,569 edits and 10,183 reviews of edits contributed by others, for an average of approximately 63 reviews per day.

It seemed as if it took me forever to page through the dates of his latest reviews of edits when I was trying to establish that he was the Nigar who reviewed one of my edits. I became so interested in his productivity that I decided to analyze his activities on Map Maker for a given day. I selected May 16, 2011 for no other reason than that I had the idea while I was looking at some of the reviews that he approved on that date.

Did you know that “trusted reviewer” Nigar reviewed edits contributed by others on May 16th for 12 hours and 29 minutes straight? During that span he reviewed 79 edits, or one every 9.5 minutes. The edits were spread across a number of localities in 21 of the states of the United States.

Numerous edits reviewed for localities in 21 states.

Stunning, isn’t it? But, as they say on television commercials, “there’s more.” Yes, “trusted reviewer” Nigar found time to perform 30 edits of his own in eight states while undertaking his Herculean work in reviewing 79 edits contributed by others. In total, Nigar was involved in reviewing or creating 109 corrections (30 edits, 79 reviews) in 21 states, popping one out every 6.8 minutes over a 12.5 hour period of non-stop map editing on May 16.
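
For anyone who wants to check the arithmetic behind those rates, the calculation uses only the figures quoted above:

# Recomputing the May 16 review/edit rates quoted above.
from datetime import timedelta

span = timedelta(hours=12, minutes=29)        # 12 hours 29 minutes of activity
reviews, edits = 79, 30

minutes = span.total_seconds() / 60           # 749 minutes
print(minutes / reviews)                      # about 9.5 minutes per review
print(minutes / (reviews + edits))            # about 6.87 minutes per review or edit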

Wow, I wish my employees (that’s me) worked that hard! Other pages in Nigar’s portfolio are also filled with large numbers of edits that occur during one day. For instance, on May 20, he edited and reviewed approximately 70 locations. Hmm. Who does this Nigar work for that he can spend so much time editing Map Maker? Or maybe he just has a lot of spare time. Whatever the case, it is highly improbable that trusted reviewer Nigar has a working knowledge of street level geography in localities scattered across 21 states that would allow him to review the edits of others based on his “personal local knowledge of that place” and the imagery information provided by Google.

Scatter graph of Nigar's reviews and edits throughout the day

One of the comments submitted on yesterday’s blog indicated that the Google editing system was complex because it needed to accommodate newbies and “power users”. I admit that I had not thought of people freely volunteering to be power users for a profit-based company like Google. I realize that there are a number of power users working on OSM, but I had always assumed that these contributors were either: a) dedicated to the notion of an open, world-wide street level database, or b) hoping to create an open database that they could eventually use to support a business or another opportunity that might make moolah.

I suppose volunteering all your spare time to improve Google Maps is a possible explanation for the work of trusted reviewer Nigar and if so, Google should at least send him the bedding shown below, which can be purchased here (thanks to Duane Marble for pointing these out to me).

Dreamland for highly incentivized map reviewers.

Preliminary Conclusions

Having taken a look at the contributions of three of the four “trusted reviewers” who reviewed my edits, I think I can provide some preliminary answers to the questions I asked earlier in this blog.

1) “Do the Map Maker Reviewers really have local knowledge?” Although my experience with the review system is limited, I suspect that many of the reviewers of Map Maker edits in the United States do not have the requisite familiarity with local places to review edits “…only through your personal knowledge of a local place,” as required by Google. This lack could change over time as more locals become involved in the process, which was opened to them in the U.S. approximately one month ago. However, Google Map Maker is off to a poor start, at least if my experience is any measure of the process.

2) “How does Google measure that quality (local knowledge)?” I saw no evidence that Google directly measures or tries to measure the local knowledge of its reviewers. It may regard its rating system based on how many reviews of edits a person makes that are accepted or overturned as a surrogate measure of local knowledge, but if so, this is confounding the issues involved in the review process. Further, if few reviewers really have the local knowledge to review edits on the basis of familiarity with local geography, then how authoritative can this system be?

3) “How does Google enforce the local knowledge requirement?” Sorry to say this, but on the basis of my limited analysis, they don’t. I have a suspicion that they are casting a “blind eye” on the issue in an attempt to build critical mass, but I have no substantive evidence to support that thought, at least not yet.

Summing Up

Based on the admittedly modest amount of research I conducted, I have concluded that the “trusted reviewers” Shalini, Abhilash, and Nigar operate within the research boundaries of the information normally provided in Google Maps and mirrored in Map Maker. It does not look like they could possibly bring local knowledge to the majority of their edits or their reviews of edits. Yet Google’s own guidelines require that “your personal local knowledge” be used in making and reviewing edits. If this is true, how can these reviewers, who evaluate contributed edits submitted from a wide variety of geographic locations, and other reviewers who presumably operate in the same manner, qualify as “trusted reviewers”? Just to be sure you understand me, I am not suggesting that these reviewers are being dishonest, just that they likely do not have the required local knowledge to critically evaluate the edits they review.

If it took only a little digging on my part to unearth this problem, can Google be unaware of it? Hmmm. Wouldn’t it be something if some of these “trusted reviewers” were Google employees, or compensated by Google, or perhaps not even located in the United States, or had never been to the United States? You know, just to get the old crowdsourcing ball rolling in the U.S. and to generate the critical buzz needed to make crowdsourced input large enough to keep the edits coming? Now I get to use a phrase I hate, but can’t resist using here – “I’m just sayin’…”

However, we need to remember that crowdsourced systems are considered self-healing over time. Edits are pushed out so that other people can see them and correct them, if necessary. The crucial issue is whether there are enough edits to provoke reactions/corrections in the population that uses the map base. More use and correction is thought to improve the quality of the data. Whether this self-healing actually occurs in a crowdsourced system remains unclear at this time and should be a fertile area for those interested in researching crowdsourcing and map compilation.

The use of crowdsourced data to solve the types of editing problems common in map compilation might not work out well in the U.S., at least not based on Google’s current approach. It is my sense that Google does not want to create a field organization similar to that employed by Navteq, nor does it want to manage an operation that is not driven by algorithms. In other words, using a volunteer, self-organizing workforce is a big gamble for Google, and it is one that has a reasonable probability of failing if the volunteers are left to their own devices.

Recommendations for Improving Map Maker

So, what can Google do? I think you know better than I. However, here are some suggestions (including several contributed by my colleague Steve Guptill).

1. Add improved structure/taxonomy to the features and objects that Google is willing to crowdsource.

2. Provide alternative paths/solutions/tools when current methods do not allow the user to transfer the type of edit information they had hoped to contribute.

3. Smarten up the edit system. There are too many complex objects that cannot be edited or corrected and it is these data that are major flaws in the Google Map Base.

4. Make metadata available for the satellite and Street View imagery, but only if you really want it to be the crucial factor in edit decisions.

5. Track user IP addresses and use this as part of the process to evaluate whether an edit contributor or a reviewer of edits might have relevant local knowledge (a rough sketch of such a check follows this list).

6. Map Maker is buggy. Fix it.

7. Improve the boundary data files describing the location of edits. Users need to have confidence that you know what you are doing and Map Maker does not yet provide that confidence.

8. The decision process in reviews needs to be more transparent and helpful. A colleague had an edit rejected when he attempted to inform Google through Map Maker that a local chain of banks had changed its name. Google’s reviewer said “Nope.” My contact provided links to sites indicating the name had changed. No reason for the rejection was provided, nor any alternative solution. Good luck with that kind of thank you.

9. For alternative approaches, talk to people who have experience with map compilation relevant to the type of map database you need to support your other strategic initiatives.

10. Spend more time focused on human factors and interface design relevant to map compilation systems.
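
To make recommendation 5 concrete, here is a minimal sketch of how a coarse locality check might work, assuming some IP-geolocation lookup is available. The geolocate_ip function and the 100 km threshold are hypothetical placeholders of mine, not anything Google has described.

# Hypothetical sketch of recommendation 5: flag reviews where the reviewer's
# IP-geolocated position is far from the feature being edited or reviewed.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def appears_local(reviewer_ip, edit_lat, edit_lon, geolocate_ip, max_km=100.0):
    """True if the reviewer's IP geolocates to within max_km of the edit.

    geolocate_ip is a stand-in for whatever IP-to-(lat, lon) service an
    implementer chooses; the 100 km cutoff is an arbitrary illustration.
    """
    lat, lon = geolocate_ip(reviewer_ip)
    return haversine_km(lat, lon, edit_lat, edit_lon) <= max_km

Such a check could only be a hint, since people travel and IP geolocation is imprecise, but it would at least give the system some basis for weighting a reviewer’s claim to local knowledge.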

There are a host of other things that occurred to me, but this blog is already too long (especially when added to yesterday’s opus), so I stopped the list.

Insights

I offer two final insights.

1. Google Map Maker was developed to assist in the creation of maps in countries where maps were either lacking or so expensive and copyright protected that they were unavailable to the ordinary user. Whether the specific crowdsourced system of map compilation created by Google in the form of Map Maker, which is based on the scarcity of public maps, will provide an advantage in the United States, a country with a rich heritage of publicly available free maps, low cost maps, and free and low cost map data, remains unclear to me.

In countries that lack a viable, public map infrastructure, it is likely that crowdsourcing is a practical method of creating detailed, street level databases. In these cases, the majority of the important contributed knowledge (street names, alignments, route numbers, addresses, points of interest, directionality, etc.) would be provided by local users, since there is no national source that can freely be used as a reference.

But will this model work in countries where national and local maps from authoritative sources are available online from a variety of sources? In these situations, will people be incentivized to contribute local knowledge that could improve the map or choose to serve as remote librarians, providing recommendations based on what they can observe online at other map sites without reference to or observation of local circumstances? If so, this becomes a game potentially unattached to local knowledge and one that will not significantly benefit the quality and accuracy of Google Maps.

2. I was amused to find that those editing data in the Map Maker system can do so because they are “editing spam data”. Ain’t life grand in the world of crowdsourcing? How do you recognize spam data? I mean, I’d know it if I ate it, but I am not sure I could recognize it if I saw it. What’s the difference between spam data and a mistake? I suppose it’s intent, but what tool do you use to measure that?

Google Map Maker - Home of the famous spam edits.

I had intended to compare Map Maker and Google’s efforts at map compilation with those of Navteq as part of this series and will do so next time I put electrons to plasma. But before that, I am going to do something else. I got so interested in the last two blogs that I spent too much time on them and not enough on my consulting practice or on my other interests – like maybe playing that new 12 string guitar my wife Bonnie gifted me with for my birthday last week.

On June 16, 2011 I will be speaking at the 2011 New York GeoSpatial Summit. It looks to be a very interesting day and I hope to see some of you there.


Posted in Authority and mapping, Data Sources, Google, Google Map Maker, Google maps, Mike Dobson, Navteq, User Generated Content, crowdsourced map data, map compilation, map updating, openstreetmap

7 Responses

  1. Mikel Maron

    Thanks for this review. I have been looking mainly at MapMaker from the perspective of data licensing issues, highlighting the difference between proprietary licensing of Google and commons based licensing of OpenStreetMap. The OSM model of authority is based much more on community and communication, and no one has this kind of moderation control.

    Anyhow, you mentioned that OSM was missing the parking lot in question. I couldn’t resist adding it, to the best of my ability from your field survey notes and Bing imagery. If I happened to make an error, please do correct it ... your changes will be instantly available.

    http://www.openstreetmap.org/browse/changeset/8240716
    http://www.openstreetmap.org/?lat=33.56501&lon=-117.65853&zoom=17&layers=M

    Thanks for the comment, Mikel.

    The distinctions you draw between the editing systems are quite useful.

    Mike

  2. momers

    Hi again!

    You focused on Google Moderators….the official super reviewers.

    But just wondering, are you aware that even you can moderate edits anywhere and everywhere?

    The self healing you mention, that’s how it happens. Click the Community Edits tab and you will see pending edits in your viewport.

    So say if Person A made a mistake and it was approved by Person B, MDob can go and fix it. And if Person C thinks Mdob made a mistake, they can fix it too…and self healing occurs eventually.

    The case of the US is, ummm, curious, because as you pointed out, maps are good enough and freely available. And since a certain level of reliability is required or expected in the US, every edit is under a lock down. Only in the US is every edit left to be reviewed authoritatively by a Google moderator.
    Not so in the other countries where MM is open for editing and that has worked surprisingly well for people who never had any online or printed maps to begin with. Take Pakistan for example where I edit. You might have seen some videos on YouTube of Lahore Pakistan being mapped, from a clean slate.

    As for reviewers (any reviewer, not just Google reviewers) having local knowledge, well, at least in my part of the world and countries besides the US, it works with a grain of salt.

    Any edit (unless outrageous, like a road drawn not visible in the imagery etc), is assumed to be accurate, as long as it is logical. It is up to the local reviewers, (not Google reviewers) to fight over it and fix it if erroneous. In my part of the world many roads have unknown names, undocumented, but if a local mapper says this is road Y, he/she is trusted though Google reviewers might chime in if a road name was changed without explanation or comments relating to the edit.
    Many regions have teams of mappers, local ’super’ mappers, observing every edit in their area of interest. So as long as an edit is logical it stays.

    Still, some of your questions are very valid, and other mappers have struggled with them and even quit mapping over them in protest.

    P.S. Shalini would be a SHE based on my ‘local’ knowledge of names :)

    oh and your comments are discussed here: https://groups.google.com/d/topic/mapping-misc/jTha4nHi_4U/discussion

    Hi:

    Thanks for your second set of comments. You raise a number of interesting issues.

    At a personal level, I do not have an interest in moderating maps anywhere and everywhere, because I do not have the local knowledge to make a difference. I understand many others do contribute without respect to local knowledge, but I am not sure how this contributes to map quality. It is something that would make for interesting research.

    I had not known that the editing system in the U.S. was approached differently than in the rest of the world. (Probably because I do not have much knowledge that I could contribute to these local geographies and have not attempted to edit them, as mentioned above.)

    Still not comfortable with your ‘grain of salt’ comment in reference to local knowledge. I realize that someone in a distant location could digitize roads, streets, trails, etc. off imagery, but there are a number of attributes, as well as objects that cannot be derived from imagery. If there are no maps to support the interpretation and Street View is not available, how does someone who does not have local knowledge know how to describe map elements of local importance? What an interesting topic.

    While writing the article I had tried to stay away from labeling any of the reviewers he, she, etc. Guess I must have missed one. My apologies to all concerned.

    Thanks again,

    Mike

  3. Thom

    Mike-
    I was hoping you were going to review Google’s Map Maker in an upcoming blog post. I too tried using it a few weeks ago and ultimately gave up because of how buggy the interface was. Moreover, I felt that the people who reviewed my edits did not have the “authority” to deny what I knew to be right. I added a few items myself. Some were ultimately refused because they could not be confirmed on imagery – evidence that the approvers did not know the local area.

    In another edit the reviewer approved a shopping complex outline that I drew, but only after he explained that I should snap it to the road. Apparently GMM is not true “centerline” data for all features. It seems some areal features are expected to have a cartographic appeal (displaced) to them. Hence the name “Map” maker I suppose.

    I may end up being one of the last people to think crowdsourcing is a great source for the very reasons you pointed out. I’m getting tired of the canned answer that people use when questioned about quality in crowdsourced data…..”It ultimately will work itself out”. How is the user to know when the data is right? What if I need to make an immediate decision based on the data provided? Do I have to read through the metadata each time to be sure a “super user” approved the existing content?

    Google has a long way to go before they earn my trust and the trust of others. I think Mike gives some good insights that should be considered.

    Thanks for your comment, Thom.

    You raise a number of interesting perspectives. I am trying to get my arms wrapped around how to create a more useful crowdsourced map compilation system, and one of the main problems is one you pointed out – how am I supposed to know when the data is right. Others would level the same comment about maps from Navteq or TomTom, and this is an interesting issue that deserves examination.

    Mike

  4. momers

    Hi Again,

    Ref. your grain of salt comment above, in our parts of the world, sometimes good enough is sufficient (since there wasn’t anything to begin with)!

    And the self healing process I mention above, it really does work. I think that maybe in the US there isn’t a critical mass of local reviewers yet, but you would be surprised to find that when true local mappers show up things do take a corrective course.

    Why do you assume (do you?) that local reviewers never show up to establish their expertise and knowledge of an area? Google reviewers might not be local but there are local mappers/reviewers always.

    For us, if there are no local mappers, then generally that area has been left alone and non regional mappers end up mapping only the visible roads etc (they might not have 100% correct attributes).

    Business listings etc. start pouring in when the locals end up on Map Maker, and obviously when the mapper says I’m a local and I know that there is a medical dispensary here, then we reviewers really have no reason to doubt him/her (spammy behavior is easy to tell). Innocent till proven guilty, or data presumed correct till shown otherwise!

    I suggest you give MM another try, and maybe more than 3 edits to see if you get anywhere?!

    Hi, Momers:

    Thanks for your comment and good to hear from you again.

    I have not said that local mappers do not show up and contribute their wisdom, but I think everyone should think about the best way to elicit the “Wisdom of the Crowd”. Your mention of absolutely deferring to that which is purported to be “local knowledge” is not a persuasive argument.

    There are issues about POIs and business listings that require specialization. Look at almost any listing in Map Maker and count the number of times that the business name, contact address or telephone number have been changed. While this may be an example of self-healing, is it ever at the same quality level as a business listing published by a directory publisher? I agree, however, that if you have no listings and no directory publisher, something might be better than nothing, but not something that was incorrect the majority of the time. This leads us to the main problem with crowdsourcing. Few researchers are taking the time to evaluate it in an attempt to see what level of accuracy it produces and how to improve that accuracy. My blogs are an attempt to point out that critical research needs to be undertaken to improve crowdsourced systems.

    Next. Whether locals (and which locals) show up in Map Maker is a function of their having time to contribute on a voluntary basis, owning or having access to the equipment needed to participate (computer, internet access), and having an interest in contributing. Not everyone will have the opportunity to contribute. Of those that do, some will choose not to, others will choose to contribute a few things and never return. The interesting question is whether the current generation of super users will continue to voluntarily contribute through the course of their lives and whether or not someone will choose to replace them when they “retire” from the project. If not, what happens to map quality and currency? See my blog before this series for some general problems with the use of crowdsourcing in map compilation.

    I have, though, in various of my articles and blogs raised specific questions about the self-healing processes and the amount of time that it might take to create a map that has a reasonable level of accuracy. Your generalizations do not persuade me that I am right or wrong. Again, I readily agree that something might be better than nothing – but the real issue is how much better, and how you would measure that difference.

    I am not interested in beating up Map Maker. Rather I am interested in finding out if crowdsourcing input in map compilation might be improved in a manner that makes it more useful than less. I suppose you can tell that I am not enamored with Map Maker, but I did provide 10 suggestions for improving it.

    Just so you know, I did take the time to follow up on your link and looked at the group comments. I found the discussion to be quite interesting.

    Thanks again for your comments.

    Mike

  5. Pat McDevitt

    Thanks for the detailed reviews, Mike.

    I think a deficiency of the Map Maker approach is that it only employs active community feedback. A better system would also apply passive community feedback (for example, anonymous GPS data) that could be compared to the active data. Since most GPS data includes metadata (including time/date of collection), it could be made available to the reviewers as an additional resource. A review of anonymous vehicle-based GPS data would likely show that no cars travel from the parking lot to the road in question. Perhaps this is something Google will include in future iterations.

    Hi, Pat:

    Thanks for the comment.

    I had presumed that the reason that Google rolled out Map Maker in the United States was that its use of the anonymous probe data gathered from users of phones equipped with Android was proving to be inadequate for purposes of map compilation and updating.

    The Map Maker roll-out in the U.S., I think, was designed to provide the attribute data that cannot be gathered in a probe process or that may be confounded by probe data (e.g., a bicyclist or pedestrian with an Android-based phone who crosses the berm at the end of the parking lot and turns right on the sidewalk – which on the track looks as if a car has exited the parking lot and entered the adjacent street). In any event, it is my belief that Google now has both sides of the coin that could become currency for them in the map compilation wars. It is, however, unclear to me that they know how to process these data (both active and passive) in a manner that will benefit them. Of course, they are smart folks and may soon learn how to take advantage of these data – or not. Perhaps they should ask you about the process, although I presume your lips are sealed (or have been sealed for you).

    I suspect that Google has not been able to harness these data as elegantly as done in TomTom’s Map Share. In addition, I wonder if the number of GPS traces and points that could be available through Android users (anonymously, of course) has overwhelmed the ability to process it in a meaningful time frame. Perhaps we will find some indication of the process as Google continues with its attempt to improve the quality of its map base.

    Thanks again for the comment,

    Mike

  6. Dave

    Most of the “trusted reviewers” are indeed Google contracted staff. They moderate edits submitted by map maker editors from all over the world.

    Thanks for the information, Dave. I had looked through a Map Maker blog that someone pointed me to after I wrote the blogs on Map Maker and discovered the true association between Google and its “trusted reviewers”. Pretty interesting relationship. Thanks for reaffirming; I appreciate your taking the time to comment.

    Mike

  7. sushant

    nice article, keep posting more dude…

    sushant
    http://www.kickstartwebsite.com/