OSM vs. the Mechanical Turk – A New Option For Mappers?
I wrote Muki Haklay to ask a question about his recent, interesting paper titled “How Many Volunteers Does It Take To Map An Area Well? The validity of Linus’ law to Volunteered Geographic Information”. You can download a copy from UCL Discovery.
I asked Muki if he thought that the error signatures he discussed in the article could be a function of the variability between different GPS devices or, potentially, reflect the variability in the GPS readings from specific devices located in different positions within a vehicle (car, bicycle, skates, etc.). Muki responded that he felt the accuracy issue may be more related to the quality of the aerial imagery that the volunteers use, noting that imagery and not GPS receivers are the main source of OSM data (at least in the UK). He indicated, in his note to me, that in the OSM data he studied for the United Kingdom “… the positional accuracy is derived from the quality of the orthorectified imagery.”
I pondered Muki’s response, as it opened another door for me in how to think about Volunteered Geographic Information. I had conceptually linked crowdsourcing and VGI with User Generated Content, believing that those who participated in these activities were somehow contributing local knowledge to the solution of problems that were essentially geographical. I guess that when I thought about the term Volunteered Geographic Information, I made the mental leap that these volunteers were providing content in the form of spatial information reflecting the geographic areas in which they lived or with which they were more than casually familiar. It has now occurred to me that there may not always be a direct beneficial relationship between geographical knowledge and Volunteered Geographic Information.
Of course, this raises the question of whether or not various aspects of geographical information can be input into a system by different contributors and then harmonized to produce beneficial results. Joe digitizes the streets from aerial imagery, Jane attributes the streets with names, Bob adds addresses, Mary contributes Points of Interest and Blotto QCs the results. Based on my experience, I have found the method of sequential editing to work well in the compilation systems used by commercial map database vendors, where the team is structured and incented to perform the specific work assigned to them based on completeness measured by formal quality assurance methods within a specified schedule. But does this division of labor work well in a system like OSM, where the workflow is not managed to ensure that a specific coverage goal is completed on any particular schedule?
Certainly there have been examples of the success of this type of collaboration working for OSM, as can be found in their superb accomplishments in Haiti, Gaza and Baghdad. However, it is possible that these examples are exceptions, rather than common practice, whose results were accelerated by the humanitarian emergencies involved. Might the division of labor in OSM’s UK database result in data quality and completeness variations that preclude the use of the data across its full spatial extent? Since crowdsourced databases are considered to be self-healing over time, the logical question may be, “Can you know when the database, or a segment of it, is acceptable for some use?” Unfortunately, the crowdsourced system is constantly evolving, and people choose to use it based on measures other than overall completeness or fitness for use. Measuring fitness for use is, to be sure, a difficult question, and one that I will return to in a future blog on crowdsourcing.
More troubling to me is the possibility that, in some cases, applications that are described as examples of VGI or crowdsourcing may in fact be examples of the concept of the Mechanical Turk and devoid of any geographical expertise provided by the contributors. Before some of you erupt in shouts of “heresy”, read the rest of the blog, as this is really an interesting and thought-provoking topic.
The original 18th century Mechanical Turk was billed as an automaton chess player seemingly driven by a box of gears and dressed in Turkish garb. Although successfully defeating capable chess players, the Mechanical Turk was a hoax, as a human chess master, concealed inside the apparatus, operated the machine.
In today’s world, the Mechanical Turk, billed as “Artificial Artificial Intelligence”, is Amazon’s automated marketplace for work, in which requestors ask candidates to perform individual tasks, called Human Intelligence Tasks (HITS), defined as tasks requiring human intelligence to solve. Amazon’s Mechanical Turk is a web service that provides a large network of humans with computers to perform tasks of interest to requestors. The requestors can choose to approve completed HITS before paying for them or auto-approve them sight unseen. You can find out more details of Amazon’s version of the Mechanical Turk here.
When I looked, there were 92,767 HITS available, in case you are looking for something to do. Try this link to find HITS. An example of a relevant HITS that I copied from the Amazon Mechanical Turk website follows:
“Find URLs to Department Store hours of operation
Thank you for accepting this task! We need your help to improve our knowledge about Department Stores across the United States.
For each Department Store location, please find a link to the hours of operation (opening and closing hours) on the business’s official website. This information will be used to improve information in maps, websites, and mobile devices (phones, GPSs, etc.). This is what we need you to do:
Hours of Operation:
Many stores list their hours of operation online. We want to capture a URL/link to a page on the business’s official website that shows its hours of operation (opening and closing hours). The examples below demonstrate what these pages might look like for three different department store chains.

Performing this task will net you $0.10 per HIT.”
This is clearly a case of someone augmenting their Points of Interest or business listings database using HITS as a data collection method.
In the example above, the persons performing the HITS are not required to know anything about the department stores whose details they are being asked to collect. In essence, the contributors are being asked to create attribute data for department stores that will be found in business listings used in navigation and related mapping databases.
“So”, you ask “What is the relationship between OSM and the modern version of the Mechanical Turk?” If the majority of OSM contributors to the UK database are spending their time digitizing imagery for the UK portion of the OSM database, as opposed to contributing GPS traces and attributes from paths along which they have traveled or know something about, how likely is it that the OSM effort in the UK benefits from local knowledge to the same extent that it benefits from “free” digitizing?
Muki Haklay’s previous finding (Haklay and Ellul 2010) that there are more unattributed cells in the OSM UK database than in the Meridian data set he often compares it to may just reflect the fact that people are digitizing roads about which they know nothing – operating as if the task were a HIT, but not contributing any local knowledge that might enrich the effort. Based on this observation, we might ask, “Where is the local knowledge in OSM?” An illustration in Haklay’s (2009) report on OSM in the UK appears to indicate that 50 percent of the data in the OSM United Kingdom database (at that time) were contributed by fewer than 30 contributors, raising, for me, the issue of whether OSM is a good example of the transfer of collective geographical knowledge through crowdsourcing or, perhaps, a “fee-less” example of HITS and the Mechanical Turk.
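To make that concentration statistic concrete, here is a minimal sketch (in Python, with made-up numbers, not the actual OSM figures) of how you would compute how few top contributors account for half of all edits in a database like the one Haklay describes:

```python
from itertools import accumulate

def contributors_for_half(edit_counts):
    """Return how many of the most prolific contributors are needed
    to reach at least 50% of all edits (illustrative only)."""
    counts = sorted(edit_counts, reverse=True)
    total = sum(counts)
    for i, running in enumerate(accumulate(counts), start=1):
        if running * 2 >= total:
            return i

# Hypothetical contributor pool: a few heavy mappers
# plus a long tail of one-edit contributors.
edits = [500, 400, 300, 50, 40, 30] + [1] * 80
print(contributors_for_half(edits))  # -> 2
```

With a long-tailed distribution like the synthetic one above, a tiny core of contributors crosses the 50 percent line, which is the shape of the pattern Haklay’s illustration suggests for the UK.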
Is the fact that OSM usage has not made a dent in the fortunes of NAVTEQ, TomTom/Tele Atlas or Google somehow related to this division of labor between digitizers and attributers? Is OSM’s inability to effectively manage its workforce of voluntary contributors to complete coverage in an area further hampered by the lack of local geographic knowledge and is this limitation a fatal flaw?
Haklay and Ellul (2010) appear to indicate that there is a social bias in the OSM database of the UK. Obviously you do need a computer, an internet connection and plenty of time to digitize road networks, conveniences that are not available to all members of society. What that may mean is that the distribution of those empowered to contribute to OSM, at least in the UK, may not mirror the area of coverage targeted for the database. If this limiting scenario is a possibility, then who will attribute data in the areas not populated by those who fit the OSM profile? Hmmm. We may have to write about these issues in detail, but not today.
No, in this blog, I want to focus on the notion that the use of HITS might be a tool that could be used by the commercial map database providers to protect and extend the superiority of their products over the threat posed by crowdsourced databases. After all, the fact that OSM data is gathered by volunteers is, in my opinion, the most significant of the three competitive advantages that OSM has compared to the commercial providers of navigation databases. Of the two other advantages, I note that OSM’s use of open software is not unique, although its mapping efforts benefit from the absence of managerial overhead expense, since OSM is a self-managing entity. It may be that the cost differential between the commercial vendors and OSM could be decreased by using HITS to create portions of a navigation database. Also, since the workers are self-regulated and incented to perform tasks, the management expense of creating the data would decrease roughly in proportion to how broadly the HITS method was applied.
You know, the cost differential could be so significant that it might be time for a new entrant in the navigation database market?
How about this – assuming this was legal (another topic I will not discuss here), how about running a HITS based on examining Street View and encoding every street sign, address and informative marking visible on and along roads and buildings? Yeah, I know, Google’s got this covered, but performing this task in a way that would provide valuable information is a job just made for a HITS. Maybe NAVTEQ should try this out with NAVTEQ True, now being collected by their dilithium-crystal-powered vans?
What if a commercial company adopted the HITS approach to building their navigation database or attributing some of the digitized information already in their database (in fact, that could be the case in the example I provided above)? Could the use of HITS be part of the solution to building and maintaining a low-cost map database? Would people be willing to digitize or attribute digitized lines for a low fee per mile or scene?
Or consider this alternative. Google and others are using various forms of pattern recognition to extract information in imagery, but in certain cases the algorithms have difficulty in returning high-quality data due to problems interpreting the scene. Why not use HITS based on analyzing an image/public domain map duo and have humans unravel the problems that give machines fits? (This is an example of Dobson’s newly patented process popularly known as HITS for FITS)
Yep, there are a number of problems with the HITS approach, and issues of quality control and accuracy of response seem to be leading the list. And yes, we could argue about them for weeks. But let’s cut to the chase. If you could run HITS at a very low cost, you could afford to do it more than once for a specific location or area and mimic the self-healing error process of crowdsourced systems. Could HITS be cheap enough and good enough to provide a sustainable competitive advantage? I think it could be, if properly managed. And wouldn’t it be fun to find out?
Well, that’s enough thought-provoking stuff for today (okay, maybe it was just mildly interesting and not thought-provoking, but interesting enough that I may write about it again).
Note – in various ways, this blog benefited from notes kindly sent to me by Muki Haklay (University College London) and Don Cooke (Esri and almost professional “cellist”) as well as a conversation I had with Pat McDevitt, formerly of TomTom/Tele Atlas and now with MapQuest (that’s an interesting move). However, the mistakes, errors in logic, sarcasm, misspellings and lunatic ravings are, unfortunately, entirely mine.
Speaking of lunatic ravings, have you heard about the Google “Whack-A-Mole Rediscovery Project?”
According to the company, the response to any question about an error on their maps is
“Getting mapping right is a difficult challenge and we are working hard to improve our product. And, yes, the problem was fixed, but then we found the incorrect information again when we spidered a website that had scraped the incorrect information from ours, and we updated our site to reflect this ‘new’ information. In the meantime, the website we spidered had scraped the corrected information recently shown on Google Maps (but now changed), which we will rediscover in a few weeks and change back. Unfortunately, by that time, they will have discovered our current data and changed their data again, which we will rediscover and…”
Haklay, Mordechai (Muki) and Claire Ellul. (forthcoming). Completeness in volunteered geographical information – the evolution of OpenStreetMap coverage in England (2008-2009). Journal of Spatial Information Science.
Haklay, Muki. 2009. Understanding the quality of user generated mapping – comparing OpenStreetMap to Ordnance Survey geodata. (PowerPoint presentation) http://povesham.wordpress.com/2009/01/12/osm-quality-assessment-s4-presentation/