A Crowdsourcing Engine for Mechanized Labor
https://doi.org/10.15514/ISPRAS-2015-27(3)-25
References
1. J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, J. Movellan. Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise. Advances in Neural Information Processing Systems 22. Curran Associates, Inc., 2009, pp. 2035-2043.
2. M. S. Bernstein, G. Little, R. C. Miller, B. Hartmann, M. S. Ackerman, D. R. Karger, D. Crowell, K. Panovich. Soylent: A word processor with a crowd inside. Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST ’10). New York, NY, USA: ACM, 2010, pp. 313-322. doi: 10.1145/1866029.1866078
3. G. Demartini, D. E. Difallah, P. Cudré-Mauroux. ZenCrowd: Leveraging Probabilistic Reasoning and Crowdsourcing Techniques for Large-Scale Entity Linking. Proceedings of the 21st International Conference on World Wide Web (WWW ’12). New York, NY, USA: ACM, 2012, pp. 469-478. doi: 10.1145/2187836.2187900
4. S. M. Yimam, I. Gurevych, R. E. de Castilho, C. Biemann. WebAnno: A Flexible, Web-based and Visually Supported System for Distributed Annotations. Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Sofia, Bulgaria: Association for Computational Linguistics, 2013, pp. 1-6.
5. V. Bocharov, S. Alexeeva, D. Granovsky, E. Protopopova, M. Stepanova, A. Surikov. Crowdsourcing morphological annotation. Computational Linguistics and Intellectual Technologies: papers from the Annual conference “Dialogue”, vol. 1, no. 12(19). Moscow: RSUH, 2013, pp. 109-124.
6. P. Braslavski, D. Ustalov, M. Mukhin. A Spinning Wheel for YARN: User Interface for a Crowdsourced Thesaurus. Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics. Gothenburg, Sweden: Association for Computational Linguistics, 2014, pp. 101-104.
7. S. Lee, S. Park, S. Park. A Quality Enhancement of Crowdsourcing based on Quality Evaluation and User-Level Task Assignment Framework. 2014 International Conference on Big Data and Smart Computing (BIGCOMP). IEEE, 2014, pp. 60-65. doi: 10.1109/BIGCOMP.2014.6741408
8. M.-C. Yuen, I. King, K.-S. Leung. TaskRec: A Task Recommendation Framework in Crowdsourcing Systems. Neural Processing Letters, pp. 1-16, 2014. doi: 10.1007/s11063-014-9343-z
9. D. R. Karger, S. Oh, D. Shah. Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems. Operations Research, vol. 62, no. 1, pp. 1-24, 2014. doi: 10.1287/opre.2013.1235
10. P. Welinder, P. Perona. Online crowdsourcing: Rating annotators and obtaining cost-effective labels. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2010, pp. 25-32. doi: 10.1109/CVPRW.2010.5543189
11. D. E. Difallah, G. Demartini, P. Cudré-Mauroux. Pick-A-Crowd: Tell Me What You Like, and I’ll Tell You What to Do. Proceedings of the 22nd International Conference on World Wide Web (WWW ’13). Rio de Janeiro, Brazil: International World Wide Web Conferences Steering Committee, 2013, pp. 367-374.
12. M. Daltayanni, L. de Alfaro, P. Papadimitriou. WorkerRank: Using Employer Implicit Judgements to Infer Worker Reputation. Proceedings of the Eighth ACM International Conference on Web Search and Data Mining (WSDM ’15). New York, NY, USA: ACM, 2015, pp. 263-272. doi: 10.1145/2684822.2685286
13. A. Sheshadri, M. Lease. SQUARE: A Benchmark for Research on Computing Crowd Consensus. First AAAI Conference on Human Computation and Crowdsourcing, 2013, pp. 156-164.
14. C. M. Meyer, M. Mieskes, C. Stab, I. Gurevych. DKPro Agreement: An Open-Source Java Library for Measuring Inter-Rater Agreement. Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations. Dublin, Ireland: Dublin City University and Association for Computational Linguistics, 2014, pp. 105-109.
15. B. Satzger, H. Psaier, D. Schall, S. Dustdar. Auction-based crowdsourcing supporting skill management. Information Systems, vol. 38, no. 4, pp. 547-560, 2013. doi: 10.1016/j.is.2012.09.003
16. Y. Gao, A. Parameswaran. Finish Them!: Pricing Algorithms for Human Computation. Proceedings of the VLDB Endowment, vol. 7, no. 14, 2014. doi: 10.14778/2733085.2733101
17. L. Tran-Thanh, T. D. Huynh, A. Rosenfeld, S. D. Ramchurn, N. R. Jennings. Crowdsourcing Complex Workflows under Budget Constraints. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15). AAAI Press, 2015, pp. 1298-1304.
18. M. Hosseini, K. Phalp, J. Taylor, R. Ali. The Four Pillars of Crowdsourcing: a Reference Model. 2014 IEEE Eighth International Conference on Research Challenges in Information Science (RCIS), 2014, pp. 1-12. doi: 10.1109/RCIS.2014.6861072
19. A. Panchenko, N. V. Loukachevitch, D. Ustalov, D. Paperno, C. M. Meyer, N. Konstantinova. RUSSE: The First Workshop on Russian Semantic Similarity. Computational Linguistics and Intellectual Technologies: papers from the Annual conference “Dialogue”, vol. 2, no. 14(21). Moscow: RSUH, 2015, pp. 89-105.
For citations:
Ustalov D.A. A Crowdsourcing Engine for Mechanized Labor. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2015;27(3):351-364. (In Russ.) https://doi.org/10.15514/ISPRAS-2015-27(3)-25