ISSN 2071-8594

Russian Academy of Sciences

Editor-in-Chief

Gennady Osipov

E.J. Azofeifa, G.M. Novikova. Development of a Crowdsourcing Multiagent System for Knowledge Extraction

Abstract.

Crowdsourcing is used for a wide variety of tasks on the Internet. From the point of view of knowledge extraction, it helps leverage domain knowledge by gathering individual expert judgments on specific subjects. In spite of crowdsourcing's proven effectiveness in tackling various kinds of problems, researchers have not converged on a standard framework for representing and modelling this approach. In this work, a multiagent system (MAS) is presented as a method for modelling crowdsourcing processes intended to obtain expert knowledge. The system, exemplified by a corpus annotation process, includes a formulation of its goals in terms of uniqueness, value and temporality, and comprises a dynamic reward scheme that produces a real-valued measure of inter-annotator agreement (IAA) while constraining the model to a time window and a reward limit.
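To make the ingredients named above concrete, the Python sketch below is a minimal illustration of a budget- and deadline-constrained reward loop driven by inter-annotator agreement. The function names, the simple pairwise percent-agreement measure, and the majority-vote reward split are assumptions introduced here for illustration only; they are not the MAS reward scheme defined in the paper.

from collections import defaultdict
from itertools import combinations


def labels_by_item(annotations):
    # Group every submitted label by the item it belongs to.
    grouped = defaultdict(list)
    for labels in annotations.values():
        for item, label in labels.items():
            grouped[item].append(label)
    return grouped


def pairwise_agreement(annotations):
    # Observed pairwise agreement over items labeled by two or more annotators;
    # a deliberately simple stand-in for a proper IAA statistic such as kappa.
    agree = total = 0
    for labels in labels_by_item(annotations).values():
        for a, b in combinations(labels, 2):
            agree += int(a == b)
            total += 1
    return agree / total if total else 0.0


def allocate_rewards(annotations, remaining_budget, rounds_left):
    # Split this round's share of the remaining budget among annotators in
    # proportion to how often each matched the per-item majority label.
    # rounds_left stands in for the time window: the budget is spread over
    # the rounds that remain before the deadline, so the reward limit is
    # never exceeded.
    per_round = remaining_budget / max(rounds_left, 1)
    majority = {item: max(set(labels), key=labels.count)
                for item, labels in labels_by_item(annotations).items()}
    scores = {name: sum(int(label == majority[item]) for item, label in labels.items())
              for name, labels in annotations.items()}
    total = sum(scores.values())
    if total == 0:
        return {name: 0.0 for name in annotations}
    return {name: per_round * score / total for name, score in scores.items()}


if __name__ == "__main__":
    # Two hypothetical annotators tag three corpus items with entity labels.
    votes = {
        "annotator_1": {"doc1": "PER", "doc2": "ORG", "doc3": "LOC"},
        "annotator_2": {"doc1": "PER", "doc2": "LOC", "doc3": "LOC"},
    }
    print("IAA:", round(pairwise_agreement(votes), 3))
    print("Rewards:", allocate_rewards(votes, remaining_budget=10.0, rounds_left=5))

In this toy run the agreement score is 2/3, and each round pays out at most one fifth of the remaining budget, split in favor of annotators whose labels track the majority; any real deployment would replace both rules with the paper's own agreement measure and incentive mechanism.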

Keywords:

corpus annotation, crowdsourcing, expert knowledge, intelligent agents, dynamic reward, inter-annotator agreement.

Pp. 40–48.

DOI 10.14357/20718594200104
