Task Assignment Model for Crowdsourcing Software Development: TAM

  • Tunio, Muhammad Zahid (School of Software Engineering, Beijing University of Posts and Telecommunications) ;
  • Luo, Haiyong (Institute of Computing Technology, Chinese Academy of Sciences, Beijing Key Laboratory of Mobile Computing and Pervasive Devices) ;
  • Wang, Cong (School of Software Engineering, Beijing University of Posts and Telecommunications) ;
  • Zhao, Fang (School of Software Engineering, Beijing University of Posts and Telecommunications) ;
  • Gilal, Abdul Rehman (Dept. of Computer Science, Sukkur IBA University) ;
  • Shao, Wenhua (School of Software Engineering, Beijing University of Posts and Telecommunications)
  • Received : 2017.09.19
  • Accepted : 2018.02.08
  • Published : 2018.06.30

Abstract

Selecting a suitable task from the large set of available tasks is an intricate job for developers in crowdsourcing software development (CSD). Likewise, evaluating the thousands of task submissions from developers is a tiring and time-consuming job for the platform. Previous studies have stated that managerial and technical aspects are of prime importance to the success of software development projects, but these two aspects can be more effective and conducive when combined with human aspects. The main purpose of this paper is to present a conceptual framework for a task assignment model based on personality types, as a basis for future research. The framework provides a basic structure that helps CSD workers find suitable tasks and allows the platform to assign tasks directly by matching each task to a worker's personality, because personality is an internal force that shapes the behavior of developers. Consequently, this research presents a Task Assignment Model (TAM) from the developers' point of view; it also gives the platform the opportunity to assign tasks to CSD workers directly according to their personality types.
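The abstract describes TAM only at the conceptual level, so the snippet below is a minimal, purely illustrative sketch of how a personality-to-task matcher in the spirit of TAM could be structured, assuming MBTI-style personality codes and a hand-written task-category preference table. The categories, type mappings, and reward-based ranking are hypothetical placeholders for illustration, not the model proposed in the paper.

```python
# Illustrative sketch (not the paper's implementation): match CSD tasks to a
# developer by MBTI-style personality type, then rank the matches by reward.
from dataclasses import dataclass

# Hypothetical preference table: task category -> personality types assumed
# to favour it. These mappings are placeholders, not findings from the paper.
TASK_PREFERENCES = {
    "design":         {"ENTP", "INTJ"},
    "implementation": {"ISTJ", "INTP"},
    "testing":        {"ISTP", "ESTJ"},
    "documentation":  {"ISFJ", "ENFJ"},
}

@dataclass
class Task:
    task_id: str
    category: str
    reward: float

@dataclass
class Developer:
    name: str
    personality: str  # MBTI-style four-letter code, e.g. "ISTP"

def recommend_tasks(dev: Developer, tasks: list[Task]) -> list[Task]:
    """Return tasks whose category prefers the developer's personality type,
    ordered by reward (a stand-in ranking criterion)."""
    matched = [t for t in tasks
               if dev.personality in TASK_PREFERENCES.get(t.category, set())]
    return sorted(matched, key=lambda t: t.reward, reverse=True)

if __name__ == "__main__":
    tasks = [Task("T1", "testing", 150.0),
             Task("T2", "implementation", 300.0),
             Task("T3", "design", 200.0)]
    dev = Developer("alice", "ISTP")
    for t in recommend_tasks(dev, tasks):
        print(t.task_id, t.category, t.reward)
```

A real CSD platform would learn or elicit the preference table from worker data rather than hard-coding it; the sketch only makes the matching step concrete.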
