Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129

Front. Comput. Sci., 2020, Vol. 14, Issue 6: 146207. https://doi.org/10.1007/s11704-019-8418-4
RESEARCH ARTICLE
Quality assessment in competition-based software crowdsourcing
Zhenghui HU, Wenjun WU, Jie LUO(), Xin WANG, Boshu LI
State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China
Abstract

Quality assessment is a critical component of crowdsourcing-based software engineering (CBSE), as software products are developed by a crowd with unknown or varied skills and motivations. In this paper, we propose a novel metric called the project score to measure the performance of projects and the quality of products in competition-based software crowdsourcing development (CBSCD) activities. To the best of our knowledge, this is the first work to address the quality issue of CBSE from the perspective of projects rather than individual contests. In particular, we develop a hierarchical quality evaluation framework for CBSCD projects and propose two metric aggregation models for project scores. The first is a modified Squale model that can locate software modules of poor quality; the second is a clustering-based aggregation model that accounts for the different impacts of development phases. To test the effectiveness of the proposed metrics, we conduct an empirical study on TopCoder, a well-known CBSCD platform. The results show that the proposed project score is a strong indicator of the performance and product quality of CBSCD projects. We also find that the clustering-based aggregation model outperforms the Squale-based one, improving the performance evaluation criterion of the aggregation models by an additional 29%. Our approach to quality assessment for CBSCD projects could help software managers assess the overall quality of a crowdsourced project consisting of programming contests.

Keywords: crowdsourcing; software engineering; product quality; competition; evaluation framework; metric aggregation
Corresponding Author(s): Jie LUO   
Just Accepted Date: 23 August 2019   Issue Date: 17 March 2020
 Cite this article:   
Zhenghui HU, Wenjun WU, Jie LUO, et al. Quality assessment in competition-based software crowdsourcing[J]. Front. Comput. Sci., 2020, 14(6): 146207.
 URL:  
https://academic.hep.com.cn/fcs/EN/10.1007/s11704-019-8418-4
https://academic.hep.com.cn/fcs/EN/Y2020/V14/I6/146207
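
The first aggregation model described in the abstract builds on Squale-style metric aggregation. As a rough illustration of the general Squale idea only (not the paper's modified model, whose details are not reproduced on this page), the Python sketch below aggregates per-module quality marks with a penalizing transform, so that a single poorly rated module drags the aggregate down and can therefore be located. The function name squale_aggregate, the mark range [0, 3], and the hardness factor lam = 9 are illustrative assumptions, not values taken from the paper.

import math

def squale_aggregate(marks, lam=9.0):
    # Aggregate per-module quality marks (assumed here to lie in [0, 3], higher is better).
    # Each mark m is transformed with lam ** -m, which weights poor marks heavily; the
    # transformed values are averaged and mapped back, so the aggregate is dominated by
    # the worst modules instead of being hidden behind a plain arithmetic mean.
    if not marks:
        raise ValueError("no marks to aggregate")
    penalized = [lam ** -m for m in marks]
    return -math.log(sum(penalized) / len(penalized), lam)

# One weak module (0.5) pulls the aggregate far below the arithmetic mean,
# which is what makes poor-quality modules easy to locate.
marks = [2.8, 2.9, 2.7, 0.5]
print(round(squale_aggregate(marks), 2))   # roughly 1.1: the Squale-style aggregate
print(round(sum(marks) / len(marks), 2))   # roughly 2.2: the plain mean, for comparison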