BIFER: a biphasic trace filter approach to scalable prediction of concurrency errors
Xi CHANG1,*, Zhuo ZHANG2, Peng ZHANG2, Jianxin XUE3, Jianjun ZHAO1
1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2. Department of Software Engineering, National University of Defense Technology, Changsha 410073, China
3. Department of Software Engineering, Shanghai Second Polytechnic University, Shanghai 201209, China
Predictive trace analysis (PTA), a static trace analysis technique for concurrent programs, offers powerful support for finding concurrency errors that did not appear in an observed program execution. However, existing PTA techniques struggle to scale to large traces that contain numerous critical events. One main reason is that an analyzed trace includes not only redundant memory-access events and threads that cannot contribute to discovering errors beyond the candidates already found, but also many residual synchronization events that continue to burden the feasibility checking of those candidates even after the redundant events are removed. Removing both kinds of events from the trace can significantly improve the scalability of PTA without degrading the quality of its results. In this paper, we propose a biphasic trace filter approach, BIFER for short, that filters out these redundant events and residual events to improve the scalability of PTA in exposing general concurrency errors. In addition, we design a model that records the lock history and the happens-before history of each thread in two ways to achieve efficient filtering. We implement a prototype tool, BIFER, for Java programs on top of a predictive trace analysis framework. Experiments show that BIFER improves the scalability of PTA across all of the analyzed traces.
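To make the notion of redundant events concrete, the following is a minimal Java sketch of one plausible first-phase filter: it keeps every synchronization event but drops read and write events on memory locations that only a single thread ever accesses, since such accesses cannot take part in a predicted concurrency error. The Event and Kind types, the single-thread-location rule, and the class name TraceFilterSketch are illustrative assumptions for exposition only; they are not BIFER's actual data structures or filtering criteria, which additionally prune residual synchronization events in a second phase.

import java.util.*;

// Illustrative sketch only: a simple first-phase filter over a recorded trace.
public class TraceFilterSketch {

    enum Kind { READ, WRITE, LOCK, UNLOCK }

    // A trace event: the acting thread, the event kind, and the accessed
    // memory location or lock object (both represented here as strings).
    record Event(long threadId, Kind kind, String target) {}

    // Keep all synchronization events; drop memory accesses to locations
    // that only one thread ever touches, since they cannot contribute to
    // any predicted concurrency error.
    static List<Event> filterRedundantAccesses(List<Event> trace) {
        Map<String, Set<Long>> threadsPerLocation = new HashMap<>();
        for (Event e : trace) {
            if (e.kind() == Kind.READ || e.kind() == Kind.WRITE) {
                threadsPerLocation
                    .computeIfAbsent(e.target(), k -> new HashSet<>())
                    .add(e.threadId());
            }
        }
        List<Event> filtered = new ArrayList<>();
        for (Event e : trace) {
            boolean isAccess = e.kind() == Kind.READ || e.kind() == Kind.WRITE;
            if (!isAccess || threadsPerLocation.get(e.target()).size() > 1) {
                filtered.add(e);
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        List<Event> trace = List.of(
            new Event(1, Kind.LOCK,   "l"),
            new Event(1, Kind.WRITE,  "x"),  // shared location: kept
            new Event(1, Kind.WRITE,  "y"),  // thread-local location: dropped
            new Event(1, Kind.UNLOCK, "l"),
            new Event(2, Kind.LOCK,   "l"),
            new Event(2, Kind.READ,   "x"),  // shared location: kept
            new Event(2, Kind.UNLOCK, "l")
        );
        filterRedundantAccesses(trace).forEach(System.out::println);
    }
}

The sketch mirrors the division of labor the abstract describes: shrinking the trace before the (much more expensive) feasibility checking runs, so that any precision-preserving pruning directly reduces the cost of the later analysis.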