Challenges of human–machine collaboration in risky decision-making
Wei XIONG, Hongmiao FAN, Liang MA, Chen WANG
Laboratory of Enhanced Human–Machine Collaborative Decision-Making, Department of Industrial Engineering, Tsinghua University, Beijing 100084, China
Abstract  The purpose of this paper is to delineate the research challenges of human–machine collaboration in risky decision-making. Technological advances in machine intelligence have enabled a growing number of applications of human–machine collaborative decision-making, making it desirable to achieve superior performance by fully leveraging the complementary capabilities of humans and machines. In risky decision-making, a human decision-maker is vulnerable to cognitive biases when judging the possible outcomes of a risky event, whereas a machine decision-maker copes poorly with new and dynamic contexts in which information is incomplete. We first summarize the features of risky decision-making and the cognitive biases to which human decision-makers are prone. We then argue for the necessity and urgency of advancing human–machine collaboration in risky decision-making. Next, we review the literature on human–machine collaboration in general decision contexts from the perspectives of human–machine organization, relationship, and collaboration. Finally, we identify the challenges of enhancing human–machine communication and teamwork in risky decision-making and outline avenues for future research.
Keywords
human–machine collaboration
risky decision-making
human–machine team and interaction
task allocation
human–machine relationship
Corresponding Author(s):
Liang MA, Chen WANG
Just Accepted Date: 17 December 2021
Online First Date: 19 January 2022
Issue Date: 14 February 2022