Gökalp Boz

Decision-Making Under Risk and Uncertainty

This article explores the complexities of decision-making under risk and uncertainty, examining phenomena that challenge the descriptive adequacy of prevailing theories. It considers how cognitive biases such as the Gambler's fallacy, the conjunction and disjunction fallacies, base-rate neglect, confirmation bias, the availability heuristic, and overconfidence can lead individuals away from rational decision-making. A critical analysis of these phenomena uncovers the underlying mechanisms shaping human judgment and decision-making, shedding light on the challenges of navigating uncertain environments.


Descriptive theory aims to elucidate how people make probabilistic judgments, providing insights into the cognitive processes underlying decision-making. On the other hand, normative theory prescribes how individuals should ideally make decisions based on rational principles, serving as a benchmark for evaluating the optimality of decision-making strategies (Damnjanovic & Jankovic, 2014).


Decision-making under risk and uncertainty is a complex and integral aspect of human cognition, influencing various domains such as economics, psychology, and everyday life (Mantel et al., 2005). Descriptive and normative theories play pivotal roles in understanding how individuals navigate uncertain environments, shaping both their perceptions and choices.


Probability theory, a cornerstone of understanding decision-making, is central to both descriptive and normative frameworks. It provides a formal language for quantifying uncertainty and assessing the likelihood of different outcomes. By incorporating probability theory into descriptive and normative models, researchers seek to elucidate the mechanisms underlying human judgment and decision-making, as well as identify potential biases and deviations from rationality (Pleskac et al., 2015).





Figure 1: Illustration of the complexities of decision-making under uncertainty (TeeJay, 2024).


The Gambler's Fallacy

In the realm of decision-making under uncertainty, the Gambler's Fallacy emerges as a compelling cognitive bias that challenges the fundamental concept of independence in economic and financial contexts (Huber et al., 2008). The essence of managing investments lies in diversification, avoiding the perilous act of putting all of one's eggs in one basket. However, the fallacy comes to light when individuals misinterpret the notion of independence, committing errors that can affect crucial decision-making processes.

Independence errors manifest in two ways. First, there is the misconception of assuming two outcomes are independent when, in reality, they are entwined. Take the example of investors who wrongly believe that stocks and bonds operate independently in the market, oblivious to the intricate web of interconnectedness unveiled during the global financial crisis. On the flip side, the second error involves thinking two outcomes are dependent when, in fact, they are not. An illustrative case is the erroneous belief that the outcome of a roulette wheel can be predicted based on its past results (Angner, 2022).



Figure 2: Poker Game (Coolidge, 1894).


At the heart of the Gambler's Fallacy is the cognitive bias where individuals believe that the outcome of a random event is influenced by previous occurrences. This bias leads to the mistaken belief that if an event has happened frequently, it becomes less likely in the future, or vice versa. The fallacy hinges on the anticipation that a departure from the average behavior of a system will be swiftly corrected.
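
A minimal simulation, with parameters assumed purely for illustration, makes the point: in a long run of fair coin flips, the chance of heads immediately after a streak of heads remains one half.

```python
import random

# A minimal sketch with assumed parameters: simulate one million fair coin
# flips and check whether heads becomes less likely right after a streak.
random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

outcomes_after_streak = []
streak = 0
for i in range(len(flips) - 1):
    streak = streak + 1 if flips[i] else 0
    if streak >= 5:                              # at least five heads in a row
        outcomes_after_streak.append(flips[i + 1])

p_heads = sum(outcomes_after_streak) / len(outcomes_after_streak)
print(f"P(heads | five heads in a row) ≈ {p_heads:.3f}")  # stays near 0.5
```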


According to Raue and Scholl (2018), the roots of the Gambler's Fallacy can be traced to two psychological mechanisms. The first is the Representativeness Heuristic, where judgments about the probability of an event are shaped by its resemblance to a prototype or representative category rather than by statistical reasoning. This heuristic elucidates how people might misconstrue the likelihood of a fair coin sequence based on its representativeness. The second mechanism, the Law of Small Numbers, reveals the tendency of individuals to expect small samples to be representative of the larger population, leading to flawed expectations regarding randomness and probability.



Figure 3: Illustration of randomness dice (Zappile, 2024).


Conjunction and Disjunction Fallacies

The conjunction fallacy and the disjunction fallacy cast light on the complexities surrounding the evaluation of probabilities. The conjunction fallacy arises when individuals erroneously believe that a specific combination of events (A and B) is more likely than either of those events occurring in isolation (Hertwig & Gigerenzer, 1999). In essence, it involves overestimating the probability of a conjunction, in which events happen together. Judging the probability that Alessia is a doctor and helps feed the poor to be higher than the probability of either event on its own is one example. Consider, too, a Boeing 747–400 with approximately 6 million parts, each having a very low failure probability. Someone who commits the conjunction fallacy might conclude that the probability of all parts working simultaneously is nearly as high as the probability for any single part, overlooking how many individual events the conjunction involves.
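
A quick calculation makes this concrete. The per-part reliability below is an assumed, purely illustrative figure, not one taken from aviation data:

```python
# Assumed, illustrative figure: each of 6 million parts works on a given
# flight with probability 1 - 1e-7, independently of the others.
n_parts = 6_000_000
p_part_ok = 1 - 1e-7

p_all_ok = p_part_ok ** n_parts          # the conjunction of 6 million events
print(f"P(every part works) ≈ {p_all_ok:.3f}")   # ≈ 0.55, far from certain
```

Even though each part is all but guaranteed to work, the conjunction of six million such events is barely better than a coin flip under these assumptions.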


A related cognitive bias, the planning fallacy, further illustrates the practical significance of the conjunction fallacy (Angner, 2022). Complex projects, akin to puzzles with numerous interconnected pieces, require each component to function for overall success. Even with high probabilities for individual pieces, the likelihood of all elements falling into place may be surprisingly low.



Figure 4: Illustration of a puzzle with numerous interconnected pieces (TeeJay, 2024).


On the other hand, the disjunction fallacy involves the erroneous judgment that a disjunctive event (A or B) is less probable than one of its component events A or B (Lu, 2016). This occurs when someone mistakenly perceives a disjunctive statement as less likely than it really is, even though a disjunction can never be less probable than its most probable component. The disjunction fallacy holds particular significance in risk assessment scenarios where a complex system, such as a car or an organism, relies on multiple elements. Despite the low chance of any individual element failing, the cumulative probability of at least one element failing might be notably higher. Those who fall prey to the disjunction fallacy risk underestimating the overall likelihood of system breakdowns.
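
Running the same kind of arithmetic in the other direction shows how easily disjunctions are underestimated; the component count and failure rate below are again assumed for illustration only:

```python
# Assumed, illustrative figures: 200 components, each failing independently
# with probability 0.01 over the period of interest.
n_components = 200
p_fail = 0.01

p_at_least_one = 1 - (1 - p_fail) ** n_components   # the disjunctive event
print(f"P(at least one component fails) ≈ {p_at_least_one:.2f}")   # ≈ 0.87
```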


Both the conjunction and disjunction fallacies can be explained through the psychological mechanisms of anchoring and adjustment. Overestimating the probability of conjunctions often results from anchoring judgments to the probability of any one conjunct and insufficiently adjusting downward. Conversely, underestimating the probability of disjunctions stems from anchoring judgments to the probability of any one disjunct and failing to adjust upward adequately (Angner, 2022).
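
Put in probability-theoretic terms, the constraint that both fallacies violate is a simple ordering: P(A and B) can never exceed min{P(A), P(B)}, and P(A or B) can never fall below max{P(A), P(B)}. A conjunction is at most as probable as its least probable conjunct, and a disjunction is at least as probable as its most probable disjunct.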


Base-rate Neglect

Base-rate neglect, or the base-rate fallacy, emerges as a cognitive bias where individuals disproportionately disregard or downplay the significance of statistical base-rate information while forming judgments or decisions. The base rate, in this context, serves as the foundational probability of an event occurring within a specific population, offering essential contextual information for rational decision-making. In these judgment scenarios, three crucial factors come into play: the base rate itself, the available evidence, and the conditional probabilities associated with the evidence under both true and false hypotheses. The base-rate fallacy materializes when individuals fail to adequately consider the initial base rate, neglecting a fundamental aspect of the decision-making process (Calderisi, 2024).
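
A standard Bayesian calculation, with numbers assumed here purely for illustration, shows how heavily the answer depends on the base rate:

```python
# Assumed illustrative numbers: a condition with a 1% base rate, a test that
# detects it 90% of the time, and a 5% false-positive rate.
base_rate = 0.01               # P(condition)
p_pos_given_cond = 0.90        # P(positive | condition)
p_pos_given_healthy = 0.05     # P(positive | no condition)

p_positive = (p_pos_given_cond * base_rate
              + p_pos_given_healthy * (1 - base_rate))
p_cond_given_pos = p_pos_given_cond * base_rate / p_positive   # Bayes' rule
print(f"P(condition | positive test) ≈ {p_cond_given_pos:.2f}")   # ≈ 0.15
```

Focusing on the 90 percent hit rate while ignoring the 1 percent base rate is exactly the neglect described above.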



According to Angner (2022), base-rate neglect extends its influence beyond specific scenarios, providing insights into broader cognitive biases. One such manifestation is the planning fallacy, where individuals exhibit an unwarranted optimism in plans and predictions, often aligning them unreasonably closely with best-case scenarios. Those who succumb to the planning fallacy ignore the wealth of base-rate information available regarding the historical performance of similar projects. Surprisingly, from the perspective of rationality theory, the planning fallacy poses an enigma. If individuals were to update their beliefs in a Bayesian fashion, incorporating past project overruns into their estimates, a more accurate projection for future projects would emerge. Base-rate neglect, in this context, serves as a critical lens through which to understand the deviations from rational decision-making and the persistent optimism that characterizes planning fallacies. Unraveling the complexities of base-rate neglect highlights the importance of incorporating foundational statistical information for more rational and contextually aware judgments.




Confirmation Bias

Confirmation bias is a cognitive bias, a systematic pattern of deviation from norm or rationality, in which individuals tend to interpret information in a way that confirms their pre-existing beliefs or hypotheses. It involves giving more weight or attention to evidence that supports one's existing views while downplaying or ignoring evidence that contradicts those views. In simpler terms, individuals affected by confirmation bias seek out, favor, and remember information that aligns with what they already believe. This bias can influence various aspects of decision-making, from everyday judgments to more significant choices. Confirmation bias can lead to the reinforcement of existing beliefs, even in the face of contrary evidence, and hinder objective analysis of information (Lange et al., 2021).


Confirmation bias operates in stark contrast to the Bayesian statistical concept known as "washing out of the priors." In Bayesian inference, prior beliefs are integrated with new evidence to refine probability distributions. However, confirmation bias disrupts this process by amplifying the influence of pre-existing beliefs, hindering the objective incorporation of new data.
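
A small sketch of Bayesian (Beta-Bernoulli) updating, with an assumed coin and assumed priors, illustrates what washing out of the priors looks like when evidence is weighed without bias:

```python
import random

# Assumed setup: a coin that lands heads 70% of the time, observed by two
# people who start from very different priors about its bias.
random.seed(1)
data = [random.random() < 0.7 for _ in range(500)]   # shared evidence

def posterior_mean(prior_heads, prior_tails, flips):
    # Beta-Bernoulli updating: observed counts are added to the prior counts.
    heads = sum(flips)
    return (prior_heads + heads) / (prior_heads + prior_tails + len(flips))

print(round(posterior_mean(1, 9, data), 2))   # skeptic (prior mean 0.1)  -> about 0.7
print(round(posterior_mean(9, 1, data), 2))   # believer (prior mean 0.9) -> about 0.7
```

Confirmation bias interrupts precisely this convergence: if flips that contradict one's prior are discounted or reinterpreted, the two observers never arrive at similar beliefs despite seeing the same data.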


An intriguing facet of confirmation bias is observed in selective exploration. Classic studies reveal that individuals, whether favoring or opposing a particular viewpoint, interpret ambiguous information in a manner that aligns with their pre-existing beliefs. This selective engagement perpetuates cognitive echo chambers rather than fostering convergence through shared exposure to the same information. Illustratively, the archetype of the jealous lover, as portrayed by Marcel Proust, exemplifies confirmation bias. The jealous lover selectively attends to evidence reinforcing suspicions while dismissing any indication of reciprocated affections. This biased encoding and interpretation of events are not confined to personal relationships; they extend to racial biases, where individuals may overlook positive actions by individuals of other races and disproportionately focus on negative instances (Angner, 2022).




Confirmation bias manifests in various domains, including gambling, where individuals erroneously believe they can predict outcomes despite past losses. The bias emerges as a result of selectively remembering instances of correct predictions while conveniently overlooking inaccuracies. This skewed perception perpetuates false beliefs and sustains the illusion of predictability in inherently random events. Psychological research unveils multiple factors contributing to confirmation bias. One key factor is motivation: individuals are prone to ignoring evidence that contradicts their beliefs while readily embracing supportive evidence. Ambiguity further amplifies the bias, as individuals interpret vague or unclear evidence in a manner that aligns with their existing beliefs.


Availability

Within the intricate tapestry of decision-making, the concept of availability emerges as a pivotal determinant, influencing the ease with which information is brought to mind during judgment. Similar to anchoring and adjustment, availability can introduce bias into the decision-making process, leading individuals to rely on easily accessible information rather than conducting a thorough and accurate analysis.


The availability heuristic, a cognitive bias ingrained in human judgment, manifests as a mental shortcut where individuals lean on information readily available or easily accessible to them when assessing the likelihood or frequency of events (Schwarz et al., 1991). This heuristic simplifies decision-making by relying on information that quickly comes to mind, eschewing the intricacies of a more comprehensive analysis.


The pervasive influence of the availability heuristic becomes evident in various phenomena, such as the perception of crime. The belief that violent crime is more prevalent than other types often stems from the ease with which vivid images of violence come to mind, amplified by extensive media coverage. The availability of these mental images contributes to an inflated perception of the frequency of violent incidents.


In the realm of public health, the availability heuristic sheds light on the persistence of anti-vaccination sentiments. Anecdotal instances of adverse reactions to vaccines, even if statistically rare, gain heightened salience and influence perceptions. The heuristic accentuates the emotional impact of individual cases, potentially overshadowing overwhelming evidence supporting the safety and efficacy of vaccines.


According to Sunstein et al. (2003), availability bias also plays a crucial role in perpetuating dangerous behaviors. Individuals who engage in risky activities and emerge unscathed may perceive the associated dangers as exaggerated. This distorted perception, fueled by the availability of personal experiences, can lead to a self-reinforcing cycle. Such cycles, especially when shared among a community, give rise to what are termed availability cascades: cycles in which shared beliefs and behaviors reinforce each other.


Delving deeper, availability bias contributes to the base-rate fallacy, as exemplified in health-related scenarios like cancer diagnoses. Even if individuals are cognizant of the possibility of false positives in medical tests, the availability bias can lead them to remember instances where the test correctly identified cancer more vividly than cases where it erred. This selective recall distorts perceptions, potentially inflating the perceived likelihood of a true positive, thus impacting subsequent decision-making processes.



Overconfidence

Overconfidence refers to a cognitive bias where individuals exhibit an unwarranted and exaggerated belief in their abilities, knowledge, or judgment. In essence, it involves an overestimation of one's performance, accuracy, or the certainty of one's beliefs, beyond what is objectively justified by the evidence or reality. This bias can manifest in various domains, including decision-making, problem-solving, and assessments of personal skills (Fast et al., 2012).


Bayesian updating, a cornerstone of probability estimation, dictates the probabilities that should be assigned to events given the available evidence (Grieco & Hogarth, 2009). These probabilities underpin confidence statements, in which individuals express how certain they are about the likelihood of various outcomes. Calibration, the long-run accuracy of such confidence statements, then becomes paramount: perfect calibration means that statements like "90 percent certain" are accurate about 90 percent of the time in the long run. Calibration does not, however, guarantee that most predictions are correct; a judge who consistently reports low confidence can still be well calibrated, since calibration only requires that stated probabilities match outcomes in the long run.
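
Calibration can be checked mechanically. The sketch below uses made-up predictions, grouping them by stated confidence and comparing each stated level with the observed hit rate:

```python
from collections import defaultdict

# Made-up data: (stated confidence, whether the prediction came true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False), (0.6, True), (0.6, False),
]

by_confidence = defaultdict(list)
for stated, correct in predictions:
    by_confidence[stated].append(correct)

for stated, outcomes in sorted(by_confidence.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} confident -> correct {hit_rate:.0%} of the time")
# A well-calibrated judge's hit rates track the stated levels;
# an overconfident judge's hit rates fall below them.
```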



The paradox of overconfidence surfaces as a cognitive bias that intensifies with higher confidence levels. Research indicates that individuals tend to be most overconfident when their confidence is exceptionally high. Conversely, overconfidence diminishes as confidence ratings decrease, occasionally leading to underconfidence in situations of low confidence. Intriguingly, accumulating more knowledge does not necessarily mitigate overconfidence, and the degree of overconfidence escalates with the difficulty of the judgment task.


The prevalence of overconfidence finds its roots in the scarcity of regular, prompt, and clear feedback in many judgment scenarios. Learning from experience becomes challenging due to confirmation bias, which directs attention toward evidence supporting predictions while neglecting contradictory information. Availability bias compounds the issue by emphasizing easily recallable successful situations, distorting the perceived likelihood of repeated success. Hindsight bias further complicates matters by making past events seem more predictable after the fact, hindering the recognition of the unreliability of past predictions.


The Dunning-Kruger effect adds another layer to the intricacies of overconfidence, revealing a cognitive bias in which individuals with low ability tend to overestimate their competence, while highly competent individuals may underestimate their capabilities. The effect is often associated with a lack of metacognitive awareness, where individuals remain unaware of their incompetence or struggle to gauge the difficulty of tasks for others (Angner, 2022).


The perceptions of competence are not solely driven by objective assessments but rather stem from a top-down approach. Individuals start with preconceived beliefs about their skills, creating a priori estimates or anchors that influence subsequent adjustments. This dynamic interplay of self-assessment involves the intricate balance between cognitive skills required for a task and metacognitive abilities to evaluate one's performance.



Figure 10: Illustration of metacognition (Alsalloom, 2024).


Discussion

The theory of probability, while a powerful framework, faces the daunting task of capturing the nuanced cognitive processes underlying human judgments. As revealed by the examined deviations, there exist circumstances where people's intuitive probability judgments systematically and predictably diverge from theoretical expectations, potentially incurring significant costs.


The observed phenomena not only raise questions about the descriptive adequacy of probability theory but also cast shadows on its normative correctness. The demanding nature of adhering to probability theory in practical decision-making scenarios is a contributing factor to the observed deviations, prompting reflections on the theory's normative standards.


Our exploration has extended to the theoretical tools employed by behavioral economists, particularly within the heuristics-and-biases program. This framework, designed to offer a descriptively adequate theory of probabilistic judgment, acknowledges the functional role of heuristics while recognizing their potential to lead astray. The article underscores the importance of understanding the conditions under which heuristics may err and suggests that awareness could mitigate the likelihood of misjudgments.


Bibliographical References

Mantel, S. P., Tatikonda, M. V., & Liao, Y. (2005). A behavioral study of supply manager decision-making: Factors influencing make versus buy evaluation. Journal of Operations Management, 24(6), 822–838. https://doi.org/10.1016/j.jom.2005.09.007


Damnjanovic, K., & Jankovic, I. (2014). Normative and descriptive theories of decision making under risk. Theoria, Beograd, 57(4), 25–50. https://doi.org/10.2298/theo1404025d


Pleskac, T. J., Diederich, A., & Wallsten, T. S. (2015). Models of decision making under risk and uncertainty. In The Oxford Handbook of Computational and Mathematical Psychology (pp. 209–231). Oxford University Press. Retrieved 2024, from https://books.google.it/books?hl=tr&lr=&id=_dbFBgAAQBAJ&oi=fnd&pg=PA209&dq=%22Probability+theory%22+decision+making+under+risk&ots=-nucDdJC-0&sig=sfqMJW1fWy_RPkadKbr1Lxyz6RM#v=onepage&q=%22Probability%20theory%22%20decision%20making%20under%20risk&f=false.

Huber, J., Kirchler, M., & Stöckl, T. (2008). The hot hand belief and the gambler’s fallacy in investment decisions under risk. Theory and Decision, 68(4), 445–462. https://doi.org/10.1007/s11238-008-9106-2


Angner, E. (2022). Judgment under risk and uncertainty. In A Course in Behavioral Economics (3rd ed., pp. 167–210). Bloomsbury Academic.


Raue, M., & Scholl, S. G. (2018). The use of heuristics in decision making under risk and uncertainty. Psychological Perspectives on Risk and Risk Analysis, 153–179. https://doi.org/10.1007/978-3-319-92478-6_7


Hertwig, R., & Gigerenzer, G. (1999). The "conjunction fallacy" revisited: How intelligent inferences look like reasoning errors. Journal of Behavioral Decision Making, 12(4), 275–305. https://psycnet.apa.org/doi/10.1002/(SICI)1099-0771(199912)12:4%3C275::AID-BDM323%3E3.0.CO;2-M


Lu, Y. (2016). The conjunction and disjunction fallacies: Explanations of the Linda problem by the equate-to-differentiate model. Integrative Psychological and Behavioral Science, 50(3), 507–531.


Calderisi, M. (2024). On the reality of the base-rate fallacy: A logical reconstruction of the debate. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-023-00712-x


Lange, R. D., Chattoraj, A., Beck, J. M., Yates, J. L., & Haefner, R. M. (2021). A confirmation bias in perceptual decision-making due to hierarchical approximate inference. PLOS Computational Biology, 17(11). https://doi.org/10.1371/journal.pcbi.1009517


Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61(2), 195–202. https://doi.org/10.1037/0022-3514.61.2.195


Sunstein, C. R., Gilovich, T., Griffin, D. W., & Kahneman, D. (2003). Hazardous heuristics. The University of Chicago Law Review, 70(2), 751. https://doi.org/10.2307/1600596


Fast, N. J., Sivanathan, N., Mayer, N. D., & Galinsky, A. D. (2012). Power and overconfident decision-making. Organizational Behavior and Human Decision Processes, 117(2), 249–260. https://doi.org/10.1016/j.obhdp.2011.11.009


Grieco, D., & Hogarth, R. M. (2009). Overconfidence in absolute and relative performance: The regression hypothesis and Bayesian updating. Journal of Economic Psychology, 30(5), 756–771. https://doi.org/10.1016/j.joep.2009.06.007


Visual Sources
