Confidence

Decision Theory was born in the 1600s, alongside the emerging science of probability theory as developed by Blaise Pascal and Pierre de Fermat. Its aim was to aid in gambling decisions, helping agents maximise their potential gains. From these somewhat humble beginnings, the discipline has grown beyond simple gambles to explore human decision-making in all potential life situations, attempting to gamify the art of good living into an easily applicable set of normative rules. Like religion before it, the doctrines of Decision Theory promise all those who follow them maximal gains in all their endeavours (in the long term at least).

Modern Decision Theory, pioneered by Frank Ramsey, John von Neumann and Leonard Savage, aimed to build a formal framework that represents an agent's doxastic states and frames decision problems around those states. It is built on two key principles: the Rationality Hypothesis, which requires that an agent has coherent preferences, beliefs and desires, and the Choice Principle, which states that agents should (or do) choose what they most prefer. Decisions under risk, whereby an agent knows the probabilities of each potential outcome (for example, that the probability of rolling a 6 with a fair die is 1/6), are handled by Bayesian Decision Theory in the modern framework. In this Bayesian approach, probabilities represent an agent's level of belief that a specific outcome will occur. For this representation to be compatible with the formal framework, each agent must have a precise, measurable degree of belief for any given credal statement (a statement about degree of belief). By the Rationality Hypothesis, this becomes a central requirement for rationality within the context of Decision Theory.

Ambiguity in the decision context is defined as a situation where agents have incomplete knowledge of the risks involved. An example Ellsberg (1961) uses involves agents betting on differently coloured balls being drawn from an urn. They are told, for example, that an urn contains black and red balls and that they can bet on either colour. They are, however, given no information about the proportion of red to black balls in the urn, making it impossible for them to calculate precise probabilities. Ellsberg found, through a number of experiments, that when given the choice between bets with known risks and bets with ambiguous risks, agents prefer the known probabilities from which they can draw precise degrees of belief. In making this choice the agent knowingly violates the accepted axioms of Decision Theory, which, if followed, define rationality. Agents who knowingly violate these axioms in ambiguous decisions are said to be ambiguity averse. This result can be interpreted in two ways: either situations under ambiguity cause agents to behave irrationally, or Bayesian Decision Theory cannot handle situations under ambiguity and another framework is needed for such situations.

Arguably, the most important decision problems in everyday life are decisions under ambiguity. Policy decisions relating to climate and most decisions in medicine are decisions under ambiguity. A normative framework that accommodates ambiguity aversion is therefore crucial. In an attempt to accommodate ambiguity, decision theorists have in recent years been moving away from classical Bayesian ideals. Brian Hill's Confidence in Belief and Rational Decision (2019) offers a non-Bayesian framework that uses confidence as a solution to the problems raised by ambiguity aversion, specifically for application in real life. Confidence in this context refers to the confidence an agent has in their own beliefs, as distinct from degree of belief, which reflects the confidence an agent has in the truth of a proposition. Confidence is assigned to a degree of belief (or probability), whereas a degree of belief is assigned to a statement. Arguments can be made about the role confidence plays in belief formation: whether agents factor in their own epistemological limitations when determining their degree of belief in a given situation. Hill does not address this issue; however, I argue that this ambiguity about the separation arises mostly in the formation of an individual agent's degrees of belief and should be set aside in the context of social choice decisions, including climate decisions.

The formalism for confidence uses set theory to create a confidence ranking, in which credal statements are sorted into a hierarchy ordered by their level of evidential support. Credal statements higher up the hierarchy are held with higher confidence than statements lower down, reflecting the greater confidence associated with credences supported by larger bodies of evidence. For example, many would hold the credence 'smoking increases one's risk of lung cancer' with higher confidence than the statement 'smoking is good for the lungs', since the former is backed by a large body of scientific evidence and the latter by a much smaller one. Hill suggests the confidence ranking can be interpreted as a committee of scientists voting on their agreement with different credal statements; for example, “The average global temperature will rise 1.5 degrees above the pre-industrial average by 2023”. The largest set in the confidence ranking, containing the statements in which we have the most confidence, represents unanimous agreement among the committee. We are said to have no confidence in statements which receive no votes from the committee.
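
To make the committee picture concrete, here is a minimal sketch in Python of how a ranking might be read off a vote tally. The statements, vote counts and committee size are invented for illustration; this is only a crude approximation of the ordering the formalism produces, not Hill's actual set-theoretic construction.

```python
# A toy confidence ranking under the "committee of scientists" interpretation.
# Statements, vote counts and committee size are illustrative assumptions.

COMMITTEE_SIZE = 10

# Votes of agreement cast by committee members for each credal statement.
votes = {
    "Smoking increases one's risk of lung cancer": 10,   # unanimous
    "Average global temperature will rise 1.5 degrees above the pre-industrial average": 7,
    "Smoking is good for the lungs": 0,                  # no support
}

def confidence_ranking(votes, committee_size):
    """Order credal statements by their share of committee support."""
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    return [(statement, n / committee_size) for statement, n in ranked]

for statement, level in confidence_ranking(votes, COMMITTEE_SIZE):
    label = "no confidence" if level == 0 else f"held with confidence {level:.0%}"
    print(f"{label}: {statement}")
```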

A cautiousness coefficient is used to determine the level of confidence required for a decision in a given context. Hill's model aims to keep the conceptual clarity of the Bayesian approach, with its neat separation of beliefs from conative attitudes, allowing for value-free communication of belief. In many social decision contexts, there is a separation between those making judgements about belief and uncertainty and those assigning values.

We can compare this non-Bayesian approach with the classical Bayesian one through an example:

We have two patients, both experiencing the same symptoms, which a doctor attributes to endometriosis. Patient One has had a scan which shows evidence suggesting abnormal tissue growth in her abdomen, while Patient Two's scan showed no evidence of this. Given that it is common for endometriosis not to show up on ultrasound, this does not lower the doctor's degree of belief that Patient Two has endometriosis. She has assigned both patients the same degree of belief (or probability) that they are suffering from the condition, namely 75%. Should the doctor recommend the same treatment options to both patients if her degree of belief that they have endometriosis is the same?

According to the Bayesian, yes, she should; but the non-Bayesian approach gives a different answer. The set of statements supporting the claim that Patient One has endometriosis is larger than the set supporting the same claim for Patient Two. Patient Two has all of the same symptoms as Patient One, but Patient One has the additional evidential support of a positive scan result. Therefore, the doctor's confidence in the credence that Patient One has endometriosis with a probability of 75% is higher than her confidence that Patient Two has endometriosis with a probability of 75%, as the first credal statement sits higher up the hierarchy than the second. How the doctor decides to recommend treatment will depend on the cautiousness coefficient, which is set by a medical authority rather than the individual doctor, clearly separating judgements (the degree of belief) from values (the level of confidence required). The doctor is then perfectly justified in suggesting a more invasive treatment for Patient One, for whom she has higher confidence in the diagnosis. This would not be possible under the classical Bayesian framework for decision.
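
A toy sketch of this two-patient case, assuming (purely for illustration) that confidence is proportional to the size of the supporting evidence set and that the confidence threshold, the cautiousness coefficient, is fixed in advance by a medical authority. The symptom lists and numbers are invented.

```python
# Both patients receive the same degree of belief, but different confidence.

DEGREE_OF_BELIEF = 0.75  # the doctor's credence that each patient has endometriosis

# Illustrative evidence sets; Patient One's scan adds one extra piece of support.
evidence = {
    "Patient One": {"pelvic pain", "heavy periods", "fatigue", "abnormal tissue on scan"},
    "Patient Two": {"pelvic pain", "heavy periods", "fatigue"},  # scan showed nothing
}

def confidence(supporting_evidence, max_evidence=4):
    # Crude proxy: confidence grows with the size of the supporting evidence set.
    return len(supporting_evidence) / max_evidence

CAUTIOUSNESS_COEFFICIENT = 0.9  # set by the medical authority for invasive treatment

for patient, support in sorted(evidence.items()):
    c = confidence(support)
    decision = ("recommend the more invasive treatment"
                if c >= CAUTIOUSNESS_COEFFICIENT
                else "recommend the less invasive option")
    print(f"{patient}: belief {DEGREE_OF_BELIEF:.0%}, confidence {c:.0%} -> {decision}")
```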

Hill gives the following maxim for decision in his framework:

The higher the stakes in a given decision, the higher the confidence we should have in our choice.

This means that if we are making a decision with the potential for large losses, we must ensure that we have a high level of confidence in the outcomes of the selected choice. The requirement for high confidence can be seen as analogous to ambiguity aversion: rational agents prefer to make decisions whose outcomes they have the highest confidence in, where confidence is determined by evidential support.
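
The maxim could be caricatured as a simple monotone mapping from stakes to a required confidence level, as in the sketch below. The thresholds and options are invented, and the fallback rule is one possible way of cashing out ambiguity aversion, not Hill's own decision rule.

```python
# A sketch of "higher stakes demand higher confidence", with invented numbers.

def required_confidence(stakes: str) -> float:
    """Map the stakes of a decision to the confidence level it requires."""
    return {"low": 0.5, "moderate": 0.75, "high": 0.95}[stakes]

def choose(options, stakes):
    threshold = required_confidence(stakes)
    eligible = [o for o in options if o["confidence"] >= threshold]
    if eligible:
        # Among options we are sufficiently confident about, take the most beneficial.
        return max(eligible, key=lambda o: o["expected_benefit"])
    # No option clears the bar: fall back on the one we are most confident about.
    return max(options, key=lambda o: o["confidence"])

options = [
    {"name": "novel procedure", "expected_benefit": 0.9, "confidence": 0.6},
    {"name": "established therapy", "expected_benefit": 0.7, "confidence": 0.97},
]

print(choose(options, stakes="high")["name"])   # established therapy
print(choose(options, stakes="low")["name"])    # novel procedure
```

At high stakes the higher-benefit but less well-supported option is passed over in favour of the one we are most confident about, mirroring the ambiguity-averse preferences Ellsberg observed.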

This seems like a more practical approach to decision, but it still leaves open some core issues. Firstly, only the quantity of evidential support is considered, not its quality. This ties in with problems in confirmation theory, which explores how evidence confirms hypotheses. Without an accepted theory of confirmation, especially scientific confirmation, this framework will continue to face problems in its practical application.

Marianna Barcenas

Marianna is currently studying for an MSc in Philosophy of Science at the London School of Economics and Political Science. Marianna is interested in Decision Theory and the Philosophy of Mathematics.

https://www.linkedin.com/in/mariannabarcenassimmonsb39653175