Navigating Uncertainty: How Relative Maximum Likelihood Can Refine Your Beliefs
"Unlock smarter decision-making by updating ambiguous beliefs using Relative Maximum Likelihood and adapt to uncertain situations with flexibility."
In today's rapidly evolving world, making decisions under uncertainty is more critical than ever. We often lack complete information, making it difficult to rely on a single probability distribution. In such situations, decision-makers frequently depend on a range of beliefs rather than a single perspective, which requires tools that can effectively refine and update these ambiguous beliefs to improve decision-making.
Traditional methods for updating ambiguous beliefs often fall short. Full Bayesian (FB) updating applies Bayes' rule to every prior in the set, discarding none, which may retain beliefs that fit the observed evidence poorly. At the other extreme, Maximum Likelihood (ML) updating keeps only the priors that assign the highest probability to the observed event, potentially oversimplifying complex situations. These methods, while useful, represent two extremes and fail to capture nuanced, real-world scenarios.
To address these limitations, a new approach called Relative Maximum Likelihood (RML) updating has emerged, offering a more flexible and adaptive method for refining ambiguous beliefs. RML integrates aspects of both FB and ML updating, allowing decision-makers to fine-tune their strategies and navigate uncertainty with greater precision. This innovative technique acknowledges the importance of considering multiple perspectives while adapting to new information.
What is Relative Maximum Likelihood (RML) and How Does It Work?

Relative Maximum Likelihood (RML) updating is a novel approach to refining ambiguous beliefs, particularly when dealing with multiple possibilities or scenarios. Unlike traditional methods that either consider all possibilities equally or discard those that do not precisely fit new data, RML strikes a balance by selectively updating a subset of beliefs. It is designed to linearly adjust from the entire set of beliefs down to those that ascribe the maximum probability to a new event. Developed by Xiaoyu Cheng, RML offers a nuanced way to adapt to new information by combining the principles of Full Bayesian (FB) and Maximum Likelihood (ML) updating.
- Initial Beliefs: RML starts with a set of possible beliefs, known as priors (C), representing various perspectives on an uncertain situation.
- Observed Event: When a new event (E) occurs, RML identifies a subset of these priors that assign the highest probability to the event (C(E)). This subset represents the most likely explanations for what has been observed.
- Linear Contraction: The core of RML is a linear contraction, in which the initial set of priors (C) is shrunk toward the maximum-likelihood subset (C(E)). The adjustment is controlled by a parameter α ranging from 0 to 1: C_α(E) = αC(E) + (1 − α)C. Here, α determines the degree to which the decision-maker is willing to discard priors based on their likelihood: α = 0 keeps the full set C (recovering FB updating), while α = 1 keeps only C(E) (recovering ML updating).
- Updating Beliefs: Once the subset of priors is determined, Bayes' rule is applied to update each prior conditional on the observed event. Bayes' rule is a fundamental concept in probability theory that describes how to update the probability of a hypothesis based on new evidence.
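The four steps above can be sketched in code. The following is a minimal illustration, not an implementation from Cheng's paper: it assumes a finite set of priors, each represented as a dict mapping states to probabilities, an event given as a set of states, and it forms the contracted set by taking all pairwise mixtures αp + (1 − α)q with p in C(E) and q in C.

```python
def bayes_update(prior, event):
    """Condition a prior on the event via Bayes' rule."""
    mass = sum(p for s, p in prior.items() if s in event)
    return {s: (p / mass if s in event else 0.0) for s, p in prior.items()}

def rml_update(priors, event, alpha, tol=1e-12):
    """Relative Maximum Likelihood update of a finite set of priors.

    1. Find C(E): the priors assigning maximal probability to the event.
    2. Form the linear contraction C_alpha(E) = alpha*C(E) + (1 - alpha)*C
       (here: all pairwise mixtures of a prior in C(E) with a prior in C).
    3. Apply Bayes' rule to every prior in the contracted set.
    """
    likelihood = lambda q: sum(p for s, p in q.items() if s in event)
    max_lik = max(likelihood(q) for q in priors)
    c_e = [q for q in priors if likelihood(q) >= max_lik - tol]  # C(E)
    contracted = [
        {s: alpha * p.get(s, 0.0) + (1 - alpha) * q.get(s, 0.0)
         for s in set(p) | set(q)}
        for p in c_e for q in priors
    ]
    return [bayes_update(m, event) for m in contracted]

# Hypothetical example: two priors over states {0, 1, 2}; event E = {0, 1}.
priors = [{0: 0.5, 1: 0.3, 2: 0.2},   # assigns P(E) = 0.8 (the max-likelihood prior)
          {0: 0.1, 1: 0.2, 2: 0.7}]   # assigns P(E) = 0.3
posteriors = rml_update(priors, event={0, 1}, alpha=0.5)
```

Setting `alpha=1.0` reproduces ML updating (only the first prior survives, Bayes-updated), while `alpha=0.0` reproduces FB updating (every prior is kept and Bayes-updated), matching the two limiting cases described above.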
RML: A Path Forward
In an era defined by rapid change and profound uncertainty, the ability to nimbly update and refine beliefs is more than an advantage—it's a necessity. Relative Maximum Likelihood updating provides a robust, flexible framework for navigating this complex landscape, bridging the gap between rigid adherence to existing beliefs and the potentially destabilizing effects of overreacting to new information. Whether in the realms of business, policy, or personal decision-making, RML equips individuals and organizations with a powerful tool for staying ahead in an unpredictable world.