# “Explain Bayesian probability to someone who only understands Charles Sanders Pierce”

Ian Hacking?
The first tiny bit of a bridge toward my understanding Bayesian probability.
Asking ChatGPT-4 for help with Bayesian probability helped a little, but it also gave me better suggestions, including Ian Hacking. That name rang a bell: I came across it a year or two ago when I was connecting INFLUENCED and INFLUENCED_BY relations from Wikipedia’s Wikidata, and one of them was CHARLES SANDERS PEIRCE INFLUENCED IAN HACKING.
Well, I know enough about Charles Sanders Peirce to suspect I’m probably simpatico with whatever he’d say. But instead of asking it to bridge Peirce to Ian Hacking, I asked, knowing Peirce didn’t do anything Bayesian:
---
Explain Bayesian probability to someone who only understands Charles Sanders Pierce
Charles Sanders Peirce was an American philosopher, logician, and mathematician known for his work in pragmatism, semiotics, and scientific methodology. While Peirce’s work is not directly related to Bayesian probability, his emphasis on the continuous process of inquiry and updating beliefs in response to new evidence can serve as a helpful analogy.
Bayesian probability is a method for updating beliefs or probabilities based on new evidence. It is named after the Reverend Thomas Bayes, an 18th-century mathematician and philosopher. The key idea behind Bayesian probability is that probability represents a degree of belief, which can be updated when new information is obtained.
To explain Bayesian probability to someone familiar with Peirce’s philosophy, consider the following analogy:
Imagine that you are a detective, and like Peirce, you are trying to solve a mystery by collecting clues, making hypotheses, and testing those hypotheses with evidence. As you gather more evidence, your beliefs about the likelihood of different hypotheses change.
Bayesian probability offers a formal way of updating these beliefs based on new evidence. It uses a formula called Bayes’ theorem, which tells you how to combine your prior belief (before seeing the new evidence) with the likelihood of the new evidence given different hypotheses. The result is an updated probability, or posterior belief, that incorporates both your prior beliefs and the new evidence.
In this sense, Bayesian probability can be thought of as a systematic method for updating beliefs based on evidence, similar to how Peirce’s philosophy emphasizes the continuous process of inquiry and refinement of beliefs. The key difference is that Bayesian probability provides a mathematical framework for quantifying and updating these beliefs, while Peirce’s work focuses more on the philosophical and methodological aspects of inquiry.
So, if you understand Peirce’s ideas about the continuous process of inquiry and updating beliefs, you can think of Bayesian probability as a mathematical tool for doing just that, by quantifying and updating your beliefs based on new evidence.
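The update rule the answer describes, prior belief combined with the likelihood of new evidence to get a posterior, can be sketched numerically. This is my own minimal illustration, not part of the chat; the detective scenario and all the numbers are made up for the example.

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# A detective weighs two hypothetical suspects, then finds a clue.

priors = {"butler": 0.5, "gardener": 0.5}  # beliefs before the clue

# Likelihood of the clue (muddy footprints) under each hypothesis:
likelihoods = {"butler": 0.1, "gardener": 0.8}

# P(E): total probability of seeing the clue, summed over hypotheses
evidence = sum(priors[h] * likelihoods[h] for h in priors)

# Posterior: updated degree of belief after seeing the clue
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

print(posteriors)
```

Starting from even odds, the muddy footprints shift belief heavily toward the gardener (about 0.89 vs. 0.11), and the posteriors still sum to 1. That posterior then becomes the prior for the next clue, which is the Peircean point: inquiry as continuous updating.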