I believe that pretending there is a clean line between epistemic and non-epistemic values denies how human cognition and institutions actually function. I don’t think purity is achievable. That is precisely why it’s so crucial to adopt the construction of such lines as a pragmatic, useful discipline.
Knowledge is lived, and concepts are interdependent. Knowledge creation and discovery is a human endeavor, done primarily by and for humans, and subject to all of our ways, for better or worse.
We like clean lines. We are easily fooled into believing that knowledge can be neatly categorized, like sorting pretty, colorful rocks into different buckets at the beach. Comprehensible information is comforting and reduces stress, and we don’t like to be overly stressed, only just a little.
So we develop systems and frameworks. And we must. This is what we do as humans: we need to make sense of the world, both for psychological comfort and to cooperate with each other on projects and goals, because none of us can do great things alone. Systems and frameworks are languages that allow for communication, coordination, and cooperation.
For transparency, my own personal biases, framed through a psychology framework, might include scoring high on openness in the OCEAN (Big Five) model, being an INFP, or having ADHD-I, any of which might make it difficult to hold strict, unyielding mental categories naturally, so I must construct them with reasoning. I also have Generalized Anxiety Disorder, which might make me distrust the ability of people, myself included, to remove their own biases and unconscious motives; yet I score low on neuroticism in the same model, which may mean I can paradoxically be confident when I am. All of that might be bunk, or some of it might be. But these remain potential non-epistemic influences on my values, as do various cultural and social factors.
A dedication to transparency about our own non-epistemic values is key, because without it whole projects can go astray: we tend to trust knowledge systems as objective and to overestimate our ability to conform to them without introducing our own biases.
This means transparency not only at a personal level but also at institutional levels, subject to constant review to prevent non-epistemic value creep and invisible group bias. A panel of white, upper-middle-class male scientists or administrators, for example, may not realize how profoundly their shared status steers the scientific process, whether in funding choices, hypothesis selection, or the rationalization of conclusions.
However, with all of those higher-level safeguards continuously in place, the internal epistemic checks and balances built into the scientific process can then rule.
Is it “pretend objectivity”? Is it “pretend categories”? Yes, and also no. Because of our human limitations and tendencies, non-objectivity is ultimately inescapable, and we have to stay vigilant. But despite that inevitability, we must strive toward something that resembles what objectivity might be, always willing to recalibrate and never too quick to dismiss potential non-epistemic value creep, which can quickly spoil all of the hard work put into applying epistemic values while “doing the sciences.” Care must be built into every level and be open and subject to testing and review. It brings us to a point where we can “conditionally trust the science,” as it were, with confidence, because we have done what we could with the tools we had at hand at the time.