“I feel like you should be on the front lines of computer science research.”
Thanks, Josh. I’m always asking the question, “What are we *missing* here?” as I look around and see people doing what I’d consider “settling too soon.” They see a persuasive argument, run with it, and because it seems solid enough, they stop.
From the point they stop, they construct further.
Amazing things can be built upon even the most questionable of foundations and even more amazing things can be built upon seemingly logically consistent foundations.
Yet there are always fractures in the foundation to be found: places where the walls and ceiling meet with no caulking so the air gets through, or a spot on the roof by the metal flashing that’s missing a little bit of tar, creating a slight gap. Water vapor gets through that gap, gets into the plywood sheathing below, slips around the vapor barriers, and swells the wood until it condenses on the underside in small drops, all unnoticed by the residents below, until, over a period of years, the roof collapses, its original source never traced.
One word waves these things away at the time of construction, whether house or logic: “just.”
I wouldn’t want construction of things to stop before everything is perfected, of course: I’m sloppy with that sort of thing myself, so I don’t expect it of others. But I guess my thing is “awareness” of such things.
Logic: is it a fundamental or is it built with other constructors?
If we say it’s fundamental we can stop.
If we say “it might be built with other constructors,” then we have a direction to pursue: water vapor to follow through a tiny little crack, to see where it goes and where *not* paying attention to a “just” _could_ lead.
It’s in the area of the subtle but potentially important.
How does this relate? Likely laterally. People say I make lateral connections that are logical but don’t follow expected courses. So, I take their word for it. While I might not be in a position to lead computer science research on a paid level, I do it on my own for my own edification and, if I’m lucky, for others to be able to edify themselves.
I guess my base assumption is that everything is constructed out of flawed materials, whereby even the very concept of perfect is imperfect. So, that’s how I pursue things.
Sometimes to high accuracy but low precision.
The danger with overly tight logical arguments is that they seem to more often end up in the “high precision, low accuracy” quadrant rather than the “high accuracy / high precision” quadrant, which I consider to be a very improbable quadrant to reach.
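The accuracy/precision distinction above can be made concrete with a toy sketch (all numbers here are hypothetical, invented purely for illustration): treat an argument or estimator as a set of repeated guesses at a true value, where accuracy is how close the guesses center on the truth and precision is how tightly they cluster.

```python
import statistics

# Hypothetical true value that our "estimates" are aiming at.
TRUE_VALUE = 10.0

# Tightly clustered but systematically off: high precision, low accuracy.
# (The "overly tight logical argument" quadrant.)
tight_but_biased = [12.01, 12.02, 11.99, 12.00, 11.98]

# Scattered but centered on the truth: high accuracy, low precision.
loose_but_centered = [8.5, 11.6, 9.9, 11.4, 8.7]

def accuracy_error(samples):
    """Distance of the sample mean from the true value (lower = more accurate)."""
    return abs(statistics.mean(samples) - TRUE_VALUE)

def imprecision(samples):
    """Sample standard deviation of the guesses (lower = more precise)."""
    return statistics.stdev(samples)

# The biased set is far more precise yet far less accurate.
assert imprecision(tight_but_biased) < imprecision(loose_but_centered)
assert accuracy_error(tight_but_biased) > accuracy_error(loose_but_centered)
```

The point of the sketch is just that the two axes are independent: a tight cluster tells you nothing about whether the cluster is centered on the truth.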
Having belief as a metric is useful, and certainly an improvement over binary true/false. But there’s a danger with a belief metric, because there’s ALSO what you _think_ you believe and say you believe, versus what you *really* believe, which you might not even have direct access to.
[That’s at the level that other people usually see first in you, before you see it in yourself, because they’re not hampered by the lumbering processing of “being” you and can see you with a _sometimes_ more objective eye. That said, their subjectivities and projections play into that as well, so the opinions of others are also not an entirely reliable source imo, but they’re still valuable. [They have better access to the history of your own hypocrisy, whereas most people likely don’t do a very good job of assessing their own for themselves.]]
[Bias: I see hypocrisy as human and normal. I’m defining it as “the difference between what you say you believe and believe you believe, versus your beliefs as shown through your actions and words in unguarded moments”: aspects of belief that sometimes your enemies see in you best, or your family.]
The hardest thing to throw out is: “I’m not a hypocrite because [x].”
Before we can assess whether natural selection affords humans correct reasoning, can you consistently do it yourself in a manner beyond reproach?
We could speak hypothetically about the invisible hand of natural selection in the realm of maybe/possibly and feel rather confident that we’ve achieved something.
Yet is it possible to speak for a process that we cannot question directly, about a concept we have trouble enacting ourselves?
I’m generalizing of course and basing it on myself as a template.
For me, having a logical/scientific-ish curiosity and a consistent pursuit is paramount. I may not always be able to explain it properly, but I wish to have “correct reasoning” at all times, regardless of whether I can explain it in a way that convinces others that I am, in fact, pursuing my object of interest logically and scientifically in a manner they will find agreeable.
It’s hard. I’m my harshest critic.
But I would like to prove that humans ARE capable of such clarity of reasoning. As I do not have access to the inner minds of others, I can only work on one human, using the tools available. Thankfully, there are many.