Did you know you can turn 0, 1, 2, 3, … into an algorithm?
Church's Lambda Calculus, which is what's hiding underneath every computer language, can do this.
Basically, what you see is: First, Second, Third, Fourth.
It’s answering, “How many times did I run this function?”
It’s not AUTOMATICALLY turned into numbers though. You can do ANYTHING zero times, or once, or twice, or three times.
To turn it into numbers, you have to:
a) start with 0
b) the function will likely be a successor function ( https://en.wikipedia.org/wiki/Successor_function ), which is a primitive recursive function ( https://en.wikipedia.org/wiki/Primitive_recursive_function )
c) count how many times you’ve run the operation
If you run it zero times, the result is just x.
If you run it once, the result is function(x).
If you run it twice, the result is function(function(x)).
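That counting-by-application idea can be sketched directly in Python. This is only an illustration; the names zero, succ, and to_int are mine, not anything standard:

```python
# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x                      # apply f zero times: just x
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

one = succ(zero)
two = succ(one)
three = succ(two)

# To turn a Church numeral back into an ordinary int, pick a concrete
# successor function (k + 1) and a concrete starting point (0):
def to_int(n):
    return n(lambda k: k + 1)(0)

print(to_int(three))  # 3
```

Notice that to_int is exactly steps a) and b) above: start with 0, then count how many times the successor function ran.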
Oh, but it’s not infinite, as functions operate over time (or change).
That is what is progressing forward in these functions.
“Everything is a function. There are no other primitive types—no integers, strings, cons objects, Booleans, etc. If you want these things, you must encode them using functions.”
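As a sketch of that claim, here is how Booleans can be encoded purely as functions (the Church Booleans); the names true, false, and if_then_else are illustrative, not any standard API:

```python
# Church Booleans: a Boolean is a function that chooses between two options.
true  = lambda a: lambda b: a   # always pick the first argument
false = lambda a: lambda b: b   # always pick the second argument

# "if p then a else b" is just applying the Boolean itself:
if_then_else = lambda p: lambda a: lambda b: p(a)(b)

print(if_then_else(true)("yes")("no"))   # yes
print(if_then_else(false)("yes")("no"))  # no
```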
It doesn’t have to be time, but it necessarily has to be over change, at least in computation, which is what the work done in the 1930s led to.
Wait, I mean you’re right in that in theory, yes a function is a mapping on sets.
But by incorporating incremental steps within functions as part of the process of solving them, they became “computable”. Machines to carry out the broken-down steps of a function could be created, and they were, and are.
It’s pairs though isn’t it? Or really, a tuple, right?
In Haskell and Rust, the unit type is called () and its only value is also (), reflecting the 0-tuple interpretation.
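Pairs can be encoded as functions too, in the same spirit as the numerals and Booleans above. A minimal sketch (the names pair, first, and second are mine):

```python
# A Church pair stores two values by closing over them; to read the pair,
# you hand it a selector function.
pair   = lambda a: lambda b: lambda f: f(a)(b)
first  = lambda p: p(lambda a: lambda b: a)
second = lambda p: p(lambda a: lambda b: b)

p = pair(1)(2)
print(first(p), second(p))  # 1 2
```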
The company Y Combinator certainly does. All I know about it is that it has something to do with recursion and a paradox.
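The combinator the company is named after ties self-application to recursion: it hands a function access to itself without the function ever naming itself. Under Python's eager evaluation the classic Y combinator would loop forever, so this sketch uses its call-by-value variant (often called the Z combinator):

```python
# Z combinator: a fixed-point combinator that works in strict languages.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written without referring to itself; Z supplies the recursion.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```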
Interesting solution in “Resolution in unrestricted languages”
“The existence of the variable x was implicit in the natural language. The variable x is created when the natural language is translated into mathematics. This allows us to use natural language, with natural semantics, while maintaining mathematical integrity. ”
Domain Theory is right at the edge of my current abilities. I agree that it was hypnotic.
The analytical philosophy stuff is over my head. I can’t read it; it gives me a headache. All I could glean is that “if and only if” and that thing they call “modus ponens” are the problem and why they can’t seem to solve it.
Thank you by the way. I often don’t know what I’m thinking about until I talk about it (write about it) and see what I just wrote.
Having the right people to bounce off of and challenge what I’m writing is important and you’re definitely one.
Also, this conversation led me to https://en.wikipedia.org/wiki/Fixed-point_theorem which I’d been wanting to go back to one day and through my various clickings, I’m back to it again.
Always more to research.
Quite true — and I think I’m working towards them (again). Various clickings just led me back to https://en.wikipedia.org/wiki/Fixed-point_theorem
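In the numerical setting, the simplest way to see a fixed point is to iterate a function until f(x) stops moving, which is the intuition behind contraction-mapping results. A hedged sketch; fixed_point is my own helper, not a library function:

```python
import math

def fixed_point(f, x0, tol=1e-10):
    """Iterate x -> f(x) until f(x) is within tol of x."""
    x = x0
    while abs(f(x) - x) > tol:
        x = f(x)
    return x

# cos has a unique fixed point near 0.739 (the Dottie number):
# the value x where cos(x) == x.
print(fixed_point(math.cos, 1.0))
```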
I don’t think it follows that questioning modus ponens results in the annihilation of mathematics.
Rather, I think it shows a limitation of mathematics that rarely, but occasionally, comes up.
Considering how rare it is that “therefores” do not follow from implications, I don’t consider this a problem for the validity and continued application of mathematics, both “in the world” and as theory (recreationally or professionally).
If you google Modus ponens paradox or modus tollens paradox, you’ll find only a few paradoxes.
They’re there but very rare. Curry’s paradox, the Sorites paradox, and the Liar paradox are usually the only ones I’ve seen.
It’s possible that they simply haven’t been unequivocally resolved yet but will be. Or it could be part of incompleteness.
That they exist doesn’t take away from its power, just as optical illusions don’t make us blind.