I really like this (out of the thing I’m reading: “ontologies-across-disciplines”)
I’d pinned Lewis a few years back as “the biggest spreadsheet to work with,” so I kept him, not knowing where I’d put or find him.
I LOVE how they split up objective and subjective in a 3D way here, although I need to understand Kant vs Aristotle now, as I’ve never directly studied Kant… or Aristotle… but now I have a reason to. [I was unfriended a month ago when a friend discovered I’d never read Kant]
—
I need to read this to make sure I’m not missing it.
Whereas the user of an ontology can always be identified, its author may well be unknown or even inexistent. Take Caenorhabditis elegans, the nematode or roundworm that became famous among other things for its nervous system which consists of only 302 neurons. One could say that the ontology of C. elegans comprises at least the following categories and their complements: (a) increasing concentration of an attractant, (b) decreasing concentration of a repellent, (c) increasing closeness to the preferred temperature. This makes sense insofar as the behavior of this nematode is geared towards situations of category (a), (b) and (c), and not their complements; the former two forms of behavior are called chemotaxis, the latter thermotaxis. So if one is ready to speak of the world view or ontology of a robot or a roundworm, the identity of the user is clear, but in the latter case the identity of the author is problematic. It therefore seems reasonable to assume that oftentimes ontologies have simply evolved, without any specific author available. Alongside with the roles of the author and the user, the role of the object or domain of an ontology (in the right-hand column of our conceptual space) is of prime importance. It has already been shortly addressed in terms of the horizontal dimension of variation of our conceptual space, so a short reminder will suffice here. Ontologies vary with respect to their ‘aboutness’, i.e., what they are ontologies of. This can be, with decreasing generality, (i) all possible worlds, (ii) one world only, especially the one we live in (or seem to live in; cf. the depth dimension), (iii) subdomains of this world (or others) of increasing degrees of specificity. Is there an upper bound for the specificity in (iii)? In other words: What is the minimum degree of generality that is required for an ontology? 
Does it make sense to speak of the ontology of this nasty fly that keeps circling your head as you are reading this? We submit that the answer should be negative. There is something like the ontology of C. elegans, or at least it is in the making, but this does not mean that this ontology provides a systematic account in all relevant aspects of a single exemplar of this species, but of all exemplars that come from the same kind of genome. Similarly, it certainly makes sense to develop an ontology not only of aircraft, but also one of aircraft accidents. But it does not make sense to create an ontology of the Airbus A 340 crash at Toronto on August 2nd, 2005, at least not without another significant extension of the concept. So far, even the most specific domain ontologies like the one of the famous roundworm have a generic object. They are intrinsically intensional insofar as they entail predictions about new entities that instantiate the generic entity. Insofar, ontological knowledge about a domain is definitional knowledge about it, not episodic knowledge about its states and fates.
—
Truth? truth? t-rooth. I’m not working on truth. I want to understand the world’s lies.
—
Haven’t reached a “not” yet; or rather, the “not” is what’s shaping the “is.”
===
An ontology is a structured wordlist. There are lots of ontologies.
===
I’m generally a pragmatist yeah, but I don’t know if pragmatism is the ultimate ‘it’ either. It’s just easier to navigate pragmatism than idealism.
-
Like
—
social nominals
social nominals can be placed into a fictitious structure consisting of structured wordlists used socially in
===
the ‘depends’ part is what I’m working on.
What I just called “social nominals” can be placed into a fictitious structure consisting of structured wordlists called ontologies (used socially among humans (or AI (or nematodes within themselves))).
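The “structured wordlist” idea can be sketched in code. This is my own toy illustration, not anything from the paper: an ontology as a set of terms plus relations among them, with the C. elegans categories and behavior names taken from the quoted passage above.

```python
# Toy sketch (my own illustration): an "ontology" as a structured
# wordlist -- terms, plus relations grouping those terms.
# Categories and behavior names come from the quoted C. elegans passage.

celegans_ontology = {
    "terms": [
        "increasing concentration of an attractant",
        "decreasing concentration of a repellent",
        "increasing closeness to the preferred temperature",
    ],
    "relations": {
        "chemotaxis": [
            "increasing concentration of an attractant",
            "decreasing concentration of a repellent",
        ],
        "thermotaxis": [
            "increasing closeness to the preferred temperature",
        ],
    },
}

def behaviors_for(term: str) -> list[str]:
    """Return the behaviors geared toward a given category."""
    return [
        behavior
        for behavior, categories in celegans_ontology["relations"].items()
        if term in categories
    ]

print(behaviors_for("increasing concentration of an attractant"))
# -> ['chemotaxis']
```

Nothing deep here; the point is only that a “structured wordlist” is enough structure to answer simple questions about the domain.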
-
I’ll probably never use the phrase “social nominals” again for anything, because I made it up as a temporary construct which consists of nothing in particular, and yet I intended to convey something to you, which may or may not be successful.
-
Like
—
Something can be coherent to me, but I can’t convey the coherence I experience to someone else, resulting in unsuccessful communication.
I expect to be considered incoherent. It is being received “as if” I am coherent that surprises me.
===
I have my own language for things. Successful communication is nice, but I don’t expect it. I expect to be misunderstood or mocked: that is the norm for me, and you are behaving in a normal fashion.
Occasionally someone seems to understand me, and it is exciting. It’s not long before I discover that it’s a case of “not quite,” so I don’t keep my hopes up, but it’s nice for that moment.
===