Logic does not easily model either the real world or how we reason about it. This is a well-known problem in the logic community and in organizations that perform deep analysis; the Intelligence Community is an example.
Examples of this shortfall abound, ranging from the psychological factors of collaboration to cultural issues. Much is unknown in any real situation, either because it is unknowable or because maintaining an explicit model is too costly. A great share of knowledge is tacit, and much communication is implicit.
The usual approach to these problems is to ignore them, or to collapse soft elements into a quantitative surrogate. More careful work is rare because of its immense cost; where it exists, it usually involves modal logics applied to small tasks and domains.
The cleanest solution from a formal perspective was developed three decades ago at Stanford’s Center for the Study of Language and Information, in the form of situation theory.
A good introduction to situation theory (before our extensions) is by Keith Devlin.
For decades this remained a largely philosophical proposal with a mathematical basis. More recently, the tools to implement workable systems of situation theory have become available.
Situation theory provides a way to formally reason over soft elements and their dynamics.
Modern situation theory uses two integrated reasoning systems. The first uses logic; in fact, it can employ any logic, to any degree of sophistication, as currently used in the target domain. This right-hand-side system supports what is normally called reasoning with logic, the sort that can be supported without situation theory.
The second, new system enables reasoning over contexts (or ‘situations and attitudes’). It can be based in logic as well, but need not be (and in our case is not). The power of that system lies in the way it tracks the relationship between a (logical) instance and the many ways it can be recontextualized based on soft dynamics.
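To make the two-sorted picture concrete, here is a minimal sketch. All the names and the API are illustrative assumptions, not from any published implementation. The right-hand side holds ordinary logical items; the left-hand side tracks which situations support which items, so the same item can be recontextualized.

```python
from dataclasses import dataclass

# Right-hand side: an ordinary logical item. In situation theory this is an
# "infon": a relation, its arguments, and a polarity (holds / does not hold).
@dataclass(frozen=True)
class Infon:
    relation: str
    args: tuple
    polarity: bool = True

# Left-hand side: a situation is a partial context. The basic judgment of
# situation theory is "s |= sigma" -- situation s supports infon sigma.
@dataclass
class Situation:
    name: str
    facts: frozenset

    def supports(self, infon: Infon) -> bool:
        return infon in self.facts

# The same infon, recontextualized: supported in one situation and simply
# undetermined in another (situations are partial, so absence is not falsity).
raining = Infon("raining", ("london",))
weather_report = Situation("weather_report", frozenset({raining}))
board_meeting = Situation("board_meeting", frozenset())

print(weather_report.supports(raining))  # True
print(board_meeting.supports(raining))   # False: undetermined here
```

The point of the sketch is only the division of labor: inference over infons can use whatever logic the domain already uses, while the support relation lives outside that logic.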
This second sort of reasoning is similar to the way humans comprehend, and many of the challenges of artificial intelligence involve this sort of soft reasoning. We easily understand how a new context can alter the interpretation of an event, or the formation of an artifact. In explaining, we rely on the logic within a situation to convey our ideas, yet the structure of the explanation itself carries more information than those nouns and verbs. We are on the cusp of a breakthrough in this area.
We rely on category theory to support this left-hand side, the second sort of reasoning.
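One way category theory enters is this: situations can be treated as objects, and context shifts as morphisms between them, which compose associatively and have identities. The sketch below is a toy illustration of that idea with invented vocabularies and mappings; it is not the authors' actual construction.

```python
# Context shift as a morphism between situations' vocabularies.
# Composing two shifts yields another shift, which is what lets the
# left-hand side be organized as a category.

def compose(f, g):
    """Morphism composition: apply g first, then f."""
    return lambda infon: f(g(infon))

# Illustrative shifts: the same fact recontextualized as it moves from a
# clinical setting to lay conversation to a news headline.
def clinical_to_lay(infon):
    mapping = {"myocardial_infarction": "heart_attack"}
    return (mapping.get(infon[0], infon[0]),) + infon[1:]

def lay_to_headline(infon):
    mapping = {"heart_attack": "health_scare"}
    return (mapping.get(infon[0], infon[0]),) + infon[1:]

shift = compose(lay_to_headline, clinical_to_lay)
print(shift(("myocardial_infarction", "patient_7")))
# ('health_scare', 'patient_7')
```

The categorical framing matters because recontextualization chains: tracking how meaning shifts across several contexts reduces to composing morphisms, rather than re-deriving everything inside one logic.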
A rather long paper of ours (unfortunately behind a paywall) gives a detailed non-technical view of what we do, using an example from medical research.
Check out Richards Heuer's handbook for CIA analysts, Psychology of Intelligence Analysis. It reports on studies, begun in the 1970s, of why the Intelligence Community (and not just that of the US) so consistently gets things wrong. Early in the volume he lists ten qualities that the analyst should consider; none of them is supported by logic.
Note that the use of the term soft here does not mean fuzzy, as in fuzzy logic. That community has a concept of ‘soft computing’, which is simply the imposition of probability onto ordinary logic, leagues away from what we are talking about.