Kutachi Project

The emerging report from the Kutachi project. This is a collaborative project to develop a formal vocabulary for logical elegance based on intuitive perception of form.

Use Scenarios

Published: 23 Jul 2013

A reasonable question is what would Kutachi-enabled machines do that currently cannot be done? What would be different?

Keep in mind that the Project is concerned only with the visual presentation of, and interaction with, information at the interface. We presume related groups will make progress on implementing the necessary new foundations internally.

Assuming that both halves succeed, a possible use case could be:

FilmsFolded

The ability to collaboratively annotate movies is the initial focus, with a plan to evolve toward support for biomedical modeling. For the former, all the usual techniques might be included, to mark objects, frames and sequences. What we want to annotate are situational components (a rough data sketch follows the list below):

  • how a camera is placed and moves.
  • how the space is managed, by the physical environment in the film, how it is constrained by the frame and by movement in both of these.
  • the effects of light, sound and the rhythm of edits.
  • how truth changes.
  • how certain previous films, tropes, genres and archetypes are referenced.
  • how self-reference and irony (part of what we call folding) are indicated.
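
As a rough illustration only, here is a minimal sketch of how such situational annotations might be held as data. Every name in it (Span, SituationalAnnotation, the facet labels) is hypothetical, not part of the Kutachi vocabulary:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Span:
        """A region of a film: a frame range, optionally narrowed to an on-screen object."""
        start_frame: int
        end_frame: int
        object_label: Optional[str] = None   # e.g. "the mirror", "the lead actor"

    @dataclass
    class SituationalAnnotation:
        """A note about a situational component rather than a literal object."""
        span: Span
        facet: str                           # "camera", "space", "light", "truth", "reference", "folding"
        description: str                     # free-text note from the annotator
        references: list[str] = field(default_factory=list)   # earlier films, tropes, genres, archetypes
        annotator: str = ""

    # Example: marking a self-referential camera move in some hypothetical film.
    note = SituationalAnnotation(
        span=Span(start_frame=1200, end_frame=1480),
        facet="folding",
        description="The camera acknowledges the audience; irony undercuts the stated truth.",
        references=["noir voice-over trope"],
    )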

Some more detail can be found here in the example section.

More general use cases are:

Changing the Frame on Search

We can do better than Google.

Google is only the most visible practitioner of information retrieval. The basic problem is that you have a great sea of information and you want to pull out those bits that help you perform a task. With Google, you choose some words or a short phrase; Google finds matches and uses some clever algorithms to prioritize what may be the most significant. A toy sketch of this retrieval shape follows the list below.

  • These algorithms consider context in a limited sense: what words surround the query words.
  • The returned results are not transformed, combined or resituated in any way. You just get a list.
  • The system doesn't care what you need the information for.
  • It requires you to do all the work to ask the right questions even within its constraints.
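
To make the criticism concrete, here is a toy sketch of the keyword-matching shape of retrieval. It assumes nothing about Google's actual algorithms; it only shows that the output is a ranked, unassembled list:

    # Toy keyword retrieval: rank documents by how many query words they contain.
    # This is not Google's algorithm; it only illustrates the flat-list result shape.
    documents = {
        "doc1": "Prussian politics and class before the war",
        "doc2": "multinational corporations and market myopia",
        "doc3": "a history of Prussian land reform",
    }

    def retrieve(query: str) -> list[str]:
        terms = set(query.lower().split())
        scored = [
            (sum(word in terms for word in text.lower().split()), doc_id)
            for doc_id, text in documents.items()
        ]
        # The caller gets identifiers back in rank order; nothing is transformed,
        # combined or resituated, and the caller's intent plays no part.
        return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

    print(retrieve("Prussian war class"))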

Many folks know this is too stupid to last long.

Suppose instead you interacted with an intelligent system in this way:

“I really don’t understand what was going on in Prussian politics before World War I, and it puzzles me.”

And the assistant replied: “How will you use this?”, or asked a series of conversational questions to either glean your intent or help you clarify one. Perhaps your answer would be a very sophisticated one: “I am writing an essay on the myopia of multinational corporations that thrive on class distinctions and suspect that a relevant dynamic was in play in the Prussian situation. I am looking for both the relevant causal dynamics and a way to convey them in the context of my essay. I intend to suggest steps to mitigate societal damage.”

Suppose that you had a machine that could reason this way. And suppose that, instead of just speaking to it (her) in presumably spoken words, you were able to convey even more information with gestures as the interaction progressed.

Such an assistant might start by speaking back to you: “Here is a story about that dynamic...” Even the most intelligent and knowledgeable assistant still might have to work very hard with you to get you just what you want — or help you discover that it is not there because you guessed wrong or that we simply don't know enough.

But suppose the assistant responded: “Here is the form of all the narratives that surround that dynamic...” and you had a way to interact not about specific narratives but about whole classes of them and were able to comprehend at a high level not just what you were looking for but how it fit into a bigger picture.

Better than Google telling you “Here are some records that have the words: Prussian, war and class in them.”? You bet. With Google, you have to take your intent and hide it from the computer, guessing what words were used by someone who can take you in a positive direction. Worse, there is no construction. Google gives you stuff that you have to assemble. More or less, this is the current state of the art, and it isn't going to get any better soon without a revolution.

So, one goal is to reframe the nature of information retrieval. We want coherent causal narratives that grow out of our queries, and we want the assistant to understand our context and perhaps the context of who we would be communicating to and for what purpose.

We explored this in some depth in the original work for the intelligence community.

But this is also possibly the most contentious issue in computer science.

Many senior people in the research community have great faith in the future of statistical correlation. Possibly the best defense is by Peter Norvig, who leads Google research. Read his response to a challenge here and make up your own mind.

Understanding Multilevel Emergence

The management of complex (business) organizations as an example.

One use is to more accurately model emergent behavior where many levels are involved and influences from some levels act on agents at another. A great example that we have studied in some depth is what we called the Agile (or Advanced) Virtual Enterprise.

The idea is simple: any collaborative work we do is constrained by the tools we use, even if it is among a small group that shares intuition and a spoken vocabulary. More complex enterprises are limited primarily by the technology we have that constructs and manages them. Since WW II, our technical infrastructure has made things worse, not better in this dimension. Methods now favor large multinational corporations with the power to bend market forces and influence laws. These are designed and managed on a Soviet model, a strict hierarchical system that in the end benefits few.

The computers and communication hardware are better, faster and cheaper, of course. But the abstractions we use in reasoning about what we do are less and less nuanced.

What if we had technology that favored small creative groups that were able to flexibly aggregate with others to do complex work? A simple example could be making consumer electronics and the associated platforms and services. A better example may be eliminating malaria or providing disaster relief.

You can boil this down to two problems.

  • Value Features. We need a collection of abstractions that would clearly expose the value that a small group or even an individual would add to a potential enterprise. Consider that the ideal virtual enterprise would involve novel products or services and potentially new processes or technology, and that any component has to be evaluated against a huge number of possibilities at many levels, each one of which changes the value. We are not so far away from having this, and we can count on parallel projects to solve this problem.
  • Overlapping Situations. This is basically a visualization problem, the Kutachi problem. Here we mean situations as combinations of companies and products that make up the virtual enterprise, of course. But there are many more, briefly outlined below.

So what if we had a system that allowed someone to see the form of a number of possible narratives of what could be made (and/or serviced) and by what means, and that showed how classes of these would improve if they were inserted into those narratives? What if such visualizations required few skills, so that anyone with a sense of beauty or balance could navigate their way into a beneficial process and product story?

Such an assistant will know what you do, what you want to do, how you can change what you do and how to present any combination of these to others. She will know each instance as a narrative which can be explained, and the world of instances as a visualization. She will imagine countless combinations of you with others, not by exhaustively defining them but by looking at the way things work in the large and how players can fit.

Our work further assumed that she can collaborate with other, similar assistants to create clouds of possible forms that, if they sound promising, can be reified in a simple narrative about what you do, what you want to do, how you can change what you do, and how some combination of these with others optimizes value for a target audience. This agent capability is best illustrated in the following biomedical example.

This use case is an excellent example of multilevel emergence. The emergent behavior is simply groups using market forces to contribute to larger groups for selfish gain, their attractiveness determined by the value feature metrics that will be developed.

The multilevel quality is in how you describe larger groups, or what we called situations. Each player has to satisfy the larger group of its own stakeholders: investors, families, communities and so on. There is the obvious larger group of the enterprise as a whole that is presented to customers. (We supposed that a virtual enterprise could be assembled to competitively make custom cars and that it would be branded in the way that Ford is.)
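
A minimal sketch of that multilevel structure follows, assuming nothing about the eventual value feature metrics; the class names and the attractiveness rule are ours, for illustration only:

    from dataclasses import dataclass, field

    @dataclass
    class Player:
        """A small firm, group or individual that can belong to many situations at once."""
        name: str
        value_metrics: dict[str, float] = field(default_factory=dict)   # stand-in value features

    @dataclass
    class Situation:
        """A group at some level: a player's own stakeholders, or the enterprise as a whole."""
        name: str
        level: str
        members: list[Player] = field(default_factory=list)

        def attractiveness(self) -> float:
            # Placeholder for the value feature metrics still to be developed.
            return sum(sum(p.value_metrics.values()) for p in self.members)

    # The same player sits in overlapping situations at different levels.
    shop = Player("gearbox shop", {"cost": -1.0, "innovation": 3.0})
    investors = Situation("shop stakeholders", level="stakeholders", members=[shop])
    enterprise = Situation("custom-car enterprise", level="enterprise", members=[shop])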

But there are all sorts of situations in between and even beyond. An example of beyond is the coupling of the potential customer pool into the definition of the enterprise. After all, that's where value is rooted. Another extension is the communities and economies in which components work; the enterprise has to be a net benefit for them as well, and a good argument could be made for optimizing that way.

Internally, enterprises are sliced in more than one way. You have functions like market research, design (of product), design (of manufacturing), manufacturing itself, marketing, selling and support. Each of these has its own concerns, culture and language. In a truly advanced enterprise, these roles may fade, but for the next generation we are stuck with them because this is how progress is audited and managed, including by external forces (like the financial and legal communities).

One intriguing focus is to limit ourselves to financiers. They are currently set up to capitalize monolithic organizations, the larger and more stable the better. How would you capitalize a virtual enterprise competitor to Ford, consisting of two thousand independent small companies?

This might also be asked with a small twist. Is it possible for the financial community to evaluate Ford as if it were a virtual enterprise and see how each piece actually adds value? If you could do that, investments wouldn't be so blunt, so blind and so bad for most people involved. Investments would be in the creation of real value instead of getting further away via derivatives.

As a part of the DARPA modeling research, we examined the notion of agile enterprises, and particularly the agility enabled by the virtual enterprise. A virtual enterprise is a collection of small organizations that are integrated, and operate so as to accomplish what is expected of a traditional enterprise (one comprised of a central, large prime contractor/integrator and a hierarchical supply chain of partners, subcontractors, suppliers and consultants). Virtual enterprises are of interest because:

  • Most of the innovation in advanced complex systems originates in small independent groups.
  • Most of the job growth in the US and Europe in the last 40 years has been through small businesses.
  • Small groups are generally more agile, more productive and more responsive.
  • Small businesses can be more flexible in supporting the lifestyles of their workers.
  • Small groups are more comfortable for creative people.

The research surrounding such organizations was comprehensive. It identified several models of integrated enterprise that would be desirable, but are constrained by current integration methods. Some examples of these models and their characteristics are:

  • Partners may be widely dispersed, have no conventionally auditable trust metrics and be unknown to each other.
  • Partners may not exist in the form required for the enterprise, and/or be required to perform a task or use a process that is unknown to them, or does not yet exist.
  • Partners are radically heterogeneous in fundamental ways, so as to enhance innovation and competitiveness. These operations are characterized by diverse information systems, models and business practices. Such diversity can also extend to fundamental differences in values, analytical methods and ethnomathematical insights.
  • The goals of partners in the enterprise may not be externally quantifiable. Instead they might be participating for market share/introduction, brand building, experience enhancement, competitive blocking or even some seemingly irrational goal.
  • A partner might be a virtual enterprise in its own right, with opaque internal structures.
  • Partners might play unconventional roles. These could include the ability to intrusively modify the roles and processes of other partners, to add or exclude partners, or even to remove themselves from the enterprise.

These characteristics enable advanced virtual enterprises that are highly dynamic and frangible. The dynamism may be a continuous optimization, but could also include radical product or service pivots, or major reorganizations among partners. The frangibility can be expressed as lowered costs of an expected dissolution, internal reorganization or graduation to a more conventional model if needed.

Benefits

The anticipated benefits of these types of advanced virtual enterprises are:

  • Reduced costs of failure so that more and greater risks can be taken, radically affecting some markets.
  • Increased productivity, based on the assumption that self-organizing methods can reduce the ratio of the costs of management processes compared to processes that create direct value.
  • Increased innovation, as small groups create more leverageable intellectual property than large cumbersome organizations.
  • Improved upward mobility for organizations and individuals based on their actual added value.
  • Greater opportunity for advancement in the developing world, as opposed to neocolonial exploitation.
  • Reduced political power of multinational corporations, making them less likely to compromise responsible government.
  • More effective value development strategies emerge, as the methods of management of capital investment become decoupled from the methods of production.
  • Happier workers, based on the notion that actual value added is better rewarded, unnecessary institutional rules are minimized and group processes are more flexible.
  • Improved national economies, based on the experience that small businesses are traditionally the basis of economic health (in the US, Europe and much of the rest of the world).
  • A return to the coupling of liberal democracy and free markets, as market forces are allowed to do what they do best, without distortions from powerful oligopolies.

Internal studies and some pilot programs in the military sector indicated that radical improvements in innovation, as well as complexity of product integration, product cost and/or time to market can result when some of these advanced virtual enterprise concepts are empowered (by technical, legal and political means). A central requirement for these features was improved novelty in the information infrastructure.

A Simpler, Easier Notion

Note that this notion of virtual enterprise is radically different from the concept of virtual enterprise (or virtual organization) adopted by a majority of EU-sponsored programs (Camarinha-Matos & Afsarmanesh, 2004). These programs emphasize a notion of:

  • Generally collocated,
  • Existing small businesses organized and coordinated by a prime contractor or similar agent,
  • Who perform known tasks that are stable and clearly advertised,
  • And which are prequalified for joining the enterprise, including harmonization of process and business models,
  • With pre-arranged legal documents,
  • To deliver conventional goods (compared to novel advanced aerospace systems).

Interoperability in this context is much the same as in a unified enterprise (except you use the web more). It depends on preserving the centers of influence and the definitions of enterprise and operation. Disruptive models cannot emerge from this.

Ethnomathematics is the study of abstraction systems in cultures other than the modern West.

It relies on two assumptions:

  • that humans are generally intelligent and that wherever you find them you will find complex notions. Discovering these requires ethnographic methods.
  • that many ideas that we believe are intrinsically fundamental in mathematics are cultural constructions or notational artifacts.

To these we add a less commonly accepted assumption:

  • that early modern humans (of say, 20,000 years ago) were capable of equal sophistication of thought.

The systems we are designing are intended to be closer to the real world than the more opportunistic conventions of mathematics and specifically logic. So we are interested in what is common across these systems. (The answer includes symmetry.)

Many problems in science seem hard only because we have not yet found the right abstractions. For these problems, it makes sense to tap alternative abstraction systems, regardless of their literate presentation.

Similarly, we presume that in cases where disruptive innovation matters, a virtual enterprise with several abstraction systems in play will be particularly powerful.

Historically, integration/interoperability is a matter of:

  • adequately shared, structured information (cheap, correct, timely, trusted, useful…),
  • that is relevant to the formation, operation and optimization (lean or agile),
  • of all functions in the potential organization that are to be reached.

In this context, integration strategies have evolved in two ways: by increasing the scope of what can be included and by becoming more relevant to activity at the local level. Both of these depend on the nature of the shared information. In turn, the nature of the shared information depends on the scope of the underlying abstractions.

In initial interoperability, the common denominator was numeric abstractions that took the outward-facing quantitative baseline — the cost to customers — and used it internally. Today this is called Activity Based Costing. Numeric abstractions have been built into contract boundaries and then extended into internal operations, addressing not only cost but duration of activity, quality, responsiveness and even trustworthiness. In retrospect, we’ll call these abstractions quantitative features.
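
A minimal sketch of what such quantitative features amount to computationally, using an activity-based-costing style roll-up with invented activities and numbers:

    # Quantitative features: plain numbers attached to activities and rolled up.
    # The activities and figures are invented; only the arithmetic matters here.
    activities = [
        {"name": "machine housing",  "hours": 3.0, "rate": 80.0, "defect_rate": 0.02},
        {"name": "assemble unit",    "hours": 1.5, "rate": 60.0, "defect_rate": 0.01},
        {"name": "final inspection", "hours": 0.5, "rate": 50.0, "defect_rate": 0.00},
    ]

    total_cost = sum(a["hours"] * a["rate"] for a in activities)
    total_duration = sum(a["hours"] for a in activities)
    expected_yield = 1.0
    for a in activities:
        expected_yield *= 1.0 - a["defect_rate"]

    print(f"cost={total_cost:.2f}  duration={total_duration}h  yield={expected_yield:.3f}")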

Enterprise integration depended on this form of abstraction for centuries, maturing in the period between the two World Wars.

Then with the introduction of numerically-controlled milling machines (and previously, looms) — and the sponsorship of wartime manufacturing research through the ICAM program — these abstractions transitioned from numeric to symbolic (in the computer science sense of symbolic). The evolutionary path is easy to trace as the abstractions moved to what we can call product feature abstractions. The shift was from an administrative artifact to a goal-oriented artifact: what was being made.

Because the design and specification of these artifacts happened to be expressed in digitally created engineering drawings, the development of these abstractions was associated with the CAD industry and CAD-centric exchange standards. But that is merely a residue of the previous exchange technology — paper-based drawings. The effect was to move the nature of enterprise models — and hence interoperability abstractions — to the physical goal of the enterprise: the created object.

Evolved Abstractions

As with every step of the evolution of abstractions, the new one inherits all the attributes of the old. Even in the early days, product features were primarily characterized by numeric measurements (geometric and other physical properties) and by annotations that carried all the old cost, quality and other metrics.
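
To make that inheritance concrete, here is a sketch (with hypothetical names) of a product feature whose geometric measurements carry the older cost and quality annotations along with them:

    from dataclasses import dataclass, field

    @dataclass
    class QuantitativeAnnotation:
        """The older numeric abstractions, still riding along on the newer ones."""
        cost: float = 0.0
        duration_hours: float = 0.0
        quality: float = 1.0

    @dataclass
    class ProductFeature:
        """A noun-centric abstraction of part of the created object."""
        name: str
        geometry: dict[str, float] = field(default_factory=dict)   # e.g. {"diameter_mm": 6.0}
        material: str = ""
        annotations: QuantitativeAnnotation = field(default_factory=QuantitativeAnnotation)

    hole = ProductFeature(
        name="mounting hole",
        geometry={"diameter_mm": 6.0, "depth_mm": 12.0},
        material="aluminium",
        annotations=QuantitativeAnnotation(cost=0.40, duration_hours=0.02, quality=0.999),
    )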

We’ve already noted the power structures within the enterprise that require abstractions such as those related to cost and product. It is important to note that the abstractions create the power centers, not the other way around. Until the science base of product features was created, there was no agent in the enterprise who owned that enterprise-wide engineering function. The ability to harness that functionality came from the science base.

More recently, the abstractions and interoperability tools evolved in a different way, from an orientation towards nouns (‘what we make’) to verbs (‘how we make it’). So-called process feature abstractions resulted, and this new level of modeling opened complexities and promises that are still being dealt with. The promise is tantalizing because when we integrate a wide-ranging enterprise system we are really trying to put the working pieces together in the best way. We are integrating processes.

Abstractions that describe how the components of the enterprise actually perform the work are the best way to engineer at this level. Once again, however, all the older abstractions have been clumsily accommodated. For example, a process feature is annotated by describing how it transforms or adds a product feature. (In the current state of the art, service attributes can be captured as product attributes and included in product features; brand value is an example.)
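
For illustration only, a sketch of a process feature annotated, as just described, by the product features it adds or transforms, with the older numeric metrics again attached:

    from dataclasses import dataclass, field

    @dataclass
    class ProcessFeature:
        """A verb-centric abstraction: how a step of work transforms the product."""
        name: str
        adds: list[str] = field(default_factory=list)                # product features this step creates
        modifies: dict[str, dict] = field(default_factory=dict)      # feature -> changed attributes
        annotations: dict[str, float] = field(default_factory=dict)  # the older numeric metrics again

    drilling = ProcessFeature(
        name="drill mounting hole",
        adds=["mounting hole"],
        modifies={"housing": {"mass_g": -3.1}},
        annotations={"cost": 0.40, "duration_hours": 0.02},
    )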

The challenges to designing a new, more incisive set of abstractions are significant. The early, numeric, abstractions can be managed by simple arithmetic, matrix operations, statistics and linear algebra — all of which are simple to comprehend and code. Anyone in the enterprise knows what numbers are and how they combine.

Product Features

Product features are more complex, but as they are statically-typed and noun-centric, the computational difficulties are relatively trivial, appearing only in scalability. Products are generally physical items that are easy to comprehend. The abstractions derived from them thus become only slightly less intuitive when they must encompass a pervasive vision throughout the enterprise. For example, aircraft are complex systems, but the notion of an aircraft and its parts is easy to imagine. (Integration of software and pure service enterprises lacks this advantage.)

But process abstractions present more challenges, because everything is dynamically connected, and involves entire situations of elements. A minor change in one area can produce nonpolynomial-hard or non-deterministic changes in the evolving system. Such permutations populate incomprehensibly huge sets. A requirement that each process see the integrated process model seems optimal, like the way each product agent can (theoretically) see and evaluate its role in an integrated product model. But this is simply impossible from the scalability perspective.

Moreover, until recently, we did not have good formal foundations for process calculus and process ontologies. We are improving in this regard, but the complexity is growing faster than our ability to manage it. The dynamism is just too great.

(And this is before one addresses the problem that potential partners may be fine with modeling and revealing some characteristics of their processes, but unable to describe them all. The core competitive advantage of a business is often captured in these process models, if they are done well.)

What we need is the next logical step in enterprise abstractions. We’ll call it value feature abstraction.

Value Feature Abstractions

A value feature abstraction would be drawn from an external view of the enterprise, in order to express what customers really are paying for: what they value. Value feature assemblies would be related to product features, but they are not equivalent; a given array of product features would be only one way to deliver a portfolio of value features. As already noted, product features should include brand value, lifestyle definition and other soft deliveries.

Value features would be complex expressions of why an enterprise exists, or might exist. These expressions can cover influences of what we used to call stakeholders, and include societal values (community and human well-being, ecological sense...). Complex ‘statements’ composed of these abstractions should be decomposable into the ‘nouns’ of product features and ‘verbs’ of process features. These would be related by a higher level abstraction that characterizes how the relations themselves relate — a simpler way to understand this is to think of it as the overall ‘story’ of the product’s purpose. The ‘noun’ and ‘verb’ process features should in turn be decomposable into quantitative metrics (cost, quality, agility...). Each decomposition will be lossy in the sense that some ontological richness is lost to gain computational efficiency.
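
A sketch of that layering under our assumptions, with hypothetical names; the point is only the direction of the lossy decomposition, from value down to plain metrics:

    from dataclasses import dataclass, field

    @dataclass
    class ValueFeature:
        """Why the enterprise exists for someone: what is actually being paid for."""
        statement: str                                              # the overall 'story' of purpose
        stakeholders: list[str] = field(default_factory=list)
        product_features: list[str] = field(default_factory=list)   # the 'nouns'
        process_features: list[str] = field(default_factory=list)   # the 'verbs'

    def to_quantitative(vf: ValueFeature) -> dict[str, float]:
        # Final, lossy step: the story relating the relations is discarded,
        # leaving only metrics that are cheap to compute with.
        return {"cost": 0.0, "quality": 0.0, "agility": 0.0}        # placeholders

    mobility = ValueFeature(
        statement="reliable mobility, plus the brand's lifestyle promise",
        stakeholders=["customer", "community"],
        product_features=["drivetrain", "brand identity"],
        process_features=["assemble drivetrain", "support ownership"],
    )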

Engineering New Agents

Story-creating biological systems.

Suppose that you took the system we just described, which models emergent behavior, and that you coded it in such a way that it understood that behavior well enough to host a universe.

Also suppose that we used relatively normal agent coding techniques. The magic is in the multilevel logic, not the action code. Such a system would contain worlds that exhibit exceedingly complex and unpredictable emergent behavior. You could, for instance, just allow virtual enterprises to emerge into new forms and keep emerging as if they were living systems.
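
A deliberately ordinary toy loop along those lines; the only point is that groups formed at one level become agents at the next, so structure keeps emerging (the attraction rule is invented):

    import random

    random.seed(0)
    agents = [{"name": f"firm{i}", "value": random.random()} for i in range(8)]

    for step in range(3):
        random.shuffle(agents)
        next_level = []
        if len(agents) % 2:
            next_level.append(agents[-1])            # the odd one out carries over unchanged
        for a, b in zip(agents[::2], agents[1::2]):
            if a["value"] + b["value"] > 1.0:        # crude attraction rule
                next_level.append({"name": f"({a['name']}+{b['name']})",
                                   "value": a["value"] + b["value"]})
            else:
                next_level.extend([a, b])
        agents = next_level
        print(f"level {step + 1}: {[x['name'] for x in agents]}")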

You could even go further and work with real living systems (as in human bodies), a goal of ours.

Using ordinary methods, before you could simulate a system, you would need to understand and model all the relevant dynamics. That's simply not going to fly in medical research; the body is too complex to understand at present. What we do understand, we don't know how to model well, and what we model well enough for some needs is not causally modeled in a way you could use to build workable agent systems.

But suppose you could model what you know and turn the system loose. Every time it goes off and develops a causal mechanism you haven't seen, you could simply look at the human system and see if it correlates. If yes, you now know something you didn't. If no, you refactor the dynamics and start over. We suppose that convergence on reality-mirroring systems can be quick.
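
A sketch of the shape of that loop, with stand-in functions; nothing here is a real biomedical model:

    # Model what we know, run it, and check any new causal mechanism the agents
    # produce against the human system; refactor and start over until they agree.
    def run_agent_model(dynamics: dict) -> list[str]:
        """Stand-in: let the agent system run and return the mechanisms it develops."""
        return ["mechanism-A"]

    def observed_in_humans(mechanism: str) -> bool:
        """Stand-in: check a proposed mechanism against what is seen in the body."""
        return mechanism == "mechanism-A"

    dynamics = {"known": ["pathway-1", "pathway-2"]}      # model only what we know
    for attempt in range(10):                             # bounded, just for the sketch
        unconfirmed = [m for m in run_agent_model(dynamics) if not observed_in_humans(m)]
        if not unconfirmed:
            print("model now mirrors everything we can check")
            break
        dynamics["known"].append("revised-" + unconfirmed[0])   # refactor the dynamics, try again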

If you had this, what would it give you? A mirror of the way the body works in many interrelated levels simultaneously. A model that you can exercise in massively more and quicker ways than rodent models.

A big missing piece is how you see where a system is going awry, and how you perceive what needs to be fixed. This is where the need for Kutachi is great.

A Narrative Assistant

Our ultimate vision, sHeherazade.

Suppose you had the system just described. It could mature into an intelligent assistant that could have something like a dialog with you and your collaborators. This system could understand many things much better than we could, but also explore possible dynamics we couldn't imagine.

We'd need some capable Kutachi-enabled user interface to keep it aware of what works and what does not, but as we said it will probably converge on congruence with reality rather quickly. That's because all the system cares about is physical dynamics. More precisely, it cares about dynamics that have physical implications.

That is, it understands the physics, chemistry and biology of the body. It probably will use abstractions that aren't familiar, but we are talking about physics, chemistry and biology here.

Suppose you didn't limit the world of the system that way. Suppose you wanted the system to understand you when you speak of disappointment in love, about bad art or about attractive ironies of ignorance. Suppose you wanted to share poetry and religious views. Suppose you wanted to

The FilmsFolded project runs in parallel to the Kutachi Project.

One way to look at FilmsFolded is as a collaborative means of building, for structured concepts, the equivalent of the physics, chemistry and biology of the biomedical assistant.

Two assumptions are behind this. One is that structured concepts are managed in our minds as narratives. The other is that film is where we explore our most complex and challenging structures.

FilmsFolded will be a few things. Ostensibly it will be a business. It will be a proving ground for our first user interface and Kutachi ideas. It will be fun.

But it will also be a way to discover and test ways of modeling what goes on in our lives.

© copyright Ted Goranson, 2013