PART TWO: Our Models

In this part we present our model of cosmic consciousness in full detail. In Part One we had occasion to refer to two novel aspects of our model: it exists outside of space and time, and it is digital (discrete, or atomic). We also prepared the way for these novel features by describing in some detail the history and concepts of the quantum vacuum, or QV. Our model does not require the QV. However, we have derived our model from a mathematical model for the QV that is characterized by these two novel features: the model of Requardt and Roy. (We will refer to this model as the RR model.) We have repurposed the RR model as an abstract scheme for consciousness.

So we begin the exposition of our model, in Chapter 6, with an outline of the RR model for the QV. Then we go on to describe a derived model for the QV, the AR model, in Chapter 7. This model is accompanied by monochrome graphics (color graphics and animations are available online). Part of the motivation of the AR model is to simplify the RR model to its essentials, and to make it clearly understandable.

In the final Chapter 8 we adapt the AR model for the QV as a model for consciousness, and apply it to the mind/body problem. It is here that we bring together the historical material from Part One with the new methods of quantum physics.

In the Conclusion we will step back and assess the implications of our model. Finally, some relevant articles have been reprinted as appendices at the end of this book.

Chapter 6. The RR model

The full RR model is described in Appendix 1 of this book by Requardt and Roy (2001), but we will need only a brief summary of its essential features, which are collected in this chapter.

6.1. History of the RR model

Recent developments in quantum physics (quantum gravity and string theory) have raised questions about the basic concepts of spacetime and causality at the smallest (Planck) scale. The Planck length and Planck time are the smallest increments of length and time below which no measurement is possible. The concepts of space, time, and causality lose their meaning below this scale. Spacetime behaves discretely at the Planck scale.

The RR model was created around the year 2000. Requardt had been working on quantum gravity for many years and had published several papers on the discrete structure of spacetime at the Planck scale. He introduced the idea of pregeometry in the following sense. Discrete spatial points transition from a structure of disorder to one of order at the Planck scale, in a process somewhat like a phase transition in a magnetic material, in which the orientations of the magnetic elements change. This kind of phase transition may happen in the case of pregeometric points.

On the other hand, one of us (Roy) had been working on probabilistic geometry, as proposed by Menger, to understand the small-scale structure of spacetime. He wrote to Requardt about this approach, and the thought emerged that the two approaches could be combined. This became the RR model.

6.2. Some optional graph theory

Graph theory is the branch of mathematics dealing with graphs. In this theory, a graph is a collection of nodes (abstract points) connected by links (line segments), like a tinker-toy. A directed graph is a collection of nodes connected by bonds (links with a chosen direction, indicated by an arrow-head). A directed graph may be turned into a graph by erasing the arrow-heads. A subgraph of a graph is a selection of some of its nodes and some of its links. A subgraph is fully connected if every pair of its nodes is connected by a link. A subgraph is maximally fully connected if it is fully connected, but whenever another node from its parent graph is adjoined along with all its links, the enlarged subgraph is no longer fully connected. A clique of a graph is defined as a maximally fully connected subgraph.
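These definitions are easy to experiment with. Here is a brute-force sketch in Python (the tiny four-node graph and the function name are our own illustrations, not part of the RR model; brute force is fine only for very small graphs):

```python
from itertools import combinations

def maximal_cliques(nodes, links):
    """Cliques: fully connected subgraphs that cannot be enlarged.

    `links` is a set of frozenset pairs; brute force, fine for tiny graphs.
    """
    def fully_connected(s):
        return all(frozenset(p) in links for p in combinations(s, 2))
    candidates = [set(s) for r in range(1, len(nodes) + 1)
                  for s in combinations(nodes, r) if fully_connected(s)]
    # maximal: not strictly contained in any larger fully connected subset
    return [s for s in candidates if not any(s < t for t in candidates)]

links = {frozenset(p) for p in [(1, 2), (2, 3), (1, 3), (3, 4)]}
print(sorted(maximal_cliques([1, 2, 3, 4], links), key=sorted))
# → [{1, 2, 3}, {3, 4}]
```

Note that the singleton {4} and the pair {1, 2} are fully connected but not cliques, since each sits inside a larger fully connected subgraph.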

Abraham and Roy 49

6.3. Dynamical cellular networks

A dynamical cellular network is a system similar to a neural network consisting of:

• a directed graph,
• a natural number (a positive or zero integer), the node-state, attached to each node,
• a label, −1, 0, or +1, called the bond-state, attached to each bond,
• a counter, which ticks off increments of "network time", and
• rules according to which all node-states and bond-states change with each tick.

We conceptualize such a system as a flow diagram. Each node is envisioned as holding a quantized amount of information or 'charge' that changes, step-by-step, with a ticking clock that is part of the model and keeps track of (discrete) microtime. With each tick of the clock, quanta of information flow through each bond, like tennis balls through a pipe. Specifically, if a bond connects a node A to a node B and has bond-state +1, then, at the tick, information will move from A to B. The quantity of information at A will decrease, while that at B will increase, by discrete amounts that are specified by the rules of the model. Similarly, if the bond-state is −1, then information will flow from B to A. And in case the bond-state is 0, no information will flow. Finally, with each tick of the network clock, not only does information flow according to the rules, but when the flows are finished, all of the bond-states change according to another set of rules. The two sets of rules are the primary data of the model, and are spelled out in detail in the next chapter.
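As an illustration only, one such tick might be sketched in Python as follows (the three-node network, the names, and the one-quantum-per-tick flow rule are our own assumptions, much simpler than the rules spelled out in the next chapter):

```python
# A toy tick of a dynamical cellular network: nodes hold integer "charges",
# and each bond moves one quantum per tick in the direction of its bond-state.
def tick(node_states, bonds):
    """bonds: dict mapping (i, k) -> bond-state in {+1, 0, -1}."""
    new = dict(node_states)
    for (i, k), J in bonds.items():
        if J == +1:        # information flows i -> k
            new[i] -= 1
            new[k] += 1
        elif J == -1:      # information flows k -> i
            new[i] += 1
            new[k] -= 1
    return new             # bond-state 0: no flow

states = {"A": 3, "B": 0, "C": 1}
bonds = {("A", "B"): +1, ("B", "C"): -1, ("A", "C"): 0}
print(tick(states, bonds))  # → {'A': 2, 'B': 2, 'C': 0}
```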

Here is an example of the kind of rule that might be encountered. For any two nodes of the network, say Susan and George, if there is a bond (directed link) from Susan to George with bond-state +1, or if there is a bond from George to Susan with bond-state −1, and also Susan’s node-state (wealth) is greater than George’s, then Susan will give all her wealth to George.

At any moment in the history of the network, a graph may be constructed by erasing all bonds carrying bond-state zero, and replacing all remaining bonds by links. Thus the network is shadowed by a sequence of graphs, which we call simply the graphs of the network.

In complex dynamical systems theory there are many kinds of models similar to dynamical cellular networks. For example, there are cellular automata, a very narrow class of models introduced by von Neumann and Ulam around 1950. These have a lattice of identical discrete dynamical systems, each with a finite number of states, and connections to nearest neighbors only. Then there are spin networks, introduced by Penrose in 1971. These are directed graphs with three links at each node, and a weight (a positive integer) on each link. Finally, we may mention graph dynamical systems, which emerged around 2000 to model biological networks and epidemics in social networks. These have a graph of nodes with finite states, along with dynamical rules for updating the states with each tick of the clock, depending on the states of nearby nodes. A graph dynamical system is thus a generalized cellular automaton, and a dynamical cellular network is a generalized graph dynamical system.

50 The Digital Akasha

6.4. Back to the RR model

The RR model is a two-level system, comprising two dynamical cellular networks. The model describes how macroscopic spacetime, or its underlying mesoscopic substratum, emerges from a more fundamental concept, a fluctuating cellular network around the Planck scale. Geometry emerges from a purely relational picture à la Leibniz. The discrete structure at the Planck scale consists of elementary nodes which interact, or exchange information, with each other via bonds that play the role of irreducible elementary interactions. Essentially, the RR model works as follows.

The microscopic level, QX, is a dynamical cellular network of nodes and bonds. The macroscopic level, ST, which self-organizes from QX, is another cellular network of nodes called supernodes and bonds called superbonds. The supernodes of ST are cliques of the underlying graphs of QX. The system of RR ends with a metric space, that is, a geometric space endowed with a ruler for measuring distances between points.

Here we will briefly describe the condensation process, by which the ST (spacetime) network is derived from the QX (quantum vacuum) network.

6.5. The Condensation Process

This process creates the ST universe from the submicroscopic QX network, which is fluctuating in its own microtime scale, outside of ordinary space and time. With each microtime tick of the network clock, the QX network is updated. We now imagine that after a rather large and perhaps variable number of these microtime ticks, the state of the universe is to be updated, or recreated, to a new state that we will call an occasion. The process of creating an occasion from the activity of the QX network we call condensation, following the early philosophers. The ongoing condensation process and its sequence of occasions create space, macrotime, and the spacetime history of objects moving through spacetime, that we experience as human consciousness.

We may never know the details of the condensation process. But we will now attempt an abstract outline. Our goal now is to envision a process with the digital QX network as input, and the analog universe as we know it as output: the condensation process.

First of all, we may assume that although only one occasion at a time is allowed in the universe, all prior occasions have been memorized. These data may be needed for the algorithmic process of condensation.

Our experience of the computer-graphic creation of simulacra, such as science-fiction films and animated videogames, provides some guidance. Objects, once created, may be placed (or, as we say, instanced) in spacetime in sequential occasions merely by giving the spacetime coordinates of a central point, and updating such attributes as relative size, color, texture, temperature, biochemical concentrations, or what-have-you.

It would be very convenient if we could condense spacetime once and for all, and then use it repeatedly for successive occasions. However, no matter whether we follow the paradigm of general relativity, that of process physics, or some other upstart coming down the line in the future, we will have to deal with the algorithmic evolution of the spacetime geometry. So the emanation of the spatial substrate must be repeated in each condensation, and must be ongoing as we speak. We have modeled this construction as a two-step process. This is the important innovation of the RR model.

First, we posit a protospace, a discrete (digital) space with characteristics of macroscopic three-dimensional space. It is another dynamical cellular network, the ST network. Like the QX network from which it is algorithmically derived, the ST network changes with every tick of the microtime clock. The algorithm is easily understood in the context of graph theory. The nodes of the ST network are cliques, groups of nodes of the QX network.

In the second step of the RR model, a continuous geometry is derived from the ST network by a smoothing process.72 In this process, the ST nodes are regarded as fuzzy lumps of space, replacing the usual notion of points of Euclidean space. These fuzzy lumps were proposed in RR as the active sites of the quantum vacuum, at which elementary particles appear and disappear in particle/antiparticle pairs.

This concludes our summary of the RR model. In the following chapter we will present a modification of this process that we call the AR model. In this process, the ST network is endowed with a sort of geometry, or pseudogeometry, in which two fuzzy lumps are considered close to each other if they have a substantial overlap when seen in the ST context. Relatively more overlap translates as closer together in space.

We then embed the ST network into Euclidean three-space as isometrically as possible. That is, we try to position the fuzzy lumps as points in Euclidean space so that their Euclidean distance approximates their pseudogeometrical distance (or fuzzy overlap) as closely as possible, using a neural network sort of approximation procedure.

72See (Requardt and Roy, 2001), which is reproduced as Appendix 1 here.


6.6. Afterword on Process Physics

It seems that the search for unity in modern physics has encountered difficulties, and some experts suspect it is at a dead-end.73 One radical alternative to the current approach is process physics.74 Only time will tell which path will lead to victory, and a theory of everything. However, we cannot avoid pointing to some similarities between our model and process physics. These include: a model outside of space and time, two kinds of time (both discrete), and an ultimate reality that is somewhat like a neural network. According to Reginald Cahill, a leading exponent,

In process physics and process philosophy reality is a succession of distinct temporal states, actual occasions, where perceivable and/or detectable aspects of reality are those long-lived states that persist because they are protected from immediate decay and dissipation by their fractal topological structure, and their ‘laws’ of global time evolution are emergent and not imposed. Because we have an intrinsic non-local pattern recognition system, with innovation enabled by the noise implicit in the limit to self-referencing, we see that reality is analogous to the operation of neural networks, that it is mind-like.75

It is now time to encounter our QX network and the AR process in detail, with graphics.

73 (Smolin, 2006), (Woit, 2006)
74 (Cahill, 2006, 2008)
75 See (Cahill, 2008; p. 123) and (Stapp, 2007; Ch. 13).


Chapter 7. The AR Model

Our model for the quantum vacuum, the AR model, departs only slightly from the RR model. It is a simplified version of the RR model. We now summarize this model, and also illustrate it with computer graphics. This chapter is adapted from our first joint paper.76 Further technical details are given in Appendix 3.

As in the RR model of the preceding chapter, we will have macroscopic spacetime, ST, emerging from a more fundamental concept, a dynamical cellular network, QX, which is outside of space and time. The system of RR ends with a metric space, but we follow a different method to advance to a macroscopic cellular network embedded in ordinary, flat, three-dimensional Euclidean space. We would like to obtain an isometric embedding, that is, a mapping of our macroscopic cellular network into Euclidean space that preserves the distances between nodes, but that is mathematically impossible in general. Even though an exact isometric embedding is not possible, we will try to approximate one using neural network technology.

Agent-based modeling is a new style of computer programming, suitable for modeling dynamical networks, cellular membranes, and complex dynamical systems in general. There are several programming environments for agent-based modeling, and we have used one of these, NetLogo,77 to create a computer simulation of a simplified version of the RR model, that we call the AR model. This simulation helps to understand the action of the model, and we include in this chapter some monochrome graphics created by the simulations. The NetLogo models are made available on our website where anyone may run them as applets, and we encourage this as a supplement to this text.78

7.1. An outline of the AR process

So, we now describe a NetLogo model of spacetime that self-organizes from a submicroscopic cellular network. Here is an outline of the five-step process, along with the math concepts required.

1. We begin with a dynamical cellular network, QX, with its cellular automaton-like dynamics, as described in the RR model.

2. Recall that QX consists of nodes connected by bonds (directed links, or arrows). If we drop the arrow-heads and the bonds with bond-state zero, we have a graph. As in the RR model, the process leading from QX to ST at a given microtime proceeds from the graph G of the dynamical cellular network QX.

76 Abraham and Roy, 2006.
77
78


3. In our AR model we interpolate an extra step. A permutation is a reordering of the ordered set (1,2,3,...,n) for some positive integer, n. For example, the sequence (ordered tuple) (1,3,2,4) is a permutation of the sequence (1,2,3,4). Permutations are much studied in the branch of mathematics called combinatorics, and are very useful in graph theory as well as other branches of math. Any permutation may be represented as a graph. Simply place the index sequence (1,...,n) around a circle, in clockwise order, starting at the top. Given any two of these indices, say A and B, with A preceding B in the original order, draw an undirected link from A to B only if they occur in reverse order in the permutation. The result is called a permutation graph. Each permutation has a unique permutation graph, and vice versa. And now, back to our AR process. Rather than defining the emergent supernodes directly as the cliques of the graph G of QX, we derive from G its permutation graph, which is made as follows. Count the nodes of G. That is, list the nodes in an arbitrary order, assign the number 1 to the first one in this ordering, 2 to the second one, and so on. Now indicate the nodes in order around a circle, and fill in the nondirected links from the data of the graph, G.

4. We now define the supernodes of the emergent ST as the cliques of the permutation graph of P, rather than those of G. The purpose of this extension is to achieve a manageable computational task. While the computation of the cliques of a general graph is very difficult, it is relatively easy to compute the cliques of a permutation graph.

5. Spatial geometry is going to evolve from the dynamics of the QX network. For the emergence of spatial organization we use a neural network approach, based on the differences of finite sets, rather than the random metric of RR based on fuzzy sets.
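The permutation-graph construction in step 3 can be sketched in Python (the function name and the 1-based position indexing are our own conventions; positions i < j are linked exactly when their entries are inverted):

```python
from itertools import combinations

def permutation_graph(perm):
    """Undirected links of the permutation graph: positions i < j (1-based)
    are linked exactly when their entries are inverted, perm[i-1] > perm[j-1]."""
    n = len(perm)
    return {(i, j) for i, j in combinations(range(1, n + 1), 2)
            if perm[i - 1] > perm[j - 1]}

print(sorted(permutation_graph((3, 2, 1, 6, 5, 4))))
# → [(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6)]
```

The identity permutation has no inversions, so its permutation graph has no links at all.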

And now for some details.

7.2. The QX model

We consider a set of nodes. The number of nodes in a serious simulation would be astronomical, but for the sake of illustration we will take a small number, such as 6. In general, we let N denote the number of nodes. Also, we assume that the nodes are given an arbitrary ordering, (A1,A2,...,AN), for convenience in describing (and programming) the model. The subscripts 1, 2, ..., N are called indices.


Nodes have node-states, which are interpreted as quantities of information. That is, each node has an attribute, its node-state, which is an integral multiple of a small positive number, the quantum of information. Choose an index number, say i, and consider the i-th node, Ai. Then we will let si denote the node-state (information storage) of Ai.

Now choose another index number, say k, with i < k, and consider the two nodes, Ai and Ak. Then Ai precedes Ak in the given ordering, and we have a bond (that is, a directed link) directed from Ai to Ak. We will denote this bond by bik. And each bond has a bond-state, Jik, which is +1, 0, or −1. The bond-state may be interpreted as outgoing, off, or incoming, respectively.

Let sik denote the difference between the information si stored at the node Ai, and the information sk stored at the node Ak. That is, sik = sk − si. Sometimes we call these node-diffs.

In this approach, the bonds are information pipelines, and the bond-states are switches which can be switched on, off, or reversed. The wiring diagram, the pure geometry of the network, is an emergent, dynamical property and is not given in advance. Consequently, the nodes and bonds are not arranged in any regular way, and have no fixed near/far relations.

We will also have occasion to refer to the node-weight of a node. This is defined as the number of bonds connecting our node, Ai, to any other node, for which the bond-state is not zero. It will be denoted by wi.

Local dynamical law

The internal node and bond states are to be updated, in discrete steps of clock microtime, according to a set of rules. The rules are the same as those of the RR model, briefly mentioned in Section 6.3 above. While various local dynamical rules might be contemplated, we are going to use just one set of rules, which is given in Definition 2.1 of the RR model.79 Here we will paraphrase Definition 2.1. Assume two critical parameters are given, 0 ≤ λ1 ≤ λ2. Then these are the rules.

• Each node-state (information store) is increased by the net amount of incoming information from all its bond neighbors.
• Each bond-state, Jik,
  – is unchanged if the node-state at Ai is equal to that at Ak (sik = 0),
  – becomes +1 if the difference is positive but not too much so (0 < sik < λ1),
  – becomes −1 if the difference is negative but not too much so (−λ1 < sik < 0),
  – becomes 0 if the difference of the node-state at Ai and that at Ak is too large (sik > λ2 or sik < −λ2),
  – becomes +1 if Jik is not 0 and the difference is medium positive (λ1 < sik < λ2),
  – becomes −1 if Jik is not 0 and the difference is medium negative (−λ2 < sik < −λ1),
  – becomes 0 if Jik = 0 and the difference is medium positive or negative.

79 See Appendix 1 for the precise specifications.

Of course, we must have some initial conditions, si(0) and Jik(0) in order to begin a dynamical trajectory of the cellular network.
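A Python sketch of one tick under these rules may be helpful (the tiny example network is our own, and the boundary cases where sik equals ±λ1 or ±λ2 are resolved by an arbitrary choice here, since the paraphrase above leaves them to Definition 2.1):

```python
def update(states, bonds, lam1, lam2):
    """One microtime tick of the local dynamical law.

    states: {node: integer information store}
    bonds:  {(i, k): bond-state in {+1, 0, -1}}, with i < k.
    Boundary cases (s_ik exactly at the lambdas) are our own choice.
    """
    s = dict(states)
    for (i, k), J in bonds.items():       # step 1: information flows
        s[i] -= J                         # J = +1 moves one quantum i -> k
        s[k] += J                         # J = -1 moves one quantum k -> i
    new_bonds = {}
    for (i, k), J in bonds.items():       # step 2: bond switching
        d = s[k] - s[i]                   # the node-diff s_ik
        if d == 0:
            new_bonds[(i, k)] = J                       # unchanged
        elif abs(d) >= lam2:
            new_bonds[(i, k)] = 0                       # difference too large
        elif abs(d) < lam1:
            new_bonds[(i, k)] = 1 if d > 0 else -1      # small difference
        else:                                           # medium difference
            new_bonds[(i, k)] = (1 if d > 0 else -1) if J != 0 else 0
    return s, new_bonds

s, b = update({1: 2, 2: 0, 3: 1}, {(1, 2): 1, (2, 3): -1, (1, 3): 0}, 2, 5)
print(s, b)  # → {1: 1, 2: 2, 3: 0} {(1, 2): 1, (2, 3): -1, (1, 3): -1}
```

Iterating `update` from random initial states and bond-states gives a toy version of the trajectories displayed below.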

Graphical displays

Our model will begin with random values for the node-states and bond-states, and then evolve with discrete steps of clock microtime according to the rules above. The node-states, si, node-weights, wi, and bond-states, Jik, are changing with each tick of the clock. Our computer simulations will display these data, for every tick of the clock, in a set of rapidly changing graphical displays.

Our first display will show the instantaneous bond-states of QX, Jik, which take on only one of the three values +1, 0, −1. Note that there are no bonds bik(t) having i = k, as they would connect a node to itself. Therefore there are no bond-states Jii to display. Also, we only have bonds bik for i < k, so we only need to display the bond-states Jik for i < k. Hence the values Jik(t) we need to display comprise what mathematicians call an upper semi-diagonal matrix of size N × N. Within this triangular display, we will indicate the three bond-state values with the color code: green for +1, red for −1, and yellow for 0.

We use the diagonal of the triangular matrix to show the node-states with colors: red, orange, yellow, or green, for decreasing values of the node-state, si, which is the current charge on the i-th node. Alternatively, we may show the node-weights on the diagonal. All this is shown in Figure 7.1, which is a screen shot of a NetLogo simulation.

A second display shows the node-diffs in a convenient color code, above the diagonal, and the node-weights on the diagonal.

A third display shows the digraph as follows. For any (i,k), i ≠ k, the corresponding position in the display is illuminated if there is a directed link from the i-th node to the k-th.

The fourth and final display is the simple undirected graph underlying the digraph, shown as a symmetric matrix.

Figure 7.1: The NetLogo graphics window showing bond-states and node-weights. Note that the indices (i,j) are coordinates in the display, with the first index, i, increasing horizontally from left to right, and the second index, j (recall i < j), increasing vertically from bottom to top. The diagonal elements, (i,i), run from the lower left to the upper right corners. The colors (which may be seen in the online computer simulations) appear here as shades of gray.

7.3. The ST model

The process by which the ST network self-organizes from QX, as described in RR, uses, as supernodes, the cliques of the graph G that underlies the dynamical cellular network. As described above, we are going to modify the prescription of RR by the addition of an intermediate step, the permutation graph, P of G.

The supernodes

Recall that the node-weight, wi, of the i-th node is the number of its adjacent nodes, that is, the number of bonds attached to it. Next, we form, for the i-th node, the pair (i,wi), and collect all of these in a sequence of pairs, A. Now we sort this sequence of pairs in order of decreasing weights, obtaining a new sequence of pairs, B. Finally, from B, we extract the sequence of first members, obtaining the N-permutation, P. This is a reordering of the ordered set, (1,...,N). We may now easily compute the cliques of the permutation graph of P as the supernodes for the ST network.
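In code, this sorting construction is a one-liner (the sample weights and the tie-breaking rule for equal weights are our own assumptions; the text does not specify how ties are broken):

```python
def weight_permutation(weights):
    """Given node-weights w_1..w_N, return the permutation P obtained by
    sorting node indices by decreasing weight (ties broken by index, our
    own assumption)."""
    pairs = [(i + 1, w) for i, w in enumerate(weights)]  # sequence A
    pairs.sort(key=lambda p: (-p[1], p[0]))              # sequence B
    return tuple(i for i, _ in pairs)                    # the permutation P

print(weight_permutation([2, 5, 3, 5]))  # → (2, 4, 3, 1)
```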

One may object that the cliques of the graph of P are not intuitively motivated, but we feel that they are at least as meaningful as the cliques of G. In fact, if we were to try to identify the cliques of G by hand, we would probably start with the nodes of highest weight.

Our NetLogo model includes a button "show permutation" that prints out, when pressed at time t, the permutation, P(t). One may then export this to an external program, such as Combinatorica, to compute its cliques, and then submit these to a further NetLogo model (or self-organizing map software) to obtain the ST model.

The clique computation

The cliques of a permutation graph are just the maximal decreasing subsequences of its permutation, which may be found by inspection, or by software such as Combinatorica. We explain by considering a few examples. Here we will follow (Pemmaraju, 2003; pp. 69-71) closely, except that we use parentheses rather than brackets for vectors, that is, sequences of natural numbers.

Example 1

Let π be the permutation (6,5,4,3,2,1) of the sequence (1,2,3,4,5,6). Then the inversion vector of π is the 5-vector v = (5,4,3,2,1). The permutation graph of π, Gπ, consists of the six nodes with a link from i to j only if they are inverted, that is, i < j while π(i) > π(j). In this case, all nodes of Gπ are linked: 6 · 5/2 = 15 links.

In (Pemmaraju, 2003), a clique of a graph is a subset of vertices which are totally connected. We say a clique is maximal-size if no node may be adjoined without destroying the clique property of total connection. In (Requardt, 2001), a clique is always maximal-size, and we shall use this convention throughout. So in this example, there is just one clique: the entire graph is totally connected. The unique clique is the set, {1,2,3,4,5,6}. This is a set of nodes (indices) of Gπ, not of values of the permutation, π.

Example 2

Let π be the permutation (3,2,1,6,5,4). Then the permutation graph, Gπ, has six links, for the inversions: (1,2) as π(1) = 3 > π(2) = 2, and similarly (2,3), (1,3), (4,5), (5,6), and (4,6). There are two cliques, each of the same size, 3, which are disjoint. The permutation graph is the disjoint union of the two cliques, {1,2,3} and {4,5,6}. Note that the cliques of Gπ correspond to maximal decreasing subsequences of π, and these are observable in reading π from left to right. It is easiest to reverse the sequence of π, and read its maximal increasing subsequences. In this case, Reverse(π) = (4,5,6,1,2,3), from which we read immediately the two cliques, {4,5,6} and {1,2,3}.

Example 3

Let π be the permutation (3,6,2,5,1,4). In this case, Reverse(π) = (4,1,5,2,6,3), from which we read the maximal increasing subsequences (4,5,6), (1,5,6), (1,2,6), and (1,2,3). So here there are four cliques, {4,5,6}, {1,5,6}, {1,2,6}, and {1,2,3}, two of which are the cliques of Example 2.

Example 4

Let π be the permutation (4,1,2,3,6,5). In this case, Reverse(π) = (5,6,3,2,1,4), from which we read immediately the four cliques, (5,6), (3,4), (2,4), and (1,4).
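These examples can be checked mechanically with a brute-force sketch in Python (our own illustration, not the Combinatorica routines; it reports each clique by the values it carries, as the examples do, and brute force is fine only for small n):

```python
from itertools import combinations

def perm_cliques(perm):
    """Cliques of the permutation graph of `perm`, reported by their values.
    Brute force over position subsets; fine for small n."""
    n = len(perm)
    def is_clique(pos):
        # positions i < j are linked iff their entries are inverted
        return all(perm[i - 1] > perm[j - 1]
                   for i, j in combinations(sorted(pos), 2))
    cands = [set(c) for r in range(1, n + 1)
             for c in combinations(range(1, n + 1), r) if is_clique(c)]
    maximal = [c for c in cands if not any(c < d for d in cands)]
    return sorted(sorted(perm[i - 1] for i in c) for c in maximal)

print(perm_cliques((3, 2, 1, 6, 5, 4)))  # Example 2 → [[1, 2, 3], [4, 5, 6]]
print(perm_cliques((4, 1, 2, 3, 6, 5)))  # Example 4 → [[1, 4], [2, 4], [3, 4], [5, 6]]
```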

The superbonds and weights

Given a permutation arising from our simulation of the QX cellular network, we are going to define its cliques as our supernodes, that is, the nodes of our ST digraph. So we now need to connect these clique supernodes with bonds, the superbonds of our scheme. It is here that we diverge from RR, and follow a new path to precise sets and weights of entanglement, rather than fuzzy sets and random metric distances. We will use Example 4 above to illustrate the concepts.

Given a finite set of natural numbers, S, define its span by the interval of natural numbers,

span(S) = [min(S),max(S)],

and its length as the natural number,

length(S) = card(span(S)) = max(S) − min(S) + 1.

Note that the empty set has length zero.

Next, given two finite sets of natural numbers, S and T, define their lap by the set,

lap(S,T) = span(S) ∩ span(T),

and their lapsize by the natural number,

lapsize(S,T) = card(lap(S,T)),

that is, the cardinality of their lap. Note that if the two sets are disjoint, then their lapsize is zero.

Similarly, we define their span by the set,

span(S,T) = span(S ∪ T),

and their spansize by the natural number,

spansize(S,T) = card(span(S,T)).

Finally, we define the weight of entanglement of the pair (S,T) (not both empty) by the ratio,

weight(S,T) = 1 − lapsize(S,T)/spansize(S,T).

Note that the weight of two sets with disjoint spans is one. Also, if span(S) = span(T), then weight(S,T) = 0.

We may wish at this point to modify the definition of weight in the case of two sets with disjoint spans, so that the weight may be greater than one, and actually measure the distance between the two spans.

Now let’s compute the weights of pairs of the cliques of Example 4 above. Let K1 = (5,6), K2 = (3,4), K3 = (2,4), and K4 = (1,4). We will compute the symmetric matrix W = [wij = weight(Ki,Kj)]. Note that all the diagonal elements are zero.

We begin with w12. But this is one, as the spans of K1 and K2 are disjoint. Similarly with w13 and w14, so we have only three weights to compute from the definitions. Here we go:

w23 = weight(K2,K3) = 1 − lapsize(K2,K3)/spansize(K2,K3)
lap(K2,K3) = span(K2) ∩ span(K3) = {3,4} ∩ {2,3,4} = {3,4}
lapsize(K2,K3) = card(lap(K2,K3)) = card({3,4}) = 2
spansize(K2,K3) = card(span(K2 ∪ K3)) = card({2,3,4}) = 3

so finally, w23 = 1 − 2/3 = 1/3.

Similarly, we compute w24:

lap(K2,K4) = span(K2) ∩ span(K4) = {3,4} ∩ {1,2,3,4} = {3,4}
lapsize(K2,K4) = card(lap(K2,K4)) = card({3,4}) = 2
spansize(K2,K4) = card(span(K2 ∪ K4)) = card({1,2,3,4}) = 4

so finally, w24 = 1 − 2/4 = 1/2.

Finally, we compute w34:

lap(K3,K4) = span(K3) ∩ span(K4) = {2,3,4} ∩ {1,2,3,4} = {2,3,4}
lapsize(K3,K4) = card(lap(K3,K4)) = card({2,3,4}) = 3
spansize(K3,K4) = card(span(K3 ∪ K4)) = card({1,2,3,4}) = 4

so finally, w34 = 1 − 3/4 = 1/4.

Displaying all our weights in matrix form, we have:

0     1     1     1
1     0     1/3   1/2
1     1/3   0     1/4
1     1/2   1/4   0
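These definitions, and the matrix above, can be verified with a short Python sketch (exact fractions are used for clarity; the function names mirror the text's terms but are otherwise our own):

```python
from fractions import Fraction

def span(S):
    """span(S): the interval of naturals from min(S) to max(S), as a set."""
    return set(range(min(S), max(S) + 1))

def weight(S, T):
    """weight(S,T) = 1 - lapsize(S,T)/spansize(S,T), per the definitions."""
    lapsize = len(span(S) & span(T))   # card(lap(S,T))
    spansize = len(span(S | T))        # card(span(S,T))
    return 1 - Fraction(lapsize, spansize)

K = [{5, 6}, {3, 4}, {2, 4}, {1, 4}]   # the cliques K1..K4 of Example 4
W = [[weight(Ki, Kj) for Kj in K] for Ki in K]
for row in W:
    print([str(w) for w in row])
# → ['0', '1', '1', '1']
#   ['1', '0', '1/3', '1/2']
#   ['1', '1/3', '0', '1/4']
#   ['1', '1/2', '1/4', '0']
```

No special case is needed for the diagonal: when S = T the lap equals the span, so the weight is automatically zero.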

7.4. The spatial organization

The above simulations are preliminary to the emergence of spatial organization. In the RR framework, the emergence of spatial organization has been formulated as a random metric space.80 Instead, we will seek an isometric embedding of our cliques and their entanglement weights. We now have our cliques and weights, but notice that the triangle inequalities are not satisfied. These are required for a geometry, demanding that the distance from a point C to a point D, plus the distance from D to another point E, is not less than the distance directly from C to E.

The isometric embedding problem

Even were the distances to satisfy the triangle inequalities, an isometric embedding into a Euclidean space of a given dimension might not be possible. For example, consider the pyramid or tetrahedron, the simplest of the Platonic solids. This is a system of four nodes with all six weights equal. We may embed it isometrically in Euclidean 3-space, but not in the plane. In our case, we may have a cellular system with millions of nodes that we wish to embed as isometrically as possible in 3-space, so we must adjust a random embedding by a dynamical process.

So we propose to regard the nodes and weights as a neural network, and try to embed the nodes in Euclidean space (of dimension three) such that the distances approximate the weights as closely as possible. One technique for this process is the neural network method of self-organizing maps.81 A simpler method, easily implemented in NetLogo, is a multidimensional variant of least squares.82 We begin with a random map of the nodes into Euclidean space. Then we sum up the squares of the differences between the internodal distances and the weights, and move the node positions in 3-space so as to minimize this sum of squares.

The method of least squares (optional)

We will illustrate this simpler method for the special case described in detail in the preceding section. This case has four nodes. As above, let w12 = w13 = w14 = 1, w23 = 1/3, w24 = 1/2, and w34 = 1/4. We are going to try to embed these four nodes in the Euclidean plane, as isometrically as possible. We begin with an arbitrary map of the nodes into the plane, assuming only that all the positions are distinct.

Let pi = (xi,yi) denote the current position of node Ki in the Cartesian plane, i = 1,2,3,4,

and dij the Euclidean distance between Ki and Kj. Then there is a contribution eij = (dij − wij)² to the square error we wish to minimize. Let E denote the total error, that is, the sum of the six pair errors, eij, for the pairs ij = 12, 13, 14, 23, 24, 34. We regard E as a function of the eight variables, (x1,y1,...,x4,y4). We will adjust the positions so as to minimize this function, that is, to find the most nearly isometric positions. In fact, we will integrate the gradient of E by the Euler algorithm.83

So we must now compute symbolically the partial derivatives of E with respect to each of the eight coordinate variables. Note that E is the sum of six square terms. For any one of the eight coordinate variables, three of the six square terms yield zero. For example, the square term involving p1 and p2, e12 = (d12 − w12)², has nonzero partial derivatives only with respect to the four variables x1, y1, x2, y2.

The partial of e12 with respect to x1 is

∂x1 e12 = ∂x1 (d12 − w12)² = 2(d12 − w12) ∂x1 d12

while

∂x1 d12 = ∂x1 [(x1 − x2)² + (y1 − y2)²]^(1/2) = (x1 − x2)/d12

and thus

∂x1 e12 = 2(d12 − w12)(x1 − x2)/d12 = 2(1 − w12/d12)(x1 − x2)

as d12 ≠ 0. Note that if d12 = w12, which is the result we would like, then ∂x1 e12 = 0. Likewise if x1 = x2.
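The analytic partial derivative can be verified numerically. The sketch below (Python; the positions and weight are illustrative values, not taken from the text) compares the formula 2(1 − w12/d12)(x1 − x2) against a central finite difference of e12.

```python
import math

def e12(x1, y1, x2, y2, w12):
    # Pair error e12 = (d12 - w12)^2, with d12 the Euclidean distance.
    d12 = math.hypot(x1 - x2, y1 - y2)
    return (d12 - w12) ** 2

def de12_dx1(x1, y1, x2, y2, w12):
    # Analytic partial: 2(1 - w12/d12)(x1 - x2), valid for d12 != 0.
    d12 = math.hypot(x1 - x2, y1 - y2)
    return 2 * (1 - w12 / d12) * (x1 - x2)

# Illustrative positions and weight (an assumption, not from the text).
x1, y1, x2, y2, w12 = 0.3, 0.7, 1.1, -0.2, 1.0
h = 1e-6
numeric = (e12(x1 + h, y1, x2, y2, w12)
           - e12(x1 - h, y1, x2, y2, w12)) / (2 * h)
analytic = de12_dx1(x1, y1, x2, y2, w12)
print(abs(numeric - analytic) < 1e-6)  # True
```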

All of the partial derivatives of E with respect to the eight coordinates are very similar to this first case; we must only take care with the signs.

Thus we find the eight new coordinates, (X1,...,Y4), by the Euler algorithm applied to the negative gradient of the error, E, as follows. For the first of the eight coordinates of the adjusted configuration,

X1 = x1 − (∂x1 E)∆t

where ∆t is chosen suitably small. Using the above template for all three nonzero terms,

∂x1E = ∂x1(e12 + e13 + e14)

we have,

X1 = x1 − 2{+(1−w12/d12)(x1−x2) + (1−w13/d13)(x1−x3) + (1−w14/d14)(x1−x4)}∆t

The other seven adjusted coordinates are found similarly:

Y1 = y1 − 2{+(1−w12/d12)(y1−y2) + (1−w13/d13)(y1−y3) + (1−w14/d14)(y1−y4)}∆t
X2 = x2 − 2{−(1−w12/d12)(x1−x2) + (1−w23/d23)(x2−x3) + (1−w24/d24)(x2−x4)}∆t
Y2 = y2 − 2{−(1−w12/d12)(y1−y2) + (1−w23/d23)(y2−y3) + (1−w24/d24)(y2−y4)}∆t
X3 = x3 − 2{−(1−w13/d13)(x1−x3) − (1−w23/d23)(x2−x3) + (1−w34/d34)(x3−x4)}∆t
Y3 = y3 − 2{−(1−w13/d13)(y1−y3) − (1−w23/d23)(y2−y3) + (1−w34/d34)(y3−y4)}∆t
X4 = x4 − 2{−(1−w14/d14)(x1−x4) − (1−w24/d24)(x2−x4) − (1−w34/d34)(x3−x4)}∆t
Y4 = y4 − 2{−(1−w14/d14)(y1−y4) − (1−w24/d24)(y2−y4) − (1−w34/d34)(y3−y4)}∆t

Notice the pattern of signs: + + +, − + +, − − +, − − −.
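The whole adjustment loop is easy to script. The sketch below is a Python illustration (not the authors' NetLogo implementation); the step size ∆t = 0.02, the iteration count, and the starting positions are illustrative choices. Rather than writing out the eight formulas, it accumulates the pairwise gradient terms, which reproduces the sign pattern automatically, and reports the total square error before and after.

```python
import math

# Weights from the worked example (w12 = w13 = w14 = 1).
W = {(1, 2): 1.0, (1, 3): 1.0, (1, 4): 1.0,
     (2, 3): 1/3, (2, 4): 1/2, (3, 4): 1/4}

def total_error(pos):
    # E = sum over the six pairs of (d_ij - w_ij)^2.
    return sum((math.dist(pos[i], pos[j]) - w) ** 2
               for (i, j), w in W.items())

def euler_step(pos, dt):
    # One Euler step along the negative gradient of E. The term
    # 2(1 - w_ij/d_ij)(x_i - x_j) enters grad_i with a + sign and
    # grad_j with a - sign, giving the pattern + + +, - + +, - - +, - - -.
    grad = {i: [0.0, 0.0] for i in pos}
    for (i, j), w in W.items():
        d = math.dist(pos[i], pos[j])
        for k in range(2):
            g = 2 * (1 - w / d) * (pos[i][k] - pos[j][k])
            grad[i][k] += g
            grad[j][k] -= g
    return {i: tuple(pos[i][k] - grad[i][k] * dt for k in range(2))
            for i in pos}

# Arbitrary distinct starting positions (an illustrative choice).
pos = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.0, 1.0), 4: (1.0, 1.0)}
E0 = total_error(pos)
for _ in range(5000):
    pos = euler_step(pos, 0.02)
print(E0, "->", total_error(pos))  # the error decreases as we descend
```

As noted above, the embedding need not be exactly isometric, so the error settles at a local minimum that may be positive rather than at zero.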

7.5. Possible Implications

The validity of the postulates of geometry at or below the Planck scale was questioned during the development of modern physics in the late twentieth century. It is worth mentioning that Riemann, in 1854, discussed similar issues in connection with the validity of metrical relations in indefinitely small regions.84 Here, we have started from the working hypothesis that a type of cellular network exists at the ultimate level of the universe, from which the usual spacetime emerges. The people working on non-commutative geometry, on the other hand, started from the proposition that space is pointless and that a kind of non-commutative algebra exists at the ultimate level.85 However, they too have discussed the concept of fuzzy space at the Planck scale. We have shown the emergence of spatial organization using agent-based simulations.