# Re: How many differences, categories?

From: Immortalist (reanimater_2000_at_yahoo.com)
Date: 03/25/05

Date: 25 Mar 2005 11:36:15 -0800

Just Playing wrote:
> Immortalist wrote:
> > Just Playing wrote:
> > > Thank you, I saved it on my computer and I will look at it later.
> > > OTOH my concern is mostly with the categorization at the
> > > conceptual, verbal level.
> >
> > > It is more like something that could be tested by asking a group
> > > to describe verbally an image.
> >
> > > It is a translation from one type of perception, visual, to
> > > another, verbal, and back.
> >
> > ......In a sparse distributed network - memory is a type of
> > perception.....The act of remembering and the act of perceiving
> > both detect a pattern in a very large choice of possible
> > patterns....When we remember we recreate the act of the original
> > perception - that is, we relocate the pattern by a process similar
> > to the one we used to perceive the pattern originally.
>
> I look at a pattern as a combination of differences or categories.
> I am wondering how many differences we can pack into a pattern before
> the pattern becomes meaningless?
> JP
>

Learning about systems theory, I stumbled upon the idea that there are
minimal and maximal numbers of nodes a given kind of network can
sustain. Not only that, but changing even a few of the relationships
between particular nodes can alter the entire dynamic.

Here is an example from brain research, noting the degree of arousal
people experience in relation to the complexity (the number and size of
objects) in the visual field of view. This is a clue to interior
decoration and the brain.

There are more examples, but my database search came up with the
following:

Network logic is counterintuitive. Say you need to lay a telephone
cable that will connect a bunch of cities; let's make that three for
illustration: Kansas City, San Diego, and Seattle. The total length of
the lines connecting those three cities is 3,000 miles. Common sense
says that if you add a fourth city to your telephone network, the total
length of your cable will have to increase. But that's not how network
logic works. By adding a fourth city as a hub (let's make that Salt
Lake City) and running the lines from each of the three cities through
Salt Lake City, we can decrease the total mileage of cable to 2,850 or
5 percent less than the original 3,000 miles. Therefore the total
unraveled length of a network can be shortened by adding nodes to it!
Yet there is a limit to this effect. Frank Hwang and Ding-Zhu Du,
working at Bell Laboratories in 1990, proved that the best savings a
system might enjoy from introducing new points into a network would
peak at about 13 percent. More is different.
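The hub effect can be sketched numerically. Here is a minimal Python sketch using made-up coordinates for the three cities (not real geography): it compares a direct spanning tree against routing everything through one optimally placed hub, the Fermat point, located by Weiszfeld iteration.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mst_length(pts):
    # Minimum spanning tree of three points = the two shortest pairwise edges.
    edges = sorted(dist(a, b) for a, b in
                   [(pts[0], pts[1]), (pts[0], pts[2]), (pts[1], pts[2])])
    return edges[0] + edges[1]

def steiner_length(pts, iters=2000):
    # Geometric median (Fermat point) via Weiszfeld iteration; for three
    # points this is the best possible location for a single hub.
    x = sum(p[0] for p in pts) / 3.0
    y = sum(p[1] for p in pts) / 3.0
    for _ in range(iters):
        wx = wy = w = 0.0
        for p in pts:
            d = dist((x, y), p)
            if d < 1e-12:  # hub landed exactly on a city
                return sum(dist(p, q) for q in pts)
            wx += p[0] / d
            wy += p[1] / d
            w += 1.0 / d
        x, y = wx / w, wy / w
    return sum(dist((x, y), p) for p in pts)

# Hypothetical coordinates standing in for the three cities.
cities = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
direct = mst_length(cities)
hub = steiner_length(cities)
print(direct, hub, 1 - hub / direct)  # the hub total is shorter
```

The roughly 13 percent ceiling is the Du-Hwang result: no configuration of points can save more than 1 - sqrt(3)/2, about 13.4 percent, by adding hubs.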

On the other hand, in 1968 Dietrich Braess, a German operations
researcher, discovered that adding routes to an already congested
network will only slow it down. In what is now called Braess's Paradox,
scientists have found many examples of how adding capacity to a crowded
network reduces its overall production. In the late 1960s the city
planners of
Stuttgart tried to ease downtown traffic by adding a street. When they
did, traffic got worse; then they blocked it off and traffic improved.
In 1992, New York City closed congested 42nd Street on Earth Day,
fearing the worst, but traffic actually improved that day.
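The paradox can be checked with back-of-the-envelope arithmetic. The figures below (4,000 drivers, a fixed 45-minute leg, a congestible leg taking n/100 minutes) are the standard textbook illustration of Braess's Paradox, not data from Stuttgart or New York:

```python
# Classic Braess-paradox arithmetic with illustrative (made-up) travel times.
N = 4000  # drivers traveling from start to end

# Two parallel routes; each combines one congestible leg (n/100 minutes
# when n drivers use it) with one fixed 45-minute leg. The routes are
# symmetric, so at equilibrium the drivers split evenly:
half = N // 2
time_without = half / 100 + 45     # 20 + 45 = 65 minutes per driver

# Now add a free shortcut joining the two congestible legs. Every
# driver's best response is congestible leg -> shortcut -> congestible
# leg, so both congestible legs carry all N drivers:
time_with = N / 100 + 0 + N / 100  # 40 + 0 + 40 = 80 minutes per driver

print(time_without, time_with)     # the new link made everyone slower
```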

Then again, in 1990, three scientists working on networks of brain
neurons reported that increasing the gain (the responsivity) of
individual neurons did not improve each neuron's own signal detection
performance, but it did improve the whole network's ability to detect
signals.

The prime variable Stuart Kauffman played with was the connectivity of
the network. In a sparsely connected network, each node would on
average connect to only one other node, or fewer. In a richly connected
network,
each node would link to ten or a hundred or a thousand or a million
other nodes. In theory the limit to the number of connections per node
is simply the total number of nodes, minus one. A million-headed
network could have a million-minus-one connections at each node; every
node is connected to every other node. To continue our rough analogy,
every employee of GM could be directly linked to all 749,999 other
employees of GM.

As Kauffman varied this connectivity parameter in his generic networks,
he discovered something that would not surprise the CEO of GM. A system
where few agents influenced other agents was not very adaptable. The
soup of connections was too thin to transmit an innovation. The system
would fail to evolve. As Kauffman increased the average number of links
between nodes, the system became more resilient, "bouncing back" when
perturbed. The system could maintain stability while the environment
changed. It would evolve. The completely unexpected finding was that
beyond a certain level of linking density, continued connectivity would
only decrease the adaptability of the system as a whole.
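Kauffman's generic networks were random Boolean networks. The toy version below is my own sketch, not Kauffman's actual model parameters, but it shows the high-connectivity problem directly: flip one node's state and measure how far the disturbance has spread after a few steps. With few inputs per node (K = 1) the damage tends to die out; with many (K = 5) it engulfs the network:

```python
import random

def make_rbn(n, k, rng):
    """Random Boolean network: each node gets k random inputs
    and a random truth table over those inputs."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]  # pack input bits into a table index
        new.append(tables[i][idx])
    return new

def perturbation_spread(n, k, steps=20, trials=30, seed=0):
    """Mean Hamming distance between a run and a copy with one flipped node."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inputs, tables = make_rbn(n, k, rng)
        a = [rng.randint(0, 1) for _ in range(n)]
        b = list(a)
        b[0] ^= 1  # perturb a single node
        for _ in range(steps):
            a = step(a, inputs, tables)
            b = step(b, inputs, tables)
        total += sum(x != y for x, y in zip(a, b))
    return total / trials

for k in (1, 2, 5):
    print(k, perturbation_spread(200, k))
```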

Kauffman graphed this effect as a hill. The top of the hill was optimal
flexibility to change. One low side of the hill was a sparsely
connected system: flat-footed and stagnant. The other low side was an
overly connected system: a frozen grid-lock of a thousand mutual pulls.
So many conflicting influences came to bear on one node that whole
sections of the system sank into rigid paralysis. Kauffman called this
second extreme a "complexity catastrophe." Much to everyone's surprise,
you could have too much connectivity. In the long run, an overly linked
system was as debilitating as a mob of uncoordinated loners.

Somewhere in the middle was a peak of just-right connectivity that gave
the network its maximal nimbleness. Kauffman found this measurable
"Goldilocks" point in his model networks. His colleagues had trouble
believing his maximal value at first because it seemed counterintuitive
at the time. The optimal connectivity for the distilled systems
Kauffman studied was very low, "somewhere in the single digits." Large
networks with thousands of members adapted best with less than ten
connections per member. Some nets peaked at less than two connections
on average per node! A massively parallel system did not need to be
heavily connected in order to adapt. Minimal average connection, done
widely, was enough.

Kauffman's second unexpected finding was that this low optimal value
didn't seem to fluctuate much, no matter how many members comprised a
specific network. In other words, as more members were added to the
network, it didn't pay (in terms of systemwide adaptability) to
increase the number of links to each node. To evolve most rapidly, add
members but don't increase average link rates. This result confirmed
what Craig Reynolds had found in his synthetic flocks: you could load a
flock up with more and more members without having to reconfigure its
structure.

Kauffman found that at the low end, with less than two connections per
agent or organism, the whole system wasn't nimble enough to keep up
with change. If the community of agents lacked sufficient internal
communication, it could not solve a problem as a group. More exactly,
they fell into isolated patches of cooperative feedback but didn't
interact with each other.
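Those "isolated patches" below roughly one connection per agent match the percolation threshold in random graphs. A small sketch (hypothetical sizes, standard Erdős-Rényi construction) measures the largest connected cluster as the average number of links per node varies:

```python
import random
from collections import Counter

def largest_component_fraction(n, mean_degree, seed=0):
    """Largest connected cluster in an Erdos-Renyi random graph,
    as a fraction of the n nodes."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)  # edge probability giving the target mean degree
    parent = list(range(n))

    def find(x):
        while parent[x] != x:  # union-find with path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)

    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

# A spanning giant cluster only appears above ~1 connection per node;
# below that the graph stays broken into small isolated patches.
for c in (0.5, 1.0, 2.0):
    print(c, largest_component_fraction(1000, c))
```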

At the ideal number of connections, the ideal amount of information
flowed between agents, and the system as a whole found optimal
solutions consistently. Even if its environment was changing rapidly,
the network remained stable, persisting as a whole over time.

Kauffman's Law states that above a certain point, increasing the
richness of connections between agents freezes adaptation. Nothing gets
done because too many actions hinge on too many other contradictory
actions. In the landscape metaphor, ultra-connectance produces
ultra-ruggedness, making any move a likely fall off a peak. When too
many agents have a say in each other's work, bureaucratic rigor mortis
sets in. Adaptability conks out into grid-lock. For a contemporary
culture primed to the virtues of connecting up, this low ceiling of
connectivity comes as unexpected news.

We postmodern communication addicts might want to pay attention to
this. In our networked society we are pumping up both the total number
of people connected (in 1993, the global network of networks was
expanding at the rate of 15 percent additional users per month!) and
the number of people and places to whom each member is connected.
Faxes, phones, direct junk mail, and large cross-referenced databases
multiply the connections between each person. Neither expansion
particularly increases the adaptability of our system (society) as a
whole.

>
> > Kevin Kelly, Out of Control, page 18:
> > http://www.kk.org/outofcontrol/ch2-d.html
> >
> > Somehow human noises initiate similar non-verbal/syntactic memories
> > and experiences. If the act of remembering and the act of
> > perceiving both detect a pattern in a very large choice of possible
> > patterns, and when we remember we recreate the act of the original
> > perception (that is, we relocate the pattern by a process similar
> > to the one we used to perceive the pattern originally), then these
> > noises of language become translatable into those experiences.
> >
> > --reanimater
> >
> > transcribe: rewrite or arrange a piece of music for an instrument
> > or medium other than that originally intended.
> >
> > 1. To make a full written or typewritten copy of (dictated
> > material, for example).
> >
> > 2. Computer Science. To transfer (information) from one recording
> > and storing system to another.
> >
> > 3. Music.
> >
> >    1. To adapt or arrange (a composition) for a voice or
> >       instrument other than the original.
> >    2. To translate (a composition) from one notational system to
> >       another.
> >    3. To reduce (live or recorded music) to notation.
> >
> > 4. To record, usually on tape, for broadcast at a later date.
> >
> > 5. Linguistics. To represent (speech sounds) by phonetic symbols.
> >
> > 6. To translate or transliterate.
> >
> > 7. Biology. To cause (DNA) to undergo transcription.
> >
> >
> > > If we create a verbal limitation, as in a specified number of
> > > words for a verbal description, is it possible for another
> > > person who has not seen the image to recognize it from the
> > > verbal description? Do we create new concepts in order to do it?
> >
> > > How do we describe so many differences with a limited number of
> > > words? What is the minimum number of words to create a "perfect
> > > translation"?
> > >
> > > Are the words used for the description understandable to
> > > somebody else?
> > >
> > > JP