
CI, AI, UI or just I?

The term "Collective Intelligence" is sufficiently vague and positive that it is quite easy to agree that of course we want it. We want to be smarter together. 

But we might imply quite different subtexts when we talk with each other about it, and we might make different assumptions. It is probably a good idea to be aware of those differences.

Some would group CI with AI. That would imply that it is constructed, or that mechanisms are constructed that will evolve into the AI, like neural nets or cellular automata. And at some point the AI will become self-aware and we can talk with it. And further down the line it becomes smarter than us, and it might decide to no longer bother to talk with us.

I personally am skeptical of Artificial Intelligence. I'm even more skeptical of the idea of brains as the seat of consciousness. I think it is a little naive to expect that if one constructs a network with a sufficient number of inter-connected nodes, modeled superficially on the cells we see in brains, then, suddenly - a miracle happens - and it becomes self-aware. Oh, there's probably plenty of still undiscovered magic in inter-connected networks of nodes and cellular automata, etc., and we don't really know yet what will be most significant, so we should explore all of it. I'm just saying that it isn't a good idea to bet everything on that one sketchy theory.
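As a concrete aside, here is a minimal, purely illustrative Python sketch of the kind of thing meant by cellular automata: Rule 110, an elementary cellular automaton in which each cell is updated from nothing but itself and its two immediate neighbors, yet the global patterns that emerge are rich enough that the rule is known to be Turing complete. This is just the standard textbook construction, not anything specific to the systems discussed here.

    # Minimal sketch: Rule 110 elementary cellular automaton.
    # Each cell is updated from only itself and its two neighbors,
    # yet the global behavior is complex enough to be Turing complete.

    RULE = 110  # the rule number encodes the 8-entry update table in its bits

    def step(cells):
        """Compute the next generation from the current row of 0/1 cells."""
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single live cell and print a few generations.
    row = [0] * 40 + [1] + [0] * 40
    for _ in range(20):
        print("".join(".#"[c] for c in row))
        row = step(row)

Run it and you'll see irregular, never-quite-repeating triangles growing from a single live cell: trivially simple local rules, surprisingly complex global behavior.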

Another possibility is that we're harnessing something that is already there. The universe is very intelligently constructed, in ways we can't approximate by many orders of magnitude, even if we can do some clever things of our own. Just think of evolution. Here's a process that has continued without fail for billions of years, continuously developing better and better life forms. A fantastic variety of life forms, all representing clever solutions to many, many problems. Some life forms, like us, can even think abstractly and creatively, and seem to have the opportunity to continue and expand evolution consciously, at a higher order. All set in motion by autonomous processes started many billions of years ago. Mind-blowing. It is entirely possible that the whole thing, the universe, or multiverse, already is one giant immortal quantum-entangled 11-dimensional Universal Intelligence. In which case we'd maybe want to tap into that, instead of trying to re-invent the wheel.

A more down-to-earth way of looking at it is that CI is simply us. How well we work together, and how coherent the result is. If we work together in ways that maximize synergy, where things fit well together, and we accomplish more together than apart, then CI is high. 

I believe that by its very nature, an increase of CI can only be a good thing. Because it is collective. It is us operating at a higher level. Contrast that with the AI scenario, where something alien wakes up and might or might not be friendly, might or might not like us. A positive level of CI is by its nature friendly to us. A negative level of CI is destructive and maybe suicidal. But, as it is at all times us, it is never really against us. It couldn't ever decide to continue without us.

It might well be a red herring to pursue a personification for CIs or AIs. You know, the Turing test, where we'd expect a new kind of intelligence to be able to have a conversation indistinguishable from one with a human. It is a bit like the logical mistake often made by religious prophets. Like how Christians cast their God in their own image. Some old man with a grey beard. It might be just as ridiculous to expect a higher order intelligence to be a person, with whom you have idle small talk. We don't need another person, we already have plenty of those, we need higher order intelligence.

The kind of higher order intelligence we most need isn't particularly the self-aware kind. Rather, we need a coherent noosphere to operate in. Simply an environment where the information we need is likely to be easily available to us. Like how you've all noticed that Google makes you seem much smarter. You can find information much faster. You seem more intelligent. But we need something several orders of magnitude more flexible and organized. Where it is very easy to know what you need or want to know, very easy to find the people you need or want to work with, and where the decreased resistance and increased efficiency make bigger things appear to come together effortlessly and "by themselves". That doesn't so much require an AI God who stays up all night figuring things out for us, as it requires instant, frictionless access to huge amounts of reliable information.

We need systems that make us more intelligent. I.e. that increase our collective ability to solve problems, to plan, to learn, to understand complexity. That might have more to do with understanding intelligence than with constructing an intelligence, or waiting for one to appear.

Comments

Thanks Flemming for this in particular "We don't need another person, we already have plenty of those, we need higher order intelligence."

IMHO a higher order intelligence will most likely emerge as us realising that we are not limited by the identity 'person'.

Gratitude for the unfolding shared meaning making that this conversation is making possible.

Right on, Flemming. Re: the AI dilemma, check out Monica Anderson's summary (there is more before that).

http://artificial-intuition.com/tradeoff.html

I'm wondering if you're not confusing intelligence with knowledge. Google endows you with more knowledge. It doesn't make you more intelligent.

Google provides you with information you're looking for, which makes you seem to know more. It augments your knowledge. Which, incidentally, helps you solve problems faster, or to solve problems you otherwise couldn't solve. Others who aren't as fluent with Google as you are might think that you simply have fantastic analytic and problem solving abilities. I.e. you seem more intelligent. It is in part because you borrowed some of the pre-packaged intelligence of others, who shared the results of what they had figured out.

Intelligence and knowledge are of course related to each other, and often intertwined or confused. If you take a standard IQ test, you'll notice that many of the questions require a particular type of knowledge from a certain cultural context. It is hard to test intelligence separately from knowledge. It is maybe even meaningless to separate intelligence from all context.

IMHO I view intelligence in terms of an innate capacity for understanding and using knowledge of varying complexities. It is true that intelligence and knowledge are related, but I'm more comfortable separating them, since knowledge that is external, such as information accessible on Google, can be lost. It would be dangerous to assume an increased personal capacity by virtue of a larger body of knowledge being accessible. What do you think, Flemming?

I'd say that maybe awareness could be seen as innate and distinct from any context. You might as well call it consciousness. Something is paying attention. In that sense, I suppose we could use the word intelligence as well.

But if we're talking about understanding specific situations, using certain kinds of knowledge, solving certain kinds of problems, I'm a lot more skeptical about the existence of an innate ability. You can't just plug a knowledge module into an intelligence. The quality of the structuring of the knowledge is largely what determines how intelligent it is.

Since I do believe in the primordial existence of awareness, there's nothing terribly important one can lose. You might lose bodies of knowledge, and with them the ability to operate intelligently in certain contexts. But, as long as there's still somebody there who can at least pay attention, they can be reassembled as necessary.