Date: 11-27-90 (09:51)    Number: 194 of 200 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

Input -> translation -> output.

Considering how we humans define our environment and thoughts in order to communicate, the above is a nutshell description of AI. If you think about it, the core of AI is "translation." There is what is called natural language processing, but what is a natural language? The languages we use today in human-to-human communication are many, and all are the result of evolution from more basic languages. Computers have their "natural language" too. I don't recall not having to learn English ;-}.

From my own investigation and deep thought regarding AI over several years, I've come to the conclusion that there are constants that have yet to be recognized in order for AI to really evolve. These constants could be considered the atomic structure of intelligence, but they better fit the phrase "atomic structure of translation." As a result of researchers playing the "king of the hill" game, these constants are being overlooked. Instead of developing or creating AI, there should be a better focus on researching the constants, which can only be discovered (rather than created).

The dictionary is one of man's references to what he has previously defined, and it is the agreed use that makes it useful. However, the words "artificial" and "intelligence" in the dictionary contradict the term "artificial intelligence." It is a matter of standards (as is the dictionary) that makes things useful. The field of AI lacks a great deal in standardization, perhaps due to the overlooking of the constants.

Date: 11-29-90 (09:39)    Number: 197 of 200 (Echo)
To: RICHARD CARLSON    Refer#: 196
From: TIM RUE    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

-> Could you give at least one example of a "constant" as you are
-> using the term? (It doesn't have to be a fully detailed
-> description, just a rough idea.)
->
-> My own interest is in post-structuralist and deconstructionist
-> "literary" theory. A major theme in this body of thought is the
-> difficulty of "translation" from one natural language to another. Are
-> you thinking of "translation" in a sense something like that?

Constants: Input is a constant; without it there is nothing to translate. The same with output. These are basic and obvious. Using computers, there are constraints on the format of input and output, such as text and binary (text mode, binary mode, text format, binary format). Perhaps a major constant is that of "pattern = definition," just as in the dictionaries (this is an absolute, otherwise learning is impossible). Actually, you can consider program code to be the definition of the program name (where the definition does indeed contain actions and many other instances of "pattern = definition"). Other constants exist in the process/translation actions and are not as obvious (but are perhaps just as simple once recognized). Recognizing the constants is a matter of determining what must be, regardless of the perspective one has.

A plant does translation in taking in sunlight, water, nutrients, etc., and then growing. Computers lack something that living forms have: self-motivation, or the drive to survive (at least at this point). But here again there are constants which can be put into action (giving computers this). Recognizing these constants is the first step; the second is to put them into action within the constraints of a computer.
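To make the "input -> translation -> output" frame and the "pattern = definition" constant concrete, here is a minimal sketch (in Python, purely illustrative; the patterns, definitions, and function names are invented for the example and are not part of the messages above):

    # Translation driven by a "pattern = definition" table, much like a dictionary entry.
    definitions = {                      # pattern = definition
        "hello":   "a greeting",
        "goodbye": "a parting",
    }

    def translate(token):
        # An unrecognized pattern is itself a useful signal: the point at which
        # "learning" (adding a new pattern = definition) would have to occur.
        return definitions.get(token, "<no definition: candidate for learning>")

    def process(stream):
        for token in stream:             # input
            yield translate(token)       # translation -> output

    print(list(process(["hello", "xyzzy", "goodbye"])))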
On language translation: Perhaps the biggest stumbling block is not recognizing well enough the constant of "pattern = definition." Language is a way of describing something, a something that is constant regardless of what language it is described in. Rather than focus on the structure of various languages, focus on the something being described. Once the something is recognized, then just describe it in another "non-natural" language. My time is up....

Date: 12-01-90 (10:08)    Number: 204 of 204 (Echo)
To: RICHARD CARLSON    Refer#: 200
From: TIM RUE    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

Language is a man-made tool used to enable communication. The term "natural language" in regard to "natural language processing" (and the use of the word "natural" in the term N.L.P.) is in error and is resulting in false limitations.

Consider the Roman numeral system: it has no symbol to signify zero or a place value. As a result it is very limited; addition is difficult, and more advanced math is next to, if not, impossible. Today's technology could not have been developed using such a system. An example is that of computers, which use at the very base the concept of 1 and 0. Then along came, or was invented, another numbering system (by the way, even the abacus could have an empty column) which had a symbol to signify a place value and also to represent the concept of nothing, or void. Technology moved forward past the limitations of the man-created Roman numerals. Nature was not limited by man's numbering system, but man was.

-> Most contemporary natural language theorists see "words"
-> (signifiers) as made up of a collection or set or bundle (whatever --
-> the notion is of a collection of "units" of "meaning") of
-> associations or signifieds or semantemes or semes (whatever --
-> each name for the "unit" has a slightly different shading of
-> meaning, presumably reflecting different or differently weighted
-> semantemes [i.e. units]). There is nothing pattern-like about
-> meaning at this level, often thought of as the metonymy level,
-> although the particular semantemes accessed depend upon context and
-> the semantemes themselves are one end of a bipolar contrast (e.g.,
-> hot/cold, in/out, new/old etc.)

In language there are limitations also. These limitations have resulted in such concepts as metaphors, analogies, etc., concepts that are intended to help overcome the limitations of the language. Each language of man has limitations; I understand early American Indians had no word for past or future, so the concept of time didn't exist for them. A few of the words you have used I have no reference for (they're not in either of the dictionaries I have; thanks for the elaboration). The metonymy level you mentioned is a result of the limitations of this language, and you did mention that the semantemes accessed "depend" upon context (a pattern). I understand the term "bipolar contrast" as "spectrum." Among the "constants," the concept of a spectrum certainly exists. I believe even neural nets deal with a spectrum between 0 and 1.

Care must be taken not to falsely limit oneself when doing research into AI and language processing/translation. Knowledge has no limitations; only the tools of knowledge expression have limitations.
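A small sketch (Python, illustrative only) of the point about place value and zero: with positional digits, addition reduces to one short column-by-column carrying loop that works for numbers of any size, a procedure Roman numerals do not offer. The function name and digit-list layout are chosen for the example:

    def add_place_value(a, b, base=10):
        """Add two digit lists (least-significant digit first) with carrying."""
        result, carry = [], 0
        for i in range(max(len(a), len(b))):
            da = a[i] if i < len(a) else 0   # 0 stands in for an "empty column"
            db = b[i] if i < len(b) else 0
            carry, digit = divmod(da + db + carry, base)
            result.append(digit)
        if carry:
            result.append(carry)
        return result

    # 1990 + 47 = 2037, digits written least-significant first:
    print(add_place_value([0, 9, 9, 1], [7, 4]))   # -> [7, 3, 0, 2]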
Date: 12-02-90 (08:40)    Number: 206 of 206 (Echo)
To: TED KIHM    Refer#: 202
From: TIM RUE    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

-> I don't see any inherent problems with trying to design these
-> features. Sure, it hasn't been done yet, but they haven't been
-> discovered yet either. I believe we are both on the same road
-> though. We really want to see simpler, underlying principles with
-> emergent abilities.

Designing and discovering seem to be on opposite ends of a spectrum. You cannot design something to parallel something not yet known. Yet you cannot discover something until you develop the tools to discover it. Programming languages are interesting in their use of variables, where a variable is an internal representation of something external (perhaps this relates to brain physiology). Discovery/recognition of the constants relating to knowledge processing shouldn't be difficult; any and all intelligent creatures use these constants, naturally. Perhaps the difficulty is tuning into the natural (where such things as patents, copyrights, etc. don't and can't exist). Then there is the development of a way to represent the constants within the constraints of a computer (that which may be patentable?).

Date: 12-03-90 (10:47)    Number: 208 of 208 (Echo)
To: RICHARD CARLSON    Refer#: 205
From: TIM RUE    Read: NO
Subj: AI AND NATURAL LANGUAGES    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (-)

-> Notational systems are a very interesting area of study, but they are
-> not really that closely related to natural languages.

The perspective I was attempting to express is that language has limitations, whether it be a notational system or a "natural language." One of the problems of language translation is that there are expressions in one given language which cannot be expressed clearly in another. This even applies to programming languages. Language is indeed a tool used to express knowledge (a knowledge expression tool), and each has limitations. The vocabulary of aerodynamics cannot explain how a bumblebee or hummingbird flies, yet they do.

-> Almost any bipolar opposition can be reframed as a spectrum or
-> continuum (pregnant/not pregnant can't -- up until recently we
-> thought alive/dead couldn't), but natural language actually seems to
-> use these contrasts as binaries -- that's one of the things
-> that produces so much "illogical" thinking since we really do seem
-> inclined to see things as *either* GOOD *OR* BAD.

I really don't know much about the vocabulary or terminology of the structuralists, but I can fill in the spectrum between pregnant/not pregnant. The term "illogical thinking" may be a result of not seeing the whole equation, perhaps due to the limitations of the knowledge expression tools being used. Fields of study, research, and technology do have their vocabularies, which are often a subset of a language, perhaps further limiting the ability to express knowledge. Being inclined to see things as "*either* GOOD *OR* BAD" further limits knowledge expression, but it is perhaps done in an effort to communicate better (a smaller tool set is less complicated). Language is also a tool used in thought, BUT not always; visualization is also a tool. Consider all of what you see, and then describe it so that another sees the same exact thing. In using a tool, one must recognize its limitations, then use the tool when it works well; otherwise find another tool, create one, or don't express what you don't have the tools to express.

-> I'd like to understand a little more about AI and where it's going to
-> see if some of the post-structuralist thought, which uses
-> computer-like notions somewhat loosely, might really converge with
-> and help guide investigations into information/thought processing.
Considering the subject matter of AI (knowledge), I feel certain that the post-structuralists will converge (as perhaps all fields will) into helping guide investigations into information/thought processing. The vocabulary of the structuralist is a tool that can be used in knowledge expression. You can study what has happened in the field of AI to date, but to really determine where AI is headed you will probably need to be in touch with those who have the position and power to dictate its direction (gravity). A thought about language translation I've had that seems solid is that in the process of translation there is a center point (in the spectrum) that is internal representation. Time is up....

Date: 12-04-90 (09:57)    Number: 209 of 209 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: AI - INPUT DEVICES    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL

Input devices enable knowledge to be captured. Man has his input devices of sight, sound, touch, smell, and taste. Alone, these devices have their limitations, a field or spectrum of what they perceive. Together, depending upon which are used, man is able to perceive beyond the abilities of any one of the input devices. Perhaps the so-called sixth sense of man is a result of the teamwork of his senses. But where does the sense of gravity come from (touch?). Through evolution man has developed other devices which enable him to sense, or give some representation of, what he cannot perceive (such as atoms, electricity, etc.). Yet without some abstract tool(s), how is man to store and communicate what he perceives? Man creates the tools he needs to store and express knowledge. This is the basis of language.

Internally there must be some form of representation for what man perceives/learns. But what is the form or structure? Whatever it is, it must be versatile enough to organize and store all the input a man receives. Perhaps the form is not as important (for AI development) as is the need for versatility: anything which can be calculated (algorithms are a way to represent a lot in a small space) and is capable of representing an infinite number of things. The Mandelbrot fractal set is a possibility, by letting the mathematical formula for each possible point be an abstract representation of a something, much like how we let words represent things. The available space is unlimited for defining which point is to represent what.
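A minimal sketch (Python, purely illustrative) of the Mandelbrot idea just described: each complex point gets an escape-time value from the iteration z -> z*z + c, and chosen points are arbitrarily assigned meanings the way words are assigned to things. The particular points, the meanings, and the iteration cap (which acts as a crude level of resolution) are all invented for the example:

    def escape_time(c, max_iter=100):
        """Iterations before |z| exceeds 2; max_iter acts as a level of resolution."""
        z = 0 + 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return n
        return max_iter          # treated as "inside the set" at this resolution

    # An arbitrary assignment of points to meanings, like words to things:
    meanings = {
        complex(-0.75, 0.1): "a greeting",
        complex(0.3, 0.5):   "a parting",
    }

    for point, meaning in meanings.items():
        print(point, "->", meaning, "| escape time:", escape_time(point))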
Date: 12-06-90 (09:49)    Number: 211 of 216 (Echo)
To: TED KIHM    Refer#: 210
From: TIM RUE    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

-> There's only one Noam Chomsky in the ol' card catalog, but many
-> titles! Well worth a trot to the library!

Making a trot to the library, I found quite a few titles by Noam. Not yet having any to read (it'll take a few days for the library to get a couple of selections from other libraries), I have noticed the subject matter of Noam's works. It seems that Noam writes about limitations. With the basic subject matter of AI being "knowledge" and its forms of representation, it seems to me there is a need to find or create a field of unification. Using computers, with their basic representation being 1 and 0, we have a beginning point for knowledge navigational mapping. The title "Problems of Knowledge and Freedom" seems to suggest a negative perspective on knowledge and freedom, or perhaps overlooks the use of constants. I know I perhaps need to read the book. But considering the subject matter of some of his books (politics), I think it is likely he is influenced by a man-made machine which is resulting in limitations. By following basic human rights, the political machinery which influences society can be exposed for its true value. Its statement is perhaps "man is not able to govern himself, therefore a man-made machine is to govern man." Tools, not rules, are the basic element of knowledge expression. Rules are secondary. You've got to have a tool before you can put down rules on its use, or at least the concept of a tool. Looking forward to reading some of Noam's work.

Date: 12-05-90 (03:46)    Number: 212 of 216 (Echo)
To: TIM RUE    Refer#: NONE
From: TED KIHM    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

Much of your discussion *is* the focus of current research! While there are clearly areas of the brain which perform certain tasks, it is also clear that the brain performs synergistically. It is in pursuit of this synergy that technologies such as Content Addressable Memories were developed. These studies look for properties of self-organization which emerge from large collections of elements operated on by simple sets of rules. In these distributed memory models, storage of information involves many of the computer's variables, and each variable may be involved with many stored patterns.

--- DeLuxe #2979 * Help! I've fallen and I can't get up!
 * QNet 2.04: ILink: Sound Advice BBS * Gladstone, MO

Date: 12-05-90 (07:10)    Number: 218 of 219 (Echo)
To: TIM RUE    Refer#: NONE
From: WILLIAM WRIGHT    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

TR>-> Yep. I like to think of it as atomic learning.

Brain physiology reveals that the learning of complex input is indeed broken up into attributes which are handled by different areas. I believe your statement is accurate as far as it goes. There is a danger in this attitude, however, because the brain is a _distributed_ processor. The "classical" programmer is trained to break a problem into modules, such that each module encompasses all of a particular task. If a module is large (more than a page of code), then break it into appropriate submodules... and so on. This is the classic approach which grade schools use when introducing kids to programming. When the kids grow up, they continue to look for "atoms" and "constants" and clearly defined "modules". Neural nets take the opposite approach. If a "constant" exists, it's spread out amongst many nodes and isn't identifiable (or expressible) as a single "rule" or "atom". In this sense, neural nets may have a beneficial effect on science education in general.

--- SLMR 1.0 * Not young enough to know everything, but I'm working on it
 * RNet 1.06R: ILink * Console Cmd HQ * Santa Barbara, CA * 805-683-0499
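A minimal sketch (Python, purely illustrative, not any particular research system) of the kind of distributed, content-addressable memory described in the two messages above: patterns are stored by adjusting every pairwise weight, so each weight is shaped by many patterns and no single node holds a given memory, yet a corrupted probe can still recall the stored pattern it most resembles. The pattern values and sizes are invented for the example:

    def train(patterns):
        """Hebbian-style storage: every weight is touched by every pattern."""
        n = len(patterns[0])
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += p[i] * p[j] / n
        return w

    def recall(w, probe, steps=5):
        """Content-addressable lookup: start from a noisy probe and settle."""
        s = list(probe)
        for _ in range(steps):
            for i in range(len(s)):
                total = sum(w[i][j] * s[j] for j in range(len(s)))
                s[i] = 1 if total >= 0 else -1
        return s

    patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
    w = train(patterns)
    noisy = [1, -1, 1, -1, 1, 1]        # corrupted copy of the first pattern
    print(recall(w, noisy))              # settles back to [1, -1, 1, -1, 1, -1]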
Date: 12-09-90 (02:44)    Number: 222 of 229 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: INTERESTING READING    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

Some of you all (yep, I'm southern) may find some interesting reading from the following publishing company:

I & O Publishing Company, Inc.
P.O. Box 906
Boulder City, Nevada 89005

Among their titles:

The Philosophical Zero, by Yasuhiko Kimura. This is a rather interesting work which can be related quite well to AI.

The Universal Computer, by Michael Thomas. This work refers to another, Surpassing Einstein's Goal, by Frank R. Wallace.

There are other interesting works, but you'll have to contact the company for their current list.

Date: 12-10-90 (10:40)    Number: 225 of 229 (Echo)
To: RANDY BENNETT    Refer#: 213
From: TIM RUE    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

>If you think about it, AI is the only field of "science" that
>named itself for what it aspired to, rather than for what it
>studies. AI has had to live with the inconsistencies that naming
>has brought about for decades.

Perhaps it's time for the process of evolution to take effect in regard to naming. In reaching toward a goal in the yet unknown, it is common to set a reachable goal and, once there, having gained a better understanding, set another reachable goal, etc., until what is aspired to is reached (at which point it is no longer an aspiration). I think maybe artificial intelligence has been reached, and it's time to set another aspiration. Perhaps Actual or Active Intelligence (keeping with the "A.I." abbreviation). Realizing this evolution of naming has yet to happen, I tend to just use "AI" in communication with others.

Date: 12-10-90 (10:41)    Number: 226 of 229 (Echo)
To: WILLIAM WRIGHT    Refer#: 218
From: TIM RUE    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

>Neural nets take the opposite approach. If a "constant"
>exists, it's spread out amongst many nodes and isn't
>identifiable (or expressible) as a single "rule" or "atom".
>In this sense, neural nets may have a beneficial effect on
>science education in general.

In an earlier post of mine I mentioned internal representation of knowledge along with the possible use of the Mandelbrot fractal, allowing an infinite number of points to be used for internal representation. Nobody responded to the fact that it is impossible for both humans and computers to calculate all possible points! Perhaps this is because we can calculate various levels of resolution. Through the concept and application of resolution in a neural net, would it be possible to identify "constants" as single "rules" or "atoms"? I don't know enough about the technology of neural nets to know if such an application of resolution is possible within a neural net. BTW, you were quoting Ted Kihm from a message where I was quoting him; perhaps your message was intended for the originator of what was being quoted. :-} An ILink node software result, perhaps.

Date: 12-10-90 (10:42)    Number: 227 of 229 (Echo)
To: TED KIHM    Refer#: 219
From: TIM RUE    Read: NO
Subj: AI    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

>In the tapestry of life, the seams *are* the limitations! But
>seriously, your concern on limitations has me wondering how this
>view fits in with your earlier post of:

In the spectrum (possible contexts) of the use of the word "limitation," there is the real or the abstract to which the word refers. Limitations of humans, birds, trees, etc. are real (physical reality). Limitations of abstract concepts (such as political systems) are not connected to reality, but rather to the definition of the concept and the application of it. In other words, abstract concepts are tools used to represent reality and as a result have the ability to be deceptive of reality, causing false limitations, to those using them, of what is possible. Knowing the limitations of the tools one uses will help one deal with reality better. And it can also help one deceive others who don't know the limitations of the tools used.
Date: 12-10-90 (10:48)    Number: 228 of 229 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: REDEFINING    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

An interesting aspect of man's consciousness is his ability to define, re-define, refine, organize, and re-organize the tools he uses for knowledge expression. Such abilities are only possible through consciousness.

Date: 12-10-90 (12:28)    Number: 229 of 229 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: ORDER/RESOLUTION    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

A rather interesting organization of knowledge expression tools can be found in the organization of a thesaurus. The organization allows for various levels of resolution (and the concept of "focusing in" to be applied). Using Roget's Thesaurus and a primary resolution of 6, you would have the "Plan of Classification." From here (depending on which class you access) there is a secondary resolution, "Section," of a given number of elements. And from here (depending on which section you access) there is the next level of resolution, "Degree?", of a given number of elements. From here (depending on which degree you access) there is another level of resolution which further breaks down into English (verb, adj., adv., phr., noun, etc.). Going to the next level of resolution, you're at the word resolution level, which is defined by the overall path you have taken. At this point it would also be possible to associate/connect the "dictionary" definition, but that is perhaps not really needed. Using the concept of the Mandelbrot fractal as a mapping tool, it becomes possible to access or navigate the knowledge expression tools via mathematics. What would be represented with a primary resolution of 2? Perhaps the concept of bipolar contrast.
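A small sketch (Python, purely illustrative) of the "focusing in" navigation just described: each level of the nested structure is one level of resolution, and the path of choices taken defines the word-level entry reached. The class and section names below are invented placeholders, not Roget's actual classification:

    thesaurus = {
        "Abstract Relations": {
            "Order": {
                "Arrangement": {
                    "noun": ["arrangement", "organization"],
                    "verb": ["arrange", "organize"],
                },
            },
        },
        "Matter": {
            "Organic": {
                "Life": {
                    "noun": ["life", "vitality"],
                },
            },
        },
    }

    def focus(tree, path):
        """Follow one choice per level of resolution down to a finer level."""
        node = tree
        for step in path:
            node = node[step]
        return node

    # The overall path taken defines the entry reached:
    print(focus(thesaurus, ["Abstract Relations", "Order", "Arrangement", "verb"]))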
Date: 12-12-90 (12:14)    Number: 230 of 233 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: FORMULA?    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL

A constant is the element which enables a variable to travel. A constant is defined via resolution. An example is the technology of "electron"-ics, where the resolution is of three elements (electron, proton, neutron). The proton and neutron are the constants that the electron has to pass by. At another resolution there would also be a constant and a variable, the constant being internal to the resolution and the variable being external. Superconductivity is established by stabilizing, at a given resolution, what is to be used as a constant (defining a constant), perhaps determined by what is to be used as the variable (in this case the electron, whose internal constant is also stabilized). Ceramic, normally an insulator, when cooled (atomic speed is decreased) becomes a superconductor. The synchronization is set so that the resistance of the variable against the constant is eliminated.

Constants in AI development need to be discovered which will allow variables to flow freely (the resistance or exceptions eliminated). The constants will be of a finer resolution! Meaning the knowledge expression tools to be used as variables cannot be used as constants (at least not at the same resolution), otherwise resistance happens. However, the variables can be used for resistance at the same resolution (such as in the phrase "try to take it easy," where the word "try" causes resistance to "take it easy"). Another use of resistance is in refining meaning, via causing resistance to what is not meant in order to more clearly express what is.

A constant must be versatile enough to handle all variables and combinations of variables. An example is the primary constant of computers (the representation of 1 and 0, binary, which is enabled via electronics as a constant), with which we are able to use many forms of expression (text, pictures, program action, etc.). The binary system is the basic constant upon which we have to build AI constants. From here we can add a third element from which we are able to define a constant (made up of binary and the third element). Question: what is the third element, which will be versatile enough to handle the variables and combinations of them? Perhaps we already have them! Numbering systems are established worldwide, along with mathematics. However, we only need one element! Perhaps the third element is what is constant about the numbering systems? Resolution? Place value? Recursion? Sounds like the formula (the three basic elements) for mapping a word in a thesaurus via the Mandelbrot fractal analogy tool. Is the missing constant the above formula? But this only handles a single word! However, it is at the word resolution and not the word-combination resolution. Perhaps the same formula can be applied at the "sequence of words" resolution? And not to forget the concept of "input/translation/output," which AI must have, and which also describes the most primary resolution to establish (from which all else is of a finer internal resolution), meaning "input/translation/output" is seen as the constant of which we (the users) are the variable, at a resolution the computer AI has no means of comprehending unless self-reference is possible, which would result in machine consciousness.

Date: 12-14-90 (10:05)    Number: 234 of 239 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: CONSCIOUSNESS A VARIABLE    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

In viewing the life cycles of a universe as a constant, the variable will be consciousness. Reference: previous postings; "Surpassing Einstein's Ultimate Goal," by Frank Wallace.

Date: 12-16-90 (12:09)    Number: 235 of 239 (Echo)
To: ALL    Refer#: NONE
From: TIM RUE    Read: (N/A)
Subj: TOOL    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

Knowledge Navigational Mapping/Expression Tool.

[ASCII diagram: four boxes labeled N, 1, P, and E, each marked with + and - poles and 0 terminals, arranged around a central box A, with X feeding in on the left and O coming out on the right.]

Resolution / Place Value / Recursion.
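One possible concrete reading of the "resolution / place value / recursion" triple, offered purely as an illustration in Python (the messages above deliberately leave the tool's elements undefined, so the names and structure below are invented for the sketch): at each level, the resolution is how many choices exist, the place value is which choice is taken, and recursion applies the same rule at the next, finer level.

    def navigate(node, address):
        """Follow (resolution, place value) pairs recursively down a nested list."""
        if not address:
            return node                      # reached the chosen level of resolution
        resolution, place = address[0]
        assert len(node) == resolution       # the level's resolution must match
        return navigate(node[place], address[1:])

    tree = [                                  # primary resolution of 2: a bipolar contrast
        ["cold", "cool", "warm", "hot"],      # finer resolution of 4 under one pole
        ["bad", "poor", "good", "excellent"], # finer resolution of 4 under the other
    ]

    print(navigate(tree, [(2, 0), (4, 3)]))   # -> "hot"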
Date: 12-17-90 (09:23)    Number: 239 of 239 (Echo)
To: MARK COLETTI    Refer#: 237
From: TIM RUE    Read: NO
Subj: NO LIMITS TO LIMITATIONS.    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL (+)

-> I fail to understand your line of argument here. Specifically, what
-> limitations are you referring to that would prevent computer emulation
-> of human cognitive processes?

Humans and computers are two very different creatures. Both have abilities which the other does not. A true cognitive process cannot exist without *autopoiesis* (self-creation, production, or generation), nor can autopoiesis exist without cognition. Consciousness is the most complex form of cognition and is distinctly different from any other modality of cognition found in other sentient beings. In this respect computers will never have a true cognitive process; they will always lack something in *emulation* that humans have. However, computers are capable of a lot which humans are not (such as the speed and accuracy of mathematical calculations). From this perspective, it would be better not to try to emulate something but rather to make application of what a computer can do best, or better than humans. The concept of AI may be better applied in the area of human <-> computer communication/interaction than in an effort to emulate human cognition/consciousness.

Date: 12-20-90 (10:31)    Number: 243 of 243 (Echo)
To: TED KIHM    Refer#: 242
From: TIM RUE    Read: NO
Subj: TOOL    Status: PUBLIC MESSAGE
Conf: AI (64)    Read Type: GENERAL

-> >Tim: Knowledge Navigational Mapping/Expression Tool.
->
-> Fess up Tim! You are someone's AI project! Is this not so?

??? Not sure what you mean. :-)

As I mentioned in my first post regarding the subject of constants, I've spent several years of investigation and DEEP THOUGHT (an important requirement for AI research, with an objective of being objective rather than subjective). Over these years I've noticed a constant (a pattern) which is not easy to see clearly (it's hard to see objectively when the subject is related to oneself, not to mention the other forces that interfere, such as job pressure). The constant (pattern) is difficult to describe, but the KNM/ET is a very good representation (even more difficult is describing its elements; this is why no definition was given regarding its elements, and I leave it up to whoever is interested to define them for themselves). The human brain is the ultimate unifier! If I'm someone's AI project, then it would only be my own.

I don't have the resources to proceed as fast as I really want with my perspective on AI. However, I also realize advances in this field will result in major breakthroughs in other areas of technology (among these would certainly be such fields as medicine, communication, finances, etc.). I'm frustrated with my lack of needed resources, and also that in my reading I find others at the edge of a breakthrough who then turn the other way. Like Mr. Bell (who got lucky by the accident of spilling a conductive liquid on his experiment): there was another inventor who was at the same place in developing a telephone, and he was making the same mistake as Mr. Bell (both only had to turn a screw 1/4 turn to make a CONNECTION). I read about AI developers and realize they're doing the same thing (not turning the screw). Perhaps this has something to do with the learning problems of neural nets?

At any rate, I've been posting some of my thoughts in the hope someone will be inspired to see things from a different perspective. There are some things I don't post (at least not yet; first things first, one step at a time, etc.). As technology advances, so does the environment in which I and all of us live (what I call the forward loop of technology, which is what causes ideas that once worked to no longer work; relate this to finances). How about the concept of self-supportive dependencies (drug addiction is one, government may be another)?

Time is up...

Timothy Rue (AAi member)