rev. 1

 The Undeniable, Self Serving, Software Development Fraud Exposed

                Copyright 1996, 1997 by: Timothy Rue


              (see copyright info. at end of document)


The computer industry is a very young field, and important changes are
in the process of happening. This document is for those who want to know
what these changes are going to be, why, and how they can help make them
happen. There is also the important issue of the social change, starting
within the industry, that these changes will both require and cause.

After reading this you will think twice about anything you find, and
know, about a program or other element of the computers you use that you
would have done differently but are unable to do anything about
yourself. I'm talking about the things that cost you additional time,
expense and frustration, the things you know only make your use of
computers more difficult.

Public awareness only requires the reading of this document, and the
knowledge that falling for any illusions of what can't be done on
computers is not helping to correct the problem. By knowing what is
possible, you'll have the ability to say no to such illusions and even
demand better.

It may appear that I'm a bit hard on the software development industry.
Even if I were, which I'm not, IT WOULD NOT BE PRODUCTIVE TO ALLOW ANY
EXCUSES. There is no valid excuse for this dishonesty to have gone on
for so long, or any need to "research for ten more years" the
fundamental functionality exposed here before making it publicly
available, and this will be made quite clear. Fifty percent or more of
the lingering problem is the lack of motivation to do what should have
been done long ago. Motivation that public awareness and demand can now
create.


    As an end-user, is it too much to ask that credit be given where it
is due? I'm asking for, NO, I INSIST ON, the functionality very much
needed and easily possible, to be created and made freely and widely
available. The simple integrated functionality that should have existed
long ago, allowing the end-user, hobbyist, programmer and researcher to
easily build upon not only their personal knowledge and skills but also
the knowledge and skill of other efforts. The ability to easily build
the data and processes that make up the general and personal interface
environment(s), and the ability to integrate this with program
functionality for improved productivity.

    The software development industry took a wrong turn no less than
ten years ago, perhaps much more. This wrong turn was the result of
seeing an easy-to-obtain and easy-to-contain large profit. It has caused
a stall in honestly evolving the ease of software development, and has
created a lot of non-productive waste. The products it has left us with
are problematic, constraining and costly, both in human resources and in
direct expense. A great deal of resources has been spent, over the
years, in many fields of computer science research that cannot possibly
succeed beyond isolated small successes, which cannot possibly reach an
"in the black" return. And if this is not enough, the investment we now
have in the problematic products and much of the research is so high
that there is little effort to bring about the correction any time soon.
But quite frankly, I'm @%$&*! tired of dealing with the crap problems
created by others for their assured profit.

    Nine years ago I attempted to do something on my computer I thought
would be simple, but found instead something I didn't expect. I
discovered that the simple integrated configuration of functionality I
saw wasn't possible with the end-user tools available. I knew enough
about electronics, and a few years later I went to school to learn C
programming and passed with flying colors. But what I found, and didn't
expect, was the wrong turn of the software development industry and the
reason for it. This simple integrated configuration of functionality
always seems to be just beyond my resources to produce: my time versus
my knowledge of, and experience with, the complexity and idiosyncrasies
of the development tools and resource material. Yet I know, with
absolute certainty, there is nothing within the hardware or software
abilities of computers to prevent this functionality from being easily
created by anyone who has worked in the field of software development,
as a programmer, for as little as two years.

    Today, after nine years of keeping tabs on the industry, personally
communicating this simple integrated functionality around the world and
waiting for it, or for easier-to-use development tools, to come along so
that I might create it, I realize it's not going to happen without some
hell-raising. And I damn well expect to receive the credit due me for
identifying it, for it is not a development but an identification of the
reality-based fundamental functionality needed to get software
development, as well as general productivity, back on the track it
should never have left. Functionality that would have allowed a much
better direction and a successful return on computer-science-based
research.

    So here I am, an end-user with no college degree, but with absolute
certainty that I will succeed in causing this functionality to come
about without waiting for research to figure it out or stall its general
availability. The functionality took me something less than three months
to identify, but the past three or so years of a change in research
direction have yet to realize it. Whether or not I receive credit for
doing so will determine how much more I'll not only be allowed to do
(the ability to obtain resources) but be willing to do. Certainly, if I
have the mindset to see and communicate all this, I can do more, but why
would I if I'm kept from earning a living doing it? Giving credit where
credit is due is fair and important.


    Until several years ago the computer industry focused on field
specialization. Today, though field specialization is still important,
the direction has turned to the integration of fields: to identify the
common elements in order to simplify the common working environment.

    In the research samples and problems sections of this document
there are notes by myself. These point out the general direction of the
industry, the technology requirements, and the social issues important
to understand.

    If you have a platform bias, even if you work for a company
manufacturing such a platform, you are just another customer of the
computer industry. With this in mind, know that this is not about
platforms but about the evolving technology.

    Following this introduction is:

    1) Samples of current research (direction)

    2) Identification of the general and expensive technology problems
       to correct (an important part of why)

    3) The verified social and industry issue important to address
       (key human issue in need of change and how)

    4) The important fundamental OS (environment) functionality

    5) The simple but advanced fundamental functionality tool set
       identified, which is needed to reach many research objectives and
       solutions to major problems (technology change).

    6) Conclusion

1) Research direction samples:

Carnegie Mellon University - Software Engineering Institute

At the Software Engineering Institute, we have been working in open
systems since 1993, developing courses, related products, and other
sources of open systems information, including some work on formal
standards.

Open Systems promise the faster and more economical development of
high-quality systems that are technologically up-to-date. An open
systems approach is important to advancing the causes of acquisition
efficiency and system interoperability.

What Is An Open System?

An open system is a collection of interacting software, hardware,
and human components.

* designed to satisfy stated needs

* with interface specifications of its components that are:

    - fully defined
    - available to the public
    - maintained according to group consensus

* in which the implementations of the components conform to the
  interface specifications

Author's note: What is important to recognize here is that there exists
an effort to integrate components, and that open systems allow for this
much better than closed systems. Also of note is the subject of fully
defined, publicly available interface specifications.


National Science Foundation
(this is current with $20 million available)

NSF announces an opportunity for interdisciplinary research in
Learning and Intelligent Systems (LIS).

The LIS initiative seeks to stimulate interdisciplinary research that
will unify experimentally and theoretically derived concepts related
to learning and intelligent systems, and that will promote the use
and development of information technologies in learning across a wide
variety of fields. The long-range goal of this initiative is very
broad and has the potential to make significant contributions toward
innovative applications.

While the ultimate goal of this initiative is to understand and
enhance people's ability to learn and create, the attainment of this
goal requires achievable intermediate goals and strategies. Such
strategies include combining the theory, concepts, research tools,
and methodologies of two or more disciplines, in order to focus on a
tractable component of a larger problem. This initiative seeks to
achieve these goals by encouraging interdisciplinary research that
has the potential to unify disciplinary knowledge about learning and
intelligent systems, and to foster technology research and prototype
development to explore and test designs and models that can lead to
supportive interactions between natural and artificial systems.


Author's note: Again, here is the effort to integrate, but with a focus
on the integration of fields, learning and creativity. Also on combining
theory, concepts, research tools and methodologies to address a larger
problem.



Third International Conference on Principles and Practice of
Constraint Programming (CP97)

Schloss Hagenberg, Austria, October 29 - November 1, 1997

Scope of the Conference

   Constraints have emerged as the basis of a representational and
   computational paradigm that draws from many disciplines and can
   be brought to bear on many problem domains. The conference is
   concerned with all aspects of computing with constraints
   including: algorithms, applications, environments, languages,
   models, systems.

   Contributions are welcome from any discipline concerned with
   constraints, including: artificial intelligence, combinatorial
   algorithms, computational logic, concurrent computation,
   databases, discrete mathematics, operations research,
   programming languages, symbolic computation.

   Contributions are welcome from any domain employing constraints,
   including: computational linguistics, configuration, decision
   support, design, diagnosis, graphics, hardware verification,
   molecular biology, planning, program analysis, qualitative
   reasoning, real-time systems, resource allocation, robotics,
   scheduling, software engineering, temporal reasoning, type
   inference, vision, visualization, user interfaces.

   Papers are especially welcome that bridge disciplines or combine
   theory and practice.


Author's note: And again, the effort to integrate. Although the number
of CFPs (calls for papers) has in general greatly increased over the
years due to specialization, this CFP is of particular interest. On the
surface it would appear to be a contradiction (constraint = limitation
vs. integration = limitation removal), but there is a fundamental reason
why constraints have emerged as the basis they have. Simply put, for
both technology and communication to evolve we must focus in to
establish details, and then classify, categorize, label and define those
details so we may better communicate them and their integration into the
bigger picture.
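To make the constraint paradigm the CFP describes a little more
concrete, here is a minimal sketch of my own (the variables, domains and
constraints below are invented for illustration and come from no real
system): a tiny backtracking search that assigns values to variables
while honoring every stated constraint.

```python
# Minimal constraint-satisfaction sketch. A "constraint" here is just a
# function over a partial assignment that returns False when violated.
# Everything below is an invented example, not any real CP system.

def solve(domains, constraints, assignment=None):
    """Backtracking search: extend the assignment one variable at a
    time, keeping only values that leave every constraint satisfied."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    # Pick the next unassigned variable.
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Keep the value only if no constraint fails on what is known.
        if all(check(assignment) for check in constraints):
            result = solve(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Example: schedule tasks a, b, c into slots 1-3 so that no two tasks
# share a slot and task a runs before task b.
domains = {"a": [1, 2, 3], "b": [1, 2, 3], "c": [1, 2, 3]}
constraints = [
    lambda s: len(set(s.values())) == len(s),                   # all different
    lambda s: "a" not in s or "b" not in s or s["a"] < s["b"],  # a before b
]
print(solve(domains, constraints))
```

The point of the paradigm is exactly the one made above: each constraint
pins down one detail in isolation, and the solver integrates the details
into a consistent bigger picture.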

2) Problems important to solve. (a major part of the why behind the
   direction)


Power Panel - "What's Wrong with Software Development"

** In The U.S. Only **

$81 billion = 31% of software development projects are cancelled before
completion.

$59 billion = 53% of software development projects have cost over-runs
averaging 189%.

16% = the overall project success rate.

61% = the portion of customer-requested features and functions that make
it in.

Maintenance and repair is where most of the U.S. dollars are going,
instead of into new, better, easier-to-use software.

---- Overall Summary ----

Problems - an all-around lack of complete documentation and weak
training, faulty user input and feedback, self-contradictory user
requests, a lack of project leadership between developers and users,
management-created problems and low quality-control standards, feature
creep and software size increases, the rate of change of advancing
technology and the lack of general standards, solutions that are always
around the corner but never arrive, an "our tools are better than
theirs" attitude, the lack of a value-chain structure for value-added
abilities, failure to produce a functional model before coding and
constant remodeling, etc.

Solution directions - code re-use, object-oriented programming,
component-based programming, distributed components, better tools,
better programming methodologies, leaner software, a splitting of code
writers into two categories - architects and assemblers - and a better
effort to establish a working vocabulary between developers and users so
users can in some way lead development, etc.

---- A Few Comments from Panel Members ----

A culture needs to evolve that respects software engineers as
crafts-people. Writing code is not just writing code; as in the field of
writing, where you have technical, creative, documentary, etc., there
are different types of code writing.

Author's note: I agree with this, but also realize end users are even
more specialized in what they need and do. Respect for end-user needs
and abilities is needed even more so. Without respect given to the end
user, the software engineer will not be given respect in return.

A fundamental change in the programming environment needs to happen that
allows the tools to work together more.

Author's note: the panel member making this comment did not specify
what tools, or who the tools would be used by. It was a very general
comment pointing to a fundamental programming environment change, a
lead-in to the concept of component programming. But there was no
recognition given to the concept of component software or component
applications, at least not in any sense beyond "plug-ins." Read on!

Jokingly - one of the best ways to copy-protect software is to put it
in a DLL, give it an obscure name and put it in the Windows system
directory, because you'd never find it.

Author's note: This does not make it any easier for the end user to
keep their system organized, clean and optimized. This attitude of
constraints, though humorous, costs end users a lot.

The meaning of "intellectual property" became questioned. Did it mean
taking the best ideas, or something owned?

Author's note: it was the panel supporting "best ideas," but wouldn't
the correct term for this use be "intellectual value" rather than
"intellectual property"? What would happen, regarding this, in a
courtroom? The audience member who brought this up was a bit angry about
the distortion. Her question was: Is it the developers who are creating
the problems? And what are the developers going to do about it? The
response was "that's not the problem!"

Users shouldn't develop software, but they know, better than the
developers, what they want and need.

Author's note: users don't have the time to write code; it's not their
job or duty!!! I can cut the lawn, I know how, but if I don't have the
time, I hire someone. And because I know how to better communicate what
I want done, I'll get what I want and know I'll not be greatly
disappointed.

Author observation from attending this gathering - a lot of good points
were brought up by both the audience and members of the panel, but it
became clear there was no solution being brought forward to satisfy the
majority. The audience saw this, and thinned out over the course of the
panel as they perceived the power panel struggling for a sales pitch.
Two on the panel were not biased due to their positions, leaving six
biased. Microsoft, Borland, Powersoft, Oracle, Software Associates, and
IBM were the biased parties.

Panel mix - tools developers, database developers, application
developers, application salvagers, and software consultants.

------ AND FROM ------


Article - "Software's Chronic Crisis"

The article covers much the same ground as the above, but with the
focus and flavor of the magazine. The article also goes further into
solution efforts for software development on large-scale projects. But
consistent solutions are still hard to come by.

Mass-produced PC products make up less than 10% of the $92.8 billion
U.S. software market.

Mary M. Shaw of Carnegie Mellon University observes a parallel between
the evolution of chemical engineering and that of software engineering.
However, this evolution has not yet made the connection between science
and commercialization required to establish a consistent experimental
foundation for professional software engineering.

Author's notes on the overall problem:

Another important factor not included in the above figures is the
expense end users incur in dealing with faulty and difficult-to-use
software: the loss of work effort, time and money spent dealing with the
results, from the simple loss of a letter due to an application crash,
to the loss of a great deal of work from complete system crashes, and
the expense of dealing with it all. And let's not forget the frustration
and distraction it causes end users. Been there, done that - who
hasn't!?

The software development industry has "its" problems. Problems it
claims to be working on solutions to, but never really delivering. The
real problem here is a lack of understanding of the problem, which
inherently and consistently results in non-delivery. The concepts of
re-use, object-oriented programming and component programming raise the
question: at what access level to this general concept of combining
parts are we, the end users and programmers, going to be denied?

* Software development has evolved from one stage to another. At one
point structured programming was the thing, then OOP, and now component
programming. The next logical step of this evolution should be component
applications, where users (end-users and/or programmers) are able to put
functionality together to gain a greater value.
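As a rough sketch of what "component applications" could look like (my
own hypothetical illustration - every name below is invented): small,
single-purpose components that a user snaps together into a working
application, much the way shell users already chain commands with pipes.

```python
# Hypothetical sketch of end-user component composition. Each component
# is a small function from data to data; a "component application" is an
# ordered pipeline of them. All names here are invented illustrations.

def compose(*components):
    """Chain components so each one's output feeds the next's input."""
    def application(data):
        for component in components:
            data = component(data)
        return data
    return application

# Three tiny reusable components a vendor or hobbyist might supply.
def split_words(text):
    return text.split()

def drop_short(words, minimum=3):
    return [w for w in words if len(w) >= minimum]

def count(items):
    return len(items)

# The end user assembles them without touching any component's internals.
word_counter = compose(split_words, drop_short, count)
print(word_counter("to be or not to be"))
```

The greater value comes from the assembly itself: swapping `drop_short`
for some other filter changes the whole application without rewriting
any component.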

Chemical engineering is founded on connecting what exists in physical
reality with what exists in physical reality. Software engineering is
founded on connecting what exists in physical reality with what comes
from creative imagination, through the concept and use of definitions.
Definitions we create, or evolve from our imagination, and tie to the
physical reality of computers through software - to control what is
positive, what is negative, and what is not used. Or, one level up in
abstraction, 1s and 0s. The foundation is where imagination connects
with physical reality, which is probably why it is so easy to hide, yet
so difficult to find. The connection between software science and
software commercialization has not happened, to give us a consistent
experimental foundation for professional software engineering, because
of the failure to recognize the role conscious imagination plays. The
failure to recognize there are many levels of applying conscious
creative imagination in making use of computers.

Among the parallels observed between chemical engineering and software
engineering is that both fields aspire to exploit, on an industrial
scale, the processes that are discovered by small-scale research.

Author's note: though I'm sure the complete definition of "exploit"
wasn't intended here, the reality of the computer industry is that the
full definition applies - the positive as well as the negative - and
this takes us to the next section. It is in how small-scale research
gets exploited, right or wrong, and why there is no choice but to
correct this.
3) The verified social and industry issue in need of change. (ethics?)

(SUMMARY of this section - I'd like to have done several things in this
section differently, one being to make it shorter. However, due to
priorities I may not get to this any time soon. This summary is provided
as an effort to make up for that. It is not intended to be an excuse for
skipping this section, but is for the innocent programmers who feel I
may be hitting programmers too hard. Understanding the many easy-to-use
and safe-from-proof techniques of deception leaves no choice but to
DEMAND CORRECTION. You've heard the saying "if you give someone an inch,
they will take a mile"; well, it applies here. To the innocent
programmer: don't take it personally. To the guilty programmer: you bet
it is intended to be personal. Fact of the matter is, I've already been
able to identify a few dishonest and several "gray area" attempts to
extract more information from me (the insanity of this I have no
explanation for, except to say there are apparently those who just don't
feel right unless they think they are getting away with something for
nothing).

I really wish it weren't this way, but I'm not going to lie to myself
or to the readers of this document. ENOUGH IS ENOUGH!!!

The computer industry has become a very nasty, greedy and dishonest
industry, with the practice of creating problems, or illusory
constraints, in order to corner a profit. A practice that is building on
itself, with many small acts of greed becoming a serious larger problem,
while preventing the needed and easy-to-achieve advancements from
happening, as well as preventing much that should be simple and
productive to do. Greed that has personally kept me, like many others,
from being much more creatively productive and in turn unable to earn my
real worth. The only possible way to change this is through public
awareness of this growing problem and the public demand to correct it.
And there is a great deal for everyone to gain from doing so.)


After much consideration of how this section might be written, it comes
down to the verifiable fact that there is no way to be nice, diplomatic
or politically correct about this subject matter. There is only honesty.
Between the intentionally dishonest and the misguided supporting them,
the honest and productive don't seem to have much of a chance, and
certainly the honest don't need this subject matter addressed in a
"beating around the bush" or "side-stepping", reality-avoiding manner.

For the needed changes in the computer industry to happen, for the
defined goals of much research to honestly be reached, for the real and
expensive problems to be solved, there simply is no other choice than
to expose and remove the dishonest and wrongful exploitation that has
been going on.

The "social" and "industry" issue may be researched to death and then
dissected further until there is no real connection to the issue, the
result being a room filled with research that can then be further
distorted to fill a large library, etc. All of which is nothing more
than a resource of excuses for the dishonest to use in rationalizing
wrongful exploitation.

This is not a claim that all research is dishonest, but again, between
the intentionally dishonest and the misguided supporting them, honest
research has little chance of being widely seen and understood for its
real and important value.

----- Research on the subject of dishonesty -----

There has been, and continues to be, ongoing research, and communication
of its findings, on the subject of dishonesty. The sole objective of
this research is to expose the dishonest and the real damage being done
by them. Research that exposes the easy-to-apply, risk-free (leaving no
direct proof) techniques used by the dishonest. Dishonesties that are
exposed with certainty through the honesty of wide-scope accountability
(an act that requires honest wide-scope integration). Research that
covers not only social issues but politics, religion, education, sex,
business, philosophy and more. Research whose foundation work does not
take up a library, a room, or even a single shelf in a bookcase, but is
contained in less than 700 pages of excellent, integrated and
cross-referenced material. Research that began with the efforts of one
man, who, like some past honest researchers and discoverers, has
suffered persecution and even political imprisonment as a result of his
integrated discovery and communication. Communication that shakes the
governing elements of society. Released from prison with an apology, he
is a man whose work has been changing the world. Research that continues
to verify the correctness of the foundation work and the exposure of the
dishonest. That work has been titled "Neo-Tech" and the man is known
mostly by his pen name of Dr. Frank Wallace.

From Dr. Wallace's research it is, without doubt, in the hands of the
individuals of society to cause the needed changes in society to happen.
Through expanding public awareness and exposure of the dishonest and
their ways, society will inherently change for the better. It is
inherently clear that honest wide-scope integration research does not
lead to rooms full of research papers, but to simplicity and
productivity. And it is also clear that honesty easily outdoes
dishonesty, even when such dishonesty is applied by those with high IQs.

-------- social and industry issue of important functionality  -----

This document is about the computer industry, but more specifically the
evolving technology of the computer industry. And this section is about
the social and industry issue of what is evolving. It is about applied
dishonesty and what is motivating it - why real solutions and major
advances are easily possible today but not happening. It is about
important technology for which, through the honesty of wide-scope
accountability, there is clearly an intent to suppress, the dishonest
reason behind that intent, and who all is contributing to it.

The computer industry has, for over a decade, made available and
affordable to the general public a collection of integrated technology
containing the fundamental functionality required to reach the research
goals and solutions of the above sections, and much more. But for this
section, what is of importance is the history of this technology: the
ability to apply the 20/20 vision of hindsight and the honesty of
wide-scope accountability to expose the real value this technology
presents, as well as the dishonest, wrongful, destructive abuse and
exploitation of it. The history from the small-scale research that
created it to its present-day status.

When this technology was introduced to the public, it presented an
integration of innovation and versatility that opened the door for
creativity and productivity. In time, this technology, together with
other innovative technologies, resulted in a major shift, causing
greatly reduced costs and greatly expanded creativity in another
industry. This technology has often been used to do what other, compared
technologies couldn't, and has been used in many different fields.

This innovative technology introduced easier integration, versatility
and creativity far beyond the then-compared technologies. A door open to
all, including the creative dishonest, who saw in it something to
wrongly abuse and exploit for personal gain. From the corporate
investment and control level down to the consumer, and those in between,
there exist the destructive dishonest. Though many did not wrongly abuse
or exploit this technology, they have been outnumbered or out-powered by
those who have. The overall effect has been to hide this technology more
and more from the public, contrary to where it productively belongs,
causing everything contradictory to evolving this technology - but not
its death, which the productive honest have prevented.

Today this technology is claimed to be dead, or set aside, by the
dishonest and those who are being wrongly misled by them. The dishonest
attacks on this technology are claims that this or that part of this
"well integrated technology" is now less than what is over here or over
there - and these attacks are on technology that has evolved little in
over ten years. But there is not one word of attack, or even mention, of
the "sum total" of the important fundamental functionality that not only
gives this technology the easy and creative integration versatility it
has, but is also required to reach the research goals and solutions
mentioned in the above sections. This technology is not dead or being
set aside; in fact it is the main focus of attention. Not so much the
name placed on this innovative integrated technology, but the versatile
and creatively open fundamental functionality it makes available.

This innovative integrated technology did not create dishonesty -
people do - and there is no major conspiracy of the dishonest here. To
understand the why behind the dishonest is simply to understand one
word, "greed," as it is applied from the individual to the group. The
intent to take without paying, and the intent to hide the use of
productive and easy-to-use technology in order to gain an easy advantage
and a quick, high-dollar return from doing so. Greed that, through the
20/20 vision of hindsight and the honesty of wide-scope accountability,
has proven to lead to self-destruction; has caused general destruction
and loss by not making more widespread use of this productive and
cost-saving technology; and has caused losses suffered by the honest
productive as they do their best to move forward with this technology
and their investment in it.

Greed is clearly the apex social and industry issue in need of
addressing and correcting, whether viewed from society in general or
from the narrow but powerful perspective of the computer industry, with
an even narrower focus on the technology mentioned here. The history of
this technology has been, and continues to be, documented, and it shows,
with certainty, that greed-generated dishonesty is a very real,
expensive and damaging social and technological problem. Despite the
fact that this technology has evolved little over its decade-plus of
existence, the sale value of used equipment increased to over fifty
percent more than its previous value as new equipment when the
manufacturer went out of business. And this at a time when technology
compared against it began dropping in price. Today, after the quick fall
of a second, equally dishonest company owning this technology, the
technology is not only without an owner, but ownership of some part of
it is supposedly being challenged in court. Yet this technology is still
being manufactured and sold for a price not much less than its original
retail price, while other comparable technologies have continued to drop
faster and faster in price, to the point of nearly a daily drop.

This is technology the dishonest claim is dead or being set aside, yet
other companies and organizations claim they are not only coming out
with clones and/or directly compatible technology, but that they are
making the improvements the original was kept from having. Even the
concepts of this innovative integrated technology are being incorporated
into a competitor's technology so as to save the competitor from death.
Technology the dishonest claim is dead or being set aside? Who is
fooling whom here? Or is it just the self-degradation of the dishonest?

But with all of this obvious dishonesty about this technology, the
dishonesty hasn't stopped - perhaps it has even increased - and there is
a reason why. The important fundamental functionality made available
through this innovative integrated technology has proven to be highly
productive, not through theory-based research objectives but through
real-life field application. Even the greedy have helped to prove this,
with their destructive and obvious actions of hiding the technology
rather than openly promoting it and helping it to better evolve. But the
important fundamental functionality made available through this
technology is only the needed foundation environment for something much
more advanced, versatile and productive, in the way of easy integration
functionality, to happen.


The dishonesty of wrongful exploitation and abuse this technology has
suffered does extend to the rest of the industry, mostly in software
development, as the software development problem exposes. That
software development has not evolved as quickly as it should have, due
to greed, CANNOT BE EMPHASIZED ENOUGH. Given the level of dishonesty
generated from just the availability of the required foundation
functionality, it is not difficult to imagine how the addition of such
a new level of highly productive but simple functionality could be
abused through the "technology limiting" and "highly destructive" acts
of greed-generated dishonesty. Not to mention the potential sum total
destructive results of such acts.

--------- The History Lesson ---------------------------------

The honest, wide-scope accountable history of the innovative
technology that has made available this important fundamental
foundation functionality is the undeniable example. The example to
learn from, and to know that the only way to prevent such a high level
of destructive dishonesty in wrongful exploitation and abuse is to
make the simple but advanced functionality and its user documentation
available to the public, on all platforms, at no cost above reasonable
transfer media cost (freeware), as well as to make the public aware of
its existence.

Though the fundamental foundation functionality is more important to
identify than the label or name given to the innovative technology that
made it publicly available, for the history lesson the name is Amiga.
A history that does extend beyond the Amiga into the industry of which
it is a part.

------ Software dishonesty through-out the industry ------------

As a simple example of the workings of greed: disk formats vary, but
up until not so long ago the Mac couldn't read or write IBM-formatted
disks. This was an attempt to lock the consumer into a system type, and
possibly an effort to set a standard the rest of the industry would
have to pay royalties on. But the Amiga made it possible to read and
write different formats. When a consumer needed to transfer files, an
Amiga owner might charge a hefty price to do it, while keeping hidden
the simplicity of it (and certainly not promoting the Amiga). This was
a problem created by the greed of the industry that the individual
further took advantage of, and it in turn cost the consumer. Something
the Amiga proved was just a software thing (with the exception of the
Mac's low-density multispeed drives). Outside of that hardware
exception, which also had an inexpensive work-around, there is no
reason multi-format reads and writes were not made generally
available, other than greed. Greed on top of greed on top of greed,
etc.! Greed whose sum total greatly cripples technological advancement
in the computer industry, which in turn cripples research in fields
that use computers for research and development.

The evidence and proof that this level of greed is widespread is
beyond obvious, due to the lack of a direct comparison base as well as
its being the accumulative result of many small acts of greed. It lies
in knowing this: hardware requires a great deal more in resources to
bring to market than software, yet hardware development is ahead of
software development. Sure, technology evolves, but software evolution
has stalled for far too long and caused far too high an expense, due
to greed, and that is why there is no room here to be nice, diplomatic
or even concerned about political correctness.

Both hardware and software technologies have the same beginning, an
idea, but hardware must pass the test of physical reality while being
versatile enough to allow the software on top of it to manipulate
representations of pure abstract concepts. Abstract concepts created
by human imagination that may have no direct physical connection to
reality, outside their representation through the hardware and our
agreed use (i.e. the concept of exchange value may be represented in
many physical forms, but it is just an agreed-upon abstract concept).

With the creative versatility of software development, it should be
ahead of hardware development and giving more direction to what to do
next in hardware development. But software development is well behind
hardware development, and this prevents more advanced hardware, which
we can produce today, from being made widely available, used, and at
affordable prices. Furthermore, software development being versatile
enough to handle such created abstract concepts makes possible,
through greed, the efforts to lock the consumer into a specific
platform and company product line (a closed-system mentality that has
brought a great many small problems that add up), rather than having
the openness to allow the simplicity, consistency and versatility that
lead to overall increased creativity and productivity.

---------- Software Development Dishonesty -------------------

And if this is not enough greed, software development itself has
largely been kept a matter of learning languages to program in. And
although there appears to be effort in software development products
that are beginning to automate the creation of programming code, it is
still a matter of keeping the end-user constrained from doing
development. The end-user and/or client must communicate the details
of the needs, the problems and maybe even the solutions to the
programmer, but it is the programmer who often takes the credit and
the intellectual property rights.

But software is such a unique and versatile product that the
possibilities of what can be done in software include bringing software
development to a level enabling end-users to do a great deal more, in
the direction of programming, than what they have been allowed.

However, it is the programmer who must make this possible for the
end-user, but the programmer believes that in doing so they will
remove the need for their programming skills. The uniqueness of
software as a product is that it is a product which, in its
development, takes only written communication to create. It is the
only product that goes from human thought to written communication and
then directly to a product with functionality that may, and often
does, control some movement in physical reality (i.e. printer, storage
media operation, monitor image, etc.).

Programming is the skill of translation, and this is the bottom-line,
hard-reality, undeniable fact. Sure, the programmer can apply their
creativity and ability to find solutions, but so can the end-user;
what the end-user lacks is the translation training. Training that
even the programmer must continually keep up, as those who produce the
development tools also believe they would remove the need for their
products if they did what should have already been done. But the tool
developers, in their efforts to counter this belief, are now
attempting to hide, via greater abstraction such as iconic visual
programming, the fact that translation can be automated. And they are
doing this by promoting illusional problems that exist only due to
their efforts to constrain the end-user, and inherently the
programmers. All in an effort to lock their customers into their
products and upgrades. Ultimately, and regardless of the development
interface used, program "code" goes through a process of automated
translation into the native language of a computer: machine language.
The language that is directly connected to the physical reality of
controlling hardware via controlling physical electronic signals.

All programming languages and development systems are in fact
abstractions of physical reality. However, machine language is the only
one at the first level of abstraction. Machine language is written
with only two symbols, one "1" and zero "0", to control the electronic
signals, which can be described through a light switch's positions of
"on" and "off." Two symbols often used to mark the power switch or
button of a computer-based device.
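The claim that all program "code" is mechanically translated downward
toward machine control can be illustrated with a modern sketch. In
Python (used here purely as an illustration of the principle), the
standard `dis` module shows the lower-level instructions a function is
translated into, and that translated form is itself nothing but a
sequence of bytes, i.e. ones and zeros:

```python
import dis

def add(a, b):
    return a + b

# Show the lower-level instructions the interpreter translates the
# source into -- one rung down the abstraction ladder.
dis.dis(add)

# The translated form is itself just bytes, sequences of 1's and 0's:
print(type(add.__code__.co_code))
```

The same translation continues downward in a native compiler, all the
way to the electronic signal control of the hardware.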

It is what is between machine language and the software development
interface where the dishonesties of greed have an open door. It is here
where abstract illusional constraints are created that have caused the
stalling of the software development evolution. It is here where the
general public as well as many software developers are easily deceived
and wrongly constrained in what all is possible with and through
computers. It is here where the growing expense of software development
failures and overruns begin.

As an example of dishonesty: the fact about the so-called programming
language of Borland's Delphi is that it is not a programming language,
but an interface for automated programming of the Pascal programming
language, where the programmer has the option of directly editing the
Pascal code as well as adding in pre-compiled code written in any
programming language. Delphi is not a programming language, but it is
being promoted as such for only one reason: to hide from the end-user
the fact that translation from written communication, as well as from
abstract iconic visual language (an additional level of abstraction
over written language), to machine language is not only possible but
being productively used.

Given the time span for software development to evolve, why has the
computer industry not better evolved, from the foundation of physical
reality, software development environments that are openly honest about
the single most important fact about software? The fact that software is
produced through a hierarchy or structure of created abstractions and
processes that eventually connect to and control computer hardware.
That such abstractions are created to either tie directly to the
physical signal control of computer hardware (i.e. machine language)
or to other abstractions that are eventually translated to physical
signal control of computer hardware.

The "complexity of the hierarchy or structure of abstraction and
processes" is not the issue but is often the dishonest greedy excuse
used to avoid exposing the fabricated illusion for what it is, intent
to deceive for wrongful exploitation and abuse of the paying customer
or client. And it is this dishonesty that has caused a great deal more
expense than the expenses calculated in software development failure
and overruns. It is not possible to calculated ALL that might have come
to exist as a benefit to society through research, development and
business had this dishonesty not evolved but rather the required
fundamental functionality and use of. The honest functionality that
allows the end user, the hobbyist, the programmer, the researcher to
create, store, transfer, build upon and use "abstraction hierarchies or
structures and processes" that tie to and control computer hardware. The
functionality that allows this is the real issue.

Software development, without doubt, should be a great deal more
advanced than it currently is, but it is where it is because of one
thing, and that is greed. It is this greed that caused software
advances to stall and that inspires the intent to hide this important
fundamental functionality as well as the advanced functionality. The
functionality required to reach many more research objectives and
solutions than the ones mentioned in the above sections.

Is this an ethics issue? It would be if there were a way to establish
and enforce ethics. But this is not possible, so the only thing left is
to expose the dishonesty, both by exposing to the public the easy and
safe techniques of deception used, and by making the important
functionality widely known and commonly available where it belongs.
Otherwise, the wrongful exploitation, abuse, destructiveness and
expense of greed will only increase to new levels.



It is important to understand that in the spectrum of using all this
functionality there will exist those who do not directly use it, those
who use it for simple things, and those who make extensive use of it.
Functionality that allows an individual to easily move towards greater
use as they build on their experience and established usage. And there
is also the spectrum of use of this functionality by applications, of
which an end-user may or may not be aware, or may need to be aware of
only as a place to begin to learn and do. However, it is important
that IT BE PRODUCTIVELY AVAILABLE, otherwise a destructive level of
greed will accumulate.

4) The important fundamental OS environment functionality identified.

By using real examples of this functionality it will be more difficult
to intentionally distort it. Although I refer to specific computer
platforms, what is important to see is the functionality. I will also
point out the weaknesses of the platforms mentioned.

The important fundamental functionality made available through the OS is
a matter of what types of program interfaces are possible. Overall it is
of the "OPEN SYSTEM" mentality. The three types, or combinations of
them, are:

Type 1) Programs that accept start-up arguments.

    This may also include programs that can accept arguments after the
    program is running, by running the program again with such
    arguments. On the Amiga this can be done in two ways: through the
    command line interface (the DOS prompt on other systems) or by
    setting the arguments in the icon information of the Workbench (the
    Amiga window and icon interface, similar to the Windows environment
    on IBM type systems). Also, there are many examples of such programs
    that can receive additional information or instructions through
    another running of the program.
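A minimal sketch of a Type 1 program, in Python (used here only as a
modern illustration). Everything the program does is driven by its
start-up arguments; the argument list is hard-coded below to simulate
a command line, where normally `parse_args()` would read it from the
invocation itself:

```python
import argparse

# A Type 1 program: behavior is set entirely by start-up arguments,
# not by an interactive interface.
parser = argparse.ArgumentParser(description="Repeat a message.")
parser.add_argument("message", help="text to print")
parser.add_argument("--times", type=int, default=1, help="repeat count")

# Simulated command line; a real run would use parser.parse_args().
args = parser.parse_args(["hello", "--times", "2"])
for _ in range(args.times):
    print(args.message)
```

Because the interface is just arguments, other programs and scripts
can drive this one without any human at the keyboard.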

Type 2) Programs that have a direct or built-in user interface.

    This is the most common interface of all. You run a program and, to
    make use of it, you input data, click on something, move the mouse,
    etc. This interface might be called the GUI (graphical user
    interface) of the program, but it's not always so graphical.

Type 3) Programs that have a command/data port or door interface.

    This is the least commonly recognized interface of all, but it
    proves to be a very important one, required for supporting the more
    productive open system as well as the ability to reach many research
    objectives. Perhaps the best example of this is the so-called
    "AREXX" port many programs for the Amiga have. What this makes
    possible is the ability to control a program externally to its
    GUI, to automate what the program does, to add functionality to the
    program externally, to tie programs together to create an integrated
    productive environment, etc.

    You may have heard of "plug-ins" for programs on IBM and
    Macintosh type systems, but in many ways these are a dishonest
    attempt to constrain and lock the end-user into a "closed system"
    mentality. Such "plug-ins" prove that it is possible to create such
    a command/data port or door, but where is it?

    Another example that proves such doors are possible is the use of
    DLLs, libraries, devices, etc., that exist as files. Through the use
    of these files by programs, functionality is added to the program.
    Yet these files are separate from the program file.

    It might be argued that the end-user won't make much use of such a
    door. This is not only untrue, presumptuous and dictating, but even
    more so dishonest, in that it prevents the next stage or level of
    evolution in software development and use from happening.
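The Type 3 idea can be sketched in a few lines of Python, loosely
modeled on the Amiga's ARexx ports. All names here are hypothetical;
the point is only that the program below knows nothing about the
scripts that drive it, yet any external code holding its port can
automate it, extend it, or tie it to other programs without ever
touching its GUI:

```python
# A minimal "command/data port" sketch (hypothetical names throughout).
class Editor:
    def __init__(self):
        self.text = ""
        # Commands exposed through the port, not through a GUI.
        self._commands = {
            "insert": lambda arg: setattr(self, "text", self.text + arg),
            "upper": lambda arg: setattr(self, "text", self.text.upper()),
            "get": lambda arg: self.text,
        }

    def port(self, message):
        """The command/data port: accepts 'COMMAND argument' messages."""
        name, _, arg = message.partition(" ")
        return self._commands[name](arg)

# An external "script" controlling the program through its port,
# much as an ARexx macro would:
ed = Editor()
ed.port("insert hello")
ed.port("upper")
print(ed.port("get"))  # HELLO
```

In a real system the port would be an inter-process message channel,
so any program, or the end-user's own scripts, could send commands.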

  *) In addition to the above, although it is not an absolute
    requirement, it is highly productive to have a multi-tasking OS.
    (Even a single-tasking system can pass commands and data between
    programs via files, including files that shut down and start up
    the applications.) The better an OS is at multi-tasking, the more
    productive all this will be. But where the Amiga, and some of the
    other systems, fall short is in the ability to avoid a system
    crash due to a single application failure. Unix-based systems are
    very good about preventing such system crashes, but do not make so
    easily available the needed command/data door found on the Amiga.
    The multi-tasking abilities of the current IBM and Mac based
    systems are examples of poor multi-tasking in comparison to the
    Amiga's. Having this system-level protection becomes important when
    the possibility of running many programs at the same time exists.

It's not too difficult to see the importance of all this interface
functionality, especially when you consider the research objectives to
integrate. But to give a few examples of how such functionality can
greatly increase productivity as well as solve costly problems,
consider the following:

    a) Consider what you do on computers over a period of a week. Think
about all those repetitious sequences of actions you perform. Perhaps
you use more than one program of a given type because of the ease of use
and functionality differences of some aspect of the programs. Maybe there
is something about one or more of the programs you use that, if it were
done differently, would make your job easier. Or perhaps you use different
types of programs in a process to achieve a single output. And let's not
forget those sequences of things you do only every once in a while and
have to stop for a moment to remember, or to find the written list you made.
        Now consider what it would be like to have the ability to
"record" the repetition so that you only have to issue one command or
menu selection instead of the sequence. What it would be like to
switch programs without having to manually go through all the motions to
save a file, exit a program, run another program and load the file, etc.
How it would improve your ease of use to automate a sequence of actions
that otherwise makes your job more difficult, error prone and time
consuming. All this is easily possible through the use of the third type
of interface.
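The "record once, replay as one command" idea above can be sketched
very simply, assuming the programs involved expose a Type 3 command
port. The names below are hypothetical; here the "port" is just a
function that receives command strings:

```python
# A minimal macro recorder over a Type 3 command port (hypothetical names).
class Recorder:
    def __init__(self, port):
        self.port = port        # the program's command/data port
        self.tape = []          # the recorded sequence of commands

    def do(self, command):
        """Pass a command through to the port while recording it."""
        self.tape.append(command)
        return self.port(command)

    def replay(self):
        """Re-issue the whole recorded sequence as one action."""
        return [self.port(cmd) for cmd in self.tape]

# Record the repetitious sequence once...
log = []                        # stands in for the controlled programs
rec = Recorder(log.append)
rec.do("save file")
rec.do("quit program")
rec.do("run other-program")
rec.do("load file")

# ...then the whole sequence becomes a single command or menu item:
rec.replay()
```

The recorder needs no knowledge of the programs it drives; the command
port is the only contract between them.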

    b) How often have you had to deal with the learning curve because of
a change in the programs you use, or an upgrade? Consider what is
probably one of the oldest types of programs, word processing, and how
it has evolved from being a simple text editor to having more functions
than you know what to do with, care to know or even need to know. How
often have you found such a change or upgrade actually reduces your
productivity, simply because you have to start the learning curve all
over again?
        Now wouldn't it be nice to be able to continue to build on what
you know and are used to, rather than having to relearn how to do what
you had been doing, simply because of a new or different program
interface of type #2? Stated another way, to have the type #2 interface
evolve with you, following you, rather than you following the
programmer's direction and choice.
        All this and more is easily possible, and it even has advantages
for the programmer in reducing repetitive programming. By designing
programs that are just user interfaces of type #2, but versatile
enough to allow the adding of menu items or icons as the user decides,
the user need not learn another interface, only add functionality as
the user needs it. This keeps the type #2 user interface from being
cluttered with functionality the user doesn't need. It also lends
itself well to multiple users each being able to customize, for their
own needs, different interfaces of type #2, perhaps using interface
type #1 at start-up to select the customization. It also makes it
possible for the end-user to take their personal type #2 interface with
them, should they change employers.
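A sketch of such a user-extensible type #2 interface, with all names
hypothetical: the menu starts minimal and the user, not the programmer,
decides what gets added, so the interface evolves with the user rather
than the other way around:

```python
# A type #2 interface the user extends rather than relearns.
class Menu:
    def __init__(self):
        self.items = {}

    def add_item(self, label, action):
        """The user decides what appears in the interface."""
        self.items[label] = action

    def choose(self, label):
        """Invoke a menu item by its label."""
        return self.items[label]()

menu = Menu()
# The user later adds the one extra function they actually need,
# leaving everything they never use out of the interface:
menu.add_item("Word Count", lambda: len("some document text".split()))
print(menu.choose("Word Count"))  # 3
```

A per-user customization file loaded at start-up (interface type #1)
could build a different menu for each user on top of the same
underlying functionality.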

        The general functionality of programs of a given type, such as
word processors, is the same. Actually, there is common functionality
used by most programs, such as loading and saving files. Don't be
fooled by the programmer who tells you programs read and write files
differently; though this may be true, the underlying programming
functions they use are the same. The difference is in how the
programmer makes use of the data read in: where the data is placed in
memory, how it is used and how it is displayed. But such data, as well
as programs, exist in memory and in storage as sequences of 1's and
0's. There is a saying in the software development industry, "write
once, sell many times," so know what is meant by this.

        Just as type #2 interfaces can exist as independent programs, so
can the functionality exist as independent programs. Functionality that
may be used by different program or application interfaces. Actually,
this already exists in many forms (DLLs, libraries, devices, etc.), but
in the way it has been used it is kept hidden from being generally
recognized and understood by the end-user. By making such functionality
accessible through interface type #3 and understood by the end-user, it
becomes possible for the end-user to have the freedom to add
functionality to programs. Otherwise the end-user must convince the
programmer to add it (and there is a good possibility the programmer
won't), on top of the wait and the upgrade cost incurred by the
end-user.
        Now consider the individual who is geared for doing word
processing but needs to move into doing a little desktop publishing
(adding simple clip art to the work). Should this individual have to
learn to use a DTP program, as well as relearn how to do the things they
did in a word processor, or would it be more productive to just add the
needed functionality to the interface the individual knows? Even
programmers can benefit from such an ability to evolve an interface with
their evolving skills. But for the programmer this has the additional
benefit of being able to make use of functionality already written and
tested, rather than re-inventing the wheel and then debugging it. The
field of programming is moving in this direction with component
programming code. But component programs would make component
programming code redundant; keeping the mechanism at the code level
certainly helps hide it from the end-user, as it places illusional
constraints on the end-user.
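The mechanism behind DLLs and shared libraries, functionality living
outside the program file and being added at run time, can be sketched
in Python. For self-containment the "plug-in" is written to disk by the
sketch itself; in a real system it would simply be a file the end-user
drops into a folder. All names are hypothetical:

```python
import importlib.util
import os
import tempfile

# The plug-in exists as a separate file, apart from this program:
plugin_source = "def word_count(text):\n    return len(text.split())\n"
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "wordtools.py")
with open(path, "w") as f:
    f.write(plugin_source)

# The host program loads the file by name and gains functionality
# it was not built with:
spec = importlib.util.spec_from_file_location("wordtools", path)
plugin = importlib.util.module_from_spec(spec)
spec.loader.exec_module(plugin)

print(plugin.word_count("functionality added after the fact"))  # 5
```

Expose that loading step through a type #3 port and the end-user, not
just the programmer, can decide what functionality a program carries.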

    c) As a last example, and in consideration of the above two examples:
using the three types of interfaces with the proper types of programs,
it is possible to create user-oriented integration interfaces.
Interfaces that help the end-user easily integrate their usage of
computers, as well as properly add functionality to the programs they
use, increasing their level of productivity. Thus allowing the end-user
to evolve their understanding and use of computers as they build their
own abstraction hierarchies, structures and processes.
        Even within the field of programming, such abilities to improve
and customize the interface a programmer uses, as they evolve their
skills, will only help the programmer to be more productive and less
repetitive and error prone. Eventually this will lead to both general
and personal programming environments of increased automation, and at
some point reach a level of automation where programming becomes a
matter of how well one can define the objective, rather than defining
the details of how a program is to reach the objective. And on the way
to such automation, the field of programming will become more of a
spectrum, rather than the current single position, out-doing efforts to
create two or three programmer types. In effect, bridging the
problematic gap between end-users and programmers will only help evolve
computer technology and usage while greatly reducing software
development failures and overruns.

        Whether you are dealing with defining the complex and detailed
logic of program code, defining the simple sequencing of repetitious
actions or adding functionality to an interface, the process is the
same: a process of building something from smaller pieces or parts. A
process everyone uses in learning and productivity, whether they deal
with computers or not. Everyone had to learn the numbers before they
could learn to add and subtract, and once they learned addition and
subtraction they could learn multiplication and division, etc.
Likewise for spoken and written language, as well as for learning
written programming languages; the process is the same. So let's wake
up, see and remove the wrongful exploitation and abuse the computer
industry has done the consumer and client, with its illusions of
complexity that have caused its problems to exist. Remove the
dishonesty and the problems will disappear. And as the door of
creativity opens for the end-user, research will become more
successful, simply because: 1) it will become much easier to integrate
fields and evolve research; 2) it makes it possible for "small scale
research" to include many more hobbyists; and 3) there is the certainty
of increased feedback, from having more of both theory-based research
and real-life field application, to collect for finding answers to the
next level of computer evolution.

    Closing this section out, it's worth mentioning there are programs
that work in this direction. Outside of the Amiga, which has available
the three interface types but not enough of the needed user-base
philosophy or easy-to-use integration applications, there are such
programs along the line of multi-media creation tools. But the problem
with these tools is that they are, for the most part, closed systems,
in that they have built-in limitations on versatility, are complex to
use and expensive to own.
    As with the illusions we, as end-users, have been presented with so
far, the integration tool set need not be so complex, closed-system
oriented or expensive to be versatile and powerful. If there is any one
thing to understand about computers, it is that computers will do
anything, within their inherent physical constraints, that they are
told how to do. This, of course, exposes the ultimate motive behind not
telling computers how to allow the end-user to do more: insured profit
via illusional constraints. But the next section will remove the issue
and excuse of complexity through the identification of the simple and
versatile "must exist functionality" tool set, as well as open the door
to making major advances in computer technology and usage.

5) The Advanced Simple Tool Set of Fundamental Functionality identified.

Before getting into the functionality let's take a look at where
computer technology should be today.

I don't know if you're a Star Trek watcher or not, but it has been
pointed out that the communication devices used in the earlier series
are very much the fictional equal of the cellular phones we have today.
The communication devices of today's series, "com-badges," are not as
fictional as might be thought by the general public. The working
technology for such com-badges exists today (actually it has existed
for over 30 years). It was first called the "Neurophone," and to the
best of my information it is now called the Transdermal Hearing Aid. No
moving parts; communication is done through the nervous system to the
brain, by-passing the outer ears. I had the fortune to play around with
one of the early prototypes, and it does work. It is also my
understanding the inventor finally got his patent when the patent
office let a long-time employee of the patent office, who had been deaf
since childhood, try it. The problem with getting the patent was that
the inventor could not communicate the technology using the current
technology of the time. There was no basis for connecting it to known
technology. The patent office employee broke out in tears, and the
invention received Patent #3,393,279.

These communication devices are only a couple of examples of what I'm
getting at. The fiction of yesterday can become the reality of today.
Star Trek is a show having high odds of becoming the reality of
tomorrow because the originator, the late Gene Roddenberry, consulted
with scientists working in the field. NASA was no stranger to Gene's
requests, and in return Star Trek has helped to promote space
exploration through public awareness of the possible.

Now consider the fictional computers used on Star Trek. Though you
rarely see the systems themselves, you do see many of their access
panels, devices and applications. If you are a Star Trek fan you may
know just how well the fictional technology of the show was put
together. There may be a few minor holes or contradictions in this
fictional technology, but for the most part it is extremely well done.
Though fictional, it is based on real theory and even real technology.
However, and not to contradict the technical working description of
this fictional technology, there is what we can do today to accomplish
some of its ideas and concepts. The computer access panels can be made
today, and made even more versatile than what you see on the show. By
combining LCD technology with touch-screen technology it becomes
possible not only to make these types of computer access control panels
but to change what is displayed and controlled by them. There are no
keyboards or mice on the show, only the combined input and output
display device and voice command.

There are the science stations, the medical devices and the Universal
Translator, but what is perhaps the most amazing of all is the
Holodeck. The Holodeck is supposed to be made possible by combining
numerous technologies, such as the transporter and replicator
technology, but it is all controlled by the computer. And it is this
example, idea or concept of programming something as complex as the
Holodeck that we are interested in. Forget about what the Holodeck does
and think about just programming it, maybe for the technology we have
today of virtual reality (VR) helmets and body suits.

Now, with today's programming practices, how long do you think it would
take to create such a detailed Holodeck-type VR program? Does the
programmer need to program everything down to the details of the leg of
a chair used in a scene? Or does the programmer simply describe the
setting and the characteristics of interaction, where the computer then
accesses the database for details and brings it all together, including
movement? Well, the answer is that the programmer defines the objective
and the computer handles the details of how. This may seem to be
fictional programming today, but the fundamental knowledge to easily do
this level of programming not only exists today but has existed for no
less than nine years.

Just as the proper use of the three types of interfaces would have
caused, and will cause, a leap forward in computer usage and
technology, the advanced simple tool set functionality identified in
this section can bring about an even greater, multiplied or exponential
leap on top of the three interface types. Yes, the ability to program
something as complex and as detailed as a Holodeck program is very much
possible today, though many years have been wasted, due to greed, and
there is much to change and do.

The previous section of this document showed three types of interfaces,
which have been available to the public for over a decade. It also
showed what has been possible, to increase creativity and productivity
through computers, but not done, and the greedy reason why. Had such
interface functionality been properly used, the difference we would
have today in computer technology and our use of it would be a
multiplied or exponential positive difference in advancement. It's hard
to say or imagine where we would be today, but it is certain that we
would not have the serious and expensive problems of programming we
have today. Nor would we be, just now, beginning to integrate fields.

Does this advanced functionality meet or help to meet the stated
research objectives and solutions of the earlier sections? Let's take
a look.

*** On "Open Systems" -  An open system is a collection of interacting
                     software, hardware, and human components.

    Seems the three interface types, mentioned in the above section,
are required for a system to be considered an "open system." All that
is left to do here is determine what the interface standards are going
to be and how. Now it seems to me that the only way to really determine
what works and what doesn't is to gather feedback from real-life field
application of open systems. The problem is that "open systems" are not
in common use as much as they should be, and this is bound to cause a
bias towards the much more complicated and expensive existing "open
systems." Perhaps what SEI is really all about is meeting government
requirements by creating a lot of paperwork and illusional complexity
about what an open system is and how it works. Paperwork and complexity
that will only make things sound and work in a manner much more
complicated than they really are, and as a result prevent the
advancements that come from keeping things simple.
    All that is needed here is the three interface types and the
component applications. The advanced functionality is only an additional
plus that allows going beyond these objectives to allow the end users to
evolve their interface environment and computer usage as they evolve
their understanding. And through the use of the advanced functionality,
issues like system management and maintenance of open systems can be
made much easier to handle and less of a concern.

*** On National Science Foundations:
        Research in "Learning and Intelligent Systems"

A recap:

The LIS initiative seeks to stimulate interdisciplinary research that
will unify experimentally and theoretically derived concepts related
to learning and intelligent systems, and that will promote the use
and development of information technologies in learning across a wide
variety of fields. The long-range goal of this initiative is very
broad and has the potential to make significant contributions toward
innovative applications.

While the ultimate goal of this initiative is to understand and
enhance people's ability to learn and create, the attainment of this
goal requires achievable intermediate goals and strategies. Such
strategies include combining the theory, concepts, research tools,
and methodologies of two or more disciplines, in order to focus on a
tractable component of a larger problem. This initiative seeks to
achieve these goals by encouraging interdisciplinary research that
has the potential to unify disciplinary knowledge about learning and
intelligent systems, and to foster technology research and prototype
development to explore and test designs and models that can lead to
supportive interactions between natural and artificial systems.


    Considering what the "long-range" and "ultimate" goals are, this
line of research seems to be an effort to further stall out software
development and usage evolution. To really enhance people's ability to
learn and create, which includes the ability to create innovative
applications, what more is needed than honesty about the fact that
learning and creating are both a matter of building on and/or with
what you know? A: The functionality, and honesty about it, that allows
one to do so.
    To accomplish this does not require tons of research or long range
"lab" development. The three interface types and programs that make use
of these interface types, allowing the end user to do such "building on
or with what one knows," is what is needed and it's needed today not
tomorrow or ten years from now.
    As far as unifying things, who better to figure this out than the
end user working in the field? Clearly the three interface types are
required here, but the advanced functionality allows the building or
evolution of a system to naturally reach a level that can "appear" to
be much more intelligent than what artificial intelligence research and
applications have brought in their "closed system" applications.
    As complicated as this LIS initiative has been made out to sound, I
have no doubt that with a little effort it can be made out to sound so
complicated that even the party writing it doesn't understand it. Just
like much software that has been produced and sold.

*** On another "CALL FOR PAPERS"

  "Third International Conference on Principles and Practice of
   Constraint Programming"

A recap:

   Scope of the Conference

   Constraints have emerged as the basis of a representational and
   computational paradigm that draws from many disciplines and can
   be brought to bear on many problem domains. The conference is
   concerned with all aspects of computing with constraints
   including: algorithms, applications, environments, languages,
   models, systems.


   Papers are especially welcome that bridge disciplines or combine
   theory and practice.


    To give a general example of using the concept of constraints,
refer to the use of a thesaurus' "Plan of Classification" and "Tabular
Synopsis of Categories." Starting with a general picture, you narrow
the word objective down using word "type" constraints until you have
the word. Such constraints have not "emerged" but have always been the
way we classify, categorize and define things in order to learn, create
and communicate.
    To easily "bridge disciplines or combine theory and practice" in the
computer environment requires an "open system" functionality. Of course
the advanced functionality makes use of the concept of applying
constraints. Otherwise, there would be no way to classify, categorize or
define things, and without this you cannot evolve a system very far
without creating a mess that is difficult to keep versatile, organized
or effectively productive.
    Another way to look at the concept of applied constraints is that of
the dictionary. A word can mean anything, often it has several meanings,
but the definition constrains it down to a given meaning or meanings.
From here it is the way the word is used that further constrains it to
pin down the meaning intended.
    In a way "constrain" is just another way to say "define."
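
    The thesaurus-style narrowing above can be sketched in a few lines
of Python. The word list and the attributes used as constraints are
made-up examples; the only point is that each applied constraint
shrinks the candidate set until the intended word remains, which is
exactly the sense in which "constrain" is another way to say "define."

```python
# Each constraint narrows a broad candidate set toward one meaning.
# The word entries and attributes are hypothetical illustrations.

WORDS = [
    {"word": "sprint", "type": "verb", "domain": "motion",  "speed": "fast"},
    {"word": "stroll", "type": "verb", "domain": "motion",  "speed": "slow"},
    {"word": "ruby",   "type": "noun", "domain": "mineral", "speed": None},
]

def narrow(candidates, **constraints):
    # Keep only entries matching every given attribute constraint.
    return [w for w in candidates
            if all(w.get(k) == v for k, v in constraints.items())]

step1 = narrow(WORDS, type="verb")    # general picture: verbs only
step2 = narrow(step1, speed="fast")   # narrowed to fast motion
print([w["word"] for w in step2])     # ['sprint']
```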

*** On the Software Development problem.

A recap:

---- Overall Summary ----

Problems - all-around lack of complete documentation and weak training,
faulty user input and feedback - self-contradictory user requests, lack
of project leadership between developers and users, management-created
problems and low quality-control standards, feature creep and software
size increase, advancing technology's rate of change and lack of
general standards, solutions around the corner that never arrive and an
"our tools are better than theirs" attitude, lack of a value-chain
structure for value-added abilities, failure to produce a functional
model before coding and constant remodeling, etc.

Solution directions - code re-use, object oriented programming,
component-based programming, distributed components, better tools,
better programming methodologies, leaner software, a splitting of code
writer types into two categories - architects and assemblers, better
effort to establish a working vocabulary between developers and users
so users can in some way lead development, etc.


    If you haven't figured it out yet, the biggest problem here is the
accumulation of dishonesty. There is no conspiracy, just many small, and
a few not so small, acts of greed. To support this greed is to support
the "closed system mentality" for much of this greed simply cannot exist
with an "open system mentality."

Begin Side Bar Elaboration:

    Whether you are calling something a program, a DLL, a library, a
device, etc., they are all files. Even the OS is made of files, even
when permanently put into a ROM (Read Only Memory) chip. Software is
the tool with which dishonesty is created. Software has been made out
to be so complicated, and/or complicated sounding, that the average
end user will believe most anything they are told. So when it comes to
communication between the programmer and the end user or client, there
really is no surprise that there is a problem. There really is no
reason, other than greed, to create more and more complex-sounding
software concepts.
    Either a file contains functions that control hardware directly,
or some abstract process that controls hardware, or a file contains
data, though a combination can also exist. The Macintosh is perhaps
the most honest about this separation of data and functions, in how it
stores files, but even here this is hidden from most users with its
GUI-only interface. And there are other facets of the Mac that help
support "closed system mentality."

    The advantage of using "closed system mentality" for the
programmer is that it helps lock the end user into the programmer's
products and prices. The disadvantage of "closed system mentality" for
the end user is that it forces the end user to do things the way the
programmer dictates, as well as having to deal with the problems the
programmer creates with faulty software. Also, if the end user wants
improved or added functionality, then they must communicate this to
the programmer in the hope the programmer will understand and put it
in the next release. The end user will then have to buy the software
again, perhaps paying for additional functionality the end user didn't
want or need, and maybe paying again for what they already had.
    What is so dishonest about this is that the programmer may know
absolutely nothing about using the software in real life, or the field
it is used in, or what all is really needed in the software, but relies
on the end users to educate him. For this education, what does the
programmer give in return? He claims property rights that can allow
him to prevent another programmer from using the solutions the end
user came up with. Solutions that perhaps another programmer would use
in a much better and less expensive software package and support.
    The "software development tools" developers use the same "closed
system mentality" in the products they sell the programmers. So when
the end user asks for a little education from the programmer, in order
to better communicate solutions, the programmer either can't or won't
do this, because the hierarchy of illusional complexity is so big.
There are simply too many lies being told in support of insured profit
for the programmer and tools developers that anything but a one-way
communication, from end user to programmer, becomes impossible.
Ultimately it comes down to the attitude of "we are going to screw the
customer, client, end user, and if they don't help us do this then we
are still going to screw them."
    Of course there exist all the problems the software development
industry has, and the solution directions presented are lies as well,
in that they don't really help the end user! With such one-way
communication and property-right claims, what problems can one expect?
A: Re-read the recap!!! Yes! The software industry is nothing more than
a lot of "fu*king crap!" If my language here offends you, then you
should be offended even more so, and in proper proportion to what it
is costing you, by the enormous accumulated dishonesty causing wrongful
exploitation and abuse of the end user, consumer and tax payer by the
software industry: Who do you think pays for the failures and overruns
of the huge and expensive projects funded by the government? Where do
you think large, non-government projects get the finances to pay the
software developers when the project goes into overruns or gets
cancelled after software development has begun? Perhaps increased
prices of general products from the investing company? Etc.

End Side Bar Elaboration ----

    Remove the dishonesty of complexity and "closed system mentality"
and the software development problem will nearly vanish. What is left
of the problem is removed by applying honesty in communicating the
simplicity of what software development should be, so it quickly
becomes much simpler. Applying open system mentality and functionality
will not only allow the end user to do more, but allow the end user to
learn general programming concepts, enabling honest two-way
communication with programmers. Thus creating a spectrum of, rather
than the proposed 2-3, programmer types. A spectrum that ranges from
the end user to the "real software engineer" and allows programmers to
be better positioned where their personal skills, field knowledge and
experience really matter.
    The advanced functionality not only helps this spectrum to evolve
quicker than it would without it, but may be the only way for such a
spectrum evolution to happen effectively and honestly.

*** EXTRA - On the general concept of Artificial Intelligence.

    The field of artificial intelligence contains many concepts like
constraint programming, blackboarding, forward and backward chaining,
knowledge representation, natural language, vision, automatic deduction,
expert systems, neural nets, LISP, Prolog, etc.. And there has been a
great deal of research and publication on the topic of artificial
intelligence. But with all of this the field of artificial intelligence
fails to realize:

    A.I. is in fact a "by-product illusion" resulting from the
accumulation or build up of integrating processes (active data) and
information (static data) through a configuration of fundamental
functionality that has long been available through computers.

    This is absolutely verifiable.

    The only real problem to address is human acceptability of the
simplicity of this. The fact is there has been, and continues to be,
such a great deal of investment in research that the willingness to
accept the simplicity is nearly non-existent.

    This is not to say all research in this field is worthless, but
there is a spectrum from pure worthlessness to completely valid and
important research. Upon verification of the paragraph above, it will
be possible to determine what research to drop and what research to
invest more in.

    Within the inherent physical constraints of computers and attached
devices, computers will do anything they are first told how to do. This
includes telling such devices how to collect data and analyze it for
later use (the illusion of learning). The problem with research is that
it has been too concept-specific and therefore too complicated, too
closed-system oriented, too constraining in how something must be done
or not done at all. And this is not being honest about the potential of
computers. A computer only "sees" ones "1" and zeros "0" in its
processing and data, and therefore will never know what a tree is. To
argue against this is nothing more than trying to cause a deceptive,
wrongful illusion.
    However, by removing the constraints caused by closed system
mentality and the limited focus of applying specific human-created
concepts, to allow open system ease and limitless addition and
integration of human-created computer processes, the by-product
illusion of A.I. will evolve in a very natural manner. And by having
the required functionality publicly available and in use, the field of
A.I. can put to rest the illusional and wrongful expectations that have
been placed on it, while the illusion is understood and allowed to
naturally evolve far beyond such expectations.

    To say all this in a simpler way: Human intelligence and knowledge
are things we evolve because we have the ability to do so. We are not
limited in what concepts we can create, learn, integrate and apply. To
apply "closed system" mentality to computers is a direct contradiction
of allowing us to evolve and integrate our use of computers. Apply
open system mentality to computer usage, along with a computer-based
tool set that gives us the ability to build upon and evolve our usage,
and the illusion of Artificial Intelligence will also evolve as a
by-product.

    There is no question that A.I. research has brought us some
important concepts, methods and functionality; however, keeping all
these things isolated from integration also prevents simpler and more
direct methods and functionality from being used. In some ways the
numerous concepts and functionality that have come about through A.I.
research are a perverted dissection and distortion of much simpler
functionality.
The advanced functionality of this section is such functionality that
gives us the ability to evolve our usage of computers in both a general
and personal way.

The Advanced Functionality.

Acts of the dishonest -- Given the widespread and highly accumulated
dishonesty throughout the software industry, it would be foolish to
think such dishonesty wouldn't be attempted here. This is functionality
the software tools developers and programmers will not likely produce,
make generally available, or support without demand from programmers
and end users, respectively. Because things in physical reality work
only one way when it comes to the basics or fundamentals, it is likely
we will see: dishonest attempts to distort the functionality to be
less functional and/or more complicated than what it is; property-right
claims made to it for personal profit; hiding it from programmers or
end users, etc. Anything but honesty, acceptance and support for it.
However, I, as an end user having the mindset to apply my effort and
time to see, detail and communicate these honest concerns and the
functionality, know there is far more for everyone to gain by having
this functionality freely available, used and supported, than what a
select group will gain through dishonest, self-serving, wrongful
exploitation, abuse and fraud.

Hardware evolution -- As this functionality will allow major advances
to happen in software development, system OSs and software use,
hardware and OS developers will find it profitable to evolve their
products in the direction of better support of the advanced
functionality. Just as they will do so towards open system
functionality. With this in mind, the details of the advanced
functionality's operation may change in the direction of becoming
simpler to use.

Advanced functionality evolution -- Software does evolve, but any
alteration(s) to the advanced functionality will only be alteration(s)
that address and correct exception failures and do not constrain/break
previous working vocabularies. In other words: such alterations will
only improve the integration of the programming concepts used within
the advanced functionality and, as such, improve its abilities. Let it
be understood: the objective of evolving the advanced functionality is
to enable virtual interaction connection to happen using must-exist
functionality. Any optional functionality is best left to what already
exists or can be created external to the advanced functionality. We
are not inventing or re-inventing the wheel, but creating/producing
the functionality to allow wheels to be attached to and usable by many
different things.

Advanced Functionality Objective:

        Due to the versatility of the advanced functionality, this is
    a tough one to define. An open system environment supporting the
    three interface types is assumed, as well as software that makes
    use of such interfaces. With this in mind, consider the following
    as an objective:

    * To provide an advanced integration of simple functionality that
      allows the computer user to evolve their use of computers to new
      levels of productivity through user defined integrations and
      automations of any accessible software and hardware functionality.

    * To make the advanced functionality available as a shell type
      application having all three types of open system interfaces.

      With the ability to:

        - Detach from it, applications it starts up.

        - Run as windowed shell(s) and/or background process(es).

        - Attach to it, via type #3 interface, custom user interfaces.

    * To have this advanced functionality freely available on all
      computer platforms.

The General Concept of the Advanced functionality:

    * To allow the computer user to create, maintain and use words and
        related definitions of processes, data and application
        integrations, to integrate, automate and evolve their usage of
        computers.

        - the definitions may contain sub-definitions of user-defined
          types.

          (i.e. text, shell or batch scripts, program or application
          execution, program or application automation commands via
          application doors of the type three interface, programming
          source code, binary code, pointers to other definitions,
          processes or data files to use, any combination, etc.).

    And the user can:

        - build new word-definition vocabularies using existing
          vocabularies.

        - use the basic programming concept of "variables" to enable
          versatility and flexibility of words, vocabularies and
          processes, within such definitions, sub-definitions and
          constraint sets.
        - apply constraints to the use of words or vocabulary sets,
          definitions, sub-definitions and constraint sets, with the
          option to do so using "variables."

        - manually control, as well as automate, operational mode
          changes of the advanced functionality. (i.e. start, step,
          stop, re-start, continue).

        - create processes that can change which vocabulary set(s),
          sub-definition type(s), variable(s), and/or constraint(s),
          that are to be used and at what stage of processing.
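
        The word-definition concept in the list above can be sketched
    in Python. The vocabulary structure, sub-definition types and
    "$VARIABLE" syntax below are hypothetical illustrations invented
    for this sketch, not a real VIC file format: a word's definition
    holds typed sub-definitions, pointers let words build on other
    words, and variables keep the definitions flexible.

```python
# A user-built vocabulary: each word maps to typed sub-definitions
# (text, script, pointer to another word). All structures here are
# made-up illustrations of the concept described in the document.

vocabulary = {
    "render": {
        "subdefs": [
            {"type": "text",    "body": "produce an image from a scene"},
            {"type": "script",  "body": "run_renderer $SCENE $OUT"},
            {"type": "pointer", "body": "save"},  # builds on another word
        ],
    },
    "save": {
        "subdefs": [{"type": "script", "body": "write_file $OUT"}],
    },
}

def expand(word, vocab, variables):
    """Resolve a word into concrete script lines, following pointers
    and substituting $VARIABLES - building with what is already known."""
    lines = []
    for sub in vocab[word]["subdefs"]:
        if sub["type"] == "pointer":
            lines += expand(sub["body"], vocab, variables)
        elif sub["type"] == "script":
            body = sub["body"]
            for name, value in variables.items():
                body = body.replace("$" + name, value)
            lines.append(body)
    return lines

print(expand("render", vocabulary, {"SCENE": "ball.lws", "OUT": "test.jpg"}))
# ['run_renderer ball.lws test.jpg', 'write_file test.jpg']
```

    Note how "render" was defined partly in terms of "save": new
    vocabulary is built from existing vocabulary, which is the evolution
    the document describes.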

   Example of general concept:

   For this example we will use:

   Amiga - due to its having the three open system interfaces.

   LightWave (a graphics and animation program) - due to its having an
         AREXX port (interface type #3) and a vocabulary accessible
         through the AREXX port. However, LightWave requires the use of
         AREXX scripts to access and use this vocabulary, so we also
         use:

   AREXX - An interpreted programming language that can make use of
         "libraries" to add functionality to it, and likewise to
         programs like LightWave which use AREXX.

   ImageFX (an image processing program) - This program has an
         AREXX port but does not need AREXX, though AREXX can be used
         to add functionality. This program is well suited to
         supporting open system mentality, having a full vocabulary
         set (all you can do through its GUI type #2 interface)
         available through its AREXX port (type #3 interface). And the
         additional functionality to: record and save what actions the
         user performs (automatic script creation from user action);
         and the ability to automatically bring up any of its input
         requesters for input that wasn't supplied through its type #3
         interface.
        With these programs we have three vocabulary sets, but for this
    example we will have a few other vocabularies (mentioned as we come
    to them).

        Now from the "advanced functionality" interface type #2 we
    enter:

    Create a ball in Lightwave, 1/4 the size of the screen and
    give it a gold surface. Render it out using test.iff as a backdrop
    then use ImageFX to apply the effects of fx1, fx5 and fx8. Save the
    image as test.jpg

        The first thing to happen is the advanced functionality will
    access our evolved natural language processing vocabulary to parse
    our input down into just the basic needed elements. This process
    will happen in several phases, changing the vocabularies,
    sub-definitions and constraints used. Changing vocabularies is
    needed to determine what to drop, what to convert into LightWave
    information and what to convert into ImageFX information. Also
    because LightWave and ImageFX might have commands of the same name
    but different meaning. And such vocabularies are kept in separate
    files rather than one big dictionary, allowing better and easier
    maintenance of specific vocabularies.
    (natural language processing {NLP} can easily be applied/evolved
    using the abilities of the advanced functionality, so there is no
    need of an external program here, though one or more can be used.
    Additionally, the advanced functionality can handle evolving the
    NLP vocabularies' ability to properly handle exceptions. In fact,
    the abilities of the advanced functionality make it possible to
    handle a great deal more than NLP, simply because it is not a
    dedicated NLP system but an open system able to handle any number
    of NLP methods, change methods during processing, or use more
    direct but non-NLP methods. However, the advanced functionality
    may not be as fast as a dedicated NLP system, but then again it
    may not need to be to get to the output objective faster. And NLP
    through the advanced functionality is something more likely to
    evolve rather than be available all at once).
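
        The phased parse described above can be sketched as follows.
    The keyword sets are tiny made-up stand-ins, not the real LightWave
    or ImageFX command vocabularies; the point is only that the same
    request yields different elements depending on which vocabulary
    constrains each phase.

```python
# Each phase applies a different vocabulary to the same request,
# keeping only the elements that application understands. The keyword
# sets are hypothetical, not real application command sets.

REQUEST = ("create a ball in lightwave 1/4 the size of the screen "
           "give it a gold surface then use imagefx to apply fx1 fx5 fx8")

VOCABS = {
    # phase -> words that phase recognizes
    "lightwave": {"ball", "gold", "surface", "1/4", "screen"},
    "imagefx":   {"fx1", "fx5", "fx8", "apply"},
}

def parse_phases(request, vocabs):
    words = request.split()
    # One pass per vocabulary; ambiguity between applications is
    # resolved by which vocabulary is in use for the phase.
    return {phase: [w for w in words if w in vocab]
            for phase, vocab in vocabs.items()}

elements = parse_phases(REQUEST, VOCABS)
print(elements["imagefx"])   # ['apply', 'fx1', 'fx5', 'fx8']
```

    Keeping each phase's vocabulary in its own structure (or file, as
    the document suggests) is what makes the parse maintainable as the
    vocabularies grow.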

        Once the basic elements have been reduced down, converted to
    just the needed information and placed into variables, the advanced
    functionality then proceeds to create the AREXX script(s) needed
    for LightWave, accessing, of course, the AREXX and LightWave
    vocabularies.

    (this process is one of automated programming that will also change
    vocabularies, sub definitions and constraints used as well as having
    several phases in producing the needed AREXX code).

        After this, the advanced functionality starts up LightWave and
    runs the AREXX script(s). Then it waits for LightWave to finish its
    work. The advanced functionality could continue by creating an
    AREXX script for ImageFX, but ImageFX doesn't need AREXX to do what
    we want, which we can be more direct about.

        Of course the next thing to do, once LightWave is finished, is
    to start up ImageFX and grab the LightWave buffer containing the
    rendered image. Once this is done the advanced functionality can
    shut down LightWave and proceed to send ImageFX the needed
    commands until it is done and we have our image file.
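
        The hand-off sequence just described can be sketched with two
    stand-in classes. Nothing below is a real LightWave or ImageFX
    API; the stubs only mirror the flow: run a script in the first
    application, wait, pass its buffer to the second, shut the first
    down, then drive the second with direct commands.

```python
# Hypothetical stand-ins for a script-driven renderer and a
# command-driven image processor, illustrating the orchestration flow.

class RendererStub:
    """Stand-in for a script-driven renderer (like LightWave via AREXX)."""
    def run_script(self, script):
        self.buffer = f"rendered({script})"   # pretend to render

    def shutdown(self):
        self.buffer = None

class EffectsStub:
    """Stand-in for a command-driven image processor (like ImageFX)."""
    def __init__(self, image):
        self.image = image

    def command(self, cmd):
        self.image = f"{cmd}({self.image})"   # apply one effect

def pipeline(script, effects, out_name):
    renderer = RendererStub()
    renderer.run_script(script)          # start app 1 and wait for it
    fx = EffectsStub(renderer.buffer)    # grab the render buffer
    renderer.shutdown()                  # app 1 no longer needed
    for cmd in effects:                  # drive app 2 directly
        fx.command(cmd)
    return (out_name, fx.image)

name, image = pipeline("ball.lws", ["fx1", "fx5", "fx8"], "test.jpg")
print(image)   # fx8(fx5(fx1(rendered(ball.lws))))
```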

         * * * * * * * * * * * * * * * * * * * * * * * * * * *

        The above example is missing a lot of details as to how the
    advanced functionality will do all this, because this example is
    just giving the general idea and it is something of a complex
    example. However, for those who know these programs (not including
    the advanced functionality), you know it's possible to write the
    specific AREXX scripts to perform this specific control of these
    programs. Furthermore, you know there are standard required things
    to do in AREXX scripts for LightWave that can be put into AREXX
    code pieces. And through AREXX you can even reduce this down to an
    AREXX script front-end, so as to be more like ImageFX in ease of
    sending commands to LightWave.
        The point here is that all this can be broken down into
    simple-to-do pieces. Pieces that can later be put together in a
    customizing, automated manner. This includes dealing with natural
    language processing. And this is how the advanced functionality
    works: by putting simple-to-do pieces together, evolving via the
    addition of simple-to-do things that simply control larger
    collections of small pieces. The above example breakdown could be
    an actual outline for a stage of evolution of the advanced
    functionality vocabularies and processes.
        The important thing to note here is, as the vocabularies
    evolve, automation evolves. Just as we learn and evolve our spoken
    and written language vocabularies. But here such advanced
    vocabularies can be transferred to other systems, perhaps via the
    internet, and maybe even accessed through the internet during
    processing.
        With internet accessing in mind, it becomes possible to evolve
    a very large, internet-based catalog of vocabularies, where
    specialized vocabularies are produced and made available by those
    who work in such specialized fields. Thus causing a much quicker
    and broader evolution of productively usable vocabularies than
    what any constrained access to just theory-based research and/or a
    few select businesses will evolve over the same time period.
    Again, emphasizing the importance of the advanced functionality
    being freely available on all platforms.

The Advanced Functionality identified:

        If there is anything that justifies use of the word "Advanced,"
    it is that the "Advanced Functionality" is: an integrated
    configuration of common, but extended, functionality that enables
    a very high level of versatility.

    By this, the advanced functionality is labeled:

              "Virtual Interaction Configuration" or "VIC."

    And it consists of just nine commands and their related files.

    The general functionality of the individual commands is rather
simple, but the details and/or command options may seem, at first,
complicated. And the overall use of the configuration may seem to be
even more complex. However, it's really not complicated, but rather a
configuration of integrated commands, each of which is carried out to
its logical integrated conclusion. What this enables is versatility
and the ability to handle exceptions. Allowing the user to do simple
things as they evolve their vocabularies, usage and understanding of
the VIC. And to continue adding simple things that integrate previous
simple things into a larger, more complex integration. But the
functionality is there to keep a reasonable level of organization and
a way out for the user who might otherwise trap themselves into false
constraints or corners.
    The VIC functionality is not field specific but interfaces and
processes can be made for it that are. With this in mind, interfaces and
processes can also be made for creation, maintenance, debugging, etc.
of VICabularies.

    For this document, we will not go into the details of the
commands' usage, but only the general concepts and a little detail.
This document was created for public awareness of the real problems
and the possible easy solutions. There is other documentation, with
more details, that can be found through the web page:


    And the Nine Commands of the VIC are:

        AI - (Activate, Alternate, or Address -Interface)

        PK - (Place Keeper)

        OI - (Output to Input or Obtain Input)

        IP - (InPut from)

        OP - (OutPut to)

        SF - (Script File processing)

        IQ - (Index Queue argument)

        ID - (IDentify argument)

        KE - (KEy or Knowledge Enable)


*** AI - Setup, Start and Stop a VIC, and VIC external control.

        The concept of AI is to start-up a VIC with options of setting
        the basic starting contents of the VIC parts and to also shut a
        VIC down. AI is also used to allow one VIC to communicate and
        control another VIC. It is one way to access a VIC from an
        external program.

        * This is the VIC's type #1 interface.

        File: an option exists for this command to generate or use a
              file it previously generated from a shutdown.

*** PK - Track and alter a VIC's reference data and sequence position.

        The concept of the Place-Keeper is to keep track of the
        reference data and sequence position of a VIC's parts,
        including itself. It does this through a PK-file. PK can also
        directly alter the contents of the PK-file and change to
        another PK-file.

        The PK-file might be considered a process- and environment-type
        file. The PK-file is intended to always be changing, by at
        least the changing SF running file@line# stack, but in other
        ways too. Due to this, the actual PK-file is only updated, or
        saved, at selected times. If a watch window is open, then the
        contents of the watch window are always updated.

        The purpose of the PK-file, other than just keeping track of a
        VIC's reference data and sequence position, is to allow snap-shots
        or frames of a VIC process to be taken and at selected times in
        a VIC process sequence. Doing this allows a VIC process to be
        set aside, a frame to be saved, and picked up later or passed on
        to one or more VICs to continue. This makes possible many types
        of processing, such as using the concept of sub-processes,
        parallel or network processing, tree or parent/child processing,
        etc. All done with the PK-file while also having the ability to
        communicate, pass data and processes between VICs. And all of
        this is done by simply changing the contents of the PK-file or
        the PK-file itself via the PK command.
            The best way to describe the PK-file is to take a look at
        it. It is a text-based file, as are all VIC files.

 The PK-file structure:

 AI: AI-name.# ; PK-file directory ; Current Directory

 PK: PK-filename ; last/alt. PK-filename ; opt. default PK-filename

 OI: OI-filename ; last/alt. OI-filename ; opt. default OI-filename

 IP: device ; preprocess,Class ; BOI,EOI : opt. last/alt. set

 OP: device ; postprocess,Class ; BOO,EOO : opt. last/alt. set

 SF: SF-LPC flags ; last/alt. flags ; SF-fname@line# : opt. last/alt.set
   : SF-filename@line#,....>running stack>

 IQ: IQ flags ; last/alt. flags ; IQ-fname@line# : opt. last/alt. set
   : IQ-filename@line#,....>running stack>

 ID: ID flags ; last/alt. flags ; ID-fname@line# : opt. last/alt. set
   : ID-filename@line#,....>running stack>

 KE: Master-teeth ; last/alt. M-teeth ; KE-fname : opt. last/alt. set
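The layout above suggests a simple reader. The following is a minimal sketch, assuming only what the layout shows: one entry per command line, fields separated by ";", and ":"-prefixed continuation lines (the running stacks) belonging to the previous entry. The field handling is an assumption drawn from the layout, not a defined file format.

```python
# Illustrative parser for the PK-file layout shown above. Each command
# line ("AI:", "PK:", ...) becomes a dictionary entry holding its
# ";"-separated fields; continuation lines beginning with ":" (the
# running stacks) are appended to the previous entry's field list.

def parse_pk_file(text):
    entries = {}
    last = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith(":"):
            # Continuation line: a running-stack line for the entry above.
            entries[last].append(line[1:].strip())
        else:
            key, _, rest = line.partition(":")
            last = key.strip()
            entries[last] = [f.strip() for f in rest.split(";")]
    return entries
```

With this shape, swapping a frame of a VIC process is just writing out one such dictionary and reading it back later, which matches the snap-shot idea described above.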


*** OI - get input into variable

        Output-to-Input - The concept of Output-to-Input is, for there
        to be input, there must be output from something to grab as
        input. Output-to-Input is a reminder of "Connection."

        Obtain-Input (same "OI") - The concept of Obtain-Input is to
        grab input and place it as the contents of a variable. This is
        a very common function, but also stored is the data CLASS tag
        of the value, as DEFINED by IP. If the amount of input is larger
        than a given amount, a file is created and the contents of the
        variable become the filename. Like PK, there is an "OI-file."
        The file contains the list of variables, their data CLASS tags
        and their values. PK cannot alter the contents of the OI-file,
        only OI can, but PK can change the OI-file used by OI.
        Generally, the active OI-file is located in RAM. OI can by-pass
        the IP setting.
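The variable-plus-CLASS-tag storage, with large values spilled to a file, might look like the following sketch. The table shape, the threshold constant, and the temp-file handling are illustrative assumptions.

```python
import os
import tempfile

# Hypothetical sketch of Obtain-Input: store a value in the OI table
# under a variable name together with its CLASS tag. If the value is
# larger than a given amount, write it to a file and store the filename
# as the variable's contents instead.

SPILL_LIMIT = 1024  # assumed threshold for "larger than a given amount"

def obtain_input(oi_table, name, value, class_tag="untagged"):
    if len(value) > SPILL_LIMIT:
        # Large input: spill to a file, keep the filename as the value.
        fd, path = tempfile.mkstemp(suffix=".oi")
        with os.fdopen(fd, "w") as f:
            f.write(value)
        oi_table[name] = (class_tag, path)
    else:
        oi_table[name] = (class_tag, value)
    return oi_table[name]
```

Because the OI-file is just such a table, more than one VIC could read it at (or near enough) the same time, as the Second Level notes below suggest.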

*** IP - where to get "OI" input and what to get

        The concept of IP is to select the input device or program port
        for OI to use and optionally DEFINE the input data CLASS tag.
        This is similar to input redirection control but with the
        ability to insert a processing step between the incoming input
        and destination variable. A process that can also generate a
        data CLASS tag. This input setting is contained in the PK-file
        and can only be altered by PK, but it can be bypassed by OI.

        Defaults to standard Keyboard as input device.

        DEFINE, if used, can be considered a pre-processor, such as a
        filter, data type identification, or just a user created CLASS
        tag.

        IP settings are just a line in the PK file. In the IP line of
        the above PK-File, BOI and EOI determine what characters begin
        and end input, but default to the first character received and
        EOF (end of file or input stream).

        This command does not have its own related file.

*** OP - where to send "SF" output and what to send

        The concept of OP is to select the output device or program port
        for SF to use and optionally DEFINE the output data CLASS tag.
        This is similar to output redirection control but with the
        ability to insert a processing step between SF and the output
        device or program-port. This output setting is contained in the
        PK-file and can only be altered by PK, but it can be by-passed
        by SF.

        Defaults to calling device or program-port for output to, but
        can be set for standard out.

        DEFINE can be considered to be a post-processor, if used, such
        as a filter, data type identification, or just a user created
        CLASS tag.

        OP settings are just a line in the PK file. In the OP line of
        the above PK-File, BOO and EOO determine what characters begin
        and end output, but default to the first character sent and EOF
        (end of file or output stream).

        This command does not have its own related file.

NOTE: Both IP and OP could be removed as commands with their single
        functionality being made a command option of PK. Or their
        functionality options could be increased by moving some options
        from PK to them. However, I advise against it because in doing
        so the logical hierarchy of the VIC command set and options is
        disrupted. It is simply better to leave them as they are for the
        purpose of human learning, understanding and recall of the VIC
        command set and options hierarchy. It makes for a consistent
        logical hierarchy.
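The idea IP and OP share (redirection with an inserted processing step and begin/end delimiters) can be sketched as one small helper. The channel representation, the callable "device," and the delimiter handling are all illustrative assumptions, not a defined VIC interface.

```python
# Sketch of redirection-with-processing as shared by IP and OP: a
# channel wraps a device (here just a Python callable), an optional
# pre/post-process step, and begin/end delimiter characters.

def make_channel(device, process=None, begin=None, end=None):
    def run(data):
        # Trim the data to the configured begin/end delimiters, if any
        # (the BOI/EOI and BOO/EOO idea from the PK-file lines above).
        if begin and begin in data:
            data = data[data.index(begin) + len(begin):]
        if end and end in data:
            data = data[:data.index(end)]
        # Insert the optional processing step (filter, tagger, ...).
        if process:
            data = process(data)
        return device(data)
    return run
```

For example, a channel built with `process=str.upper`, `begin="<"`, and `end=">"` would pass only the delimited portion of its input, upper-cased, to the device.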

*** SF - edit script-file lines and/or process script, and output

        The concept of SF is to access and control the typical cycle of
        "grab the next line of the active script-file, process it for
        VIC variables and commands, then pass it on to the device or
        program-port as set by OP when there is something to pass on
        (not completely consumed by SF)." Contained in the PK-file is the
        current stack of SF script-file name(s) with current line
        number(s) of execution of the script-file(s). Through PK this
        stack of SF-file name(s) and the line number(s) can be altered
        to change the up and coming sequence.

        SF is the user interface type #2 for the VIC. It can be
        iconified, sized just for SF, or a full debug/watch of various
        elements, but mostly watch, step, and edit of the current files.

        SF has options to step through the processing of script-file
        lines and to limit processing. It has the ability to let the
        user directly alter the line being processed, at the different
        steps in the processing of the line. The user can insert lines
        to be processed before and/or after the current line and use SF
        in interactive mode (no SF-file being processed). There is also
        the option to send the line(s) to a file, with (tee) or without
        (redirection) sending them to the OP defined device or
        program-port. All this can be done through PK setting the
        SF-LPC (Line Processing Control) flags.

        There is a keyboard key combination (Ctrl-P) to toggle step and
        auto processing. It is also possible for a script-line to select
        the processing mode, through a PK command call.

        To properly understand the limitation of SF's and PK's abilities
        to alter the sequence, understand that SF only processes the
        current line for VIC variables and commands, optionally with
        user interaction, before passing it on to the device or
        program-port defined by OP. PK, in altering the SF stack, only
        alters the coming sequence of files and/or their next line
        number(s) that SF processes.

        The SF-script can be a file or a pipe type temporary file, as is
        generated by IQ, ID, or a type #3 interface accessed by external
        programs (such as another VIC or user interface). As well, the
        tee/redirected to file output can also be such a pipe type
        temporary file, type #3 interface or even a null device so
        that SF has no output. Output direction is, of course, set by
        the OP line of the PK file, unless bypassed by SF.
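The SF cycle described above can be sketched as a single line-processing step. The `%name%` variable syntax, the command test, and the output list standing in for the OP destination are all invented for illustration; the document does not define a concrete syntax.

```python
import re

# Minimal sketch of the SF cycle: expand VIC variables from the OI
# table, consume lines that are VIC commands, and pass anything left
# over to the OP-set output destination.

COMMANDS = {"AI", "PK", "OI", "IP", "OP", "SF", "IQ", "ID", "KE"}

def sf_process_line(line, oi_table, op_out):
    # Expand %name% references using the OI variable table.
    def expand(match):
        tag, value = oi_table.get(match.group(1), ("untagged", ""))
        return value
    line = re.sub(r"%(\w+)%", expand, line)
    # A line that is a VIC command is consumed by SF, not passed on.
    words = line.split(None, 1)
    if words and words[0] in COMMANDS:
        return None        # would be dispatched to the command here
    op_out.append(line)    # remainder goes to the OP destination
    return line
```

Stepping, user edits of the current line, and tee/redirection would wrap around this core loop under control of the SF-LPC flags.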

*** IQ - select argument word definition(s) and output

        The concept of IQ is that of search and output (searching
        through vocabulary files which can be formatted much like a
        dictionary format).

        This command has similarities to other search type programs
        (Csh search, grep, etc.), having flags and pattern matching
        wildcards. Also similar to the Csh "man" command in that it can
        output several consecutive lines from one match. But there are
        some important overall differences. It has sub-search abilities
        of two flavors. One of sub-file, and one of sub-definition. KE
        plays a part in the search choices or direction. There is a
        temporary internal list of files with "path, name, date, and
        time" kept to assist in preventing searching through what has
        already been searched.

        To understand the search and sub-searches, consider the IQ
        argument as a word-key to match with a word-keyhole in an
        IQ-file. The IQ-file may also contain file-keyholes and
        definition-keyholes.

        * The objective of the three different keyholes is to allow
          infinite search depth and constraint versatility. Minimum use
          is always best (keep it simple) but use is not limited.

          The successful search possibilities are:
       A) File-keyhole match. (continue search in an additional file)

         A file-keyhole is found in the current IQ-file and a matching
         file-key is available, so the file is made the current IQ-file
         and the search continues in this current IQ-file.

       B) Word-key match. (output definition)

         The word-key finds a match within the current IQ-file.
         There are no file-keyholes or definition-keyholes, so the
         definition is output.

       C) Word-key + definition-keyhole match. (output definition)

         The word-key finds a match within the current IQ-file.
         There is a definition-keyhole and an available matching
         definition-key. The definition does not contain a
         file-keyhole, so the definition is output.

       D) Word-key + file-keyhole match.
            (continue search in an additional file)

         The word-key finds a match within the current IQ-file.
         There are no definition-keyholes but there is a file-keyhole
         and a matching file-key is available, so the file is made the
         current IQ-file and the search continues in the current IQ-file.

       E) Word-key + definition-keyhole + file-keyhole match.
            (continue search in an additional file)

         The word-key finds a match within the current IQ-file.
         There is a definition-keyhole and an available matching
         definition-key. The definition contains a file-keyhole and a
         matching file-key is available, so the file is made the
         current IQ-file and the search continues in the current
         IQ-file.

          Once a file has been searched, the search picks up
          where it left off in the previous file.

       F) If no match was found, it is possible to have a "match any
         word" definition at the end of the first file and optionally
         based on a variable CLASS tag. This way a default can exist.

    The OutPut of IQ is fed through SF to process any VIC commands or
    variables, while the remainder is passed on to the receiving program
    or device as set by the OP line of the PK file.
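The search cases A through E can be modeled in a few lines. Here IQ-files are plain dictionaries and keyholes are names that must match a key the caller holds; the data shapes are assumptions made only to show the key/keyhole flow, not a defined file format.

```python
# Illustrative model of the IQ search cases A-E. "files" maps an
# IQ-file name to its contents; a held key set stands in for the
# caller's file-keys and definition-keys.

def iq_search(word, fname, files, keys, searched=None):
    if searched is None:
        searched = set()   # avoid re-searching files already searched
    if fname in searched or fname not in files:
        return None
    searched.add(fname)
    f = files[fname]
    # Case A: a file-keyhole in the file itself, with a matching
    # file-key, continues the search in the additional file.
    for keyhole, next_file in f.get("file-keyholes", []):
        if keyhole in keys:
            hit = iq_search(word, next_file, files, keys, searched)
            if hit is not None:
                return hit
    entry = f.get("words", {}).get(word)
    if entry is None:
        return None
    # Cases C/E: a definition-keyhole needs a matching definition-key.
    dk = entry.get("def-keyhole")
    if dk is not None and dk not in keys:
        return None
    # Cases D/E: a file-keyhole in the definition redirects the search.
    fk = entry.get("file-keyhole")
    if fk is not None:
        keyhole, next_file = fk
        if keyhole in keys:
            return iq_search(word, next_file, files, keys, searched)
        return None
    # Cases B/C: plain match, output the definition.
    return entry["definition"]
```

The `searched` set plays the part of the temporary internal list of already-searched files mentioned above, and the recursion gives the "infinite search depth" of the three keyhole types.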

*** ID - select argument identification definition(s) and output

        The concept of ID is similar to IQ but instead of looking for a
        word-pattern match it looks for the relationship(s) of its
        argument to the VIC first. Then uses the relationship-word-
        pattern in the definition search (as is done in IQ). This
        command can be considered similar to the Class/Actions of Csh
        but does not work on just files. It is also possible to cause ID
        to compare its argument to non-VIC elements, such as external
        shell variables, system information, etc. This is done in the ID
        file (as named and changeable in the PK file). Because it is
        impossible to know or build in all possible relationships
        checking, the ID file can contain the program name and any
        arguments it needs to do such non-VIC comparison. This means ID
        can do more than a relationship comparison related search. This
        is also where the VIC variable CLASS tag can be of use.

        Other than the above, ID performs the same processing sequence
        that IQ does. Of course, the files and other IQ references
        (made in the above description of IQ) become ID references,
        not IQ.

*** KE - constrain access and use of definitions

        The concept of KE is best described as using keys to open doors
        to knowledge and parts usage. It allows constraints to be
        applied to IQ and ID (knowledge enable) vocabularies, with the
        trade-off of search time: the more constrained, the faster the
        search. Keys can be
        created, changed and removed from the current KE file through
        KE. The current KE file can be created, changed or removed
        through PK. Keys can have teeth and a master key can exist in
        the PK file.
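The key-and-teeth idea can be sketched simply: a lock's teeth must all be covered by some key on the ring before its entry is reachable. Representing teeth as sets is an illustrative assumption; the document does not specify how teeth are encoded.

```python
# Hypothetical sketch of KE: keys with "teeth" (modeled as sets of
# tokens) unlock an entry only when the teeth fit the entry's lock.

def ke_unlocks(lock_teeth, key_ring):
    """A key fits when its teeth include all of the lock's teeth."""
    return any(lock_teeth <= set(key) for key in key_ring)

def ke_filter(entries, key_ring):
    # Constrain the vocabulary to unlocked entries before searching:
    # more teeth on the locks means fewer entries to search, which is
    # the "more constrained, faster search" trade-off noted above.
    return {word: entry for word, entry in entries.items()
            if ke_unlocks(entry.get("lock", set()), key_ring)}
```

A master key from the PK file would simply be a key whose teeth cover every lock in use, so it opens everything.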

Second Level Development:

        Where the First Level Development is that of creating the VIC,
    the Second Level Development is that of creating general and custom
    processes or vocabularies.

    Programming basics teaches the following primary and secondary
    concepts:

    The Three Primary Programming Concepts -

                   process, decision, and input/output

    The Three Secondary Programming Concepts - these are made up from
    combining the primary in different configurations.

                   sequence, selection, and iteration

    These concepts relate to the general picture of the VIC,

             OI through IP for input.  SF through OP for output.

             PK for iteration.

             SF for processing and sequence.

             IQ, ID, and KE for decision and selection.

    but on closer inspection it's hard to say any one part plays only
    one role. It depends on how one makes use of the VIC; remember,
    VIC means "Virtual Interaction Configuration."

    So let's look at some of the main roles the parts can play:

    AI - Start and stop AI/VICs. And allow VICs to communicate with
        others through files and AI. Timing between VICs by using AI to
        control the settings and positions of VICs. A hint of parallel
        or networked processing?

    PK - Jump from one VIC sequence to another (PK file) within a single
        VIC. Because only one sequence can be active per VIC, this can
        act like function calls and can pass information between the
        inactive sequences through files. It is also possible for a VIC
        to pick up an inactive sequence started by another VIC. PK can
        also cause iteration in both SF processing and IQ/ID.

    OI - Besides getting input from the user, it is also useful for
        getting input from other sources, including other VICs.
        Remember "OutPut-to-InPut." Output generated from a VIC can be
        input to the same VIC or another. And because input is to a
        variable file, the file can be used by more than one VIC at the
        same time (or near enough the same time). Also, by swapping OI
        files and/or using the data-tag of a variable, it is possible
        to set up global and local style variables as well as external
        application specific variables. Parsing is possible by using
        the IP and OP settings of beginning and end of input/output
        characters.

    IP - Besides determining where OI gets input from, IP can call a
        filter, data identification program, time-stamp, etc. Tagging
        the input with a CLASS can be done by a program or determined
        by the user. The CLASS can be useful in determining, or
        reminding the user of, what the input is to be used for.

    SF - SF can be used for editing, creation, and debugging. But being
        the VIC user interface of type #2 and #3, it can be used to do
        a lot of things in the way of creating and using automated
        processes.

    IQ - With the IQ files and Keys one can set up a variety of
        selection methods. Keeping in mind the ability of a match to be
        able to alter Keys, the IQ stack, and even the word being
        searched while continuing a search, it is possible to setup
        complex selection methods. Also it is possible to have IQ files
        which are standard but other IQ files that are user created for
        the purpose of allowing different paths to the standard IQ files.
        Consider automated programming: it is possible to have a
        standard IQ file with code chunks behind definition keyholes
        for different languages. By using the proper (user created) key one
        can have the proper code chunk selected. This can happen in a
        recursive manner, making it possible to get more specific code
        while prompting the user for any needed information.
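The automated-programming use just described can be shown in miniature: one table of code chunks behind per-language definition-keyholes, where the user's key selects the chunk and needed details are filled in. The table contents and the `{name}` placeholder syntax are invented purely for this example.

```python
# Illustrative sketch of IQ-driven code-chunk selection: a standard
# table holds code chunks keyed first by the searched word, then by a
# per-language definition-keyhole matched by the user's key.

CHUNKS = {
    "read-file": {
        "python": "data = open({name}).read()",
        "c":      "/* fopen/fread of {name} goes here */",
    },
}

def select_chunk(word, language_key, **details):
    # The definition-keyhole match: the user's key picks the language.
    chunk = CHUNKS[word][language_key]
    # Refinement step: fill in the information the chunk still needs
    # (in a full system this is where the user would be prompted).
    return chunk.format(**details)
```

Nesting chunk references inside chunks would give the recursive, increasingly specific selection the paragraph above describes.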

In Conclusion:

Dishonesty spectrum - The never ending bait and switch game with a depth
and width of greed that clearly holds the majority of computer users
trapped in a corner of wrongful exploitation and abuse. It's well past
the time for this to stop! It is costing all of us far to much in
non-productive expenses and preventing technological advancements that
should have happened many years ago. Advancements not only within the
computer industry but all industries that use computers, including
medical research. The benefits to the few profiting from such wrongful
exploitation and abuse are so small that the day will come where we look
back with 20/20 hindsight vision and view this period of computer
history as barbarically ignorant, to say the least. Looking back, just
as we can now see the earth is not flat, as many had thought, but very
round, we will see the destructive illusions caused by those who wanted
us to believe computers had to be complicated, closed-system based, and
that only "the experts" could program, while the rest of us paid them
to fool us.

    There is no reason to research for another ten years the fundamental
OS level functionality or the functionality of the VIC. It is very clear
there are benefits for all of us to share in and contribute to. Benefits
that will only come about through making the open system functionality
and the advanced tool set of the VIC widely and freely available.

What about me (in all this dishonesty)? I insist on honesty, team-work
and fair competition to bring the best out in front, where it should be.
I insist on credit being given where credit is due, otherwise how will
the best be able to do more? With this, I insist I be given the credit
due me for identifying, detailing, documenting and communicating all
that I have done and will do.
    I suppose this is the apex trial of proper recognition and credit
being given, given that I present what I do as an end user rather than
as the programmer(s) who will create and make freely available the
important needed functionality. Credit will be given where credit is
due.

                        Looking for Programmers

    I am looking for honest programmers, on all platforms, willing to
contribute their time and skills, as I do mine, to create the needed
functionality. Feedback is vital to proper understanding. Like a simple
op-amp, without a feedback circuit there is no productive guidance.

    As the VIC functionality becomes available I do expect to earn a
good living in consultation and training on the use of the VIC. And I
suspect the programmers who contribute may also find this a productive
income generating direction to go. I am open to any such offers now.

                  Timothy Rue -



Although this work is copyrighted, the intent of the copyright is to
support the concept of giving credit where credit is due and to prevent
the wrongful constraint and/or abuse/exploitation and/or distortion/
manipulation of its content. This work may be transferred and used
under the following conditions:

1) This work may be transferred only in whole and so long as NO
   consideration is received in return.

2) That proper credit is given to the Author(s) responsible for the
   creation of the work.

3) That there is no intentional distortion or manipulation of the work
   that in any way damages or harms the work or author(s) responsible
   for the creation of the work. And that in finding any unintentional
   distortions or manipulations, correction(s) will be made A.S.A.P. and
   with reasonable effort to communicate the correction(s) to all.

4) This work shall not be included in any for-profit product and/or
   service without the written approval of the author(s) responsible
   for its creation. The exception to this, of course, relates to the
   World Wide Web, in that the work may be made available on and
   through the W.W.W. so long as the other conditions are adhered to.