Digital Life-Form™ Simulation Technology Architecture
As we pointed out on the introduction page, our approach to this topic may be far different from anything you have seen before in the state of the art for Artificial Life (AL), so be prepared to challenge many of your basic conceptions and thinking habits, some of which you may not even be aware you have because you have simply “absorbed them” from our culture without realizing it. Some of your conceptions about things like "causality" and how life processes actually work may be “implicit” in your subconscious memories, so you may not even know you have them. In other words, you may have ideas or other content in your memory that "imply" certain conclusions, but they are conclusions you simply never thought about making before. That may sound strange to you, but it is actually quite normal. The fact is, implicit ideas that affect what we do or how we do it are very common.
For example, have you ever had trouble learning a new sport, or interacting with other people at work or in your personal life? Then, upon reflection or with the help of a coach, you discover you have unconsciously learned a whole bunch of bad habits or physical moves that were interfering with your golf swing or your skill at catching a ball, or causing you pain when you exercise. In your relationships with others, perhaps you had some behavior that was making it hard for you to be a good public speaker or salesperson. But once you made these implicit and unconscious ideas or behaviors explicit, once you brought them into your conscious mind, you could focus on them and proactively change them, or learn new ones to help you get better at what you needed to do.
A similar situation could be true for some of your ideas about the world we all share and live in too. It is especially true for some of those fundamental ideas about how we think about reality and our everyday environment, about how we think we know what we know, and about how we think about the nature of causal interactions in our environment (as we summarized in our introduction). Most fundamental ideas like that we carry subconsciously, and therefore some of them may be mistaken in some of us, making us less effective at accomplishing our goals than we could be. Worse, we may not even know these mistaken ideas or bad action habits exist in our subconscious memories. For example, we have found that to be the case with some of our own ideas about the source of “objectivity” of important ideas we think about all the time. That is, ideas like how "objectivity" is possible given the specifics of how our sensory perceptual observation system works, and given the thought processes we use to understand how it works. Some of these ideas depend on direct observational knowledge of biological systems, but others depend on the abstract concepts we use to define them and the mental processes we use to form and apply our concepts to what we observe. These mental processes are how our human conscious processes enable us to perceive the content data we store in our memories, and how our minds operate on that incoming data and enable us to “know things.”
Some of these processes are performed automatically by our nervous system, such as sensory perception, physical pleasure/pain, and emotional reactions. But others are voluntary, such as concept formation, judgement, and how we select what we value. In the course of our own studies we have found our grasp of some ideas was incorrect because we formed or validated some concepts incorrectly, or learned traditional ideas that are mistaken. As a consequence, like most people, we had mistaken assumptions about the specifics of how we came to know and build knowledge about the identities of the objects around us. We were confused about how we identify and know the properties of those things and their interactions, and about how “causality” works as a process of actions of change in our environment as the things around us interact and morph from one thing to another. We were not clear on how causality works in our own conscious processes of interaction with our environment, the judgements we must make about them to know what to do next, and the causal relationships involved both internally and externally as we act in our environment. But now we have resolved many of those confusions and corrected our earlier assumptions that were incorrect. We have learned new ways to think about life processes using layered abstraction models to separate the many causal contexts for easier understanding. We have learned new ways to think about the nature of consciousness itself as an objective, quantitative, relational causal process. This includes the very idea that conscious processes can even be causal and quantitative at all. This last idea was a real surprise to us (as opposed to traditional ideas), but now we believe the evidence says that is the case. Can you become convinced of that too? The only way to find out is to read more and think these issues through for yourself.
Could you have implicit assumptions about how biology works too? Have you ever really thought about ideas like this, if you are interested in the subject of AL? Few people take the time to learn about things like this, or they just do not care about them. But all these ideas are crucial to the study of digital biology and the development of AL, if we are to make our designs function more like real organisms. That is especially true of biology as applied to the development of AL simulation systems, for the reasons we summarized, and for many more reasons that will be explained in much more detail as you read. So for us it became necessary to examine our own implicit ideas if we were to succeed in meeting some of the challenges of our goal to design more life-like AL simulation systems. Are you willing to do the same and challenge your own implicit ideas? That is up to you. All we ask as you read about the ideas we present on this website is that you set aside your implicit ideas long enough to give our views a fair evaluation before you judge them true or false, valid or invalid. You can only do that with your own firsthand observations of the world, your own firsthand thinking from your own perspective, and your own rational judgement about whatever you conclude.
For us, however, this is not an issue, because we have already decided to challenge any ideas necessary to succeed in coming up with better AL simulation system designs. We are only interested in ideas that we can validate as objective and true, and that we find helpful in our AL technology development work. So we are open to challenging any ideas that may limit us in doing such work, to determine whether those ideas actually are objective and consistent with what real organisms are and do. Where did we get our approach? The author of these pages got this approach to thinking in the 1980s from his former employer, Apple Computer, Inc.
Simply put, and as Steve Jobs once said, “Think different.” Steve also used to promote, as an Apple company value, the idea that “the journey is the reward.” In other words, facing difficult challenges and changing our ideas in order to innovate is part of the adventure of life. Likewise, the content you will find on our website will take you on an intellectual journey, if you care to follow it. We agree with Steve’s attitude (though certainly not all of his ideas). It is that attitude we like, and that is how we think about our work in AL as we challenge many traditional ideas about how animals and people know what they know. We hope you will find the journey of your own process of discovery through our site content as much fun as we had discovering and creating it. And that is a discovery and creation process the author started more than 50 years ago, in early March of 1967, while studying epistemology and computer science at the State University of NY at Buffalo.
Since then, we have learned many new ideas, corrected many false or otherwise invalid ideas, and simply revised others. Some of the ideas we revised, like those cited in our examples, are the very ones on which the concept of AL depends for it to function properly in our minds as a tool for technology development. Without clear concepts of topics like those in our examples, concepts that are logically linked by chains of contextual definitions to observable facts in our environment, we do not believe it is possible to advance the science of AL beyond the state of the art. And even though it sounds difficult, this is not an insurmountable challenge: If you think through any mistaken fundamental ideas you may have one by one, and then change the ones you find are invalid because they are not logically connected to what you observe or are contradictory, you can resolve those issues. Then you can solve problems that seemed to have no solutions before. Doing that, however, may require you to change your thinking in both content and method, and maybe you do not want to do so. That is a decision only you can make for AL, just like it is if your baseball coach suggests you change the way you swing the bat or catch the ball. We are not out to change your mind, just to point out that it is possible to identify and change mistaken fundamental ideas you may have, and that it is worth your consideration to do so, because doing so can open up all kinds of new opportunities.
But it is up to you to study our ideas and, using your own firsthand observations, reasoning, and judgement, decide for yourself if what we propose on these pages is right or wrong, sense or nonsense.
It is our opinion, based on our observations and reasoning, that our proactive, layered, and relational simulation technology architectures are to the robot what a "windowing" operating system is to the PC: a new kind of interface, not for the use of people, but for the robots themselves, to make them more biological. The modern windowing operating system design makes computers much more useable for most people than they were with only a command-line interface. Our simulation technology architecture can do something similar for robots, androids, and computer systems from their own perspective on their own environment, to make them act less like robots and more like animals or people. Blue Oak technology is designed to enable robots and androids, from their own perspective, to interact with the world and their users (their environment) more like real organisms do, by giving them a proactive, goal-directed, relational interface similar to our own. Few people think about this issue in this way because, implicitly, they do not think of robots as even having a perspective, because robots are "merely" machines. Our goal is to change that, and to make robots that are causally more like organisms and less like machines.
But doing so requires you to also have an entirely new biological perspective and grasp of life processes as proactive, relational, and goal-directed, as opposed to the traditional mechanistic perspective. Biology may be based on the mechanistic processes of organic chemistry, but the functioning of organisms, and especially of higher animals, is anything but mechanistic. The non-living is mechanistic. The biology of life processes, of organisms performing survival behaviors, uses an entirely different kind of active system logic than mechanistic computer and network systems do. The biological systems of organisms are of a specific nature, and they operate just as logically as mechanical systems do. However, biological systems use a different logic that is more complex, because this logic is not just internal to organisms. Biological systems use a more complex form of relationally proactive, goal-directed logic for processes that include both the organism and the environment with which it must continuously interact in an unbreakable relationship. If mechanistic active systems logic has the complexity of algebra, then biological active systems logic has the complexity of calculus. That may not be a perfect analogy, but it should help make our point.
Gaining insight into this perspective on biology is no easy process because the idea that life is mechanistic is what most of us have been taught in our biology and other science classes all our lives. And as a result, one of the implicit beliefs that most of us have acquired is that organisms are a special kind of machine, just a machine that is made of so-called "wet-ware" instead of hardware. And so it is especially hard to gain a different insight about how life works if you have to overcome your own implicit ideas about life being mechanistic, and then have to retrain your own thinking and subconscious memories as well. Changing that belief can be very difficult.
For example, here is a similar situation that existed in the early history of personal computers. When the graphical user interface was introduced, many computer professionals preferred the command-line user interface of old-style computers. They did not want to change and did not like the new windowing and mouse-clicking designs of the graphical user interface. They could not imagine what the graphical interface could be used for that they could not do just as well with the command-line interface. Some of these people took many years to adapt, while others jumped at the opportunity to learn the new systems. And look at the world of computers and smart phones today. Adapting to our approach to digital biology and AL will present some similar challenges of thought and work habits to those who want to learn it, in order to make robots and new kinds of computer and network systems that behave more like we do, more like organisms.
Many people today would agree that a Star Trek™*-like computer or a "Mr. Data*-like" android would be a valuable technology to have, and that it would form the basis for a very profitable business, as well as have many scientific and educational uses. While it is not possible for computers to be conscious in the exact same way as biological life-forms, a simulation system that mimics the goal-directed actions of organisms and the relational nature of the sensory perceptual consciousness found in some animals and people would be a huge improvement over today's robots for many applications. So would the ability of robots and androids to learn human concepts and languages. For example, a proactive, goal-directed simulation system would help a robot or android "identify" and interact with everyday objects in ways that simulate more aspects of human conscious action than is currently possible. And using such a product would feel much more realistic to us. Just as some of the newest humanoid robots mimic some aspects of human form and behavior better than their predecessors, a goal-directed robot or android with a relational simulated consciousness that could "see" or perceive the world as inter-related 3D objects at the human scale (rather than as bitmaps of pixels) would have awareness more like ours too. And if it could also compare and contrast the objects it perceives more like people do, it would be better able to mimic human-like consciousness conceptually.
In recent years, many new kinds of robots have been created, and some of the most successful of those machines do successfully mimic some physical actions of real organisms such as insects, birds, mules, fish, and so on. Self-driving cars are even on our roads now; though they do not work the same way biological organisms do, these automated cars do mimic human behaviors. This strategy works because a few aspects of the biological context were included in the process, though manually programmed by people and automated by these machines. But the idea we are proposing builds biology right into the system, and it is partially supported by evidence from the success of state-of-the-art robot designs that do mimic the behavior of organisms. The designs work precisely because they better mimic naturally selected properties found in real organisms, properties that are “reality tested” because the organisms can actually survive in real-world environments. That is, designs that have proven successful in nature to aid the survival of the lives of actual organisms. For example: "Story of a Life-like Robot," "5 Life-like Robots," "Life-like Military Robot," and "Robot with Relational Sensory Control of its Actions."
So why isn’t the same thing true for today's robots that mimic basic life processes and those of conscious animals and people? The success of the mechanistic life-like mimicry of non-humanoid robots has ironically added support to the view that at root biological organisms are just machines too, though much more advanced than any people can yet build. And it is true that machines mimicking biology clearly work well for simulating some animal behaviors. Thinking about life in this mechanistic way implicitly leads to most of the popular science fiction about life-like machines “somehow” learning to think and act like people. Making that illogical leap is easy to understand, but we think it is wrong. Consciousness and thinking are not mechanistic; they are biologically relational, goal-directed processes. What do you think? Are you starting to see what we mean? Are you starting to think there might be another explanation of how life works than mechanism?
We “think different” about this issue and do not share the view that life is mechanistic. Life is biological, which means it can only exist in relation to its environment and operates by a proactive and goal-directed kind of process logic. Yes, at the molecular level life is based on the mechanisms of physics and organic chemistry, but does that mean mechanism is the main operating principle at all levels of scale and in all processes of organisms? We think that is one of the errors in state-of-the-art thinking. Could it be that biology uses a different logic than machines, and that the overall system architecture of an organism is layered instead of monolithic?
We think that when the full relational context of an organism proactively surviving in its environment is included, using a layered model with standardized interfaces to grasp the complexity of life processes supports our viewpoint. When we do so, we see life processes as a multi-level active system that consists of an entire stack of sometimes incompatible causal action contexts that interact with each other, and in a relational manner with the environment outside, through standardized interfaces. You can think of your own action capacities as a person operating in the world, and your own sensory perceptual system, as your own interfaces to your environment, interfaces that have been standardized for all human beings by evolution.
We summarized the idea of our "think different" understanding of how organisms work in relation to their environment in our introduction, and we will reference it many times as we explain more about our views on life processes. We emphasize this because most of us have been taught to focus almost completely on an organism itself to understand how it works, and to drop the context of its habitat or environment. Changing this thinking habit can be difficult, but if you do, you will start seeing examples pop up quite often. Recently, for example, the author of this website was reading an article in the July 9, 2016 Science News Magazine entitled "Squid Stays Hidden by Leaking Light," which was about work done by Allison Sweeney, a biophysicist at the University of Pennsylvania.
The article is about some photophore cells found in glass squids that can produce different effects when they "leak" light in various clusters by bio-luminescence. The cells are very inefficient, and no one could figure out why they evolved that way or what they were for. But Sweeney says: "Then came the 'of course' moment. These things are meaningless until you consider the habitat." In other words, until you consider the relationship of the organism to the environment in which the squid must survive. It turns out that the "inefficient, light leaky cells" evolved that way because they leak light mostly out the bottom of the squid. So when predators look up from below, where they usually hunt, they see no silhouette of their prey. And the cells might even work from other angles between straight down and horizontal, so more research is being done to find out. The discovery of this relationship between the squid and their environment means the cells have "value-significance" to the squid. But they have such value-significance only in relation to the proactive actions of the squid in their environment to survive attacks from predators. When the habitat or environmental context is dropped to focus only on the organism, the relationship information is lost and the cells seem meaningless.
Many of the processes our technology is designed to simulate mimic the "organism side" of this kind of organism-environment relationship. So please keep this idea firmly in mind as you read more. While you do, ask yourself this: If the reason for the evolution of the photophore cells in squid is to support the survival relationship of these organisms with their specific environment, could not the so-called mysteries of other properties of organisms, such as the sensory perceptual consciousness of many animals and the rational consciousness of people, also be better explained as interactive relational processes, as opposed to processes intrinsic to the organism, or to the brain, as is so commonly assumed?
In the environments of all living organisms there are many different kinds of things in action. And all of those things have, or are, a specific identity, an identity consisting of specific properties in specific quantities. The universe is not static, but dynamic. The actions of things are themselves some of the properties of the various things in action (changing), so there are also various kinds of action. (Look around yourself right now and verify that this assertion is true. The assertion is a crucial observation, not an assumption.)
Observation of all environments shows us that in most of them some things are static and some things are dynamic, that is, they are in action. In many cases, the actions relate the things we see acting, but in some cases their action relationships are the result of things we cannot see or otherwise sense. Each kind of thing has a context associated with it (the other objects in the thing's immediate environment), and so each thing's actions also have a context. An "action context" is the set of actions that are associated with a given kind of thing in relation to its environment, and always in some specific range of degrees or quantities, based on the fact that those actions are properties of that particular thing. In other words, the actions of a thing are part of the identity of that thing. Since all actions are causal, from the perspective of causality a thing and the other things it interacts with are what we call a causal action context.
For example, the performance envelope of an aircraft in flight or a car being driven down the road are action contexts. So is a football or a baseball game. Any situation with several things that interact with each other is an action context. Even a big rock on a steep hillside is a potential action context. If an earthquake dislodges the rock and causes it to fall, it becomes an actual action context. Computer systems have action contexts too. Computer and network systems are built in process layers according to process abstraction layered model designs. Most consist of a hardware or physical layer, an operating system layer, and an application layer. Network systems tend to be much more complex, but they also follow this model, as you can see in this 7-layer example: The OSI Network Systems Model.
Causal action contexts can only be mixed if the active system logic they operate with is compatible. If not, incompatible systems must be separated and can only inter-operate through special standardized interfaces. This is one reason layered models with standardized interfaces, like the Open Systems Interconnection model for computer networks, are so useful. Networks with incompatible hardware or protocols can still be easily interconnected by means of switching or substituting one kind of layer for another using the standardized interfaces. This model and many others are mostly used only for mechanistic active systems such as computers, robots, manufacturing systems, and so on. Mechanistic active systems are mostly reactive and work like falling dominos.
A key feature of layered model systems that helps make them useful is that each layer in a layered model can be thought of, and designed, as a separate causal action context. This means that causal actions of a given kind cannot go beyond the layer boundary. This is an important feature, because the processes required in some layers may be incompatible with the processes required in other layers. For example, in computer and robot systems, the physical layer is mechanical and electronic. The components in this layer use electricity or light, as in the case of fiber optics. Its processes operate by the mechanistic laws of physics. The system layers above the physical layer are usually digital, and these layers operate by the mechanistic digital laws of mathematics. The causal actions in these layers can never be intermixed, because their identities simply do not match up; they are incommensurable. For this and other reasons, these layers communicate through standardized interfaces in modern system designs.
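To make the idea of layer boundaries and standardized interfaces concrete, here is a minimal toy sketch in Python. All names here are our own hypothetical illustration, not any real system: a continuous "physical" layer and a discrete "application" layer have incompatible internal action contexts, and the only causal crossing point between them is a standardized interface.

```python
class PhysicalLayer:
    """Analog causal action context: continuous voltages, mechanistic physics."""
    def transmit(self, voltage: float) -> float:
        # Internal process: the signal attenuates as it travels (continuous change).
        return voltage * 0.9

class StandardInterface:
    """The standardized interface: the only crossing point between contexts."""
    THRESHOLD = 0.5
    @staticmethod
    def to_digital(voltage: float) -> int:
        # Converts a continuous quantity into a discrete symbol.
        return 1 if voltage >= StandardInterface.THRESHOLD else 0

class ApplicationLayer:
    """Digital causal action context: operates only on discrete symbols."""
    def receive(self, bit: int) -> str:
        return "ON" if bit == 1 else "OFF"

# Neither layer ever touches the other's internals; all interaction
# passes through the interface, so each layer stays a separate context.
phys = PhysicalLayer()
app = ApplicationLayer()
signal = phys.transmit(1.0)                          # analog: 0.9 volts
message = app.receive(StandardInterface.to_digital(signal))
print(message)  # -> ON
```

Note that either layer could be swapped out (a fiber-optic physical layer, say) without changing the other, as long as both honor the same interface; that substitution property is what the OSI model exploits.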
Why is this concept so important? It is important because looking at wider contexts, or the so-called "big picture," sometimes makes it possible to understand things that a narrow focus on a single system part, or on a traditional idea and its context, prevents us from seeing. In the case of grasping how organisms work, we think looking at the wider concept of action contexts helps clarify how organisms actually operate and function in real environments.
The point of this explanation is that every environment is some set of things, each with its own specific identity, interacting in causal action with other things in specific ways and specific amounts in various process relationships. The list of things includes not only non-living things, but organisms too, which also have complex internal causal process relationships. The internal process relationships inside an organism are part of its identity. The internal causal processes of an organism are the organism side of the active relationships every organism has with its environment. As mentioned in our introduction, organisms operate as proactively relational, goal-directed active systems that are very different from mechanistic active systems. We call the action logic of organisms "teleologic" to clearly distinguish them from mechanistic active systems. A teleological active system is a goal-directed relationship between a biological organism and the unique habitat or environment in which it exists, with the ultimate goal of the relationship being the continued survival and future existence of the organism itself. So far as is known, teleological active systems are found only in biology.
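The contrast between the two kinds of active system logic can be sketched in a toy simulation. This is entirely our own hypothetical illustration, with made-up numbers: the mechanistic system's output is a fixed function of its input (it acts only when acted upon, like a falling domino), while the "teleological" agent initiates action on its environment in pursuit of a survival goal (keeping its energy above zero).

```python
class Environment:
    """The habitat side of the relationship: holds a consumable resource."""
    def __init__(self, food: int):
        self.food = food

class MechanisticSystem:
    """Reactive, domino-like: no goal, no initiative, just input -> output."""
    def step(self, push: bool) -> str:
        return "falls" if push else "stays"

class TeleologicalAgent:
    """Goal-directed: proactively acts on its environment to stay alive."""
    def __init__(self):
        self.energy = 3
    def step(self, env: Environment) -> str:
        self.energy -= 1                        # merely existing costs energy
        if self.energy <= 1 and env.food > 0:   # survival need: act proactively
            env.food -= 1
            self.energy += 3
            return "forages"
        return "rests"

env = Environment(food=2)
agent = TeleologicalAgent()
actions = [agent.step(env) for _ in range(5)]
print(actions)  # the agent forages only when its survival requires it
```

The key design point is that the agent's step function refers to both its own state and the environment's state: the "logic" is a relationship spanning organism and habitat, not something internal to the agent alone.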
These ideas may be difficult for you to grasp because they probably run counter to what you have been taught in the past. Take as much time as you need to think them through. Do some firsthand observations and perhaps some thought experiments to validate our assertions for yourself. Practice thinking about active systems of both the mechanical and teleological kind as layered models with standardized interfaces. Try some layer substitutions so you can get comfortable with the genius of how layered model systems work to enable incompatible processes to operate separated by layer boundaries in the same overall system, but without any direct contact. Doing so will help you grasp the ideas that follow.
When you are ready, ask yourself: Does everything at every scale and process abstraction layer in a given environment necessarily have mechanistic process logic as its operating principle? Or could there be more than one kind of system process logic, operating differently in some causal action contexts in different layers, if the processes of organisms are thought of using a layered model? Could that different kind of operation be because the causal action is relational? After all, if incompatible mechanistic processes can co-exist in the same overall system, as explained with the OSI model for computer networks, then why cannot the causal incompatibilities between mechanistic and teleological systems be overcome in the same way? Obviously, we think they can.
Could the causal action processes of life depend on different kinds of relational process logic at different levels of abstraction in their operation? Without implying some kind of mysticism or arbitrary randomness, could there be another causal context that is different from the mechanistic causal context on which life is based at the level of organic chemistry? The key relational thing about machines is that their relationships are with the people who design and use them, not with their own environment. Machines do not have a life relationship with their environment like an organism does (unless human designers give them one). Their survival and future existence as an entity does not depend on their own goal-directed proactions to generate their own energy to satisfy survival needs, like it does for an organism. The state of the art largely ignores this distinction as trivial. We do not think it should be ignored. Do you?
Our observations of the world, and our reasoning based on those observations, lead us to conclude that life processes are fundamentally different from non-living processes. Most people know this, but do not realize how important and how fundamental the differences are, or simply ignore them because their teachers told them they are not important. But they certainly are important to AL system designs. To convince yourself of which is true, if you care to, will require you to observe non-living things, machines, and organisms. You will need to compare their identities and note their similarities and differences. You will need to judge for yourself which of their properties are essential to the way these various things behave, and why. You will also need to grasp how layered system architectures work and how they allow incompatible causal action contexts to co-exist and interact in different layers through interfaces.
All living organisms share many fundamental properties in their complex identities that are very different from non-living things as just described. Organisms share these properties in order to cause and control their future existence, and all of them must survive to stay in existence in order to do so, given their biological identity. Not to initiate and perform proactions in order to survive is to cease to exist, to die and disintegrate. Organisms have only one alternative to certain death: Be proactive to survive.
These propositions are not merely wordy abstractions or randomly formed opinions. They are objective conclusions that integrate with the rest of what we are saying on these pages. Here is a thought experiment to help you see that, to help you visualize the abstractions in concrete, everyday experience for yourself firsthand. If you take the time to carefully perform the following thought experiment for yourself and analyze its results in your imagination firsthand, you can easily validate with your own thoughts that these abstract propositions really are connected to the world we see around us, and are not some arbitrary theory. The theoretical is the practical, if it is true. That is, these propositions are valid and true and consistent with reality because they are all connected to reality by unbroken logical chains. (Note: The following thought experiment was paraphrased from a circa 1995 audio tape lecture course by Dr. Harry Binswanger. See the "Learn more about the ideas that underlie our work" link to his new book at the end of this page for more details.)
“Think of a board set at an angle of 45 degrees from horizontal, with an unlit heat lamp suspended above it. On the board below the heat lamp a very cold ice cube with dry surfaces is placed, and then a living earthworm is placed next to it. What happens when the heat lamp is turned on? Both the ice cube and the worm move away from the heat as a result of the changing conditions in the environment they share and of their individual identities (including the quantitative aspects of each identity property). Here are some questions for you to consider and answer: Are the identity-interaction causal processes the same for the ice cube and the earthworm? Why does each of these existents do what it does after the lamp is switched on? What aspects of their identities cause them to act the way they do in relation to the environment? What is the difference that makes the difference between these two objects of very different complexity, and between their actions in the face of the changed conditions when the heat lamp is turned on? What are the causal details of the actions of the ice cube and the earthworm as the environment gets hotter? How does each respond to its environmental relationship, and why?” Think about these questions carefully.
Let's look at the identity of the ice cube and the earthworm in full context, using the identity-interaction theory of causality,* to find out how these objects differ in their response to the heat lamp changing their environment. Consider the similarities and key differences between these two action contexts. (*more about this below)
In the case of the ice cube, we know that heat melts ice and that the resulting water lubricates the board until the cube slides down it. In other words, the heat energy, in a specific quantity, interacts with the board and the ice by raising their temperature (the cause). The ice melts, and as the meltwater lubricates the board under the ice cube, the cube begins to slide. Due to the mutual gravitational attraction between the earth and the ice cube, when the lubrication of the water overcomes the specific quantity of friction holding the cube in place, the ice cube slides away from the heat (the effect). In this case, all of the causes and actions are external to the passive ice cube. All of the actions of the ice cube are reactive and are most easily explained by simple mechanistic physics and molecular mechanics. Ice cubes cannot act for themselves on their own. Ice cubes have no energy generators, motive power, actuators, or control systems as part of their identity. Nor do they have in their identity the means to sense their environment, to have their own perspective on reality, or to evaluate what action should be taken. The relationship of the ice cube to its environment is merely passive: everything that changes it is reactive. True, the existence of the ice cube is conditional, and it ceases to exist (melts) when environmental conditions change, but that happens only as the result of a simple, reactive, mechanistic causal transformation that is external to the ice cube. The ice cube itself has no proactive role in, or awareness of, any of what happens during this entire process, nor even the capacity for one. In other words, the identity of the ice cube does not include the action capacity to do anything for itself, because ice cubes have no “self” and no means of acting for one. An ice cube is simply some water in frozen form that melts when it gets hotter.
Now consider the earthworm. It is a living organism with a very different identity and action context from the ice cube, and therefore a very different action capacity. The earthworm is a biological organism, made of organic materials with a much more complex identity. Each of its properties exists in specific quantitative amounts, just as the ice cube's do, but that is where the similarity ends. The organism’s energy, processes, and actions are self-generated, self-sustaining, self-regulated, and continuous. If any of them stops for very long, the worm dies and eventually ceases to exist. In order to survive, the earthworm must always continuously and proactively interrelate with the environment around it, and it does so with the goal of maintaining the conditions necessary for its own survival. Why? The worm cannot exist without continuous relational connection to its specific environment. That environment is a set of very specific conditions that must stay within very specific ranges, including temperature. The worm also has the means, as part of its identity, to have its own firsthand perspective of reality, so it can sense and feel the heat from the lamp with its sensory system, automatically compare the heat to its survival condition limits, and proactively change its location --- unlike the ice cube, which cannot. The worm has a very different identity and action capacity compared to the ice cube. What is that complex internal system architecture for?
The answer is to enable the worm to proactively control and manage its inseparable interaction relationship with its environment as its means of survival and continued future existence.
The worm has a built-in system that senses the heat and compares those sensations to the limits its survival requires, limits that were naturally selected and recorded in its DNA. These processes give the organism a simple relational awareness of its environment in the form of the specific, quantitative sensory content they produce. This awareness is crucial to the survival of the worm because, as part of its identity as a biological organism, the worm cannot exist outside a certain temperature range (a set of specific quantitative conditions) in relation to its environment. The worm must move away from the heat under its own power to cause its own future survival --- or die. Notice the difference here. The ice cube passively melts, which causes it to move reactively and mechanistically; the ice cube itself does nothing, because in its causal context it has no action capacity to be proactive. The worm, on the other hand, proactively initiates a cause: self-regulated action to preserve itself biologically. And this proaction is internally powered, generated, sustained, and controlled, based on the action capacity of the worm’s identity. Notice the entirely different causal context in operation here. The proaction makes possible, it causes, the survival and future existence of the worm, specifically in response to the environmental conditions its identity requires. All of the causes involved are internal, highly complex relational identity interactions with the worm’s environment, in contrast to the passive ice cube. And all of those complex interactions are possible now only because of the past actions of all the previous earthworms of its type that survived and passed the worm's specific identity on by reproduction. Inside the ice cube, by contrast, nothing happens except melting---no systemic proactions occur or can occur, because there are no systems in the ice cube.
The outside of the cube starts reactively melting due to the heat from the lamp, and that is all that happens.
The earthworm, on the other hand, proactively moves in response to what it senses in its environment, due to its inseparable environmental interaction relationship; then the worm registers the change and proactively moves again. The worm’s neurons effectively compare the measurements (implicit quantities) from its naturally selected senses. The comparisons exist in the form of the identity of the analogue neural circuits themselves. The worm’s sensing system reactively reports the identity of conditions outside itself in the form of internal sensory content. This content is the form of the worm's awareness of its environment, and it is processed by a biologically automatic, genetically determined system of survival values implicit in the worm's DNA, the same DNA that proactively grew and formed the neural sensor circuits in the first place. And that DNA was formed by past earthworms that survived all past conditions and passed on the worm’s current identity form.
The DNA (as well as RNA, epigenetic processes, and so on) records and maintains the identity and future action capacity, or action context, of this specific species of worm for this specific habitat, for future generations. Ice cubes have no such identity or action context.
If the implicit measurements are outside the appropriate range that the worm’s life requires, the neural control system in the worm initiates the action to escape the heat in order to preserve its own life, and the worm pro-actively continues that action until the heat is gone. The worm is reactively aware of the heat in the form of specific, quantitative sensory content, and that form of awareness is for the control of its future existence. The neural control system proactively moves the worm when any environmental condition is out of bounds, as determined by the worm’s own naturally selected survival condition ranges recorded in its DNA. This is a biologically automatic, goal-directed process. It is not a simple reactive mechanism like the ice cube. The worm has its own firsthand internal perspective of the surrounding conditions in the environment in the form of its sensory content, and unlike the ice cube, the worm feels the heat with its own sensory system in some internal form when that sensory content causes certain specific, quantitative, neural changes.
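The sense-compare-act survival loop just described can be sketched in a few lines of Python. Everything here (the function names, the 5-25 degree range, the one-step movement) is a hypothetical illustration of the loop's logic, not a claim about real earthworm physiology or about any patented architecture.

```python
# A range standing in for the survival limits "recorded in DNA".
SURVIVAL_TEMP_RANGE = (5.0, 25.0)  # degrees C, hypothetical

def sense_temperature(environment):
    # Reactive step: report an external condition as internal sensory content.
    return environment["temperature"]

def out_of_bounds(reading, limits=SURVIVAL_TEMP_RANGE):
    # Compare the implicit measurement against the survival range.
    low, high = limits
    return reading < low or reading > high

def survival_step(environment, position):
    # Proactive step: if a condition is out of bounds, act to escape it.
    reading = sense_temperature(environment)
    if out_of_bounds(reading):
        position += 1  # move one unit away from the heat source
    return position

env = {"temperature": 40.0}  # the lamp has just been switched on
pos = 0
for _ in range(3):
    pos = survival_step(env, pos)
    env["temperature"] -= 10.0  # moving away lowers the felt heat

print(pos)  # → 2: the worm moved twice, then the reading fell within range
```

Note that the action continues exactly as long as the sensed condition stays out of bounds, which is the "continues that action until the heat is gone" behavior described above.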
Again, what is that complex internal system for? Why is it necessary for organisms to spend all the energy required to evolve and maintain such complex systems, if not for their very future existence? The worm needs its ability to relationally interact with its environment in order to cause proactions. Why? The proactions are the only way the worm can control that environmental relationship in order to survive. Why must it do so relationally? The worm’s environment is as important to the worm as its internal processes. While the environment is primary, the worm and the environment co-evolved, and the bodily identity of the worm is meaningless without the environment to which it is inseparably related. Too much change to either one, outside some specific range of quantities such as temperature, will destroy the worm. In other words, the worm can survive neither without its internal processes nor without its environment. The worm cannot exist as a continuous, ongoing process without its interactive relationship with its environment, period.
This is true for all organisms, but not true for most mechanistic existents like rocks or machines or ice cubes. Yes, some existents like fire and storms are relationally linked to and dependent on their environments too, but only in reactive ways. Their internal processes do not proactively select and cause action in their environment based on stored memories and genetics, with the implicit goal of continuing their own existence.
If you doubt that our proposition about environmental interactive dependence is true for you as an organism, just stop breathing and see how long you last before you pass out. Another excellent way to convince yourself of this point is to watch several episodes of the History Channel TV series Alone. The National Geographic Channel TV series Life Below Zero demonstrates the close, interdependent relationships of animals and people to their environments and to each other. Observe the similarities and differences between the non-living things in that environment and the organisms. Note the similarities and differences between the animals and the people. Listen to the people in the shows as they explain their actions and their reasoning about their goals and objectives, and why they choose those goals and actions and not others. Then judge which characteristics or properties are the ones that make each kind of thing unique. You may notice a lot of things you never noticed before about the differences between life and non-life, and between the behaviors of people and animals. Direct observation of our environment is the basis for all we know. Your very survival depends on it every moment you are alive.
You may also be able to verify firsthand that what we are asserting on these pages is consistent with your own observations. We want less to convince our readers with our words, and more for our readers to convince themselves with their own firsthand observations and their own reasoning based on those observations.
You and your environment are two inseparable, interrelated systems, just like the earthworm and its environment. Look at what happens to the people of Life Below Zero or the contestants of the TV series Alone as they try to survive in nature with almost no modern tools. Their interactions with their environment, especially their proactions to get shelter and food, their loss of body mass, and so on, are proof of the tight relationship between every organism and its environment. Their needs, and their increasingly extreme actions to get what they need to survive, clearly demonstrate the relational nature of life and why we must always include the environment as part of our context.
And it is not just breathing and food that are environmentally interdependent, as experiments with space flight have shown. In the microgravity of orbit, human muscles deteriorate, bones decalcify, and many other biological systems within the body cease to function normally. That is why space travelers must take as much of their earthly environment with them as possible, or they die, either quickly or slowly. For example, some spacecraft designs call for spinning the craft at just the right speed to simulate one G of gravity while in orbit or on the way to a distant planet like Mars. Spacecraft also need radiation shielding and many other features to make the space environment as much like earth as possible.
This too is common knowledge for many people, but they tend to forget about it in the context of AL systems design and development for some reason. In the AL context, many people try to pretend that the entire life and awareness of the organism is in the organism, specifically in the brain, and the environment is essentially irrelevant. That is simply not the case.
This fact, and all the systems in the identity of the worm, enable us to identify one more key difference between the worm and the ice cube. The worm has a very primitive form of “self” perspective in relation to the reality outside itself that the ice cube cannot possibly have. The worm’s internal sensations (sensory content) of the heat from the lamp exist in some form (probably pain) in its nervous system. That form IS the worm’s internal perspective on the heat, its form of awareness of its environment.
The worm’s self-perspective is not a thing inside the worm. The self is the relational perspective of the worm from its “point of view,” and it exists only in the worm's continuous interaction relation to its environment through its specific, ongoing sensing process that continuously generates new sensory content. If that relation is severed, the self-perspective or point of view of the worm disappears, and so does the worm’s awareness of the heat from the lamp. (Close your eyes. What happens to your self-perspective of your surroundings?) The self-perspective of the worm is provided by the identity of its internal neural processes, as we explained earlier, but only in relation to the continuous content provided to those processes by the environment. The ice cube does not have this identity of internal processes or the proaction capacity that comes with it. So the ice cube has neither self-perspective nor proaction capacity. Therefore, it passively melts and slides down the board, as opposed to the worm, which proactively moves away from the heat.
The cause of the movement by the earthworm is internal and self-generated, but not by the body of the worm in a vacuum. The movement is caused only in relation to its environment, because the conditions in that environment are as crucial to its physical existence as its own body. The worm’s naturally selected, internally generated and controlled action is a much more complex causal chain than occurs in the ice cube. But the complexity of action is not what is fundamental. What is fundamental is the more complex internal identity of the worm and the special relationship of that identity to the world outside the organism, as it continually interacts with that environment. While both are necessary, neither the identity of the organism nor the identity of the environment alone is sufficient to explain how living processes actually work.
Now some will say that an ice cube is not a robot, and that all that is necessary is to replace the ice cube with a robot programmed to escape from heat; then the robot and the worm will behave the same way. Yes, it is true that the two systems will then behave in more similar ways, but that does not make them causally the same. State-of-the-art robots are not alive and do not mimic organisms causally to any great degree. Remember, we are comparing the overall identity and the implicit total causal context of a given existent, not merely its surface features.
Comparing modern robots to an organism is analogous to comparing a carved wooden puppet to a robot. Robots only mimic some behaviors of organisms with mechanistic causal logic; they do not have goals or needs the way the earthworm does. The only goals robots have are the human goals that are programmed or otherwise designed into them. Robots do not have their own goals from their own perspective, nor the proactive, goal-directed survival need to support such goals and their role in the robot's relationship with its environment. They are more like a recently dead human body that reacts mechanically to the jolt of electricity from the defibrillator a doctor is using in an attempt at revival.
It is only the full context of the close relational interaction of an organism and its environment that is sufficient to explain life processes and the biological teleologic by which they operate. It is the goal-directed, relational interaction of the identity of all organisms with the identity of their specific environment and associated causal action contexts that is the key difference between life and non-life. Non-living things simply do not have the same internal processes nor the same close, interdependent relationships that every organism does. This is the difference that makes the difference that is necessary to explain the full identity-interaction causality of life processes.
The reason most people have so much difficulty grasping how life works is that they keep trying to separate the organism from its environment, or the other way around, and the two cannot be separated if operational understanding is the goal. Or people try to put all the causation in the environment as mechanism and ignore the proactive processes inside the organism. Both approaches drop context. The secret of life is not just in the organism or the environment, but in the relational interaction of the organism with the environment it exists in as it continuously acts to survive.
The point is that organisms must causally and continuously interact with their world in a relational manner that successfully satisfies their biological needs, or they simply die. Non-living existents have no such challenge. Everyone knows this, but for various reasons they choose not to think about it as quantitative identity interaction. Most people use the inadequate concepts of action-reaction, “billiard ball” causality and mechanistic determinism instead, then wonder why they are mystified about how life works. But in fact, there is an inexorable logic and mathematics to how all processes execute, including biological processes. All processes are specific identities interacting in quantitative relationships in specific ways that produce specific content and action capacities in specific causal contexts. Nothing else is possible for them, or for anything else.
The only organisms alive today are the successful ones: those that evolved an identity within the right measurement ranges, an identity that includes the right goal-directed action capacities necessary for survival in relation to their specific environment, as explained above. Part of that biological identity necessarily includes a measurable, bi-directional interface with reality that enables organisms to act successfully to gain and keep the things they need to survive. They need to identify their success or failure, and to identify the external things of the right kind and in the right quantity (food, shelter) that their lives require --- on pain of simply ceasing to exist altogether. That is one hell of an incentive that our machines do not have! Why? Because the identity and existence of machines is not conditional on their own continuous, successful action the way the existence of organisms is. The life-to-reality interface of machines simply does not have the same identity as that of organisms, so our robots do not have the same action capacity as organisms. Identity determines action capacity in every specific causal context.
How could we change the identity of our AL simulations to make them more life-like? That is our mission of discovery. Now that you have a new perspective with identity-interaction causality and a clearer idea of the relational, goal-directed nature of organisms, it is time to look more closely at the specific identity of organisms and the essentials of the relationship to the environment any given organism lives in.
The very survival of a real organism as a system in its environment is a form of continuous, relational, goal-directed action that depends on the successful execution of its own essential life process over the long run. Every organism must survive its last survival action before it can perform the next one. Why? To stop the causal action chain of survival is to break the organism's essential relationships to its environment, and that is the end of the organism, literally. For biology, stopping survival action means death and the rapid physical disintegration of the body of the organism. It is very different for machines.
For organisms, all of the survival action control occurs at a level and scale far above the mechanistic molecular scale. In other words, to successfully simulate a real organism, a simulated organism system must behave like a real organism biologically. A simulated organism must have real needs, and values that can satisfy those needs. It must relate to its world in the same way real organisms actually do, organisms that depend on an environment for their very existence (and this must be simulated to whatever degree is technically possible in practice).
But most existing robot designs merely copy some of the external actions of organisms for the purpose of attaining human goals, not the essential processes of life itself for the simulated organism. The successful performance of those actions must actually have value-significance for the continuation of the future actions of the robot system, or they are just useless “bells and whistles.” A realistic organism simulation must be aware of its environment from its own self-perspective.
However, a “self” cannot be programmed, because a “self” does not exist inside the brain of an organism or inside a robot system that simulates one. A self is not a thing, not a physical part or property of an organism. Rather, a self is the organism's side of an ongoing relationship to an environment, a relationship held by an entity that is alive and at least partially separated from that environment. That is what makes a relational “self” possible. The organism is a different thing from the environment, and that implies a relationship and a special kind of causal action context. The internal parts and processes of the organism interact with the external things in its environment. That is what makes it possible for a “self-perspective” to emerge from the point of view of a given organism, as an instance of some form of content produced by the continuous, uninterrupted, interactive relationship between every biological organism and the environment it exists in. The “self” of an organism is the organism’s half of the continuous "interaction loop," the relationship between the organism and its environment. That is its teleologic, the goal-directed process logic of the organism and the causal action context of its system architecture. This may not be an easy concept to grasp, but it is a different way to explain how the life processes of organisms really work in the real world. (We will discuss this in much more detail later in our book and white papers.)
Unfortunately, state-of-the-art Artificial Intelligence (AI) and Artificial Life (AL) software does not work in the life-like way it does on Star Trek, even after many years of attempts to make it work that way (though it is improving). More advanced human interfaces that use ordinary, or so-called "natural," human language have been the "holy grail" of the Artificial Intelligence community for many years. Yes, existing systems keep getting better at simulating some actions of organisms. To date, however, the successes at making computer systems and their interfaces seem human-like because they can communicate verbally have been very limited (including some of the new ones). These technologies are helpful, but they are still just computer programs at root. They do not have the experience of a real organism, because they do not see the world as we do or have to make goal-directed survival decisions. These systems simply do not operate in the same causal action context that a real organism does. (Actually, as mechanisms, AI systems are not "aware" of anything.)
At Blue Oak, we believe that part of the reason there is confusion about how human sensory perception and natural human language work is that many of the concepts commonly used to describe these functions are inadequate. These are not mechanistic processes. We believe this situation is the consequence of too much emphasis on analyzing the mechanics of life processes as if they were computer technology and machines. There is not enough emphasis on biology and on observing how organisms actually act and react, proactively and biologically, in the world of their environment. There needs to be more emphasis on the kind of proactive, goal-directed system logic we call teleologic. And most importantly, there needs to be emphasis on the identity of perceptual consciousness itself as a biological process of 3D identification of objects at the human size and time scale, and on how to implement that from the perspective, and for the use, of our robotic systems so they behave more like organisms. That can only happen by making AL systems more like organisms.
In addition, there must be special emphasis on how people form concepts (according to what method), and then on how they use concepts to make premises (simple sentences). If researchers did this, they would discover that language is not about communication first; rather, it is about thinking for biological survival first, and communication second. In that sense, even natural languages are goal-directed tools for survival. If you think differently, you too will discover much more about all of these ideas as you continue reading, studying, and thinking about the ideas you will find on this site. You will start thinking in terms of how we could design AL systems with the kind of identity that would enable them to form and use conceptual tools too. But none of that is possible based on the idea that AL systems are machines. Human concepts are not formed mechanistically. They are formed by a method followed by choice. How is that possible for any system, living or mechanical, that is made out of the non-living material of atoms and molecules? Keep reading and eventually you will find out.
Nor has state-of-the-art Artificial Life (AL) been much more successful than AI in simulating consciousness, except as cartoon characters for human audiences. AL software technologies, the alternatives to AI technologies, have not yet produced practical solutions: simulated life-forms that are "conscious" from their own internal perspective the way real animals are, at human size and time scales, instead of the bitmaps most robots use now. State-of-the-art AL technologies can digitally and mechanically simulate only low-level animal functions, such as sensation and locomotion in robots, groups of imaginary animals interacting to carry out virtual biological experiments, or entire virtual ecosystems based on some biological model. Extant AL technology does not simulate goal-directed, conscious behavior as it is observed to operate in individual animals and people, as relational, goal-directed system logic (except superficially, for human entertainment purposes). Those systems are nothing more than computer-generated puppet shows. They are great at what they do, but they are not "alive" in the biological sense that an organism is alive. AL systems are not designed to simulate an intelligent entity capable of independent, rational action to accomplish its own goals in a purposeful way, from its own perspective and its own judgment of its own relationship to reality.
Yes, systems like self-driving cars are getting better and better. But they are neither alive nor conscious from their own perspective. They are simply aspects of human consciousness automated in mechanical form. Our designs, however, are intended to do those very things, but as a real organism would do them. These were some of the key factors that enabled us to convince patent examiners that our process designs were original enough to win our patents for our technology architectures.
Most AI and AL researchers base their work on the assumption that all of consciousness is "located" or confined only in the brain, if they consider consciousness real at all. They think consciousness is a complex but mechanistic process, one that is essentially computational in nature, even as it exists in animal and human brains. It is also assumed that the entire process of consciousness exists and operates only in the brain, as neural activity of some kind; hence the explosion of neural network technology. There is nothing wrong with neural networks. However, the way they are used in state-of-the-art AI and AL assumes that the process of consciousness can be mathematized into an algorithm that runs on a computer or robot, recreating at least some images or other data from reality inside a computer or robot "brain" in digital form. The resulting system algorithm then processes those images and related data as some form of computational information, and researchers assume this is the same as biological consciousness. There is strong evidence to contradict the assumption that biological consciousness works this way at all, and if you keep reading, you may eventually conclude that the state-of-the-art mechanistic assumption is dead wrong.
If what other researchers have identified is true and valid (and we think it is), namely that perceptual consciousness in animals and people operates very differently, using the relational, goal-directed system logic we call teleologic, then it stands to reason that the designers of simulations of consciousness, simulations intended to improve the capabilities of robots and androids to better mimic the conscious behavior of some animals and people, must take these facts into account as well, and not simply ignore them as most people in AI and AL have done so far.
In other words, in order to simulate consciousness in robots or androids, the simulation system must simulate most of the same relationships and the same causal action context that exist between an organism and its environment and that organisms use to survive in that environment. That is, most of the same relationships that real life-forms proactively form with the world they live in. In the context of the relationship between the simulation system and reality, the simulation system must be designed to proactively form relationships similar to the ones that real animals and people form with the objects around them that they interact with, and that must happen at the same size and time scale as for animals and humans, and for the same reasons. It cannot happen at the scale, or on the design theory, of bitmaps, or to fit some mechanistic causal action context. Doing so requires a relationship-oriented, goal-directed system architecture and causal logic. Biological consciousness does not process pictures and videos.
Then, once "brain-body-proaction-in-world-sense-perception-pleasure/pain reaction" relationships analogous to those of animals and people exist in some form internal to a simulated organism and are operational in a robot simulation system, simulated consciousness is also operational as a relational process for that robot, precisely because the robot has formed relationships to its world similar to those animals and people have formed with theirs. This relationship includes mimicking survival needs with real value-significance to the simulated organism and the ability to do 3D identification of the world around it to learn how to keep its simulated life --- alive. Simulated consciousness IS part of that relationship, just as it is for a real organism of sufficient complexity. Consciousness is not in the brain, but part of a relational process in the full context of a proactive, goal-directed survival identification relationship between certain kinds of organisms' brains and the specific causal action context of their environment.
It is not sufficient merely to build neural networks and write software, though that is certainly a necessary part of creating a simulation of this kind. The simulation of consciousness at the perceptual level IS the process of forming such relationships with objects in the world: of making it possible by design for a robot or android, as a complete integrated entity, to identify the existence of things in its environment from its own perspective by producing memory content, and to establish its own relationships with the world in which it must "live" its simulated "life." Simulated consciousness IS the process of identifying that things exist, and of forming and using relationships with reality to survive and cause its own future existence as an acting organism. Simulated consciousness is not the software in its simulated neural network brain. Simulated consciousness is that entire, proactive relational process in the much wider causal action context of the internal processes and the survival of the organism as a continuous, goal-directed action process.
It is important to be absolutely clear that we at Blue Oak do not believe consciousness is computational, though its content is relationally quantitative in a very different manner than state of the art theories claim. Those differences will be explained in our work and our references, but essentially, it is the content of consciousness and the memories the process produces that are quantitative, not consciousness itself.
That is a key differentiator for the ideas and technology architectures of Blue Oak from those of our competitors. To paraphrase the notes** of the author of this website taken in a lecture by Dr. Leonard Peikoff on the topic of logical induction: "Consciousness is not physical. It is relational, a perspective, so not susceptible to numbers or numerology. Consciousness is a causal, integrating faculty that is not a blender. Consciousness is a biological process of identification that is not mathematicised or otherwise turned into a formula or algorithm. Thoughts do not exist in degrees, only either/or, and cannot be reduced to a brain algorithm. Conscious states are not entities, only awareness of entities. Only the content of perceptual consciousness is measured, and then only by comparison, by relative measurements of "this" as compared to "that," bigger than, smaller than, and so on."
While consciousness itself is not about numbers or formal mathematics, that does not mean that the content of a conscious system (such as memories) with its own self-perspective is not quantitative or that the physical processes that cause consciousness are not specific, causal, and quantitative. How to resolve this apparent contradiction will be explained on subsequent pages. But for now, just mentally note that this is the case. When you study our book and white papers (and the references they are based on) you can decide for yourself if we make a convincing argument for our position.
It is also important to be absolutely clear on precisely what we mean by "identification." To continue paraphrasing from the same lecture by Dr. Peikoff: "Identification is a goal-directed process of building and validating relationships. Consciousness is a relationship building engine." This is what the process of consciousness does, and the result is memory content in some form in the brains of organisms. This is not what state of the art AI and AL systems do. But this is what Blue Oak has specified in our patents and other intellectual property, and this is what our technology architectures will simulate when they are fully developed. Our technology is designed to simulate the way animals and people build and use relationships by interacting with their environment in order to cause their own future survival.
Blue Oak is designing and developing new Artificial Life system process architectures that will include the teleologic of the causal action context of biological organisms. The result will be a "relationship building engine" that is modeled on the identity-interaction causality of the goal-directed proactions of real life-forms and will operate very differently from state of the art AI and AL technologies.
Given the foregoing, it is crucial to grasp that Blue Oak technology is entirely based on the idea that biological actions, including consciousness, are proactive, causal, and relational. This is why we believe Blue Oak technology can succeed in simulating consciousness in varying degrees for robots, androids, and other systems such as networks where other state of the art designs have not. We will succeed because, unlike all known competitors, Blue Oak is apparently the only simulation development company with a clear, objective definition and understanding of the identity-interaction causality of goal-directed action and the relational nature of consciousness, and also the only company with a detailed, patented design for how to simulate it. In fact, this is precisely why our RICX Perceptual Simulation Technology™ is designed to simulate sense perception at the scale of physical 3D objects, using the sensory flow ideas of J. J. Gibson, instead of at the bitmap scale as state of the art robots and computers do. It is also why our design focuses on identification, not image recreation or statistical analysis in a computer's memory. We chose human-scale sensory perceptual observation as one of our organizing principles because that is the scale of size and time at which animals and people causally interact with the world with their perceptual consciousness, identifying and building relationships in the process. So far, the only real operating source of 3D object-level sense perception is the relational processing in the causal action context of the living animal mind. But our patented new technology architectures will change that, hopefully in the near future.
In fact, there is no known technology architecture that can functionally mimic relational, proactive goal-directed behavior; sense and perceive real-world objects the way animals and people do in order to identify them; originate independent actions; or think using ordinary human Natural Languages (NL) such as English, French, Arabic, Spanish, German, Japanese, Russian, and so on. Until now, that is. Blue Oak technology architectures can eventually make all of these capabilities possible for properly designed organism simulation systems. Our designs can do this because they put in place causal relationships for our DLFs' simulated sensory perception and methodical concept formation that are modeled after the causal relationships people form as they relationally interact with and identify their environments. But as we explained on our introduction page, you will first have to learn a new concept of identity-interaction causality to grasp how such designs can be created, designs that function very differently from the computational natural language systems that exist now.
The goal of DLF Simulation Technology™ is to replace or assist people, where it is necessary and practical to do so, with robots, androids, or computer systems running simulations of virtual androids. DLF technology is a general architecture that simulates life-like, goal-directed behavior and human consciousness as a relational identification process in a biological causal action context, in order to offer other solutions in situations where using people does not make sense (or to assist people working in difficult situations). State of the art AI and AL systems are not teleological (goal-directed), whereas DLF™ technology is a new kind of software architecture that is. This means that DLF Simulation Technology is a hybrid system: a standard computer logic platform that runs a teleological system as an application, and that application in turn relationally and proactively interacts with its environment in reality more like an organism does. DLF technology is AL technology that uses teleologic systematically to simulate the organism side of the organism - environment relationship. Our layered system of causal action contexts and standardized interfaces provides a base of teleologic active system functionality so that teleological processes can run on and be animated by mechanistic processes, just as they are with biological organisms.
Yes, the logic of mechanistic systems is not compatible with the teleologic of biological systems. But just as layered conceptual abstraction models have enabled incompatible mechanistic systems to interact through standard interfaces, as the Open Systems Interconnection Model (OSI) does for networks, we think something similar can be accomplished with our designs by layering the causal action context of biological systems on top of mechanistic systems. We believe this because that is exactly how biological life works. The teleologic of life runs on and is powered by the mechanistic causal action context of the physics of atoms and the interacting molecules of organic chemistry.
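The layering idea above can be sketched in software. The sketch below is purely our own minimal illustration (all class and method names are hypothetical, not part of any Blue Oak design): a goal-directed layer issues commands to a purely mechanistic substrate through a narrow interface, so the teleologic "rides on" the mechanism in the way the text describes.

```python
# Hypothetical sketch: a teleologic "application" layered on a mechanistic
# substrate through a narrow interface, loosely analogous to OSI layering.

class MechanisticLayer:
    """Deterministic substrate: executes commands, has no goals of its own."""
    def __init__(self):
        self.world = {"food": 3}

    def execute(self, command):
        # Mechanistic logic: a fixed stimulus -> response mapping.
        if command == "consume" and self.world["food"] > 0:
            self.world["food"] -= 1
            return "energy"
        return "nothing"

class TeleologicLayer:
    """Goal-directed layer: acts to keep its simulated life going."""
    def __init__(self, substrate):
        self.substrate = substrate   # standard interface to the layer below
        self.energy = 1

    def alive(self):
        return self.energy > 0

    def pursue_survival(self):
        # Goal-directed proaction: the goal (staying "alive") selects which
        # mechanistic command is issued, not the other way around.
        while self.alive() and self.substrate.world["food"] > 0:
            if self.substrate.execute("consume") == "energy":
                self.energy += 1
        return self.energy

organism = TeleologicLayer(MechanisticLayer())
print(organism.pursue_survival())
```

The point of the sketch is only the separation of causal roles: the lower layer never references a goal, and the upper layer never manipulates the substrate except through its interface.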
The system we have conceived, are designing, and have described simulates the causal action context of the organism side: energy self-generation, goal-directed proaction, sensory perception, relationship building, and how relationships can be used in proactions by the organism in its environment. The system also has an identity that can enable simulated free will, methodical concept formation, and simulated logical induction as emergent properties of its operation as simulated life. The entire package is designed to make the technology more accurately reflect what real animals and people actually do in relation to their environments to survive, and for similar reasons. RICX Perceptual Simulation technology adds simulated sense perception (an automatic, non-numeric version of relationship building by relative comparisons) to this list of features. It provides the data that DLF Simulation Technology processes for concept formation and other internal process functions, giving the simulated life-form its direct, observational access to its environment.
DLF Technology is not the same as human consciousness; it is a new way to think about consciousness as relational and a new technical design to simulate or mimic it for use as a human tool. A Digital Life-Form mimics the form of some animal and human conscious functions and relationships with the same world people interact with, just as a department store mannequin mimics the human physical form in the scenes in the store windows, only it does so actively and in a goal-directed manner. Only biological life-forms are really (biologically) conscious, but a computer simulation system that is programmed to mimic the value significance of goal-directedness and the identification behaviors of life-forms, as it interacts with its environment to build its own relationships firsthand, can mimic certain aspects of life and consciousness, and do so better than state of the art designs. It is key to grasp here that the software is only the organism side of the relationship, and the environment is the other side of the same relationship. The software is not programmed to act like ordinary software. It is programmed to be adaptable and to play a flexible role in an on-going relationship, limited mainly by survival goals like real organisms are. You can think of DLF Technology as reality-based computing (as opposed to the "arbitrary computing" found in state of the art computer systems or AI systems designed only to satisfy human goals). In state of the art systems, human programmers have attempted to anticipate everything their programs will encounter. Since this is impossible, the result is necessarily arbitrary in the full context of trying to simulate real, biological life. Only a few kinds of systems, such as neural networks and some genetic algorithms, interact directly with reality to change themselves, but those do so in very limited ways. True, some may eventually be good enough to recapitulate evolution, but we think that is reinventing the wheel.
It is likely also to take a very long time...
We already know the causal action context of organisms and are offering new ways to leverage that knowledge with our designs, new ways that more closely mimic the teleologic of biology. This is an important distinction. Part of the reason for the limited success of AI is a widespread misconception in our culture that sufficiently advanced AI computer systems can somehow become conscious on their own the way some life-forms have, and, having done so, will then somehow learn to use language like people do to communicate with us, if only the right kind of computer program could be written. This view has been put forth in many popular science fiction stories and movies, but we think it is false.
State of the art computer systems are not alive, cannot be conscious in the same way as biological life is, and computer programs certainly cannot think. Only their programmers can think (follow the "Learn more" link below for details as to why). The causal logic between mechanistic and teleological types of systems is very different and incompatible unless a special interface is provided to resolve those incompatibilities. Part of what our simulation designs do is to provide that interface.
Most state of the art computer programs simply react like falling dominoes, mostly using data made up arbitrarily by programmers or pulled from users for their content (and stored in files). If reality is sensed at all by these kinds of systems, the resulting data are used just like the rest of the arbitrary data in the system (from the perspective of the system itself), because their context and other relationships to reality are lost in the process (or were never formed). The identity information about reality they contain and all the myriad relationships to the computer or robot system are not conserved in their content, because they are essentially irrelevant to the human goals the system is designed to satisfy. Relationship data that may have been sensed in reality by state of the art systems and stored in bitmaps is not used by the computer system sensing it to identify the existence of things in the environment (except to satisfy human goals). These systems have no firsthand "perspective" because they have no "self" to have one. They are simply a human tool. And even if such data is used, the data cannot work like it would in an organism, because extant designs do not allow data to be used from the perspective of that computer system itself to cause its own goals to become a reality; mechanistic systems cannot have their own goals without the complete biological causal action context and many other relational processes. Only organisms can use their data content that way now, because organisms are the only goal-directed systems that currently exist.
Technically, computers do not even calculate. State of the art computers are just machines, electromechanical automatons that change the electrical properties in their components ("memories"), parts of which represent "1" and "0" bit values. Computers change their bits and the output of their display mechanisms according to their designs, which follow the laws of physics, mathematics, computer science, and what people type into them. But these bit values, and all the programming built on them, only have meaning to people, not to the computers themselves from their own perspective, and they only exist to the degree they satisfy some human goal. The computer program has no goals of its own.
State of the art computers and robots do not have the same relationships to the world that animals and people do; more importantly, they do not have their own relationships to reality from their own self-perspective like organisms do. They simply cannot, because they have the wrong identity for that to be possible. State of the art computer systems are mechanistic logic systems, not teleologic systems like organisms. So it should not be a surprise that computer systems as they are currently designed, and robots controlled by them, cannot do the same things as organisms.
Remember one of our key guiding principles: What a thing is determines what it can do, its causal action context and capacity. Would you try to use a piece of food to cut your hair or a broom to fly to a distant place? Of course not, because you know that the identity of those things does not allow those kinds of actions. That idea can be generalized to the following principle: Identity determines action capacity and action context. For example, the identity of an airplane is designed so it can fly and its "performance envelope" is its action context. Knowing that principle can be very helpful if it is applied to digital biology and AL in new ways to design AL systems that operate using teleologic rather than only mechanical logic.
But fully applying that principle depends on a different view of causality than the oversimplified "billiard ball" view that is so commonly used and implicit in most people's thinking. Causality is not just action-reaction; it is the relational, quantitative inter-action of the identities of things, and that can occur at different scales and in different causal action contexts.
In fact, the whole idea that life is essentially simple mechanism is a false view based on a flawed concept of what causality is and how a causal sequence actually works with real objects, and on the assumption that there is only one kind of causality. Causality is more than the so-called “billiard ball” action and reaction we were taught in school and now hold implicitly in our subconscious memories. At its root, this false view of causality is due to a dropped context. The billiard ball theory of causality is a gross oversimplification that drops most of the context of causal processes. Causality is much more than action-reaction between two arbitrary events. Causality depends on a much more complex, quantitative interaction between the identities of acting objects, but it is necessary to “think different” about causality and include that full context in order to grasp the idea. And since there are many different kinds of existents that exist at many different scales (relative to humans) in our universe, it stands to reason that there are probably many different forms of causal action contexts. We will explain how to think about causality in more detail on another page of this site to answer the questions we have asked, but first there is another implicit idea that must be exposed to complete the picture.
Some people will instantly conclude that because we reject the state of the art view of action-reaction causality and of life processes as mechanistic, we must think life is supernatural, or somehow mystical and ghost-like and powered by emotions. Those ideas and other fantasies are very popular these days. Most people accept one side or the other of the alternative of the mechanical vs. the mystical. We do not. The mysticism vs. mechanism alternative is a false one that is just another consequence of a flawed action-reaction concept of causality. As with life processes, it is not possible to grasp how causality works without including all the information of its full context.
The supernatural fork of this alternative denies causality altogether, and depends (in varying degrees) on faith in supernatural forces to animate the action of objects*, not the scientific causal interaction of natural objects as observed by sensory perception and asserted conceptually by reason and science. Yet because life and consciousness are proactive and goal-directed, organisms seem to people somehow different from most non-living natural objects. *(For details, study the theology of any religion.)
So many people rightly conclude that there must be more complexity to the causality operating in life processes and consciousness than simple mechanistic actions and reactions. And we agree: there is more complexity to life and consciousness. Except that we do not think the supernatural explanation is the right answer. But this example does show how the weakness of the flawed action-reaction view of causality leads many people to assume life is supernatural. They think that because they do not agree with materialism, which is what they are told in school is the only alternative to mysticism. Most people think they have no other choice than the supernatural or materialism. And they cannot think of a third explanation that breaks the false alternative.
The mechanistic fork of the alternative (materialism) claims to rely on science, causality, and reason to explain how life and consciousness operate. But historically, materialism is simply a reaction of denial to the supernatural, and not a valid idea either. A reaction of denial to false ideas does not necessarily lead to a true and valid idea.
In fact, it is the materialists who came up with the action-reaction view of causality in the first place, and they are the ones who drop the biological context in AI and AL. They did so because they did not like the supernatural alternative. But unfortunately, their theory is not much better, because it does not adequately explain how goal-directed life processes and consciousness work causally, and without biology described from the perspective of identity-interaction causality, it cannot. And materialism certainly does not explain how purposeful consciousness works causally in the organisms that possess it. So the materialists end up simply claiming life is a random, arbitrary, deterministic mechanism and denying that consciousness exists altogether. Yet animals and people are much more complex and purposeful than billiard balls, even at the cellular level.
In fact, there is a third way to think about this issue, a way that eliminates the false alternative of the supernatural vs. materialism to explain the action of objects in the world. This third way works by including the full context of the identity of all objects that interact in each situation, and it also includes exactly how that identity relates objects to each other and the world around them. That alternative is called “identity interaction causality,” and it consists of a new way to think of reality, as a methodical, observation-based, quantitative comparison of objects’ identities. This third way to think about causality for both mechanistic and living systems can be illustrated very simply by an example from philosopher Leonard Peikoff about an egg and a feather (see reference 2 at the end of our RICX Technology white paper page).
If you think the above explanation of identity interaction causality is just another word salad, try the following real-world thought experiment to gain clarity. Take an egg and a feather and drop both from 3-4 feet above a hard floor. All billiard ball causality will tell you is that the causal action of releasing those objects is followed by the reaction (effect) of their fall to the floor. Period.
There is no information included about the huge difference in the effect between the two objects, or clues to help you understand why the mess occurs for one object and not the other. The whole billiard ball idea of causality is a gross oversimplification precisely because it drops most of the causal action context of the objects and what is actually occurring, including our common-sense knowledge about how objects interact. As the experimenter, you are left with the knowledge of an infant who drops food on the floor just to see what happens. So now contrast this with identity interaction causality applied to the same example.
To perform this thought experiment using identity interaction causality, think of the full context of the situation: think of the identity of what an egg is, picture it in your mind, and use your conceptual memory to remember past experiences with eggs; then do the same for the feather, hard floors, the earth’s gravity, and your own identity and role as the actor releasing the objects to initiate the action sequence. When you think about the identity of the objects, you prime your subconscious to automatically queue up all the knowledge contained implicitly in your conceptual memory of the objects: information filed in your concepts about the mass of the egg and the feather, the earth and its gravity, and the air. All of these things are part of the causal action context of the experiment. You may not even remember learning all that in school or elsewhere, but you probably did. Your conceptual memory also contains past experiences of objects being dropped (possibly including eggs), so you will realize what is likely to happen. What you call your "common sense" experience contains a lot of knowledge about the causal action contexts you frequently encounter. If you “think different” in this way, it is obvious to most any adult what the consequences will be before you do the experiment. Doing the experiment is a form of validation and confirmation. Why? Because thinking about the identity of each object will lead you to think about what that object is capable of doing, from your experience with the world. You may already know and recall each object’s action capacity and causal action context, which the experiment will confirm. That is the power of identity interaction causality, as opposed to the empty-headed action-reaction causality.
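As a playful illustration only (our own sketch, with made-up attribute names, not a Blue Oak specification), the thought experiment can even be written as code: the predicted effect is computed from the interacting identities of the objects and their causal action context, not from a bare action-reaction pair.

```python
# Hypothetical sketch: the outcome of the drop follows from the interacting
# identities of the objects, not from the release action alone.

from dataclasses import dataclass

@dataclass
class Identity:
    name: str
    mass_g: float        # a quantitative attribute of the identity
    fragile: bool        # shell shatters on hard impact
    drag_dominated: bool # air resistance dominates the fall

def drop_outcome(obj: Identity, floor_hard: bool) -> str:
    # The causal action context: earth's gravity, the air, a hard floor.
    if obj.drag_dominated:
        return f"{obj.name} flutters down slowly and lands intact"
    if obj.fragile and floor_hard:
        return f"{obj.name} falls fast and breaks into a mess"
    return f"{obj.name} falls fast and lands intact"

egg = Identity("egg", mass_g=50.0, fragile=True, drag_dominated=False)
feather = Identity("feather", mass_g=0.5, fragile=False, drag_dominated=True)

print(drop_outcome(egg, floor_hard=True))
print(drop_outcome(feather, floor_hard=True))
```

Change the floor to a soft one (`floor_hard=False`) and the predicted effect changes, because the interacting identities have changed, which is the whole point of the example.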
How much information you include from your conceptual memories as you perform the thought experiment is up to you. For our purposes in this explanation, the qualitative information was sufficient. You may not realize it, but your conceptual memory likely has many more details about eggs and feathers and falling objects than you think. For example, ask your subconscious questions about eggs and feathers. So, if necessary to your purpose, you could also think about the quantitative information explicitly, along with the qualitative information we used in our example. The quantitative information includes all the mathematical relationships between the objects and much more. If you cannot remember it, you can use measurement tools to find out. Then you will know it for future reference. You remember causal action context information because it has value-significance to you. You can use it to help your own survival.
Every property in each object’s identity in the example has an associated quantity or measurement. The egg, for example, has a specific shape, size, color, weight, and so on. The same goes for the feather, the earth, the air, and you. All those objects have comparative measurements that our sensory perceptual system enables us to make simply by looking at things and comparing them mentally to other things. In fact, any egg and feather that you use from anywhere on earth will have measurements of some quantity as compared to other eggs and all other things. All those quantities will fall into a range that is typical for that type of object, eggs or feathers, as recorded in our memories by neural processes in our sensory perceptual system. There is no need to measure everything with a ruler. Your sensory perceptual system has a built-in ability to measure objects “comparatively” by analog comparison, just without a digital scale. In other words, by making comparisons such as “my house is bigger than my car.” That is a kind of measuring too, and like all our other capabilities, it evolved due to its survival value. Most people do not even realize they can do this, though they make comparisons and measure things by relative quantities all the time. Your own identity has that capacity because it had value-significance to your human and animal ancestors. It helped them survive so you can be alive now. That is part of your own causal action context as a biological organism.
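To make the idea of non-numeric, comparative measurement concrete, here is a small sketch of our own (purely illustrative; the stored facts and function name are invented): size knowledge is recorded only as "this is bigger than that" relations, and new comparisons are derived transitively, with no numeric magnitudes anywhere.

```python
# Relative-size facts, as perception might record them: (smaller, bigger).
facts = [("car", "house"), ("egg", "car"), ("feather", "egg")]

def bigger_than(a, b, facts):
    """True if a is known, directly or transitively, to be bigger than b."""
    bigger = {thing: set() for pair in facts for thing in pair}
    for small, big in facts:
        bigger[big].add(small)
    # Transitive closure by depth-first search: comparisons only, no numbers.
    stack, seen = [a], set()
    while stack:
        current = stack.pop()
        if current == b:
            return True
        if current not in seen:
            seen.add(current)
            stack.extend(bigger.get(current, ()))
    return False

print(bigger_than("house", "feather", facts))
```

Even though no quantity was ever written down, the system can still answer that a house is bigger than a feather, which is roughly the sense in which comparative perception "measures" without a digital scale.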
The bottom line is that identity interaction causality is a very effective way to “think different” about causality, a way that brings “situational awareness” and comparative quantitative measuring back into the causal picture, awareness that is lost in the oversimplification of action-reaction, billiard ball causality. Doing so can change your whole perspective on things. In time, you will find thinking this way can also answer a lot of apparently unsolvable conundrums. But for now, let’s consider using it to explain how a state of the art mechanistic computer system works in its own causal action context.
For example, what happens if we apply identity interaction causality to the way valid computer commands work when input by a user? If you have some knowledge about how computer systems work, think about the identity of their components. Think about the hardware platform, the operating system, the application programs, and how they interact in layered fashion. Each layer is its own identity and causal action context, and it connects to the other layers through specific interfaces. The operating system runs on the hardware, and the application programs run on the operating system. All three components interact with each other using mechanistic logic, and with the user, who uses goal-directed teleologic. If you think about the identity of these components, your subconscious will queue up your conceptual knowledge about the components and their relationships (if you have it). What comes to mind will obviously depend on how much you know about computers, right? But if you have some knowledge, then when you think about this you will remember that valid computer commands work because they always connect to the code that makes them operate, and that the code that makes them operate is within the design specifications of the operating system and hardware it runs on. You will also remember there are no breaks in the chain that connects the system together, and no design specifications are exceeded. If any are, an error will be generated in response to the command entered by the user.
In other words, the computer system and its code consists of components with a specific identity of attributes or properties in some specific quantity at some specific scale (such as the human scale for keyboards and the microscopic scale for chip components). That identity enables each system component with its own specific and quantitative causal action capacity to operate in specific relationships with the other components and the user. One can say that the identity of each component IS its action capacity in frozen form. (Think of the egg and the feather.) It is the relational interaction of all the component identities in specific quantities that make any system what it is in terms of its “causal action capacity envelope” overall. Whether it is a user and a computer system or an egg and a feather makes no difference.
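The "no breaks in the chain, no specifications exceeded" behavior described above can be miniaturized as code. This is our own toy model (the spec values and command set are invented for illustration): a command succeeds only when every layer it passes through stays within its design specification; otherwise an error is generated, just as the identity analysis predicts.

```python
# Toy layered system: application command -> OS command set -> hardware spec.
# All values here are invented for illustration.

HARDWARE_SPEC = {"max_value": 255}   # e.g. an 8-bit register limit
OS_COMMANDS = {"store"}              # commands the operating system implements

def run_command(command, value):
    """A command only 'works' if it connects through every layer in spec."""
    if command not in OS_COMMANDS:
        return "error: no code path connects this command"
    if value > HARDWARE_SPEC["max_value"]:
        return "error: hardware design specification exceeded"
    return f"stored {value}"

print(run_command("store", 200))  # within every layer's specification
print(run_command("store", 300))  # exceeds the hardware layer's spec
print(run_command("fly", 1))      # never connects to any operating code
```

The "identity" of each layer (its command set, its limits) is what determines the causal action capacity of the whole system, which is the point the paragraph above makes.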
Now ask yourself what happens if we apply that same kind of thinking to the causality of life processes.
For example, for a robot/computer system to behave like an organism, you may realize now that such a system must also have more of the identity of an organism, including similar relationships with its environment for all of the same reasons that organisms do. Why? As we just pointed out: Identity determines the action capacity of objects and systems of objects. And that means that a living system cannot simply be a machine. Like the above identity interaction causality analysis of a computer system, the identity of an organism determines its action capacity. In addition, organisms are not stand-alone objects. Organisms cannot exist without their relationship to their environment, so the environment must be included too. That fact is also part of their identity as an organism. An organism cannot be understood without its specific environment as part of its causal action context.
The biological identity of an organism in relation to its environment is a very different identity relationship than that of a machine. The identity of an organism is different because of its conditional existence, its self-generation of energy from environmental resources, its self-generated actions in its environment, its sensory perception system for detecting environmental changes, its internal pleasure-pain self-regulation system, its knowledge formation, and its multi-scale architecture (from the microscopic at the cellular level and below up to the human scale). Another difference is the fact that an organism has some form of internal self-perspective on its environment and itself. An organism has survival needs and an internal self-perspective because it needs both to survive and maintain itself in existence. Machines have no need to survive and no "self" to have a perspective.
In addition, the very survival of every organism depends on its own goals --- and on the organism's proactive success in achieving them in relation to its environment --- goals such as getting food and avoiding being eaten by other organisms. Organisms operate to achieve their goals under their own self-generated power. Machines have no goals of their own; they simply automate actions that carry out human goals, driven by energy supplied to them by humans.
Thinking of causality this way makes it possible to identify the relationships needed to more fully understand relational processes like those between an organism and its environment, as well as the internal processes of the organism. We call this view of causality "identity-interaction" causality, as explained on the introduction page and earlier on this page. This is one of the ideas we use to better understand the operation of biology so we can apply the new perspective it provides to digital biology and AL. Ideas like this have enabled us to discover new ways to think about life processes so that better designs can be created. Systems that operate on different or incompatible principles can interact if they have the right kind of interface. Human-computer interaction is a good example. People are goal-directed and can make free choices, whereas state of the art computer systems are deterministic. But with the right interface, the two very different kinds of systems can interact. We are doing something similar: designing interfaces between mechanistic logic and the goal-directed, teleological action processes of organisms to create a new kind of AL simulation system.
Only consciousness, the relational identification process observed in humans, can calculate. Consciousness is a goal-directed, proactively relational process of interaction with an environment for the identification of reality, possessed by some organisms in various degrees of complexity. Consciousness is a highly structured process that is an attribute of biological entities, such as some kinds of animals and people. That means consciousness as a specific causal process is also based on the action context of teleologic in its proaction context layers (and only based on mechanical logic indirectly at its physical layers - see the OSI model for details on layers). As such, the causation of consciousness cannot be explained with or reduced to mechanistic system logic alone. But goal-directed behavior can and does interact with mechanistic behavior, whether it be a person using a computer system, a person walking around, or the internal interaction between a person's goal-directed cellular processes and the mechanisms of molecular chemistry. This interaction simply occurs by means of special interfaces that make it possible.
Some processes in the human body are mechanistic and reactive, and other processes are goal-directed and proactive. Computer systems as currently designed are only mechanistic. They cannot calculate, but rather automate human calculation by executing specific mechanistic processes. The results of these processes only make sense to humans, not any other kinds of organisms.
Of all life-forms, only people can calculate because only people possess the kind of teleological consciousness that has that ability as part of its identity, and therefore its action capacity. Counting comes before numbers, which are the product of a conceptual process. Only people possess a consciousness with freewill that can selectively perceive and count objects in reality and compare them to each other; that can choose to focus on some of an object's attributes and ignore others; that can choose to methodically form concepts of objects and of numbers, eventually abstract the principles of mathematics, and then inductively use those principles to attain human goals---such as building computer systems to automate human calculation abilities. These processes cannot be explained by mechanical logic alone because they are part of the wider relational action context between the human organism and its environment, as well as the universal life process requirement for constant proaction to maintain its very existence.
But none of this was probably ever explained to you in school.
If computer systems are to interact with humans using natural language in other than a preprogrammed, predetermined way like state of the art systems do now, they must be designed to use teleologic to sense and use reality-based data firsthand for themselves (as opposed to arbitrary data supplied by human programmers). In addition, both consciousness as a biological survival process and concept formation must be simulated as proactive, volitional, relational processes. This is the only way to enable human-like functionality in a robot or android interface that is better than what exists today.
Since consciousness is an attribute of only some kinds of organisms and is a biological process, the conditional nature of life processes must also be simulated as a causal action context layer to functionally enable the consciousness simulation and provide it with simulated motivation and the means to act in reality. Otherwise, the simulation system cannot operate in a realistic manner as a goal-directed system that initiates real causes and builds real relationships in the world, which means it cannot operate at all. The continuation of life action IS the goal of all organisms and the very basis of biology. For a robot to act conscious, it must be relationally aware of its environment at some level of awareness, it must "feel" simulated needs and simulated pleasure and pain, and its needs must have value-significance to it as a simulated organism. That is what the goal-directed system of teleologic is for: to simulate, as a causal action control context, what goal-direction does in organisms. That simulation is absolutely essential in other kinds of systems, such as AL for robots. Otherwise, the simulation will fail. Identity determines action capacity. A robot that does not have the proactive, relational identity of an organism simply cannot behave like one, any more than a toy doll can behave like a person.
Ask yourself this: Biologically, what is consciousness for? The answer is proaction control in order to satisfy needs in order to survive to make future action possible and future survival more likely.
The conditional nature of relational life processes is why organisms are goal-directed: to help ensure their survival. Survival is the condition that must be caused and maintained by every action sequence of every organism, every time. Any break in the chain of survival events for any organism is the end of that organism.
Teleologic is what provides biological life-forms with a causal action context that includes the motivation and independent ability to act, to proactively initiate causes in the world, to do what they need to do in order to cause their own future survival, and to do so continuously or literally cease to exist physically. In other words, for any organism, failure is not an option for gaining and keeping the basic requirements of life! Failure is not an option because of the relationships the organism must maintain with its environment simply to survive and stay in existence. Any realistic simulation of consciousness must operate the same way --- as a teleological process that maintains specific relationships with its environment. Those are the facts of a life causal action context. A realistic AL simulation system must have needs (it must really need its values as a causal requirement), and it must proactively attempt to gain and keep them as real animals do. That is what "value-significance" means in the context of life processes. We must simulate that too in our AL designs, to make a new kind of proactive system of teleologic that runs in a layer on top of a computer system of mechanical logic.
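As a toy illustration of this conditional, value-seeking loop (all names, numbers, and probabilities below are our own illustrative assumptions, not part of the patented designs), consider a simulated organism whose continued operation literally depends on proactively gaining energy from its environment:

```python
# Illustrative sketch of "value-significance": a simulated organism whose
# continued operation is conditional on proactively gaining what it needs.
# Names and thresholds are hypothetical, chosen only for demonstration.

import random

class SimulatedOrganism:
    def __init__(self):
        self.energy = 10.0   # a simulated need: depletes with every action
        self.alive = True

    def proact(self, environment):
        """Each cycle costs energy; a successful search replenishes it."""
        if not self.alive:
            return
        self.energy -= 1.0                  # acting has a metabolic cost
        if environment.find_food():
            self.energy += 3.0              # value gained: simulated "pleasure"
        if self.energy <= 0:                # any break in the chain is terminal
            self.alive = False

class Environment:
    def __init__(self, food_probability):
        self.food_probability = food_probability
    def find_food(self):
        return random.random() < self.food_probability

random.seed(0)
rich = Environment(0.9)         # a habitat where proaction usually succeeds
organism = SimulatedOrganism()
for _ in range(100):
    organism.proact(rich)
print("alive after 100 cycles:", organism.alive)
```

The point of the sketch is the conditionality: the loop has no pause button. If the organism stops proacting, or its environment stops yielding values, its energy runs out and it ceases to operate at all.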
Working from these premises, Blue Oak has developed and patented two radically new Artificial Life simulation technology architectures, as described earlier: two new kinds of simulation systems that simulate some of the unique aspects of life-forms, including the teleologic to simulate many of their conscious behaviors. More details of how these kinds of systems can be designed and developed are explained in our book and white papers (see below).
The principle behind our patented AL simulation system designs is to include the environment outside our simulated organism as part of a relational system, to better simulate the proactive relationship that a real organism has with the world it survives in, as opposed to a design that focuses almost entirely on what brain processes supposedly do and creates a program or a neural network to do something similar.
Real organisms are more than their brain or even their brain and their body. They are literally a product of their environment as evolutionary biologists will tell you, and are intimately related to it. Real organisms generate their own energy by their own internal processes. In order to do so, they need to proactively get energy or chemicals directly, or food from their environment by eating other organisms. That is the "big picture" of life as an on-going, uninterruptible proactive process. Forget about AI and AL. Think about biology and all the micro environments all over our planet, each with unique organisms all generating their own internal energy and proactively getting what they need from their specific habitats. What does that kind of relational process require to function?
First, the relational process of life requires proactions by organisms in their environment. But blind, unmeasured, uncontrolled action is pointless to an organism because it will not help the organism survive and may get it killed. In other words, don't drive your car with your eyes closed. Worse, blind action is eventually lethal to a life process that must survive, which means it must maintain itself in order to continue at all.
What does measured, controlled proaction require to function successfully? The answer is input to the acting organism that tells the organism the effect of its own proactions, which are the organism's causal output. It requires input that identifies what is outside the organism and what is going on around it. Identification produces information about the things in the environment outside the organism. Identification tells an organism whether something exists in its environment, and the form in which those existents are sensed has survival value by indicating whether those things are food or poison, friend or foe. The amount of information depends on the complexity of the organism. For higher organisms, sensory perception is the form of identification that produces the properties and comparative measurements (relative quantities of properties) of the things outside the organism. Everything that exists, exists in some quantity. It is in that respect that sensory perception is quantitative by comparison, such as "this" thing is bigger or smaller than "that" thing. The forms of what is sensed or perceived basically indicate to an organism that something is there, and each different kind of organism has evolved senses that automatically indicate (usually with pleasure or pain) whether an existent is good or bad for the organism's future existence.
Sensory perceptions of various kinds are the form of quantitative identification of things in the environment used by many kinds of organisms. The quantities included in those identifications are used for relational action control of future proactions, but they do not tell organisms "what" those existents are. Only people, with conceptual ability, can identify what exists around them. All higher animals know is that some existent, in some form they perceive, is there, and that they feel pleasure or pain from that contact. But sensory percepts are extremely accurate at making comparative measurements.
For example, think of the last time you played baseball or golf. If you miss the ball, it is your sensory perceptions that tell you, and by how much (a little or a lot). We even have descriptive expressions for our comparative, sensory perceptual measurements, such as: "I missed the ball only by a hair." or "I missed the ball by a mile!" Your next swing of the bat or club (or lunge at your prey if you are in a survival situation) will be your attempt to correct your error.
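This comparative, error-correcting use of perception can be sketched as a simple feedback loop. The functions and numbers below are hypothetical illustrations: the "percept" reports only the sign and rough magnitude of the miss, and the next swing uses that comparative measurement to correct itself.

```python
# Illustrative sketch: sensory perception as comparative measurement used to
# correct the next proaction. The percept reports "by how much" the swing
# missed relative to the target; the next attempt adjusts toward it.

def perceive_miss(swing_height, ball_height):
    """Comparative percept: sign and rough magnitude, not absolute units."""
    error = swing_height - ball_height
    if abs(error) < 0.05:
        return error, "missed by a hair"
    return error, "missed by a mile"

def correct(swing_height, error, gain=0.8):
    """Use the perceived error to control the next proaction."""
    return swing_height - gain * error

ball = 1.0      # where the ball actually is
swing = 1.6     # the first, badly aimed proaction
for attempt in range(4):
    error, report = perceive_miss(swing, ball)
    print(f"attempt {attempt}: swing={swing:.2f} -> {report}")
    swing = correct(swing, error)
```

Each cycle of proaction, perception, and correction shrinks the error, which is exactly the survival role the text above assigns to comparative perceptual measurement.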
What is consciousness for? Both sensory perceptual and conceptual consciousness are for quantitative identification and proaction control. Simple sensing, such as that of plants and simple animals, does something similar, but to a lesser degree than the capabilities of complex animals and people. Both hitting a ball and survival in a difficult environment depend on the accuracy of your own consciousness of your environment every instant you are alive. Survival is not possible with inaccurate senses. (Think about that the next time you are driving, rock climbing, or trusting a pilot to land the plane you are riding in.)
In addition to proaction and consciousness, organisms need to remember what they have perceived for future action control and so they have evolved memories for that survival purpose. And because not all things in their environment are supportive of their life processes, organisms have evolved some form of judgement about what they perceive, either automatic or volitional. Some of that process is biologically automatic and built into the pleasure/pain system and instincts of an organism. Human consciousness evolved such that it has a wider degree of freedom. But both animals and people implicitly or explicitly compare and judge things in the environment good or bad relative to their own self-maintenance and the continuation of their life as an organism.
The lives of organisms as they interact with their environment end up being a series of events starting with a proaction, followed by sensory perceptions, followed by judgement and memories, followed by a controlled action selection, and then the cycle repeats with the start of the next proaction. In our patents and other documents, we call this the conscious-event cycle, with each event being a "C.Event" that is designed to mimic what happens in the lives of real organisms as they relationally interact in their environments. But this is a biological, proactive, goal-driven process that starts with action, not consciousness. Consciousness is a relational process that follows action to identify and measure environmental change after every action, to process it and prepare the organism for the next proaction. It is not a mechanistic process, but one that operates by teleologic, automatically or consciously, with its own survival and continuation as its ultimate goal. The life process is not analogue or digital mechanistic logic in its essence. It is a combination of many different kinds of causal processes inter-operating through interfaces of various kinds. Our AL simulation designs recognize and include that fact as their basic operating principle.
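The conscious-event cycle just described can be sketched as a loop in which each event runs through the same sequence: proaction, perception of the resulting change, judgement, memory, and selection of the next proaction. The code below is a hypothetical toy, not the patented design; the class and method names are ours.

```python
# Illustrative sketch of the "C.Event" cycle described above: each event starts
# with a proaction, then perceives the environmental change it caused, judges
# it against the organism's needs, remembers it, and selects the next action.

class CEventLoop:
    def __init__(self, actions):
        self.actions = actions          # repertoire of possible proactions
        self.memory = []                # perceived outcomes of past events
        self.next_action = actions[0]

    def judge(self, outcome):
        """Automatic pleasure/pain style evaluation: good or bad for survival."""
        return "good" if outcome > 0 else "bad"

    def select(self):
        """Prefer actions remembered as good; otherwise keep trying others."""
        good = [a for a, outcome, j in self.memory if j == "good"]
        return good[-1] if good else self.actions[len(self.memory) % len(self.actions)]

    def run_event(self, environment):
        action = self.next_action
        outcome = environment(action)                     # 1. proaction causes a change
        judgement = self.judge(outcome)                   # 2-3. perceive and evaluate it
        self.memory.append((action, outcome, judgement))  # 4. remember it
        self.next_action = self.select()                  # 5. select the next proaction
        return action, judgement

# A toy environment in which only "forage" yields a positive outcome.
def environment(action):
    return 1 if action == "forage" else -1

loop = CEventLoop(["wander", "forage", "hide"])
for _ in range(5):
    print(loop.run_event(environment))
```

Note that the cycle starts with action, as the text above emphasizes: perception, judgement, and memory all follow the proaction and exist to control the next one.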
Our new technologies are designed to simulate goal-directed survival behavior, sense perception, freewill, concept formation, and logical induction to form premises as relational processes with the world outside the life simulation system, as opposed to the simple mechanistic behavior found in state of the art AI and AL systems. Our designs do so by emulating the conditional nature of life, the teleologic required to simulate it realistically, and the identity-interaction causality that makes life what it is. Our designs emulate the relational nature of consciousness as the proactive, relational process of identification that is observed in real organisms. These are not simple mechanistic processes, but teleologically causal, relational, mathematical processes that are new to the field.
Complex systems like this are not of only one kind. They are a combination of various system types, some of which are incompatible with each other, just as many computer network systems are. So part of our technical approach is to leverage the system of process abstraction layers and standardized interfaces that works so well for the Open Systems Interconnection Model for network inter-operation. We are redesigning this architecture to enable the interoperation of mechanistic logic and teleologic. Doing so enables us to interconnect the incompatible mechanistic systems of computers and networks with the teleological systems of organisms. As explained in our book and white papers, this strategy enables us to swap out the mechanistic organic chemistry that powers the teleologic of organisms for the mechanistic processes of computer and network systems. We are only exchanging one mechanistic process for another. How well this works remains to be seen, but we think the idea will succeed because it has worked successfully in the past with many kinds of mechanistic systems.
The thrust of DLF Technology is to create a teleological system as several causal action context layers that run the various "life" processes of simulated Digital Life-Forms (or DLFs for short), processes that are relational rather than mechanistic (and hence incompatible with the mechanistic computer, network, or robot system layers below them). For example, these layers of teleologic will support proaction control, self-generation of energy internal to the DLF, simulation of sensory perception, and so on, simulating the organism's side of the relationship of a DLF to its environment. The teleologic operates between the simulated organism and its environment, as it does with real organisms. The mechanistic processes simply power and animate the system, just as molecular chemistry does for biological systems. And so, while based on and animated by the lower causal action context layers of mechanistic logic of a state of the art computer, robot, or network system, these upper causal action context layers simulate the teleologic and more complex causality of the conditional, goal-directed, self-generated, self-sustaining behaviors identified with biological life-forms. The simulated teleological behaviors are conditional precisely because they depend on continuous proaction to control the relationships of a simulated organism in a specific environment, to gain and keep certain values it needs to survive, just as similar behaviors do in real life-forms. The power and animation processes are not a direct part of the teleologic.
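A minimal sketch of this layering, assuming hypothetical class names of our own (nothing here is the actual DLF implementation), might look like the following: a mechanistic platform at the bottom that only supplies power, a narrow interface, and teleological layers above that spend that power on goal-directed proactions.

```python
# Illustrative sketch of the layered architecture described above: teleological
# "life process" layers stacked on a mechanistic platform layer, connected by a
# narrow interface, in the style of OSI abstraction layers.

class MechanisticPlatform:
    """Bottom layer: the computer/robot system that merely powers and animates."""
    def supply_power(self, amount):
        return amount  # deterministic: delivers exactly what is requested

class PlatformInterface:
    """The narrow interface that lets the incompatible system logics
    interoperate, analogous to the cell-level interface between organic
    chemistry and the teleologic of living organisms."""
    def __init__(self, platform):
        self.platform = platform
    def draw_energy(self, amount):
        return self.platform.supply_power(amount)

class TeleologicLayer:
    """Upper layers: goal-directed processes that spend energy on proactions.
    They never touch the platform directly, only the interface."""
    def __init__(self, name, interface):
        self.name = name
        self.interface = interface
    def proact(self, goal, cost):
        energy = self.interface.draw_energy(cost)
        return f"{self.name}: pursued '{goal}' using {energy} units"

interface = PlatformInterface(MechanisticPlatform())
layers = [TeleologicLayer(n, interface)
          for n in ("proaction control", "sensory perception", "self-regulation")]
for layer in layers:
    print(layer.proact("maintain survival", 2))
```

The design point the sketch illustrates is substitutability: because the upper layers only see the interface, the mechanistic platform beneath it could in principle be swapped, just as the text proposes swapping organic chemistry for computer hardware.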
DLF Technology will make possible new kinds of systems that better simulate a wide variety of life-forms of many types. It will also make possible improved natural language interfaces for computer systems, enable animated virtual reality characters to communicate with people and each other more realistically, enable state of the art robots to engage in more independent behavior than is possible with the mechanistic designs of the control systems we have today, and lead to more powerful digital assistants that are much more flexible than those available now. Our teleologic designs are not intended to replace current technology so much as to offer new capabilities that are impossible with mechanistic logic alone.
In order to succeed in designing and developing more life-like AL simulation technologies, we have asked ourselves many questions. Here is a list of some of them. We have answered many of them, but not all yet as this is an on-going project. And you will find many of the answers we have found in our book, white papers, and other references, and especially the reference in the link below that summarizes most of the new ideas our work is based on. We hope you enjoy reading, studying, and thinking about them all. Then, you can make up your mind if we have a convincing case or not.
One thing is certain, you will have to "think different" to answer the questions that follow:
How do both non-living and living objects interact with the world causally? What are the relationships between the identities of organisms and their environments? What is the essential causal difference between non-living and biological processes? Is the existence of organisms unconditional or conditional? Where do needs and values come from? What is value-significance for organisms? How does a “self” form and what is required for that to happen? What are two key essentials of a "self?" What is the difference between mechanistic and teleological system logic?
How do organisms act externally in relation to their environment and internally in relation to themselves? How do they self-generate the energy they need to power proaction in relation to their environment? What facts make their self-actions causally possible? What keeps living actions continuously acting and how does an organism control its actions? Can organisms choose actions or are all actions pre-determined? What is required for action control to even occur at all? What happens if living action stops in an organism? What is the essential difference between a proaction and a reaction?
How do organisms sense and perceive objects in their world? Where do sensory qualities such as colors and other properties such as shapes, smells, sounds, and so on come from and how did they evolve? Are sensory qualities objective or subjective? Are sensory qualities quantitative, and if so, how are the quantities identified and in what form? Are sensory perceptions reactive or proactive? Do sensory perceptions tell us what exists in our environment, or just that it exists? How do organisms evaluate what they sense and perceive? What does the pleasure-pain system do and how does it work? How do those evaluations determine future actions? How are evaluations and needs/values related? What kind of interface do organisms have with their world? Why are the sensory perceptual inputs of organisms reactive and the action outputs of organisms proactive?
What do simple organisms know? How do medium and higher complexity organisms like conscious animals know what they know? How do people know what they know, perceptually and conceptually? How do organisms process and remember 3D sensory perceptions and internal body sensations? Are sensory perceptions and internal body sensations both qualitative and quantitative? Is consciousness quantitative, or only its content? How are sensory perceptual memories (content of awareness) formed, processed, and stored? How are concepts formed? How are premises formed? Is any part of that process neurological? Are concepts and premises quantitative? How does language work and what role does logical induction play? What is the overall structure of the knowledge that organisms produce as the content of consciousness and that organisms remember as they interact with things and other organisms in their environment?
To Read Our Free Book On-line
Our free book describing how Digital Life-Form technology works and how to build your own consciousness simulator is now available at this web site for on-line reading. (Note: Since our book was written, the author has made many trips around the "spiral of learning" regarding the many topics it explains. None of the essential points made in the book have turned out to be wrong as far as the author is aware, but the author's context and understanding of many of the ideas it contains have been greatly expanded since the time the book was written. Some of the definitions of key concepts should have been worded a little more tightly than they were in the book. For example, the importance of proaction was not clear to the author at that time and was not included. Also, the author has learned that some of the phrases that discuss the concept of "implicit knowledge" could be interpreted to mean that it applies to things in the world independent of human consciousness and knowledge. That interpretation is incorrect. The wording should have been a little more constrained so it is clear that implicit knowledge applies only to the content of human awareness and knowledge, and that it is not intrinsic to the nature of things that exist independent of the biological processes of human consciousness. Some sentences about sensory perceptual identification may imply that the process tells us "what" exists in our environment. In fact, sensory perception only tells us that "some things exist" in whatever form we sense them, not what they are. The identification of what the things in our environment "are" is conceptual knowledge that only people are capable of knowing. There may be a few other similar examples, so the author hopes the reader will ignore such interpretations.)
How to Simulate Consciousness Using a Computer System
You may: Download Your Own Copy of Our Free Book
Note: For more technical details and explanation, see our RICX Perceptual Simulation and DLF Simulation technology architecture white papers. These two white papers summarize the new approach and theory we offer as solutions to long standing problems in the fields of digital biology and AL. Some of these ideas may also be applicable to AI, to whatever degree they can be disassociated from biology.
A few crucial key points: At Blue Oak, we freely admit we do not yet know to what degree consciousness and the more complex, conditional, goal-directed causality of life can be simulated using mechanistic computer and robot platforms with digital biology techniques. But the fact is biology itself is built from, "runs" on, and is animated by the mechanistic causal process "platform" of chemistry and physics. Chemistry and physics are mechanistic active systems, and there is an interface between those systems and the teleological active systems of the life processes of organisms. And while we are not yet clear on precisely what that interface is, we can say that it most likely involves the relational interaction of the identities of the cell wall, the DNA/RNA processes, the mitochondria that generate energy for living cells, and probably other factors of cellular functioning. Whatever this interface turns out to be when it is fully understood, the fact of this interface is what enables the two incompatible kinds of system logic to work smoothly together and make life possible. That is what interfaces do. Based on this idea as explained above, we believe it is possible to use the more complete context and relational nature of identity-interaction causality to identify, describe, and implement relationships that mimic biology with a new kind of goal-directed software we are developing. Then, by analogy to the manner in which the OSI model works so successfully for interconnecting incompatible computer network systems, we believe we can substitute the mechanistic platform of computer and robot systems for the chemistry-and-physics mechanistic causal layer in the system architecture, in order to animate a reasonable and useful simulation of some life processes and consciousness (see our book and white papers for details).
Of course, this will only work if the causal action context layers of life processes are also simulated properly as well in their own causal contexts, as we have explained above and other places in our documentation. It is absolutely essential that a simulated life-form can create its own firsthand relationships to reality in those active system layers that are similar to those that animals and people form, or the simulation system will not work. This latter idea is key to our whole enterprise: Consciousness is an unanalyzable, non-mechanistic state of awareness of some living organisms, a natural subject/object relationship that state of the art computer systems and robots simply do not have. But that is what our technology simulation designs are intended to duplicate by duplicating the causal relationships of organisms on a different platform from their natural one.
Learn more about the ideas that underlie our work. The book you will find at this link provides an overview of many of the new ideas our work is based on, and much more. This book will provide you with answers to many of the questions we have asked and is really a prerequisite to being able to more easily find the answers you need to design a new kind of AL simulation system that uses teleologic in addition to ordinary mechanistic logic.
To License Our Technology
If you wish to apply for a license or have questions about our patented DLF technology, please email us at firstname.lastname@example.org, and we will reply to you as soon as possible. Please include your name, address, phone number, title and organization (if any), and your interest in the technology, such as education, scientific research, product development, and so on.
Back to Top of Page
< Welcome Page
© Copyright 1998 - 2018, Blue Oak Mountain Technologies, Inc. All Rights Reserved
* STAR TREK and related marks are trademarks of Paramount Pictures Corporation.
** Notes taken at the lecture: Induction in Physics and Philosophy , Dr. Leonard Peikoff, copyright 2003, available at the AynRandBookstore.com