What is life and what is its purpose?
Life is defined by the preservation of a bit of order in the midst of a Universe tending, inexorably, unremittingly towards disorder. Life is the temporary maintenance of low-entropy islands in an ever more entropic sea.
While all the world around it is dispersing and smoothing out, life maintains barriers along which energy is harvested from a stochastically-evolving environment. These individual, low-entropy islands preserve themselves by predicting the evolution of the environment around them and using those predictions to extract useful energy most effectively, thereby sustaining themselves.
So we are reminded of Maxwell’s Demon (or is it Daemon?), creating a selectively-permeable membrane capable of reducing overall entropy in a closed system. The rub with Maxwell’s Demon and with life: predictions require memory, memory requires space, quantized space is finite, therefore memory must be finite, therefore memory must be erased, and memory erasure increases entropy. Thus, over time, all of the “gains” produced by entropy reduction enabled by accurate predictions are precisely canceled out by the necessary act of forgetting.
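The erasure cost invoked here is Landauer’s principle: erasing one bit of memory dissipates at least k_B·T·ln 2 of energy as heat. A quick back-of-the-envelope calculation at room temperature (Python used purely for illustration):

```python
import math

# Landauer's principle: erasing one bit of memory dissipates
# at least k_B * T * ln(2) of energy as heat.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI redefinition)
T = 300.0           # roughly room temperature, K

landauer_limit = k_B * T * math.log(2)  # joules per erased bit
print(f"{landauer_limit:.3e} J per bit erased at {T:.0f} K")
```

Tiny per bit, but it is a strict floor: no demon, biological or otherwise, erases its memory for free.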
The Universe is bound by thermodynamics. We can thus, at best, hope to take out a low-entropy loan for some period, which the repo men of the Universe ensure we repay when it comes due. The question then becomes, simply: how do we extend the duration, increase the principal, or reduce the interest on our loan of life from the Universe?
Here, we must consider more precisely what it means to predict a stochastically-evolving environment. At base, we must construct a model–a set of rules by which we can generate a prediction of the state at time t+1 given features of prior states at time t-i..t. Then, we must observe and remember our state features from t-i..t, apply our model, and come up with a prediction for state t+1 (obviously, our prediction needn’t be a single state but is likely a distribution over possible states, perhaps with a distribution over payoffs associated with each).
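The loop described above can be sketched as a tiny order-k predictor that remembers recent states and emits a distribution over the next one. This is a minimal sketch, not a claim about how any real organism models its world; the class name and the weather states are invented for illustration:

```python
from collections import Counter, defaultdict

# A minimal sketch of the model described above: given an order-k
# history of observed states (t-i..t), predict a distribution over
# the state at t+1 by counting observed transitions.
class HistoryPredictor:
    def __init__(self, k=2):
        self.k = k                        # how many past states we remember
        self.counts = defaultdict(Counter)

    def observe(self, sequence):
        # Learn transition counts from an observed state sequence.
        for t in range(self.k, len(sequence)):
            history = tuple(sequence[t - self.k:t])
            self.counts[history][sequence[t]] += 1

    def predict(self, history):
        # Return a distribution over possible next states, not a single state.
        c = self.counts[tuple(history)]
        total = sum(c.values())
        return {state: n / total for state, n in c.items()} if total else {}

model = HistoryPredictor(k=2)
model.observe("sunny sunny rain rain sunny sunny rain rain sunny".split())
print(model.predict(["sunny", "sunny"]))
```

Note that the prediction is, as the text says, a distribution over states; a payoff distribution could be layered on top in the same way.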
Our limiting thermodynamic factor is memory, so our primary goal, as living beings, is to construct a model such that the combined size of the model itself and of the input features it requires is minimized while the accuracy of its predictions is maximized.
This is a point to stop and think a bit. We have just defined, quite precisely, the purpose of life, our prime directive as living creatures. We are to construct the most efficient model of reality and act in accordance with it.
What does this imply about epistemology?
What is real and what is true? While humanity considered the problem of collectives quite seriously in antiquity, we seem to have lost interest in it in our modern times. Yet it is central to our understanding of reality.
Each of us is composed of billions of molecules, each composed of lots of atoms, themselves composed of lots of subatomic particles, the whole mess constantly bombarded by outside particles, subjected to quantum fluctuations, interspersed with virtual particles bursting momentarily into being. Yet I label this amalgam, “me.” Though the particles in “me” may exchange with the particles in “you” and though the constituents of “me” may leave and be replaced by foreign stand-ins, still, I say that I am “me” and you are “you.” It seems that our understanding of reality is dependent on the scale at which we undertake our examination. Is the Ship of Theseus, patched over centuries until no board of the original remains, still the same ship?
One uniquely unsatisfying answer to this is to reject our pedestrian notions of reality and identity and to write off the whole affair as an artifact of language, a quirk in our brains (though, how one might explain the persistence of any attribute of our brains without a proper treatment of the problem of collectives remains a mystery to me).
On the other hand, the central tenet of Objectivism is that reality exists. Objectivist epistemology notes “that the concept ‘unit’ involves an act of consciousness (a selective focus, a certain way of regarding things), but that it is not an arbitrary creation of consciousness: it is a method of identification or classification according to the attributes which a consciousness observes in reality.” That is, there is a reality–the constituent objects of which we each are made–and there are many valid views of that reality, projecting a set of identities and thus creating a set of units and concepts–the labeling of “humans” and “me” vs “you” even though the constituents of each of us may vary and even exchange.
These views of reality (e.g. concepts or abstractions) are “a mental integration of two or more units possessing the same distinguishing characteristic(s), with their particular measurements omitted.” Having integrated units into a concept, one may then differentiate, identifying sub-concepts once subsumed under the integrated concept, by specifying some additional constraints.
Objectivism rightly holds that the distinguishing characteristics used to set one concept apart from others vary with context. As we gain deeper understanding of a subject, we become better able to differentiate one collective from another, or may find that two previously separated collectives are better treated as one. Still, we are left wanting a more satisfying means of determining which conceptual boundaries are “better” drawn than others.
The natural extension of the Objectivist treatment of concept formation in light of the thermodynamic definition of life gives us an answer. A “good” concept is the integration of units according to some set of characteristics such that the concept identified allows for a model of reality with increased space-efficiency of prediction.
So as we approach the problem of collectives with this thermodynamic lens, we rate conceptual integrations of and differentiations between objects in reality by how much each such lumping together or splitting apart improves our ability to predict the future or reduces the size of the model and the state required to make those predictions.
Sometimes, a concept serves primarily to decrease model and state size. For example, identifying planets in place of their trillions of constituent particles allows for equivalently-good predictions of their motion but requires only a handful of state variables (size, mass, position) instead of trillions.
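The planet example can be made concrete: many particle records collapse into a single aggregate of total mass plus center of mass, which is (to good approximation, for distant bodies) all that Newtonian gravity depends on. The particle data below is invented for illustration:

```python
import random

# Illustrative sketch: replacing many constituent particles with a single
# "planet" concept (total mass + center of mass). For gravity between
# distant bodies, these aggregates are (approximately) all that matters.
random.seed(0)
particles = [
    {"m": random.uniform(1.0, 2.0),
     "pos": (random.uniform(-1, 1), random.uniform(-1, 1))}
    for _ in range(10_000)
]

def as_planet(particles):
    # Compress N state records into one: total mass and mass-weighted position.
    M = sum(p["m"] for p in particles)
    cx = sum(p["m"] * p["pos"][0] for p in particles) / M
    cy = sum(p["m"] * p["pos"][1] for p in particles) / M
    return {"m": M, "pos": (cx, cy)}

planet = as_planet(particles)
print(f"10,000 particle records compressed to one: mass={planet['m']:.1f}")
```

Tens of thousands of state variables reduce to three, with no loss in predictive quality for the motions we care about.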
In other cases, a concept serves primarily to increase prediction quality. For example, identifying a table as an object with a flat surface capable of supporting other objects allows a model to more reliably predict how it will interact with other objects than if it were more directly represented as five pieces of wood nailed together.
In still other cases, concepts serve a useful dual purpose. The concept of “person” serves to compress billions of state variables about atoms into thousands of state variables about a person and it allows a model to significantly improve the accuracy of predicting what response smiling at any particular person is likely to elicit.
In all such cases, the creation of the concept is a “good” overlay on top of reality because it allows for better compression and/or better prediction. That is, abstraction is compression and good compression is the differentiating aspect of living beings.
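The claim that abstraction is compression can be illustrated directly with a general-purpose compressor: a stream generated by a simple rule shrinks dramatically, while a patternless stream of the same length barely shrinks at all. The data here is invented for the sketch:

```python
import random
import zlib

# "Abstraction is compression": data governed by a discoverable rule
# compresses far better than patternless data of the same length,
# because the compressor effectively finds and exploits the rule.
rule_governed = ("sunny rain " * 500).encode()  # a simple repeating "law"
random.seed(0)
patternless = bytes(random.randrange(256) for _ in range(len(rule_governed)))

structured_size = len(zlib.compress(rule_governed, 9))
random_size = len(zlib.compress(patternless, 9))
print(structured_size, random_size)
```

The compressor, in effect, discovers the concept “sunny rain, repeated” and stores that instead of the raw stream; no such concept exists for the random bytes.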
What does this say about worthy fields of endeavor?
Description is inferior to, though may be a necessary precondition of, synthesis, summary, and prediction. Voluminous descriptions of what is or what was, the cataloging of facts, may have some appeal insofar as they make some feel that they understand an aspect of reality. Yet facts are not models. One must take the leap to synthesize facts, to create identities of groupings of objects, properly integrated and suitably differentiated, and apply human reasoning to arrive at meta facts. Meta facts describe the evolution of facts describing state at times t-i..t into facts at times t+1..inf. Meta facts are our model of reality, and it is the creation of these concepts that properly fulfills the purpose of life.
One plant may “catalog” the directions from which the sun reaches it at different points in time. Another arrives at the meta fact that growing towards the light enhances its ability to stave off a thermodynamic smearing out. The latter thrives, having more closely adhered to the purpose of life.
Similarly, in the search for how to spend one’s time, those who synthesize and create higher-level concepts that subsume much of the predictive value in lower-level facts ought to consider themselves as living in closer harmony with our purpose. Put simply: intelligence beats knowledge any day.
What is the job of a computer scientist?
Reality is composed of facts. Computers are perfectly capable of closely modeling individual facts and their evolution over time. The archetypical business-minded folks provide requirements in terms of how specific concretions interact and change.
Computer scientists ought to operate on a higher plane–as we said, intelligence beats knowledge. The job of the computer scientist is to extrapolate, to integrate, and to differentiate. Given the concretions that our archetypical business person spoke of, our role is to identify groupings and meta facts about those groupings that allow us to more efficiently model the evolution of any properly-identified concrete object.
Imagine a business need to generate a report of earnings per hour from a table of jobs, times, and job payments. The concrete task is to simply sum up each hour’s earnings. A level above is to realize that we can identify a concept of “things that have a value and happen at a time” and develop a meta system to calculate any such thing’s sum of values per hour. The next level is to identify time as an instance of a concept of “things for which there exists a mapping from the thing (time) to another thing (hours)” and to develop a meta system to calculate the sum per element of the range of the mapping. Another level is to identify a sum as an instance of a concept of “things that may be calculated by a function over the prior value and the current value (a reduction)” and to develop a meta system to calculate arbitrary reductions per element of the range of an arbitrary mapping.
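The most general of the levels above–an arbitrary reduction per element of the range of an arbitrary mapping–can be sketched directly, then specialized back down to the concrete hourly-earnings report. The record fields and helper names here are assumptions for illustration, not a prescribed design:

```python
from collections import defaultdict

# The most general level: apply an arbitrary reduction per element
# of the range of an arbitrary mapping.
def reduce_by(items, mapping, reduction, initial):
    # Group items by mapping(item), then fold each group with `reduction`.
    groups = defaultdict(lambda: initial)
    for item in items:
        groups[mapping(item)] = reduction(groups[mapping(item)], item)
    return dict(groups)

# Specializing back down to the concrete report: earnings per hour.
jobs = [
    {"job": "mow",    "time": "2024-01-01T09:15", "payment": 40},
    {"job": "rake",   "time": "2024-01-01T09:45", "payment": 25},
    {"job": "shovel", "time": "2024-01-01T10:30", "payment": 60},
]
hour = lambda j: j["time"][:13]                  # the time -> hour mapping
add_payment = lambda acc, j: acc + j["payment"]  # the reduction
print(reduce_by(jobs, hour, add_payment, 0))
```

The hourly sum of payouts or the weekly average of payouts then require only a different mapping or reduction, not a different model.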
What allows us to say that these examples are presented in order of increasing “correctness?” Each allows us to model a broader swath of reality with a smaller model and the same-sized input. If we have similar reports for the hourly sum of earnings, hourly sum of payouts, and weekly average of payouts, the first example requires 3x the number of similarly-sized models as the last.
The role of the computer scientist is, simply, to invent concepts such that we are more likely to inhabit the world of the last example than the first because our systems are then closer to harmony with the thermodynamics of life. When we say that a system is elegant, what we mean is that it has highly efficient conceptual compression–it is composed of “good” concepts.
It is the job of the computer scientist to delve ever deeper into the meta. Not only must the models we construct predict and enact the proper functioning of the concretions of reality but our models must predict and enable the evolution and creation of future such models. That is, we must anticipate how future computer scientists will need to make use of the models and concepts that we have invented.
It is the directness with which we deal with our models of reality, and the existence and prominence of this game inside the game–this recursive model building–that make computer science fairly unique and, perhaps second only to mathematics, deserving of a spot near the top of the fields of human endeavor that, practiced properly, bring one closest to harmony with the purpose of life.
Recommended reading for all computer scientists and fellow modelers of reality: Introduction to Objectivist Epistemology