In the early development of my ideas on explaining spiritual wisdom in naturalistic terms, I took a route that attempted to explain oneness by way of an a priori argument. Having had over two years to develop my ideas and the underlying argument, I no longer think this particular approach is the most fruitful or even the most convincing. Having reviewed these old, disjointed posts in another context, I thought it a good opportunity to revise and join the argument into a single post for future reference. This is not to be taken as the non plus ultra of my current thought (that will follow shortly :-) ); it is merely a long overdue dusting off of an old approach to make it look a bit less convoluted and impenetrable. I may revise this argument at a later date, but I see much bigger promise in building on the much more basic and more easily accessible argument underlying multilevel selection theory. Anywho… without further ado, an attempted formulaic deduction of the spiritual idea of non-duality after the jump:

Let us assume an ecology of i agents denoted as A(i). Each agent possesses an explicit utility function Fe(i); a level of knowledge, cognitive complexity, experience, available resources and rationality – in short, capital C(i); as well as a level of trust toward each other agent, T(ia->ib), ranging from 0 (no trust) to 1 (perfectly trusting). All agents are set in an endlessly reiterated game in which each seeks to maximize its explicit utility with the available capital.
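Before walking through the scenarios, here is a minimal sketch of this setup in Python. The `Agent` class, the representation of Fe(i) as a weight table over goals and the collapsing of capital into a single number are my illustrative assumptions, not part of the argument:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One agent A(i) in the ecology."""
    name: str
    utility: dict[str, float]   # explicit utility function Fe(i), as weights over goals
    capital: float              # capital C(i): knowledge, resources, rationality, ...
    trust: dict[str, float] = field(default_factory=dict)  # T(i->j) in [0, 1]

    def trust_in(self, other: "Agent") -> float:
        """Trust this agent places in another; defaults to 0 (no trust)."""
        return self.trust.get(other.name, 0.0)

# Two perfectly trusting agents with identical utility functions and capital:
a = Agent("a", utility={"shared goal": 1.0}, capital=10.0, trust={"b": 1.0})
b = Agent("b", utility={"shared goal": 1.0}, capital=10.0, trust={"a": 1.0})
print(a.trust_in(b))  # 1.0 -- the setting of scenario 1 below
```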

Scenario 1: Two agents have effectively identical utility functions, Fe(a) = Fe(b), equal capital, C(a) = C(b), and a high level of mutual trust, T(a->b) = T(b->a) ≈ 1. They will quickly agree on the way forward, pool their resources and execute their joint plan.

Scenario 2: Again we assume Fe(a) = Fe(b), however now C(a) > C(b), and again T(a->b) = T(b->a) = 1. The more capable agent will devise a plan; the less capable agent will contribute its resources and execute the trusted plan.

Scenario 3: Fe(a) = Fe(b) and C(a) > C(b), but this time T(a->b) = 1 and T(b->a) = 0.5, meaning the less capable agent assumes with a probability of 50% that A(a) is in fact a self-serving optimizer whose diverging plan will turn out to be detrimental to the utility of A(b), while A(a) is certain that it is all just one big misunderstanding. The optimal plan devised under scenario 2 will now face opposition from A(b), although it would in fact be in A(b)’s best interest to support it with its resources in order to maximize Fe(b); A(a), in turn, will see A(b)’s objection as detrimental to maximizing their shared utility function. Owing to the lack of trust and the difference in capability, each agent perceives the other’s plan as irrational from its respective point of view.
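To see why A(b)’s objection is rational under partial trust, consider a small expected-value calculation; the payoff numbers are invented for illustration:

```python
def expected_value_to_b(trust_b_in_a: float,
                        gain_if_honest: float,
                        loss_if_selfish: float) -> float:
    """A(b)'s expected payoff from backing A(a)'s plan under partial trust.

    With probability T(b->a) the plan serves the shared Fe; with the
    remaining probability A(a) is a self-serving optimizer whose plan
    costs A(b) `loss_if_selfish`.
    """
    return trust_b_in_a * gain_if_honest - (1.0 - trust_b_in_a) * loss_if_selfish

# Scenario 3 with T(b->a) = 0.5: a plan worth 10 to A(b) if A(a) is honest,
# but costing 12 if A(a) is self-serving, looks like a losing bet to A(b):
print(expected_value_to_b(0.5, 10.0, 12.0))  # -1.0, so A(b) rationally objects
```

From A(a)’s fully trusting vantage point the same plan is worth the full 10, which is exactly the mutual appearance of irrationality described above.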

Scenario 4: Fe(a) ≠ Fe(b) – both agents seek to maximize largely mutually exclusive utility functions, resulting in each desiring to minimize opposition by the other.

Under scenarios 3 and 4, both agents have a variety of strategies at their disposal:

  1. refuse to pool part or all of one’s resources
  2. use resources to sabotage the other agent’s plan
  3. deceive the other agent in order to skew how it deploys strategies 1 and 2
  4. spend resources to explain the plan to the other agent
  5. spend resources to understand the other agent’s plan better
  6. strike a compromise to ensure a higher level of pooled resources and minimize resistance in the other agent

Strategy 1 is a given under scenarios 3 and 4. Strategy 2 is risky, particularly since deploying it would further reduce trust on both sides should the other party find out; the same holds for strategy 3. Strategy 4 appears appropriate but may not always be feasible, particularly where differences in C(i) among the agents are large. Strategy 5 is plausible at a fairly high level of trust but an utter waste of resources under scenario 4. Most likely, however, is strategy 6: striking a compromise builds trust in repeated encounters, and thus promises less objection and a higher total payoff in the future, while at the same time minimizing objection costs from agents with at least minimal overlap in Fe(i).
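A rough repeated-game sketch of why strategy 6 tends to win out; the linear trust growth and all payoff numbers are simplifying assumptions of mine:

```python
def compromise_payoff(rounds: int,
                      concession_cost: float,
                      opposition_cost: float,
                      trust_gain_per_round: float,
                      trust: float = 0.5) -> float:
    """Cumulative payoff of repeatedly striking a compromise (strategy 6).

    Each round the agent pays `concession_cost` but avoids a share of
    `opposition_cost` proportional to the current level of trust; every
    honoured compromise raises trust a little (capped at 1).
    """
    total = 0.0
    for _ in range(rounds):
        total += trust * opposition_cost - concession_cost  # opposition avoided minus concession
        trust = min(1.0, trust + trust_gain_per_round)      # compromise builds trust
    return total

# Even a costly concession pays for itself once trust compounds:
print(compromise_payoff(rounds=10, concession_cost=2.0,
                        opposition_cost=6.0, trust_gain_per_round=0.1))  # ~31
```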

Let us further assume that there exists a utility function Fm, the adoption of which maximizes an agent’s chances of staying in the existential game (Slobodkin & Rapoport 1974). The difference between an agent’s Fe(i) and Fm is denoted as FΔ(i), where 0 represents Fe(i) = Fm and 1 represents no overlap between Fe(i) and Fm.
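One way to make FΔ(i) computable is to treat Fe(i) and Fm as normalized weights over goals and take one minus their overlap; this particular overlap measure is an assumption of mine, and nothing in the argument hinges on it:

```python
def f_delta(fe: dict[str, float], fm: dict[str, float]) -> float:
    """FΔ(i): distance between an agent's Fe(i) and Fm, in [0, 1].

    Both utility functions are normalized weight tables over goals;
    0 means Fe(i) = Fm, 1 means no overlap at all.
    """
    goals = set(fe) | set(fm)
    overlap = sum(min(fe.get(g, 0.0), fm.get(g, 0.0)) for g in goals)
    return 1.0 - overlap

# An agent whose weights overlap Fm by 0.7 sits at FΔ = 0.3:
fm = {"survive": 0.6, "reproduce": 0.4}
fe = {"survive": 0.5, "reproduce": 0.2, "status": 0.3}
print(round(f_delta(fe, fm), 2))  # 0.3
```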

Scenario 1: With a high FΔ(i), agents will turn their resources into utility in such a way that it is detrimental to their continued participation in the existential game, and will either evolve their Fe(i) to more closely approximate Fm or consequently cease to exist.

Scenario 2: With a low FΔ(i), agents will turn their resources into utility in such a way that it contributes positively to their continued participation in the existential game, and will on average outcompete those with a higher FΔ(i).
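A toy simulation of this selection pressure; treating 1 − FΔ(i) as a per-round survival probability and adding a small drift toward Fm are both illustrative assumptions:

```python
import random

def selection_round(population: list[float]) -> list[float]:
    """One generation of the existential game over a population of FΔ values.

    Agents survive with probability 1 - FΔ; survivors also drift slightly
    toward Fm, mirroring 'evolve Fe(i) toward Fm or cease to exist'.
    """
    survivors = [fd for fd in population if random.random() < 1.0 - fd]
    return [max(0.0, fd - 0.05) for fd in survivors]

random.seed(0)
pop = [random.random() for _ in range(1000)]  # FΔ initially uniform on [0, 1)
for _ in range(20):
    pop = selection_round(pop)
mean = sum(pop) / max(len(pop), 1)
print(len(pop), round(mean, 3))  # the survivors cluster near FΔ = 0
```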

In summary, an agent will either have to evolve its Fe(i) to ever more closely approximate Fm, or end up using its resources in a way that results in its removal from the existential game through natural selection. An agent increases its chances of staying in the existential game not only by having a lower FΔ(i), but – as shown in the first thought experiment – by respecting all agents’ Fe(i) irrespective of their FΔ(i), always striking the most rational compromise and thereby minimizing opposition costs from, and securing maximum future cooperation by, all other agents. What is crucial to understand in this context is that microeconomic deliberations dictate that the utility lost by spending resources on a rational compromise between a low FΔ(i) agent and a high FΔ(i) agent would have to be equivalent to the reduced utility the low FΔ(i) agent would suffer through opposition from the high FΔ(i) agent if no compromise were struck at all. An agent realizing these dynamics would be compelled to be equally concerned for the other as for the self, out of an interest in remaining in the existential game alone. On this basis it can be argued that oneness (i.e. the radical identification of the self with the other) becomes the highest form of meaning, i.e. the highest form of adaptive truth, when aiming to maximize one’s chances of staying in the existential game – the reason being that what one does to another, one quite literally does to oneself from the perspective of evolutionary dynamics.
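The break-even condition described here reduces to a one-line inequality; the optional future-cooperation term is my addition to capture the repeated-game benefit mentioned above:

```python
def should_compromise(concession_cost: float,
                      opposition_cost: float,
                      future_cooperation_gain: float = 0.0) -> bool:
    """Is striking the compromise rational?

    The utility given up in the concession must not exceed the utility
    that the other agent's opposition would destroy, plus whatever
    future cooperation the compromise secures.
    """
    return concession_cost <= opposition_cost + future_cooperation_gain

print(should_compromise(concession_cost=5.0, opposition_cost=8.0))  # True
print(should_compromise(concession_cost=9.0, opposition_cost=8.0))  # False
print(should_compromise(9.0, 8.0, future_cooperation_gain=2.0))     # True once the future counts
```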

It is important to note in this context that striking a compromise does not always mean supporting an agent (e.g. a suicide bomber). For agents with an FΔ(i) above 0.5 – that is, past the boundary at which an agent goes from barely contributing to staying in the existential game to being detrimental to remaining in it – the compromise turns from support into opposition, defense or even offense, as sketched below. One notable factor is missing from the presented model of human interaction that very much pertains to the real world, namely that human beings compensate for dying by having offspring. Since values, belief systems, religions and the many other factors constituting the utility function in the presented model – which we can sum up as culture – are largely transmitted vertically from parents to children (Boyd & Richerson 1988, pp. 49-51), representing an agent and its descendants as a single agent is justified. Omitting reproduction from the above model therefore does not invalidate the conclusions we can draw from it, nor their application to the human condition.
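A minimal decision-rule sketch of that boundary; the sharp 0.5 cutoff and the function name `stance_toward` are illustrative assumptions rather than part of the model:

```python
def stance_toward(f_delta_other: float) -> str:
    """Map another agent's FΔ to the rational form of 'compromise'.

    Below 0.5 the other agent still contributes, on balance, to staying
    in the existential game and earns support; above 0.5 it is net
    detrimental and the compromise turns into opposition or defense.
    """
    return "support" if f_delta_other < 0.5 else "oppose"

print(stance_toward(0.2))  # support: Fe largely overlaps Fm
print(stance_toward(0.9))  # oppose: e.g. the suicide bomber case
```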

Bibliography

Boyd, R & Richerson, PJ 1988, Culture and the evolutionary process, University of Chicago Press.

Slobodkin, L & Rapoport, A 1974, ‘An optimal strategy of evolution’, Quarterly Review of Biology, vol. 49, pp. 181-200.
