Thursday 16 October 2014

Dispensing with the Moral/Thought Dictate

In my earlier post (from earlier today; I'm feeling quite brain-fart-y this morning), I described the emotional reward we get for learning lessons that have no physical nature. The very act of 'recognising' a pattern already triggers a reward in the brain, but seeing that recognition mirrored in another human seems to add a secondary, stronger effect: a negative reaction from that other human moves us to adapt our emotional reaction ('this doesn't work here'), while a positive reaction doubly rewards and confirms that pattern (as 'acceptance' also means 'survival' to the human brain). This is what we'd call 'morals', but it could just as well be called 'adapting to new environments'.

As a human must first rely on its existing memory/emotion cocktail to judge a new environment, what it does next is based on its need for survival. If the situation is dire and doesn't match any existing patterns, the human has to use those existing patterns to do its best to determine a course of action and survive; if the situation is not life-threatening or challenging, the human has the privilege of choice: either remembering the situation or problem for later analysis, or simply dismissing it and forgetting about it. In a real-life alone-against-nature situation, the brain has no choice but to rely on reason: it will decide whether a situation is good for it or not, and adjust its neurone network accordingly. Yet if the brain is presented with an idea not represented in reality, and that idea is not dismissed, it will try to 'make' it real by imagining it, and will attempt to make the same adjustments.
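To make that decision flow concrete, here's a toy sketch of it in Python. Everything in it (the Brain class, the numeric 'emotional values', the averaging 'best guess') is my own invention for illustration, not a claim about how actual neurones work:

    class Brain:
        def __init__(self):
            # learned associations: situation or idea -> emotional value (+ good, - bad)
            self.patterns = {}

        def respond(self, situation, dire=False):
            if situation in self.patterns:
                # recognised: reuse the existing lesson
                return self.patterns[situation]
            if dire:
                # no time to deliberate: generalise from existing patterns,
                # act on that guess, and record surviving it as a lesson
                guess = self._best_guess()
                self.patterns[situation] = guess
                return guess
            # not life-threatening: the privilege of choice, so remember it
            # for later analysis or simply dismiss and forget it
            return None

        def imagine(self, idea, assumed_value):
            # an idea with no physical referent, if not dismissed, is 'made real'
            # by imagination and recorded exactly like a lived experience
            self.patterns[idea] = assumed_value

        def _best_guess(self):
            # crude stand-in for generalising from whatever patterns exist
            if not self.patterns:
                return 0.0
            return sum(self.patterns.values()) / len(self.patterns)

    brain = Brain()
    brain.respond("strange rustle in the bushes", dire=True)  # forced best guess
    brain.imagine("the bushes are haunted", -1.0)             # imagined, yet recorded as real
    print(brain.patterns)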

No matter how we name them or talk about them, our basest instincts revolve around self-preservation. In situations that involve physical things, decisions are empirical and obvious, and 'working methods' are rewarded and categorised as 'good for survival'. But how does the brain decide whether a new, untested non-physical idea 'works' or not? The choice here is simple: trust your own judgement, or trust a more experienced someone else's; but first, trust your own ability to choose between the two.

The last choice is everything. If you don't think yourself able to 'know' or to 'decide', then you have no choice but to rely on someone else's judgement, meaning that you are dependent on that 'trust tree' I mentioned in my last post. And if an experience doesn't 'match' something 'taught' to you by anyone in your trust tree, then, depending on the level of danger, your brain will either dismiss the experience or remember it as something 'dangerous' (only 'confirming' the earlier mimicked 'lesson').

So, for anyone who can't or won't rely on their own judgement, anything, anybody, or any idea originating from 'outside' that programmed sphere falls into a sometimes very scary 'unknown, can't judge, beware' category directly linked to our emotional, instinctive sense of self-preservation.

This is what both religion and totalitarianism tap into. Both divorce a human from its innate ability to judge the world for itself, make it impossible for a developing human to take its first steps in that self-sufficient state, and present (or impose) themselves as the sole recourse for any and all judgement in things both real and imagined, as though they were equally 'true'. To the untrained and non-rational brain, they might as well be.

Not only does this limit humans in their education and decision-making abilities, it makes entire populations rely on one source for all their decisions and guidance instead of measuring themselves against reality and each other. Since that central source's judgement also encompasses what is or isn't dangerous, the people it 'leads' have no choice but to refer to their 'trust tree' for a 'measure' of safety, and to view anything not originating from it as potentially dangerous. If their 'moral guide' dictates that something or someone 'not of theirs' is dangerous, the brain will adjust its emotional and neurone networks accordingly, as though it were a real danger.

Now take two (or more) 'moral guides' competing for control over a single population trained to believe that they 'need' guidance: no wonder Christianity and Communism hated each other (and 'because atheism' my ass!). Now take two competing 'moral guides' who know that they can never persuade the other's following to follow them, so they paint each other as 'the enemy'. Now imagine that that central dictate is only educated in its own indoctrination methods and inept at everything else. Now imagine that the central dictate starts adjusting its dictates and descriptions of 'the enemy' only to accommodate its own wealth and survival... oh, and throw some nuclear weapons in there, too. Scared yet?

But totalitarian regimes and religion are not the only guilty parties: the same mechanism exists to a lesser degree in advertising, Fox 'news', and politics, and we all aid and abet this system by 'confirming' (or rejecting) each other's choice of dictate.

If we want to fix this, before anything else we need to tell humans that they are able to make their own decisions, and that their brains are wired for it from birth. We no longer have to face nature to 'prove' that ability, but that doesn't mean we don't have it and shouldn't use it, because, after a ~10,000-year-long 'rationality-free' holiday, our very survival depends on it today.

PS: I just thought about someone's 'concentric circles of extremism' argument, the idea that the 'moderates' don't support what the 'extremists' do... f*ck that. ALL of them follow the same dictate, so it's the dictate's support or condemnation of anyone's actions that speaks for all of them, since the dictate is the unique moral guide for all.

Decategorising Motivation

I've been going through acrobatics trying to map brain function onto existing definitions of human nature, but the latter shouldn't be used to describe the former. These contortions shouldn't be necessary, and are in fact quite counterproductive to understanding, even my own. Instead, we should take the brain function itself and categorise according to that: brain function is the motivation that drives a behaviour, yet that motivation, even when it has the same source and goal, somehow becomes a 'different thing' from one (social) topic to the next.

Our brain's most basic function is 'recognising' situations and dishing out the 'right' chemicals (emotions) in reaction to them: defensive and active for danger, passive and soothing for reward. Our survival depends on our brain matching these correctly. Yet this recognition needs to be educated: in our first moments in the outside world, the only 'safe' situations we recognise are those we already know from the womb: the warmth of mother. When we see mother trustingly and fearlessly associate with others around her (namely father), our definition of 'safe' spreads to them as well. And the 'tree of trust' spreads to whoever they trust, and so on and so on.
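Mechanically, that spreading is just a walk through a graph. Here's a minimal sketch (all names invented, purely illustrative): the child's 'trusted' set is everyone reachable from mother through chains of trust.

    # who trusts whom, as seen by the child
    trusts = {
        "mother": ["father", "grandmother"],
        "father": ["neighbour"],
    }

    def trusted_set(trusts, root="mother"):
        """Everyone reachable from the root through chains of trust."""
        trusted, frontier = set(), [root]
        while frontier:
            person = frontier.pop()
            if person not in trusted:
                trusted.add(person)
                frontier.extend(trusts.get(person, []))
        return trusted

    print(trusted_set(trusts))
    # {'mother', 'father', 'grandmother', 'neighbour'} (in some order)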

That's just our most primitive 'danger detection' mode. Added to that are the lessons trusted people teach us: we tend not to accept lessons from people outside this category, and although we may remember those 'lesson attempts', we will not integrate them into our 'knowledge repertoire' until we think about them ourselves at a later date (if we ever do), or until that person somehow becomes part of the 'trusted' category. Either way, someone who is not trusted at the moment they give a lesson will have no direct influence on our learning process.

In our younger years, if we are given an example of behaviour to imitate, we can only judge the 'success' with which we accomplish this imitation by empirical evidence (does the square peg fit in the round hole? No. The square one? Yes!) and the (emotional) reaction of the person making us do the exercise. I doubt that we even consider why we're doing that exercise in our younger years; we know only that we have to imitate the 'older, trusted figure who is proof of successful survival' in order to survive ourselves. Sometimes the lessons we are told to imitate, like learning words, have no physical aspect that we can confirm ourselves: we know only that, should we imitate a sound successfully, other humans will understand and provide an emotional response that is more or less the same from person to person. Yet, at a young age, we don't consider the why of those words; we know them only as successful methods of expressing our emotions and well-being, and the desires for and fears of the material things that affect these.

But as we grow older and the scope of our attention widens, we begin to notice how things work and how other humans (trusted or not) behave, and we may want to try those things out for ourselves without any prompting or guidance. Whatever the reason, if a child chooses to act on their curiosity, their reward can only come from themselves.

This last bit is what intrigues me. In our society, it's hard to tell what motivates curiosity and a desire to try (new) things out for oneself. Yet it is very easy to understand in a 'do or die' situation... survival is the 'reward', and if we have never encountered that situation before, we will educate our emotions to respond 'accordingly' if we survive it.

You see, in describing this basic function in this context (and referring back to my earlier post), we're talking about both critical thinking and morals here, or 'recording an emotional response after testing'. But it's more than that: this same function is used in all aspects of our lives, yet it is named differently depending on what motivates its use.

If we were to draw a diagram of 'human nature', we would have categories such as 'morals', 'critical thinking', 'learning', and 'feelings'; what we seem to be doing today is taking a single brain function and adding it, individually, as a separate entity, into each category. Instead, I propose making that common function a unique central 'thing' and linking the other categories to it.
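And here is what that re-drawn diagram could look like in a few lines of Python; a sketch only, with the function name and category labels invented for the occasion. The point is that every category references one and the same central function instead of containing its own copy:

    def test_and_record(memory, experience, outcome):
        """The single shared function: record an emotional response after testing."""
        memory[experience] = outcome
        return outcome

    # today's diagrams duplicate the function inside every category;
    # the proposal links each category to the one central function instead
    categories = {
        "morals":            test_and_record,
        "critical thinking": test_and_record,
        "learning":          test_and_record,
        "feelings":          test_and_record,
    }

    memory = {}
    categories["morals"](memory, "sharing food", "approval")
    categories["learning"](memory, "square peg, round hole", "doesn't fit")
    print(memory)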