Thursday, 16 October 2014

Dispensing with the Moral/Thought Dictate

In my earlier post (today; I'm feeling quite brain-fart-y this morning), I described the emotional reward we get for learning lessons that have no physical nature. The very act of 'recognising' a pattern already triggers a reward in the brain, but seeing this recognition in another human seems to add a secondary, stronger effect: a negative reaction from that other human should move us to adapt our emotional reaction ('this doesn't work here'), while a positive reaction doubly rewards and confirms that pattern (as 'acceptance' also means 'survival' to the human brain). This is what we'd call 'morals', but it could just as well be called 'adapting to new environments'.

As a human first has to rely on their existing memory/emotion cocktail to judge a new environment, what they do next is driven by the need for survival: if the situation is dire and doesn't match any existing patterns, they have to use their closest existing patterns to determine a course of action in order to survive; if the situation is neither life-threatening nor challenging, they have the privilege of choice: either remember the situation or problem for later analysis, or simply dismiss it and forget about it. In a real-life alone-against-nature situation, the brain has no choice but to rely on reason: it will decide whether a situation is good for it or not, and adjust its neurone network accordingly. Yet if the brain is presented with an idea not represented in reality and doesn't dismiss it, it will try to 'make' it real by imagining it, and will attempt to make the same adjustments.

No matter how we name them or talk about them, our basest instincts revolve around self-preservation. In situations involving physical things, decisions are empirical and obvious, and 'working methods' are rewarded and categorised as 'good for survival'. But when it comes to untested, non-physical ideas, how does the brain decide whether an idea 'works' or not? The choice here is simple: trust your own judgement, or trust a more experienced someone else's; but first, trust your own ability to choose between the two.

The last choice is everything. If you don't think yourself able to 'know' or to 'decide', then you have no choice but to rely on someone else's judgement, meaning that you are dependent on that 'trust tree' I mentioned in my last post. And if an experience doesn't 'match' something 'taught' to you by anyone in your trust tree, depending on the level of danger, your brain will either dismiss the experience or remember it as something 'dangerous' (only 'confirming' the earlier mimicked 'lesson').

So for anyone who can't or won't rely on their own judgement, anything, anybody, or any idea originating from 'outside' that programmed sphere falls into a sometimes very scary 'unknown, can't judge, beware' category directly linked to our emotional, instinctive sense of self-preservation.

This is what both religion and totalitarianism tap into. Both divorce a human from its innate ability to judge the world for itself, make it impossible for a developing human to take its first steps in that self-sufficient state, and present (or impose) themselves as the sole recourse for any and all judgement in things both real and imagined, as though both are equally "true". To the untrained and non-rational brain, they might as well be.

Not only does this limit humans in their education and decision-making abilities, it makes entire populations rely on one source for all their decisions and guidance, instead of measuring themselves against reality and each other. Since that central source's judgement also encompasses what is dangerous or not, the people they 'lead' have no choice but to refer to their 'trust tree' for a 'measure' of safety, and to view anything not originating from it as potentially dangerous. If their 'moral guide' dictates that something or someone 'not of theirs' is dangerous, the brain will adjust its emotional and neurone networks accordingly, as though it were a real danger.

Now take two (or more) 'moral guides' competing for control over a single population trained to believe that they 'need' guidance: no wonder Christianity and Communism hated each other (and 'because atheism' my ass!). Now take two competing 'moral guides' who know that they can never persuade the other's following to follow them, so paint them as 'the enemy'. Now imagine that that central dictate is only educated in its own indoctrination methods and inept at everything else. Now imagine that the central dictate starts adjusting its dictates and descriptions of 'the enemy' only to accommodate its own wealth and survival... oh, throw some nuclear weapons in there, too. Scared yet?

But totalitarian regimes and religion are not the only guilty parties: the same mechanism exists, to a lesser degree, in advertising, Fox 'news' and politics, and we all aid and abet this system by 'confirming' (or rejecting) each other's choice of dictate.

If we want to fix this, before anything else we need to tell humans that they are able to make their own decisions, and that their brains are wired for it from birth. We no longer have to face nature to 'prove' that ability, but that doesn't mean we don't have it and shouldn't use it, because, after a ~10,000-year-long 'rationality-free' holiday, our very survival depends on it today.

PS: I just thought about someone's 'concentric circles of extremism', that the 'moderates' don't support what the 'extremists' do... f*ck that, ALL of them follow the same dictate, so it's the dictate's support or condemnation of whoever's action that speaks for all of them, since the dictate is the unique moral guide for all.

Decategorising motivation.

I've been going through acrobatics trying to map brain function onto existing definitions of human nature, but the latter shouldn't be used to describe the former; these contortions shouldn't be necessary, and they are in fact quite counterproductive to understanding, even my own. Instead we should take brain function as the starting point and categorise according to that: brain function is the motivation that drives a behaviour, yet that motivation, even when it has the same source and goal, somehow becomes a 'different thing' from one (social) topic to the next.

Our brain's most basic function is 'recognising' situations and dishing out the 'right' chemicals (emotions) as a reaction to them: defensive and active for danger, and passive and soothing for reward, and our survival depends on our brain matching these correctly. Yet this recognition needs to be educated: in our first moments in the outside world, the only 'safe' situations we recognise are those we already know from the womb: the warmth of mother. When we see mother trustingly and fearlessly associate with others around her (namely father), our definition of 'safe' will spread to them as well. And the 'tree of trust' will spread to whoever they trust, and so on and so on.
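The way this 'tree of trust' spreads can be sketched as a simple graph traversal. This is a toy model only; the names and the dictionary structure are invented for illustration, not anything neuroscientific:

```python
from collections import deque

def trusted_set(trust_links, root):
    """Spread trust outward from a single starting figure (the mother).

    trust_links maps each person to the people they visibly and
    fearlessly associate with; everyone reachable from the root
    ends up in the infant's 'safe' category.
    """
    safe = {root}
    queue = deque([root])
    while queue:
        person = queue.popleft()
        for other in trust_links.get(person, []):
            if other not in safe:
                safe.add(other)
                queue.append(other)
    return safe

# The infant trusts mother; mother trusts father; father trusts a neighbour.
links = {"mother": ["father"], "father": ["neighbour"]}
print(sorted(trusted_set(links, "mother")))  # ['father', 'mother', 'neighbour']
```

Anyone not reachable through the chain simply never enters the 'safe' set, which is the point: trust only propagates along observed associations.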

That's just our most primitive 'danger detection' mode. Added to that are the lessons trusted people teach us: we will not accept lessons from people outside this category. We may remember them, but we will not integrate them into our 'knowledge repertoire' until we think about them ourselves at a later date (if we ever do), or until that person later becomes part of the 'trusted' category. Either way, someone who is not trusted when they give a lesson will have no direct influence on our learning process.

In our younger years, if we are given an example of behaviour to imitate, we can only judge the 'success' with which we accomplish this imitation by empirical evidence (does the square peg fit in the round hole? No. The square one? Yes!) and the (emotional) reaction of the person making us do the exercise. I doubt that we even consider why we're doing that exercise at that age; we know only that we have to imitate the 'older, trusted, proof-of-successful-survival figure' in order to survive ourselves. Sometimes the lessons we are told to imitate, like learning words, have no physical aspect that we can confirm ourselves: we know only that, should we imitate a sound successfully, other humans will understand and provide an emotional response that is more or less the same from person to person. Yet, at a young age, we don't consider the why of those words; we know them only as successful methods of expressing our emotions and well-being, and the desires for and fears of the material things that affect these.

But as we grow older and the scope of our attention grows wider, we're going to notice functioning things and the behaviour of other humans (trusted or not), and we may want to try them out for ourselves without any prompting or guidance. For whatever reason, if a child chooses to act on their curiosity, their reward can only come from themselves.

This last bit is what intrigues me. In our society, it is hard to tell what motivates curiosity and a desire to try (new) things out for oneself. Yet it is very easy to understand in a 'do or die' situation... survival is the 'reward', and if we have never encountered that situation before, we will educate our emotions to respond 'accordingly' if we survive it.

You see, in describing this basic function in this context (referring to my earlier post), we're talking about both critical thinking and morals, or "recording an emotional response after testing". But it's more than that: this function is used in all aspects of our lives, yet it is named differently according to what motivates its use.

If we were to draw a diagram of 'human nature', we would have categories such as 'morals' and 'critical thinking' and 'learning' and 'feelings', but what we seem to be doing today is taking a single brain function and adding it, individually, as a separate entity, into each category. Instead, I propose making that common function a unique central 'thing', and linking the other categories to it.

Thursday, 4 September 2014

Morals: Independent Thought?

Morality is a very individualistic thing: it is an 'internal conclusion' that affects our personal interaction, as individuals, with the world around us. Morality is a balance between emotion (instinct), memory (examples of other humans, etc.) and rational thought; it's the mix of all three that makes us human and individuals.

Without rational thought, if a situation requires an immediate reaction, we will search our memory for a similar already-learned experience as a guide: if one is found, our emotional reaction to that will decide our actions, and if there is none, a panic ('fight or flight') reaction ensues.

Yet with rational thought, almost in parallel with the above process, we are able to 'calculate' the situation beyond 'knee-jerk' instincts of self-preservation: our brain can compare how one remembered course of action may be better than another, it can consider an action's effects on the surroundings, on other people and even what future consequences those actions will have. Neuroscience shows us that, if our brain decides that the rational conclusion is 'better' than the instinctive one, it will override it.

Yet many in a religious or totalitarian regime, at least the followers, have no use for rational thought: their actions are based on a 'punishment or reward' reaction to situations shown (and often only 'explained') to them by someone else, usually a 'trusted leader'.

In everyday interactions, if a situation or someone's answer 'matches' with something a follower was taught, the emotion attached to that memory (what they were 'taught to feel') will dictate their action: if they get a match with something in the 'good' category, their brain will give them a chemical reward and permission to continue the action; if it is in the 'bad' category, they will 'reject' the situation or their (instinctive) defence mechanism may be activated; and if there is no match at all (and they don't feel in danger), there will most likely be no reaction at all — that 'deer in the headlights' look.

What the above paragraph really describes is our childhood learning process. When our frontal lobes (where neuroscience shows us that rational thought and 'morality' are seated) reach maturity in late adolescence, our brain (well, the model of it promoted by evolution) normally expects us to start using them, but somehow, in many people, this 'switch' never happens.

Mimicking lessons that promote self-preservation and/or personal reward is not 'morals'.



Thursday, 14 August 2014

Critical Thinking: an Art we traded for Agriculture.

In years before agriculture, man lived in smaller groups where skills were most likely not divided amongst its members. This would mean that an individual would have to have a complete skill set to survive, and be able to process the never-ending variables that nature threw at them. This was critical thinking: humans then had a choice between using it, or death.

Neuroscience has recently shown us that the brain is pre-wired (but developing through adolescence) so that any cortex neurone has the potential to connect to any other, directly or indirectly, at any distance, in the brain. We would not have this nerve structure if evolution hadn't promoted it as a 'successful' model.

Below I will try to explain how we used to use our brain, and compare that with how we use it today.

This essay contains some references to neuroscience, so here first is a short description of how neurones and neurone networks work.

A basic neurone resembles a cell with coral-like 'arms': multiple 'receiver' arms, the dendrites, project at all angles from most of its circumference, while a single slender 'sender' arm, the axon, usually much longer than the dendrites, extends to connect to other neurones' dendrites through axon terminal branches of its own. Most axons are very short, but even if one extends past a neurone it would like to connect to, it can sprout a terminal 'branch' anywhere along its length.


Basic structure of the human neurone

Most of our cortex neurones reside in an outer layer that we call grey matter, and they are immobile once 'placed' during brain development. Below this (towards the interior of the brain) is a layer devoted to the longer axon arms carrying signals between neurones in different parts of the brain. These axons have a myelin sheath that is thin and almost transparent when the neurone is unused, but grows thick to better protect and strengthen the neurone's signal when it is used often; the myelin's whitish colour gives this area of the brain its name, white matter.



fMRI scan of White Matter (axon connections) between distant neurones in the human brain

Only very recently was it discovered that these axon arms connecting different regions of the brain extend, in a pattern much resembling a map of Manhattan, to all extremities of the brain, allowing almost any cortex neurone the possibility of connecting to another even distant one (and other deeper centres of the brain).


===


The brain is 'wired' for critical thought from birth. Thanks to recent (f)MRI and PET brain-scan technologies, we can see how different regions of the brain are linked together, meaning that any neurone in our cerebral cortex has the potential to connect, either directly or indirectly (through other neurones), to any other.

The question we have to ask ourselves is: Why are our brains that way? If we had never had use for the extensive neurone-connecting abilities of the human brain, why did evolution promote that model as 'successful'?

If we look back at our evolution, we'd see that we spent most of it, hundreds of thousands of years at least, as hunter-gatherers. Hunter-gatherers lived in small groups, moved with animal migration, and lived in caves and temporary shelters; they were practically one with nature. I would imagine that each individual would have to have a complete survival skill set, as tasks don't seem to have been divided between community members as trades in those days - perhaps between the sexes, but there is little proof supporting that idea, either.

Anyhow, these skills had to be taught to younger humans: hunting and gathering for everyday survival, as well as the dangers that wild animals represented. I'm sure these lessons were quite strict, as any deviation from them would be a threat to the tribe's survival. Medicinal knowledge and theories about the origins of the elements and other natural phenomena were probably practised and passed along by a select few tribe members. In all, the 'unexplainable' aside, the methods they passed down through the generations were tried and true, almost a science in those times.

A human that must fend for itself against nature to provide sustenance for itself (and eventually its family) would at least have to be close to maturity in body, so we can assume that the time until then was spent on education. Yet this education would be worthless if the young human never broke the bond with the rest of the tribe to forage for himself for the first time; while in his group of 'trusted teacher' tribe members (and he probably, by instinct, feared anyone else), he depended on them for approval or disapproval of his imitations of their methods, but eventually he would have to test them against nature with only his survival as a judge. The 'walkabout', still existing in Australian aborigine tradition, is an example of this: take what you've learned (from your elders and ancestors) and use it to fend for yourself, or die. The switch to critical (independent) thinking was not a choice then; it was a matter of survival.

Yet before that initiation, if a piece of information has only been tested through imitation, against a trusted member's expression of approval or disapproval, the human brain can only categorise it (with other similar information) linked to the emotion generated when that judgement was given (and most likely linked to the (emotional) information connected to the trusted tribe member themselves); this is completely at odds with the context of a real-life situation.

Here's an exercise for the sake of example: consider a task that you repeat so often that it has even become mundane. Can you remember who taught it to you? Now consider another subject that you learned but have had little-to-no experience actually using. I'm sure you still remember its teacher quite well.

When a young human sees an animal in a 'real-life' hunting situation for the first time, it becomes an actual goal (and a means of survival), and everything about his lessons changes. Place yourself in the situation of the young hunter: you're about to embark on your first one-on-one with an animal, and your lessons now have to relate to ~it~ (not to dear teacher), so thoughts about your education process are not the first thing on your mind. What you are experiencing now is ~yours~ (and you may at first feel afraid, which will only heighten your senses and accelerate your processing): see how the animal parts the bushes as it runs into the forest; your brain will make a direct link from that observation to the size and direction of that animal (among other conditions), and when you enter the forest in the right direction and actually see the animal again, that earlier connection will be labelled 'success' and the 'teacher approval' filter will be needed no more. Did you lose the animal again when you entered a clearing? Notice that smell, note the wind direction and follow it; again, if you are successful, your connection will be rewarded and 'confirmed'.

That night when you dream, you will re-enact those events, making the new connections that 'worked' even stronger (added axon terminal arms, dendrite arms, and myelination), and should you encounter the same situation the next day (using those neurones again), the connection will become stronger still. Any slight variation to those circumstances will add additional information to the established links, making a 'hunting' neurone network that is your very own creation, your tested experience alone. One can imagine that with a lifetime of experiences such as these (in all methods of survival), the complexity of our neural networks must have grown great indeed.
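The 'use it and it strengthens' idea can be caricatured as a weight that grows on every activation. This is a sketch only; the link names and the integer 'strength' counter are invented for illustration and have nothing to do with measured myelination:

```python
def reinforce(weights, pathway, gain=1):
    """Every successful reuse of a connection 'thickens its myelin',
    modelled here as a weight that grows with each activation."""
    for link in pathway:
        weights[link] = weights.get(link, 0) + gain
    return weights

w = {}
hunt = [("parted bushes", "animal direction")]
reinforce(w, hunt)   # first success in the field
reinforce(w, hunt)   # replayed in that night's dream
reinforce(w, hunt)   # reused the next day
print(w)  # {('parted bushes', 'animal direction'): 3}
```

Variations on the circumstances would simply add new links alongside the old ones, so the 'hunting' network accumulates both strength (repeated links) and breadth (new links) from tested experience alone.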

===

Enter agriculture. This invention, only 12,000 years ago, flew in the face of some 200,000 years of evolution and tradition. No longer was a human at odds with nature, as it needed to stray no further than the boundaries of its habitat to collect its needed nourishment; many 'old' lessons about nature and survival were no longer needed, no longer given, and no longer tested. Community size grew, and the work required by agriculture was divided between its members; it was no longer necessary to have a full survival skill set to earn one's sustenance. Repeated tool-use skills in a sedentary environment require much less critical thinking than the ever-changing circumstances of nature.

So, even though agriculture reduced the skill requirements for survival, the human brain was still 'wired' to handle them. And even though the human brain was wired to make direct inter-neurone 'conclusion' connections of its own (as a requirement for survival), it no longer encountered the circumstances nor the motivation to do so.

Yet because of our evolution and instincts, even in village (agricultural community) life, central leader role models remained, and were promoted to important places in society. Fewer were trained to brave the dangers of 'outside the village' (and these often became leaders), and even fewer had the occasion to test those skills. So from then, rumoured dangers, because untested, remained in the 'feared unknown' category, and directly linked in the brain to the 'authoritative' person who spoke of them. I imagine that over the years those stories, because they were untested/untestable, grew increasingly fanciful, and that the person telling them became an increasingly central village figure. This is probably how religion began.

So let's fast-forward to today, a mere 12,000 years later. This time period is next to nothing in the scale of our evolution, so our modern brains are practically unchanged from the hunter-gatherer model evolution favoured, yet with our cities protecting us from both the elements and nature, we are even ~less~ required and motivated to make the transition to the independent critical thinking that 200,000 years of evolution prepared our brains for.

The timing of that transition can change, too. In pre-agriculture days we had no choice but to remain in a protective environment with our untested learnings until we were physically strong enough to confront nature on our own, but today, thanks to information technology and the (non-dangerous) nature of the things we learn, we can test any idea or information at any time in our lives... if we want to.

Wednesday, 29 January 2014

Understanding the Theist Mindset.

This is perhaps already obvious to many of you, but I had a bit of a 'release' revelation a few days ago; I'm much less daunted by theist discussion thanks to it. Sorry if this sounds pompous, but I'd like to share.

'Pigeon Chess' is the best analogy I've heard so far to describe an atheist/theist discussion, but I kept wondering about ~how~ a theist manages to deny/dismiss fact/evidence even when it's right in front of them.

If you think back to our education, we spend the first part of our lives building our minds by mimicking a few trusted 'authority figures' who are supposed to show us what's good and dangerous in the world (while being doubtful of, or in fear of, anyone and anything else), but eventually we gain enough experience to start drawing conclusions of our own from what we experience around us.

Religious people are just people who have never left that first stage. Just like children, they focus on their 'authority figures' for (emotional) reward and punishment; they simply 'blank' any information from any other source as 'wrong' or 'bad'.

I can almost compare a 'follower' education to training a lab rat: the reward is food if he does the 'good thing', and punishment is an electric shock if he does the 'bad'. Eventually the rat will grow to fear certain things and appreciate others, even without an actual reward being given. When released into the world, he will regard with incomprehension (and perhaps fear) anything different from the environment he was trained in, and run to his 'education environment' for safety if he can. Trained rats together will behave the same way, but as a pack.
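The rat analogy above can be caricatured as a running tally of learned emotion per stimulus. A toy sketch, not a model of real conditioning; the stimuli and the numeric scale are invented for the example:

```python
def condition(associations, stimulus, outcome, step=1):
    """Reward/punishment training: nudge the emotion attached to a
    stimulus up for 'food' and down for 'shock'."""
    delta = step if outcome == "food" else -step
    associations[stimulus] = associations.get(stimulus, 0) + delta
    return associations

rat = {}
for _ in range(3):
    condition(rat, "green light", "food")
    condition(rat, "red light", "shock")

# Long after training, the learned emotion persists with no reward present:
print(rat["green light"] > 0)  # True: appreciated, approached
print(rat["red light"] < 0)    # True: feared, avoided
```

Anything never seen during training has no entry at all, which mirrors the incomprehension (and possible fear) the released rat shows towards a new environment.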

The key here is emotion: the 'reward' for a theist comes directly from a leader approving a followed behavioural pattern, whereas the 'reward' for a thinker comes (first) from ~himself~ when he achieves understanding and uses it to a successful result/conclusion.

So, for a theist, any information not from certain sources, or outside their programmed behavioural pattern, ~doesn't even register~ if it is not approved by their leader or fellow followers, much in the same way 'god' doesn't register for atheists.

For religious leaders, all that matters is that their followers continue to focus on them for education, reward and punishment; one could even argue that the content of the doctrine used to establish and maintain this dependency system... isn't even important.

Sunday, 26 January 2014

Everything is light and time - what about the 'other side' ?

If my earlier idea was true, that would mean that every 'up' quark would exist as a 'down' quark in the spacetime-construct direction opposite to ours (or 'opposite dimension'). This really bothered me, as it would mean that, in the dimension opposite to ours, our world would be perfectly mirrored in antimatter.

Until I considered the 'zero point' between the two dimensions. It's a 'zero point', right? It may be possible that anything originating from it in our dimension could find its complement somewhere else in the other:



I also doubt that the axes of these 'zero points' are 'aligned' with one another... imagine a cloud of striped billiard balls rotating in all directions with no synchronisation at all. All that matters is that the two dimensions' 'sides' are directly opposed to each other.

Antimatter exists and has been produced, and it has been demonstrated that antimatter annihilates matter... but if the above were true, a fermion annihilated in our dimension would also be annihilated in the other. Once the energy maintaining the time-space rip is gone, the opposing time-constructs will annihilate each other as well.

The above idea describes two dimensions that would 'zero out' in all their aspects if all their matter/energy were destroyed, meaning a return to a 'perfect state' of nothing; it is... disturbing, to say the least.


Saturday, 25 January 2014

Religion vs. Rationality - a simple exercise to demonstrate why we can't communicate.

Theist/rationalist debates have always been a source of frustration.  The reason for this: our value systems are completely different, and the 'horizons' we use to orient ourselves are not even comparable. Consider the following two diagrams:




...now try to take one item from one diagram and place it in the other. It's a difficult task... more than likely a theist would group all the 'science items' at the same level, whereas a rationalist would group all of the 'faith-dependent' items at the same level on their graph.

Perhaps my bias shows in the choice of items on each chart, but all I wanted to do was show the 'horizon' of our respective value systems. It would be an interesting exercise to take all the items from both charts, place them in a box, then ask an interviewee to place them all on one chart, then the other. I'm sure a theist would place things like 'bible veracity' under 'demonstrable' even if it isn't... but that would only further highlight our value differences, wouldn't it?