Sunday, 24 May 2015

The Why of Laughter.

I was listening to a podcast earlier ("Very Bad Wizards" - always a pleasure, excellent work guys, thanks ; ) about humour in general... the types of humour, what's funny or not, when things are funny. It was an enlightening and fun experience, so if you want to listen for yourself, you can find it here.

But this is something that I'd been thinking about for decades. Black humour, nonsensical humour, slapstick humour: what do all these have in common?

I do know that when the brain is tracking 'movement', depending on the 'anticipation level', our subconscious will be trying to 'predict' what will happen next. I think this is the 'link' between humour types: almost all types of humour 'break' from the pattern that we'd normally expect.

Whether it is humour or not would (I guess) depend on the circumstances, but I think it comes down to our relationship with the source of the humour: in most cases I can think of, it is a relaxed state of trust. It could be another person, a television... and add to this the idea (IMHO) that our decision-making consciousness is almost a persona in itself (one that can be trusted/mistrusted by our subconscious). So, from a position of trust, our senses receive a description or circumstance that breaks from 'the predicted', yet, if the result is inoffensive in nature, we may even see sense in it. A sort of "I wasn't expecting that, but it fits." The "ha ha ha (how wrong I was to think that way)" may be just... sociological conditioning, an expression of a... "you got me"?

Going on to the part about "humour we don't find humorous at all", we may simply be switching off the 'prediction/anticipation' brain function because we aren't interested in knowing what happens next, and that would also cancel any further reaction.

Just my two cents on a (still) mysterious subject.

Thursday, 21 May 2015

Beyond the Village... again?

Correction: Before Critical Thinking, 'Value Judgements'

I'd like to revise my earlier position on critical thinking: thought is a two-level process, and critical thinking is an 'activated' second level above the base 'weighing and comparing options' function that governs most of our decision making. I had made it sound earlier as though critical thinking was our entire thought process.

I like to call this emotion-led 'weighing and comparing options' process 'value judgements'. As I noted earlier, every memory we store is associated with an emotional response neurone; this is how we determine the 'value' of that memory (otherwise any object or experience, a loved one or an apple, would have the same 'weight' in our minds). When confronted with a situation, the mind a) recalls similar situations (and attributing elements), then b) 'weighs' each memory recalled by the strength/resonance of its respective signal against the others; the most 'positive match' and 'appropriate' response will win out, which will lead the brain to send the signals/chemical responses needed for the decided course of action.

This is not critical thinking; it is a 'comparative memory' process. It includes even 'new' situations, as data from sensory input becomes a comparable memory as soon as it is stored, even temporarily.
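
As a toy sketch of the 'comparative memory' loop described above (entirely my own framing and numbers, not a claim about actual neural implementation): recall the memories that share elements with the current situation, weigh each by the strength of its attached emotional signal, and act on the strongest match.

```python
# Toy model of the 'value judgement' process: recall similar memories,
# weigh each by emotional signal strength, act on the best match.
# All tags, weights and responses here are illustrative assumptions.

def value_judgement(situation_tags, memories):
    """Pick the stored response whose memory best matches the situation,
    weighted by the strength (resonance) of its emotional signal."""
    best, best_score = None, 0.0
    for memory in memories:
        # 'recall': how many elements of the situation this memory shares
        overlap = len(situation_tags & memory["tags"])
        if overlap == 0:
            continue  # not recalled at all
        # signal strength counts whether the emotion is positive or negative
        score = overlap * abs(memory["emotion"])
        if score > best_score:
            best_score, best = score, memory
    return best["response"] if best else "store for later / ignore"

memories = [
    {"tags": {"red", "ball"}, "emotion": +2.0, "response": "play"},
    {"tags": {"red", "fire"}, "emotion": -3.0, "response": "avoid"},
]
print(value_judgement({"red", "ball"}, memories))  # -> play
print(value_judgement({"red", "fire"}, memories))  # -> avoid
```

A genuinely new situation (no overlapping tags at all) falls through to the "store for later / ignore" branch, matching the point that new sensory input only becomes comparable once stored.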

Critical Thinking from an Evolutionary Point of View

I keep referring to the hunter-gatherer period in which we spent most of our evolution. Much of our time then was spent foraging and hunting, and our clan camps were a place to keep and protect our young in larger groups (against nature); without the latter, there would be no point in even having a village. So, most of our time was spent outside of its protection, one-on-one with nature (and at war with other clans).

The latter situation was where critical thinking was most important and most used: it was a survival tool more than a key to inventiveness. It allowed us, instead of following the same patterns again and again (like 'dumb' animals), to create that 'other option' that would let us break our predictability and 'trick' our opponent (who is stuck in their 'known behaviour' thought-process, unable to read our minds)... and this is the main reason we survived as a species through the ages: the ability to break a predictable pattern.

This tool is practically useless in a village (clan camp) environment. Only the results of it could be brought back there in the form of stories that could be remembered and later imitated by other clan members. People then did not share or pass on information for the simple ideal of posterity, but in the aim of survival, and the techniques passed to others were probably dictated by the evolving environment and survival needs therein; it is possible that many techniques faded to oblivion when no longer required by nature or war, which meant that human life was an e'er-evolving, often repeating, state of constant adaptation. 

Agriculture to the Cities

This basic evolution still more or less holds true, but our invention of agriculture changed everything. Critical thinking became reserved for agriculture itself (and even then, it evolved very little; choosing the best grains from each crop is not a process of invention) and for the defence of a growing village (and methods of attacking others). Once needed by everyone who confronted nature, critical thinking came to be practised by a very few, who either became the village/city leaders or allied themselves with them, while the 'common villagers' were left to imitate and to reason by 'value judgement' within the educative limits defined by the leadership.

Even this holds true today. The only change was that, probably after centuries of this follower-imitating-leadership behaviour, a group of ambitious yet idle observers discovered that a human could be made dependent on leadership for even value judgements; like critical thinking, these were also survival tools for an individual against nature, but in a protective village environment, even these became optional. In short, the former 'protect and educate' village environment that, in hunter-gatherer times, applied only in early life (before an adult was obliged to leave and fend for itself against nature), became a lifelong process for those who gave up, or were prevented from, making value judgements for themselves. All religions and totalitarian regimes have their root in this.

The Enlightenment through Today

The invention of writing, and widespread literacy, changed everything yet again. No longer were authority and education the responsibility of a select few (who could only pass them on orally); people could share information amongst themselves, and judge it for themselves. Still, the prevalent authoritarian system, and societal pressure, prevented people from doing this, although these roadblocks to personal, individual enlightenment have eroded slowly over the years. The few critical thinkers practicing science and critical thought in general could record their experiments for posterity (and for validation by anyone from later generations able to think for themselves)... this too was a game-changer, which is why so many libraries went up in totalitarian/religion-lit flames throughout history.

The invention of the internet, though, changes everything yet again. We are in the age of information, but this time, an information that can only with difficulty be banned or burned. For the first time in human history, absolutely everyone has not only access to this information, but the invitation to assess and test it for themselves: doing so requires not only the will to make value judgements for oneself, but also critical thinking. With access to this information, a growing human will develop these abilities naturally; this explains the recent surge of activity in those seeking to make young humans disbelieve, and even fear, the possibility of thinking for oneself.

We are at a crossroads between two extremes today: either fundamentalist 'uni(non)thought' wins out and the distribution of information is controlled or banned, or, for the first time in human history, humans will use their 'outside the village' abilities, once again as individuals, to judge their own actions, and each other's, for themselves, within the village confines.

If the latter were to happen, we would become a real, less leadership-'thought'-dependent, post-modern society. As we once did against nature, we will once again decide rationally (not through peer-pressure imitation), but this time as both individuals and a group, what's best for ourselves.

Sunday, 18 January 2015

Super-Gamma EMW to Fermion process (or vice versa)

(Interlude music)
I think I've got how the most basic fermions combine initially, but I'm still fighting with my brain over the 'load balancing' part once they're combined... the two same-charge fermions somehow transfer their differences between themselves... and the combinations would be of opposing charge (thus would annihilate themselves anyway). There's something utterly mathematically simplistic about this, but it is just beyond me...

Saturday, 10 January 2015

Discussion is essential to clarity - 'everything' in a nutshell.

Just putting this here for posterity... I've never been able to express it so succinctly before.

"Something's holding that quark-energy in place, otherwise it would just dissipate. There's a force resulting from the 'finding balance' struggle between the two (that something and the energy it's binding), and gravity is its residue.

IMHO, of course."

"I had an idea that the centre of every quark was a rip in the spacetime continuum... a gateway to 'absolute nothing', and a quark is energy that is bound by its trying to get 'back' to that zero state. Kind of like... (scratching head) Flushing pasta down the toilet? LOL - but the strands would become interlocked, forming a ring that would keep the whole from being flushed down... I -have- to think of a better analogy ; P

But if I were to go further down the rabbit hole, that 'zero point' would have to be something in itself, but it would make even more sense (complete sense, IMHO) if 'our side' matter was matched by something on 'the other side', and that force was -across- that zero point... like a fermion pair trying to annihilate each other. And that force would be gravity."

"My idea goes like this: energy (EMW) levels above a certain level (super-gamma, probably) make a spacetime rip, making its path change from a straight one to a 'swirl' around the rip. Only EMW's of a certain frequency can have any stability (think a wobbling, rotating top - 'wrong' frequencies would rip themselves apart (and be sucked in)), but 'right' frequencies, stable, form matter. And the different 'right' frequency levels determine the size of the resulting fermion."

"I have absolutely -no- education in this domain, but I've always been processing ideas to see how things 'fit'... and I like 'seeing' patterns, too. Today I see everything as a 'zero point' and a parabolic energy curve away from it... well, two parabolic curves opposing each other, one energy and the other, the 'pull' towards that zero point.

It even makes sense to me that the 'strong force' and the 'nuclear force' are just variations of gravity... if you follow even Newtonian physics all the way to quantum level, the 'pull' close to that fermion-level 'zero point' must be ENORMOUS... and so must be the energy. We already know that the 'binding energy' of atoms is enormous (A-bomb, etc), but take that up one level to quarks... wow.

And taking that even -further- to the 'fermion pair annihilation'... Tyson spoke of 'event horizons' where one of the pair would escape, but what of energy behaviour in a quantum soup: what if one half of a pair 'bound' to another (different-frequency) fermion before it could annihilate itself against its same-frequency opposite?"

(comment indicating equivalence principle)

I can see how the math works out for the equivalence principle, but I have a problem with its application, especially in questions of time dilation... time does vary with the strength of a gravitational field, and the math says that the same time variation also applies to an object under acceleration (because of the equivalence principle), but I don't see sense in that - I'm of the persuasion that time dilation (and gravity) can only be calculated relative to a mass itself.

The math works out because of the -difference between the two objects-. A mass on its own might as well be standing completely still, its mass (and gravitational pull and time dilation at its surface) constant and unvaried -until it encounters another-. Only -then-, upon collision, do the different velocities/masses count - I think it is an error to put all of that 'inertia' into an object if there is no other to compare it to, and even more of an error to say that time affects that object because of that (hypothetically) increased 'gravity'.
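
For reference, the two standard textbook formulas the discussion leans on (my addition, not from the original post): gravitational time dilation at a distance r from a mass M, and the kinematic (special-relativistic) dilation between observers in relative motion.

```latex
% Gravitational time dilation at distance r from a mass M (Schwarzschild):
t_0 = t_f \sqrt{1 - \frac{2GM}{r c^2}}

% Kinematic (special-relativistic) time dilation at relative velocity v:
\Delta t' = \frac{\Delta t}{\sqrt{1 - v^2/c^2}}
```

Note that the first depends only on a mass and the distance from it, while the second is only defined between two observers in relative motion - the 'relative to what?' distinction being argued here.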

But that's just my humble opinion.

Friday, 2 January 2015

The Beauty of Being Wrong

Just a short entry today after witnessing one-too-many pointless 'saving face' back-and-forths: this for me really defines intellectual honesty, and shows whether one is really using their critical thinking abilities or just putting on a show of doing so.

We all shape our communication from the knowledge we have managed to accumulate up until that point in the conversation where we have to use it. We all have varying degrees of trust in different pieces of knowledge: some may be empirical, some may be hearsay, but we don't really think much about this distinction when we are tapping it. A conversation should be a great occasion to test that knowledge, yet more often than not I see it used as an occasion to 'show' knowledge as a badge of stature, and any questioning of it is taken as an offense.

This is a sure sign that the person speaking has created an 'illusion' of themselves that they are presenting just as much to themselves as the person they are speaking to... almost a third person, some sort of mystical 'authority' that should be revered and defended without question. And this creation is also a result of wanting to cater to whatever (we think) another person 'wants' or 'needs'.

Yet we can't see into other minds, nor can we 'know' anything with absolute certainty. All we have to operate on is 'to the best of our knowledge', and if a conversation is to have any intellectual honesty, the knowledge of both/all parties should be open to (and even begging) questioning and testing. I guess this is what we'd call 'constructive conversation'.

If I am unsure about an element of knowledge I am using to make a point, this should show in my emotional display, and should be an invitation for someone else to provide a better solution if they have one. If a better solution is provided, it is not an offense - au contraire! If their point is valid and, better still, tested, they have actually increased my wealth of knowledge through their experience, making me a better person... what a gift!

Saturday, 13 December 2014

Memory/Emotion/Critical Thought in the Human Brain

The brain is amazing. Seemingly complicated, but amazingly simple in its function: input, recognition; test, accept, reject; and finally... combine and create.

Above is an fMRI scan of the axon paths through the human brain. Axons are neurone connectors: to reiterate earlier explanations, every neurone has many receptors (dendrites), but only one output (axon) to carry the neurone charge to another neurone... if it fires. Whether it fires or not (and to what degree) is the result of the sum of input it receives from other neurones.
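
That 'sum of inputs decides firing' mechanism can be sketched with the classic artificial-neurone abstraction (a deliberate simplification and my own illustration, not a biological model):

```python
# A neurone sums its weighted dendrite inputs; its single axon 'fires'
# only if the total passes a threshold. Positive weights stand in for
# 'accelerator' (excitatory) connections, negative for 'inhibitor' ones.
# All numbers here are illustrative.

def neurone_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted sum of inputs reaches the firing threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Two excitatory inputs together outweigh one inhibitory input:
print(neurone_fires([1.0, 1.0, 1.0], [0.6, 0.7, -0.2]))  # -> True
# One excitatory input minus the inhibitory one falls short:
print(neurone_fires([1.0, 0.0, 1.0], [0.6, 0.7, -0.2]))  # -> False
```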

The network of neurones behind a single memory is a complicated thing... not only are there the neurones forming the memory itself, but also their connections to 'inhibitor' and 'accelerator' neurones that will affect their relation to other neurones, namely in the later 'conclusion' part of the thinking process. To make things simpler, I've eliminated all those secondary (yet crucial) connections in my illustrations, to show just the process result.
The above is the simplest of memories: empirical memory. First off, your brain will decide for you whether an event is even worth remembering; if it is, it will be linked with the emotion that made it seem 'important', a link that will be handed off to the neurone used to permanently store the memory (if the brain deems it important enough). These emotion connections can be trained by further 'matching' input from empirical... or 'trusted' sources.

These 'test types' are crucial to the human brain. Even before we became rational creatures, we were mimicking ones: whereas 'lower' creatures had to rely on empirical testing for memory 'validation', our ancestors learned to copy the behaviour of 'similars' who had already tried and tested the concerned techniques and circumstances. Unless trained otherwise, the brain tends to ignore (and even reject) any information not coming from someone in the 'similar', or 'trusted', category.

Yet we only 'recognise' or 'match' things already in our memory. If we've had experience with a red ball, and the brain deems it important enough, it will be one of the things our (supposed) subconscious 'watches out for' in our environment. If it is detected, the 'recognition' trigger will be accompanied by the memory's associated emotion (playing with the ball, for example, as opposed to being hit on the head with it). Though at first the emotional 'importance' of each memory may seem quite stark and distinct, it will become 'muted' with repetition ('lesser importance') and other associated/more important events.
Now enter critical thinking, the 'extra level' that makes us human. It's that region of constantly 'slow-firing' neurones in the brain that, when activated, will 'test' existing memories against each other to create a third 'possibility' that can become a 'memory' of its own: if a) a rock is heavy and b) a muddy slope is slippery, then c) a heavy rock on a muddy slope will slide. Untested, the new idea will generate but a vague 'warning, watch out for this' emotion attachment, but if someone having that thought later sees that rock slide (and avoids it to survive), that thought becomes an empirically confirmed fact that can be related to others (with that 'warning' emotion) until they can empirically test it themselves.

Yet consider the experiment: if a) All Terriers are dogs, and b) all dogs are animals, then c) all Terriers are animals. This is an exercise concerning only the mind (and communication with other minds) and categorisation - it cannot be tested empirically, and can only be 'confirmed' by the emotional reaction (understanding, approval or not) of the person receiving the idea. Yet it uses the same survival-tool technique as the first example.
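
Both the rock example and the Terrier example are the same move: chaining two stored relations into a third. A toy sketch of that transitive chaining (the dictionary and names are my own illustration, not a model of neural wiring):

```python
# Chain 'X is a Y' facts to derive new ones, mirroring the
# Terrier -> dog -> animal syllogism above.

IS_A = {
    "terrier": "dog",
    "dog": "animal",
}

def is_a(thing, category):
    """Follow the 'is a' chain upward until we reach the category or run out."""
    while thing in IS_A:
        thing = IS_A[thing]
        if thing == category:
            return True
    return False

print(is_a("terrier", "animal"))  # -> True: a new 'fact' never stored directly
print(is_a("dog", "terrier"))     # -> False: the chain only runs one way
```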

I mentioned earlier that emotions can be trained. In the empirical world, this is a clear-cut affair (with, say, an initial fear reaction to fire dulled by an education in its uses, and experience with them), but when it comes to ideas, it all depends on where the ideas come from. Without critical thinking, the origin of an idea is just as, if not more, important than the idea itself. If a trusted source tells you that circumstance A will result in B, and you repeat their lesson and get a positive response, then the same source tells you that circumstance A will result in C, you will attribute that same emotional 'reward' to both lessons, even if they are contradictory.

Yet a critical thinker mulling over circumstance A may weigh it against other circumstances/criteria/memories to conclude that, in fact, circumstance B following from circumstance A doesn't make any sense. The very act of this consideration removes the 'confirmation' emotion's dependence on the teacher, and if the result of the reflection is tested empirically to a conclusive result, it may dampen, negate or even convert to 'warning' all former emotional connections to the source of the information. But the concrete conclusion of this exercise is that the emotional reward becomes a personal one.

Once one experiences this personal conclusion/reward for the first time, it seems to 'validate' for the brain the utility of the critical-thinking technique as a whole. I guess this is the 'switch' I was trying to locate in my earlier posts on this subject.

Thursday, 16 October 2014

Dispensing with the Moral/Thought Dictate

In my earlier post (today, I'm feeling quite brain-fart-y this morning), I described the emotional reward we get for learning lessons that have no physical nature. The very act of 'recognising' a pattern already triggers a reward in the brain, but seeing this recognition in another human seems to add a secondary, stronger effect: a negative reaction from that other human should move us to adapt our emotional reaction ('this doesn't work here'), and a positive reaction would doubly reward and confirm that pattern (as 'acceptance' also means 'survival' to the human brain). This is what we'd call 'morals', but it can also be called 'adapting to new environments'.

As a human first has to rely on its existing memory/emotion cocktail to judge the new environment, what it does next is based on its need for survival. If the situation is dire, and doesn't match any existing patterns, it has to use those patterns to do its best to determine a course of action in order to survive; if the situation is not life-threatening or challenging, the human has the privilege of choice: either remembering the situation or problem for later analysis, or simply dismissing and forgetting it. In a real-life alone-against-nature situation, the brain has no choice but to rely on reason: it will decide if a situation is good for it or not, and adjust its neurone network accordingly; yet if the brain is presented with an idea not represented in reality, and that idea is not dismissed, it will try to 'make' it real by imagining it, and will attempt to make the same adjustments.

No matter how we name them or talk about them, our basest instincts are around self-preservation. In situations that involve physical things, decisions are empirical and obvious, and 'working methods' are rewarded and categorised 'good for survival', but when it comes to untested new non-physical ideas, how does the brain decide if a non-physical idea 'works' or not? The choice here is simple: trust your own judgement, or trust a more experienced someone else's, but first, trust your own ability to choose between the two.

The last choice is everything. If you don't think yourself able to 'know' or to 'decide', then you have no choice but to rely on someone else's judgement, meaning that you are dependent on that 'trust tree' I mentioned in my last post. And if an experience doesn't 'match' something 'taught' to you by anyone in your trust tree, depending on the level of danger, your brain will either dismiss the experience or remember it as something 'dangerous' (only 'confirming' the earlier mimicked 'lesson').

So for anyone who can't or won't rely on their own judgement, anything, anybody, or any idea originating from 'outside' that programmed sphere falls into a sometimes very scary 'unknown, can't judge, beware' category directly linked to our emotional, instinctive sense of self-preservation.

This is what both religion and totalitarianism tap into. Both divorce from a human its innate ability to judge the world for itself, make it impossible for a developing human to take its first steps in that self-sufficient state, and present (or impose) themselves as a sole recourse for any and all judgement in things both real and imagined as though they are equally "true". To the untrained and non-rational brain, they might as well be.

This limits humans not only in their education and decision-making abilities, it makes entire populations, instead of measuring themselves against reality and each other, rely on one source for all their decisions and guidance. Since that central source's judgement also encompasses what is dangerous or not, the people they 'lead' have no choice but to refer to their 'trust tree' for a 'measure' of safety, and view anything not originating from it as potentially dangerous. If their 'moral guide' dictates that something or someone 'not of theirs' is dangerous, the brain will adjust its emotional and neurone networks accordingly as though it was a real danger.

Now take two (or more) 'moral guides' competing for control over a single population trained to believe that they 'need' guidance: no wonder Christianity and Communism hated each other (and 'because atheism' my ass!). Now take two competing 'moral guides' who know that they can never persuade the other's following to follow them, so paint them as 'the enemy'. Now imagine that that central dictate is only educated in its own indoctrination methods and inept at everything else. Now imagine that the central dictate starts adjusting its dictates and descriptions of 'the enemy' only to accommodate its own wealth and survival... oh, throw some nuclear weapons in there, too. Scared yet?

But not only totalitarian regimes and religion are guilty of this: it exists to a lesser degree in advertising, Fox 'news', politics, and we all aid and abet this system by 'confirming' (or rejecting) each other's choice of dictate.

If we want to fix this, above all, we need to tell humans that they are able to make their own decisions and that their brains are wired for it from birth. We no longer have to face nature to 'prove' that ability, but that doesn't mean we don't have it and shouldn't use it, because, after a ~10,000-year-long 'rationality-free' holiday, our very survival depends on it today.

PS: I just thought about someone's 'concentric circles of extremism', that the 'moderates' don't support what the 'extremists' do... f*ck that, ALL of them follow the same dictate, so it's the dictate's support or condemnation of whoever's action that speaks for all of them, since the dictate is the unique moral guide for all.