Sunday, 18 January 2015

Super-Gamma EMW to Fermion process (or vice versa)







(Interlude music)
I think I've got how the most basic fermions combine initially, but I'm still fighting with my brain over the 'load balancing' part once they're combined... the two same-charge fermions somehow transfer their differences between themselves... and the combinations would be of opposing charge (thus would annihilate each other anyway). There's something utterly mathematically simplistic about this, but it is just beyond me...




Saturday, 10 January 2015

Discussion is essential to clarity - 'everything' in a nutshell.

Just putting this here for posterity... I've never been able to express it so succinctly before.

"Something's holding that quark-energy in place, otherwise it would just dissipate. There's a force resulting from the 'finding balance' struggle between the two (that something and the energy it's binding), and gravity is its residue.

IMHO, of course."


"I had an idea that the centre of every quark was a rip in the spacetime continuum... a gateway to 'absolute nothing', and a quark is energy that is bound by its trying to get 'back' to that zero state. Kind of like... (scratching head) Flushing pasta down the toilet? LOL - but the strands would become interlocked, forming a ring that would keep the whole from being flushed down... I -have- to think of a better analogy ; P

But if I were to go further down the rabbit hole, that 'zero point' would have to be something in itself, but it would make even more sense (complete sense, IMHO) if 'our side' matter was matched by something on 'the other side', and that force was -across- that zero point... like a fermion pair trying to annihilate each other. And that force would be gravity."


"My idea goes like this: energy (EMW) levels above a certain level (super-gamma, probably) make a spacetime rip, making its path change from a straight one to a 'swirl' around the rip. Only EMW's of a certain frequency can have any stability (think a wobbling, rotating top - 'wrong' frequencies would rip themselves apart (and be sucked in)), but 'right' frequencies, stable, form matter. And the different 'right' frequency levels determine the size of the resulting fermion."

"I have absolutely -no- education in this domain, but I've always been processing ideas to see how things 'fit'... and I like 'seeing' patterns, too. Today I see everything as a 'zero point' and a parabolic energy curve away from it... well, two parabolic curves opposing each other, one energy and the other, the 'pull' towards that zero point.

It even makes sense to me that the 'strong force' and the 'nuclear force' are just variations of gravity... if you follow even Newtonian physics all the way to quantum level, the 'pull' close to that fermion-level 'zero point' must be ENORMOUS... and so must be the energy. We already know that the 'binding energy' of atoms is enormous (A-bomb, etc), but take that up one level to quarks... wow.

And taking that even -further- to the 'fermion pair annihilation'... Tyson spoke of 'event horizons' where one of the pair would escape, but what of energy behaviour in a quantum soup: what if one half of a pair 'bound' to another (different-frequency) fermion before it could annihilate itself against its same-frequency opposite?"

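For what it's worth, the 'ENORMOUS pull' intuition above can be read in purely Newtonian terms - a back-of-the-envelope picture only, not a claim about what actually happens at quark scale: the inverse-square attraction grows without bound as the separation shrinks,

$$F = \frac{G\,m_1 m_2}{r^2} \;\to\; \infty \quad \text{as} \quad r \to 0.$$

And the scale jump is real enough in known physics: chemical bonds are on the order of electron-volts, while nuclear binding energies run to millions of electron-volts per nucleon.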
(comment indicating equivalence principle)

I can see how the math works out for the equivalence principle, but I have a problem with its application, especially in questions of time dilation... time does vary with the strength of a gravitational field, and the math says that that time variation also applies to an object under acceleration (because of the equivalence principle), but I don't see sense in that - I'm of the persuasion that time dilation (and gravity) can only be calculated relative to a mass itself.

The math works out because of the -difference between the two objects-. A mass on its own might as well be standing completely still, its mass (and gravitational pull and time dilation at its surface) constant and unvaried -until it encounters another-. Only -then-, upon collision, do the different velocities/masses count - I think it is an error to put all of that 'inertia' into an object if there is no other to compare it to, and even more of an error to say that time affects that object because of that (hypothetically) increased 'gravity'.
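For reference, here are the two standard formulas being weighed against each other (just stating them, not endorsing either reading): the gravitational slowing of a clock held at radius r from a mass M, and the kinematic dilation between two frames moving at relative speed v:

$$\Delta\tau_{\text{grav}} = \Delta t\,\sqrt{1 - \frac{2GM}{rc^{2}}} \qquad\qquad \Delta\tau_{\text{vel}} = \Delta t\,\sqrt{1 - \frac{v^{2}}{c^{2}}}$$

Note that the v in the second formula is only defined relative to some other frame - which is exactly the 'difference between the two objects' I'm on about.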

But that's just my humble opinion.

Friday, 2 January 2015

The Beauty of Being Wrong

Just a short entry today after witnessing one-too-many pointless 'saving face' back-and-forths: this for me really defines intellectual honesty, and shows whether one is really using their critical thinking abilities or just putting on a show of doing so.

We all shape our communication from the knowledge we have managed to accumulate up to the point in the conversation where we have to use it. We all place varying degrees of trust in different pieces of knowledge: some may be empirical, some may be hearsay, but we don't really think much about this distinction when we are tapping it. A conversation should be a great occasion to test that knowledge, yet more often than not I see it used as an occasion to 'show' knowledge as a badge of stature, and any questioning of it is seen as an offense.

This is a sure sign that the person speaking has created an 'illusion' of themselves that they are presenting just as much to themselves as to the person they are speaking to... almost a third person, some sort of mystical 'authority' that should be revered and defended without question. And this creation is also a result of wanting to cater to whatever (we think) another person 'wants' or 'needs'.

Yet we can't see into other minds, nor can we 'know' anything with absolute certainty. All we have to operate on is 'to the best of our knowledge', and if a conversation is to have any intellectual honesty, the knowledge of both/all parties should be open to (and even begging) questioning and testing. I guess this is what we'd call 'constructive conversation'.

If I am unsure about an element of knowledge I am using to make a point, this should show in my emotional display, and should be an invitation for someone else to provide a better solution if they have one. If a better solution is provided, it is not an offense - au contraire! If their point is valid and, better still, tested, they have actually increased my wealth of knowledge through their experience, making me a better person... what a gift!

Saturday, 13 December 2014

Memory/Emotion/Critical Thought in the Human Brain


The brain is amazing. Seemingly complicated, but amazingly simple in its function: input, recognition; test, accept, reject; and finally... combine and create.

Above is an fMRI scan of the axon paths through the human brain. Axons are neurone connectors: to reiterate earlier explanations, every neurone has many receptors (dendrites), but only one output (axon) to carry the neurone charge to another neurone... if it fires. Whether it fires or not (and to what degree) is the result of the sum of input it receives from other neurones.
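To make that 'sum of inputs decides whether it fires' idea concrete, here is a toy sketch in Python - a crude threshold model with made-up weights, not a claim about real neurone biophysics:

    # Toy neurone: many weighted inputs (dendrites), one output (axon)
    # that carries a signal only if the summed input crosses a threshold.
    def neurone_output(inputs, weights, threshold=1.0):
        total = sum(i * w for i, w in zip(inputs, weights))
        return total if total >= threshold else 0.0  # below threshold: no firing

    signals = [0.8, 0.6, 0.9]    # activity arriving from other neurones
    weights = [1.0, -0.3, 0.6]   # a negative weight plays the 'inhibitor' role
    print(neurone_output(signals, weights))  # 1.16 -> fires; weaken the inputs and it stays silent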

The network of neurones behind a single memory is a complicated thing... not only are there the neurones forming the memory itself, but also their connections to 'inhibitor' and 'accelerator' neurones that will affect their relation to other neurones, namely in the later 'conclusion' part of the thinking process. To make things simpler, I've eliminated all those secondary (yet crucial) connections in my illustrations, to show just the process result.
The above is the simplest of memories: empirical memory. First off, your brain will decide for you whether an event is even worth remembering, and if it is, it will be linked with the emotion that made it seem 'important', a link that will be traded off to the neurone used to permanently store the memory (if the brain deems it important enough). These emotion connections can be trained by further 'matching' input from empirical... or 'trusted' sources.

These 'test types' are crucial to the human brain. Even before we became rational creatures, we were mimicking ones: whereas 'lower' creatures had to rely on empirical testing for memory 'validation', our ancestors learned to copy the behaviour of 'similars' who had already tried and tested the techniques and circumstances concerned. Unless trained, the brain tends to ignore (and even reject) any information not coming from anyone in the 'similar', or 'trusted', category.

Yet we only 'recognise' or 'match' things already in our memory. If we've had experience with a red ball, and the brain deems it important enough, it will be one of the things our (supposed) subconscious 'watches out for' in our environment. If it is detected, the 'recognition' trigger will be accompanied by the memory's associated emotion (playing with the ball, for example, as opposed to being hit on the head with it). Though at first the emotional 'importance' of each memory may seem quite stark and distinct, it will become 'muted' with repetition ('lesser importance') and by other associated, more important events.
Now enter critical thinking, the 'extra level' that makes us human. It's that constantly 'slow firing' region of the brain that, when activated, will 'test' existing memories against each other to create a third 'possibility' that can become a 'memory' of its own: if a) a rock is heavy and b) a muddy slope is slippery, then c) a heavy rock on a muddy slope will slide. Untested, the new idea will generate but a vague 'warning, watch out for this' emotion attachment, but if someone having that thought later sees that rock slide (and avoids it to survive), that thought becomes an empirically confirmed fact that can be related to others (with that 'warning' emotion) until they can empirically test it themselves.

Yet consider this exercise: if a) all Terriers are dogs, and b) all dogs are animals, then c) all Terriers are animals. This is an exercise concerning only the mind (and communication with other minds) and categorisation - it cannot be tested empirically, and can only be 'confirmed' by the emotional reaction (understanding, approval or not) of the person receiving the idea. Yet it uses the same survival-tool technique as the first example.
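Just to show how mechanical that second kind of 'lesson' is, here is a minimal sketch (illustrative names only) of the Terrier example as pure symbol-shuffling - two stored rules chained into a third conclusion, with no empirical test anywhere:

    # a) and b) stored as 'is a' lessons; c) falls out of walking the chain.
    is_a = {"terrier": "dog", "dog": "animal"}

    def categories(thing):
        found = []
        while thing in is_a:           # follow the chain of stored rules
            thing = is_a[thing]
            found.append(thing)
        return found

    print(categories("terrier"))  # ['dog', 'animal'] - so all Terriers are animals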

I mentioned earlier that emotions can be trained. In the empirical world, this is a clear-cut affair (with, say, an initial fear reaction to fire dulled by an education in its uses, and experience with them), but when it comes to ideas, it all depends on where the ideas come from. Without critical thinking, the origin of an idea is just as, if not more, important than the idea itself. If a trusted source tells you that circumstance A will result in one outcome, and you repeat their lesson and get a positive response, and then the same source tells you that circumstance A will result in a different outcome, you will attribute that same emotional 'reward' to both lessons, even if they are contradictory.

Yet the critical thinker mulling over circumstance A may weigh it against other circumstances/criteria/memories and conclude that, in fact, circumstance B because of A doesn't make any sense. The very act of this consideration removes the 'confirmation' emotion's dependence on the teacher, and if the result of the reflection is tested empirically to a conclusive result, it may dampen, negate or even convert to 'warning' all former emotional connections to the source of the information. But the concrete conclusion of this exercise is that the emotional reward becomes a personal one.

Once one experiences this personal conclusion/reward for the first time, it seems to 'validate' for the brain the utility of the critical thinking technique as a whole. I guess this is the 'switch' I was trying to locate in my earlier posts on this subject.


Thursday, 16 October 2014

Dispensing with the Moral/Thought Dictate

In my earlier post (today, I'm feeling quite brain-fart-y this morning), I described the emotional reward we get for learning lessons that have no physical nature. The very act of 'recognising' a pattern already triggers a reward in the brain, but seeing this recognition in another human seems to add a secondary, stronger effect: a negative reaction from that other human should move us to adapt our emotional reaction ('this doesn't work here'), and a positive reaction would doubly reward and confirm that pattern (as 'acceptance' also means 'survival' to the human brain). This is what we'd call 'morals', but it can also be called 'adapting to new environments'.

As a human has first to rely on their existing memory/emotion cocktail to judge a new environment, what they do next is based on their need for survival: if the situation is dire and doesn't match any existing patterns, they have to use that cocktail as best they can to determine a course of action in order to survive; if the situation is not life-threatening or challenging, they have the privilege of choice: either remembering the situation or problem for later analysis, or simply dismissing and forgetting it. In a real-life alone-against-nature situation, the brain has no choice but to rely on reason: it will decide whether a situation is good for it or not, and adjust its neurone network accordingly; yet if the brain is presented with an idea not represented in reality, and the idea is not dismissed, it will try to 'make' it real by imagining it, and will attempt to make the same adjustments.

No matter how we name them or talk about them, our basest instincts are around self-preservation. In situations that involve physical things, decisions are empirical and obvious, and 'working methods' are rewarded and categorised 'good for survival', but when it comes to untested new non-physical ideas, how does the brain decide if a non-physical idea 'works' or not? The choice here is simple: trust your own judgement, or trust a more experienced someone else's, but first, trust your own ability to choose between the two.

The last choice is everything. If you don't think yourself able to 'know' or to 'decide', then you have no choice but to rely on someone else's judgement, meaning that you are dependent on that 'trust tree' I mentioned in my last post. And if an experience doesn't 'match' something 'taught' to you by anyone in your trust tree, depending on the level of danger, your brain will either dismiss the experience or remember it as something 'dangerous' (only 'confirming' the earlier mimicked 'lesson').

So for anyone who can't or won't rely on their own judgement, anything, anybody, or any idea originating from 'outside' that programmed sphere falls into a sometimes very scary 'unknown, can't judge, beware' category directly linked to our emotional, instinctive sense of self-preservation.

This is what both religion and totalitarianism tap into. Both divorce a human from its innate ability to judge the world for itself, make it impossible for a developing human to take its first steps in that self-sufficient state, and present (or impose) themselves as the sole recourse for any and all judgement in things both real and imagined, as though the two are equally "true". To the untrained and non-rational brain, they might as well be.

This not only limits humans in their education and decision-making abilities, it makes entire populations rely on one source for all their decisions and guidance instead of measuring themselves against reality and each other. Since that central source's judgement also encompasses what is dangerous or not, the people they 'lead' have no choice but to refer to their 'trust tree' for a 'measure' of safety, and to view anything not originating from it as potentially dangerous. If their 'moral guide' dictates that something or someone 'not of theirs' is dangerous, the brain will adjust its emotional and neurone networks accordingly, as though it were a real danger.

Now take two (or more) 'moral guides' competing for control over a single population trained to believe that they 'need' guidance: no wonder Christianity and Communism hated each other (and 'because atheism' my ass!). Now take two competing 'moral guides' who know that they can never persuade the other's following to follow them, so paint them as 'the enemy'. Now imagine that that central dictate is only educated in its own indoctrination methods and inept at everything else. Now imagine that the central dictate starts adjusting its dictates and descriptions of 'the enemy' only to accommodate its own wealth and survival... oh, throw some nuclear weapons in there, too. Scared yet?

But totalitarian regimes and religion are not the only ones guilty of this: it exists to a lesser degree in advertising, Fox 'news' and politics, and we all aid and abet this system by 'confirming' (or rejecting) each other's choice of dictate.

If we want to fix this, above all we need to tell humans that they are able to make their own decisions and that their brains are wired for it from birth. We no longer have to face nature to 'prove' that ability, but that doesn't mean we don't have it and shouldn't use it, because, after a ~10,000-year-long 'rationality-free' holiday, our very survival depends on it today.

PS: I just thought about someone's 'concentric circles of extremism', that the 'moderates' don't support what the 'extremists' do... f*ck that, ALL of them follow the same dictate, so it's the dictate's support or condemnation of whoever's action that speaks for all of them, since the dictate is the unique moral guide for all.

Decategorising motivation.

I've been going through acrobatics trying to apply brain function to existing definitions of human nature, but the latter shouldn't be used to describe the former; these contortions shouldn't be necessary, and are in fact quite counterproductive to understanding, even my own. Instead we should be taking the brain function and categorising according to that: our brain function is the motivation that drives a behaviour, yet that motivation, even if it has the same source and goal, somehow becomes a 'different thing' between (social) topics.

Our brain's most basic function is 'recognising' situations and dishing out the 'right' chemicals (emotions) as a reaction to them: defensive and active for danger, and passive and soothing for reward, and our survival depends on our brain matching these correctly. Yet this recognition needs to be educated: in our first moments in the outside world, the only 'safe' situations we recognise are those we already know from the womb: the warmth of mother. When we see mother trustingly and fearlessly associate with others around her (namely father), our definition of 'safe' will spread to them as well. And the 'tree of trust' will spread to whoever they trust, and so on and so on.

That's just our most primitive 'danger detection' mode. Added to that are the lessons trusted people teach us: we tend not to accept any lessons from people not in this category, and although we may remember those 'lesson attempts', we will not integrate them into our 'knowledge repertoire' until we think about them ourselves at a later date - if we ever do - or until that person somehow later becomes part of the 'trusted' category. But still, if someone is not trusted as they give a lesson, they will not have a direct influence on our learning process.

In our younger years, if we are given an example of behaviour to imitate, we can only judge the 'success' with which we accomplish this imitation by empirical evidence (does the square peg fit in the round hole? No. The square one? Yes!) and by the (emotional) reaction of the person making us do the exercise. I doubt that we even consider why we're doing that exercise at that age; we know only that we have to imitate the 'older proof of successful survival' trusted figure in order to survive ourselves. Sometimes the lessons we are told to imitate, like learning words, have no physical aspects that we can confirm ourselves: we know only that, should we imitate a sound successfully, other humans will understand and provide an emotional response that is more or less the same from person to person. Yet, at a young age, we don't consider the why of those words; we know them only as successful methods of expressing our emotion and well-being, and the desires for and fears of the material things that affect these.

But as we grow older and the scope of our attention grows wider, we're going to notice functioning things and the behaviour of other humans (trusted or not), and we may want to try them out for ourselves without any prompting or guidance. For whatever reason, if a child chooses to act on their curiosity, their reward can only come from themselves.

This last bit is what intrigues me. In our society, it is hard to tell what motivates curiosity and a desire to try (new) things out for oneself. Yet it is very easy to understand in a 'do or die' situation... survival is the 'reward', and if we have never encountered that situation before, we will educate our emotions to respond 'accordingly' if we survive it.

You see, in describing this basic function in this context (referring to my earlier post), we're talking about both critical thinking and morals here, or "recording an emotional response after testing". But it's more than that: this function is used in all aspects of our lives, yet it is named differently according to what motivates its use.

If we were to draw a diagram of 'human nature', we would have categories such as 'morals' and 'critical thinking' and 'learning' and 'feelings', but what we seem to be doing today is taking a single brain function and adding it, individually, as a separate entity, into each category. Instead, I propose making that common function a unique central 'thing', and linking the other categories to it.

Thursday, 4 September 2014

Morals: Independent Thought?

Morality is a very individualistic thing: it is an 'internal conclusion' that affects our personal interaction, as individuals, with the world around us. Morality is a balance between emotion (instinct), memory (examples of other humans, etc.) and rational thought; it's the mix of all three that makes us human and individuals.

Without rational thought, if a situation requires an immediate reaction, we will search our memory for a similar already-learned experience as a guide: if one is found, our emotional reaction to that will decide our actions, and if there is none, a panic ('fight or flight') reaction ensues.

Yet with rational thought, almost in parallel with the above process, we are able to 'calculate' the situation beyond 'knee-jerk' instincts of self-preservation: our brain can compare how one remembered course of action may be better than another, it can consider an action's effects on the surroundings, on other people and even what future consequences those actions will have. Neuroscience shows us that, if our brain decides that the rational conclusion is 'better' than the instinctive one, it will override it.

Yet many in a religious or totalitarian regime, at least the followers, have no use for rational thought: their actions are based on a 'punishment or reward' reaction to situations shown (and often only 'explained') to them by someone else, usually a 'trusted leader'.

In everyday interactions, if a situation or someone's answer 'matches' with something a follower was taught, the emotion attached to that memory (what he was 'taught to feel') will dictate their action: if they get a match with something in the 'good' category, their brain will give them a chemical reward and permission to continue the action; if it is in the 'bad' category, they will 'reject' the situation or their (instinctive) defense mechanism may be activated; if there is no match at all (and they don't feel in danger) there most likely will be no reaction at all - that 'deer in the headlights' look.

What the above paragraph really describes is our childhood learning process. When our frontal lobes (where neuroscience shows us that rational thought and 'morality' are seated) have reached maturity in late adolescence, our brain (well, the model of it promoted by evolution) normally expects us to start using it, but somehow, in many, this 'switch' never happens.

Mimicking lessons that promote self-preservation and/or personal reward is not 'morals'.