Saturday, 13 December 2014

Memory/Emotion/Critical Thought in the Human Brain

The brain is amazing. Seemingly complicated, but amazingly simple in its function: input, recognition; test, accept, reject; and finally... combine and create.

Above is an fMRI scan of the axon paths through the human brain. Axons are the brain's neurone connectors: to reiterate earlier explanations, every neurone has many receptors (dendrites), but only one output (axon) to carry the neurone's charge to another neurone... if it fires. Whether it fires (and to what degree) depends on the sum of the input it receives from other neurones.
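The 'sum of input' idea can be sketched as a toy model: a neurone fires only if the weighted sum of its inputs crosses a threshold. The weights and threshold below are arbitrary, chosen purely for illustration.

```python
# Toy model of the summation described above: a neurone fires only if
# the weighted sum of its inputs crosses a firing threshold.
def fires(inputs, weights, threshold=1.0):
    charge = sum(i * w for i, w in zip(inputs, weights))
    return charge >= threshold

# Two excitatory inputs can be outweighed by an inhibitory one
# (modelled here as a negative weight).
print(fires([1, 1, 1], [0.6, 0.6, -0.9]))  # False: inhibition wins (0.3 < 1.0)
print(fires([1, 1, 0], [0.6, 0.6, -0.9]))  # True: 1.2 >= 1.0
```

The 'inhibitor' and 'accelerator' neurones mentioned below would correspond to negative and positive weights in this picture.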

The network of neurones behind a single memory is a complicated thing... not only are there the neurones forming the memory itself, but also their connections to 'inhibitor' and 'accelerator' neurones that will affect their relation to other neurones, namely in the later 'conclusion' part of the thinking process. To make things simpler, I've eliminated all those secondary (yet essential) connections in my illustrations, to show just the process result.
The above is the simplest of memories: empirical memory. First off, your brain will decide for you whether an event is even worth remembering; if it is, it will be linked with the emotion that made it seem 'important', a link that will be carried over to the neurone used to permanently store the memory (if the brain deems it important enough). These emotion connections can be trained by further 'matching' input from empirical... or 'trusted' sources.

These 'test types' are essential to the human brain. Even before we became rational creatures, we were mimicking ones: whereas 'lower' creatures had to rely on empirical testing for memory 'validation', our ancestors learned to copy the behaviour of 'similars' who had already tried and tested the techniques and circumstances concerned. Unless trained otherwise, the brain tends to ignore (and even reject) any information not coming from someone in the 'similar', or 'trusted', category.

Yet we only 'recognise' or 'match' things already in our memory. If we've had experience with a red ball, and the brain deems it important enough, it will be one of the things our (supposed) subconscious 'watches out for' in our environment. If it is detected, the 'recognition' trigger will be accompanied by the memory's associated emotion (playing with the ball, for example, as opposed to being hit on the head with it). Though at first the emotional 'importance' of each memory may seem quite stark and distinct, it will become 'muted' with repetition ('lesser importance') and with other associated or more important events.
Now enter critical thinking, the 'extra level' that makes us human. It's that constantly 'slow firing' region of the brain that, when activated, will 'test' existing memories against each other to create a third 'possibility' that can become a 'memory' of its own: if a) a rock is heavy and b) a muddy slope is slippery, then c) a heavy rock on a muddy slope will slide. Untested, the new idea will generate but a vague 'warning, watch out for this' emotion attachment; but if someone having that thought later sees that rock slide (and avoids it to survive), the thought becomes an empirically confirmed fact that can be related to others (with that 'warning' emotion) until they can test it empirically themselves.

Now consider the classic exercise: if a) all terriers are dogs, and b) all dogs are animals, then c) all terriers are animals. This is an exercise concerning only the mind (and communication with other minds) and categorisation - it cannot be tested empirically, and can only be 'confirmed' by the emotional reaction (understanding, approval or not) of the person receiving the idea. Yet it uses the same survival-tool technique as the first example.
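The terrier syllogism is pure categorisation, and can be sketched as nested sets: if one category is contained in a second, and the second in a third, the first is contained in the third. The category members below are invented for the example.

```python
# The syllogism as set inclusion: premises a) and b) force conclusion c)
# without any empirical test ever being possible or needed.
terriers = {"Jack Russell", "Airedale"}
dogs = terriers | {"Beagle"}          # a) all terriers are dogs
animals = dogs | {"Cat"}              # b) all dogs are animals

assert terriers <= dogs and dogs <= animals   # the two premises
print(terriers <= animals)                    # True: c) all terriers are animals
```

The 'confirmation' here is purely structural - which is the point: nothing in the physical world is consulted.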

I mentioned earlier that emotions can be trained. In the empirical world, this is a clear-cut affair (with, say, an initial fear reaction to fire dulled by an education in its uses, and experience with them), but when it comes to ideas, it all depends on where the ideas come from. Without critical thinking, the origin of an idea is just as, if not more, important than the idea itself. If a trusted source says that circumstance A will result in B, and you repeat their lesson and get a positive response, and then the same source tells you that circumstance A will result in C, you will attribute that same emotional 'reward' to both lessons, even if they are contradictory.

Yet a critical thinker mulling over circumstance A may weigh it against other circumstances/criteria/memories and conclude that, in fact, B resulting from A doesn't make any sense. The very act of this consideration removes the 'confirmation' emotion's dependence on the teacher, and if the result of the reflection is tested empirically to a conclusive result, it may dampen, negate or even convert to 'warning' all former emotional connections to the source of the information. But the concrete conclusion of this exercise is that the emotional reward becomes a personal one.

Once one experiences this personal conclusion/reward for the first time, it seems to 'validate' for the brain the utility of the critical thinking technique as a whole. I guess this is the 'switch' I was trying to locate in my earlier posts on this subject.

Thursday, 16 October 2014

Dispensing with the Moral/Thought Dictate

In my earlier post today (I'm feeling quite brain-fart-y this morning), I described the emotional reward we get for learning lessons that have no physical nature. The very act of 'recognising' a pattern already triggers a reward in the brain, but seeing this recognition in another human seems to add a secondary, stronger effect: a negative reaction from that other human should move us to adapt our emotional reaction ('this doesn't work here'), and a positive reaction would doubly reward and confirm that pattern (as 'acceptance' also means 'survival' to the human brain). This is what we'd call 'morals', but it can also be called 'adapting to new environments'.

As a human has first to rely on their existing memory/emotion cocktail to judge a new environment, what they do next is based on the need for survival. If the situation is dire and doesn't match any existing patterns, they have to use those existing patterns as best they can to determine a course of action in order to survive; if the situation is not life-threatening or challenging, they have the privilege of choice: either remembering the situation or problem for later analysis, or simply dismissing and forgetting it. In a real-life alone-against-nature situation, the brain has no choice but to rely on reason: it will decide if a situation is good for it or not, and adjust its neurone network accordingly; yet if the brain is presented with an idea not represented in reality, and that idea is not dismissed, it will try to 'make' it real by imagining it, and will attempt the same adjustments.

No matter how we name them or talk about them, our basest instincts centre on self-preservation. In situations that involve physical things, decisions are empirical and obvious, and 'working methods' are rewarded and categorised 'good for survival'; but how does the brain decide whether an untested, non-physical idea 'works' or not? The choice here is simple: trust your own judgement, or trust a more experienced someone else's - but first, trust your own ability to choose between the two.

The last choice is everything. If you don't think yourself able to 'know' or to 'decide', then you have no choice but to rely on someone else's judgement, meaning that you are dependent on that 'trust tree' I mentioned in my last post. And if an experience doesn't 'match' something 'taught' to you by anyone in your trust tree, then depending on the level of danger, your brain will either dismiss the experience or remember it as something 'dangerous' (only 'confirming' the earlier mimicked 'lesson').

So for anyone who can't or won't rely on their own judgement, anything, anybody, or any idea originating from 'outside' that programmed sphere falls into a sometimes very scary 'unknown, can't judge, beware' category directly linked to our emotional, instinctive sense of self-preservation.

This is what both religion and totalitarianism tap into. Both divorce a human from its innate ability to judge the world for itself, make it impossible for a developing human to take its first steps in that self-sufficient state, and present (or impose) themselves as the sole recourse for any and all judgement in things both real and imagined, as though the two are equally "true". To the untrained and non-rational brain, they might as well be.

This not only limits humans in their education and decision-making abilities; it makes entire populations rely on one source for all their decisions and guidance, instead of measuring themselves against reality and each other. Since that central source's judgement also encompasses what is dangerous and what is not, the people it 'leads' have no choice but to refer to their 'trust tree' for a 'measure' of safety, and to view anything not originating from it as potentially dangerous. If their 'moral guide' dictates that something or someone 'not of theirs' is dangerous, the brain will adjust its emotional and neurone networks accordingly, as though it were a real danger.

Now take two (or more) 'moral guides' competing for control over a single population trained to believe that it 'needs' guidance: no wonder Christianity and Communism hated each other (and 'because atheism' my ass!). Now take two competing 'moral guides' who know that they can never persuade the other's following to follow them, so they paint each other as 'the enemy'. Now imagine that the central dictate is educated only in its own indoctrination methods and inept at everything else. Now imagine that the central dictate starts adjusting its dictates and descriptions of 'the enemy' only to accommodate its own wealth and survival... oh, and throw some nuclear weapons in there, too. Scared yet?

But totalitarian regimes and religion are not the only ones guilty of this: it exists to a lesser degree in advertising, Fox 'news' and politics, and we all aid and abet this system by 'confirming' (or rejecting) each other's choice of dictate.

If we want to fix this, above all we need to tell humans that they are able to make their own decisions and that their brains are wired for it from birth. We no longer have to face nature to 'prove' that ability, but that doesn't mean we don't have it and shouldn't use it, because, after a ~10,000-year-long 'rationality-free' holiday, our very survival depends on it today.

PS: I just thought about someone's 'concentric circles of extremism' argument - that the 'moderates' don't support what the 'extremists' do... f*ck that: ALL of them follow the same dictate, so it's the dictate's support or condemnation of anyone's action that speaks for all of them, since the dictate is the unique moral guide for all.

Decategorising motivation.

I've been going through acrobatics trying to apply brain function to existing definitions of human nature, but the latter shouldn't be used to describe the former; these contortions shouldn't be necessary, and are in fact quite counterproductive to understanding, even my own. Instead we should take brain function and categorise according to that: our brain function is the motivation that drives a behaviour, yet that motivation, even when it has the same source and goal, somehow becomes a 'different thing' from one (social) topic to another.

Our brain's most basic function is 'recognising' situations and dishing out the 'right' chemicals (emotions) in reaction to them: defensive and active for danger, passive and soothing for reward; our survival depends on our brain matching these correctly. Yet this recognition needs to be educated: in our first moments in the outside world, the only 'safe' situations we recognise are those we already know from the womb - the warmth of mother. When we see mother trustingly and fearlessly associate with others around her (namely father), our definition of 'safe' spreads to them as well. And the 'tree of trust' will spread to whoever they trust, and so on and so on.
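The spreading 'tree of trust' is, in effect, transitive: trust reaches whoever the already-trusted trust. A toy graph traversal can sketch this; the names and links below are invented for the example.

```python
# A toy 'tree of trust': starting from mother, trust spreads outward to
# whoever the already-trusted trust, link by link.
from collections import deque

links = {
    "mother": ["father", "aunt"],
    "father": ["neighbour"],
    "aunt": [],
    "neighbour": ["neighbour's friend"],
}

def trusted(start="mother"):
    seen, queue = {start}, deque([start])
    while queue:
        person = queue.popleft()
        for other in links.get(person, []):
            if other not in seen:
                seen.add(other)
                queue.append(other)
    return seen

# The neighbour's friend ends up trusted despite never meeting mother;
# anyone absent from the tree stays in the 'unknown, beware' category.
print(sorted(trusted()))
```

Note the design consequence the post goes on to describe: trust depends entirely on the links, not on the merit of the person at the end of them.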

That's just our most primitive 'danger detection' mode. Added to that are the lessons trusted people teach us: we tend not to accept lessons from people outside this category, and although we may remember those 'lesson attempts', we will not integrate them into our 'knowledge repertoire' until we think about them ourselves at a later date - if we ever do - or until that person somehow later becomes part of the 'trusted' category. But still, if someone is not trusted when they give a lesson, they will not have a direct influence on our learning process.

In our younger years, if we are given an example of behaviour to imitate, we can only judge the 'success' of our imitation by empirical evidence (does the square peg fit in the round hole? No. The square one? Yes!) and by the (emotional) reaction of the person making us do the exercise. I doubt that we even consider why we're doing the exercise at that age; we know only that we have to imitate the 'older, trusted, proven-survivor figure' in order to survive ourselves. Some of the lessons we are told to imitate, like learning words, have no physical aspect that we can confirm ourselves: we know only that, should we imitate a sound successfully, other humans will understand and provide an emotional response that is more or less the same from person to person. Yet, at a young age, we don't consider the why of those words; we know them only as successful methods of expressing our emotions and well-being, and our desires for and fears of the material things that affect these.

But as we grow older and the scope of our attention grows wider, we begin to notice functioning things and the behaviour of other humans (trusted or not), and we may want to try them out for ourselves without any prompting or guidance. If a child chooses to act on that curiosity, whatever the reason, their reward can only come from themselves.

This last bit is what intrigues me. In our society, it's hard to tell what motivates curiosity and a desire to try (new) things out for oneself. Yet it is very easy to understand in a 'do or die' situation: survival is the 'reward', and if we have never encountered that situation before, we will educate our emotions to respond 'accordingly' if we survive it.

You see, in describing this basic function in this context - and referring back to my earlier post - we're talking about both critical thinking and morals here, or "recording an emotional response after testing". But it's more than that: this function is used in all aspects of our lives, yet it is named differently according to what motivates its use.

If we were to draw a diagram of 'human nature', we would have categories such as 'morals', 'critical thinking', 'learning' and 'feelings'; what we seem to be doing today is taking a single brain function and adding it, individually, as a separate entity, to each category. Instead, I propose making that common function a unique central 'thing', and linking the other categories to it.

Thursday, 4 September 2014

Morals: Independent Thought?

Morality is a very individualistic thing: it is an 'internal conclusion' that affects our personal interaction, as individuals, with the world around us. Morality is a balance between emotion (instinct), memory (examples of other humans, etc.) and rational thought; it's the mix of all three that makes us human and individuals.

Without rational thought, if a situation requires an immediate reaction, we will search our memory for a similar already-learned experience as a guide: if one is found, our emotional reaction to that will decide our actions, and if there is none, a panic ('fight or flight') reaction ensues.

Yet with rational thought, almost in parallel with the above process, we are able to 'calculate' the situation beyond 'knee-jerk' instincts of self-preservation: our brain can compare how one remembered course of action may be better than another, it can consider an action's effects on the surroundings, on other people and even what future consequences those actions will have. Neuroscience shows us that, if our brain decides that the rational conclusion is 'better' than the instinctive one, it will override it.

Yet many in a religious or totalitarian regime - at least the followers - have no use for rational thought: their actions are based on a 'punishment or reward' reaction to situations shown (and often only 'explained') to them by someone else, usually a 'trusted leader'.

In everyday interactions, if a situation or someone's answer 'matches' something a follower was taught, the emotion attached to that memory (what they were 'taught to feel') will dictate their action: if they get a match with something in the 'good' category, their brain will give them a chemical reward and permission to continue the action; if it is in the 'bad' category, they will 'reject' the situation, or their (instinctive) defence mechanism may be activated; if there is no match at all (and they don't feel in danger), there will most likely be no reaction at all - that 'deer in the headlights' look.
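The follower's three-way reaction described above is essentially a lookup against taught categories. A minimal sketch, with invented example categories:

```python
# Sketch of the follower's reaction: taught 'good' patterns reward,
# taught 'bad' patterns trigger rejection/defence, and anything with
# no match at all produces no reaction.
taught = {
    "sharing food": "good",
    "questioning the leader": "bad",
}

def reaction(situation):
    category = taught.get(situation)
    if category == "good":
        return "chemical reward: continue"
    if category == "bad":
        return "reject / defence mechanism"
    return "no reaction ('deer in the headlights')"

print(reaction("sharing food"))
print(reaction("a brand-new idea"))   # unmatched input simply doesn't register
```

The key property is that the table itself was written by someone else: the follower evaluates nothing, only matches.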

What the above paragraph really describes is our childhood learning process. When our frontal lobes (where neuroscience shows rational thought and 'morality' are seated) reach maturity in late adolescence, our brain (well, the model of it promoted by evolution) normally expects us to start using them; but somehow, in many, this 'switch' never happens.

Mimicking lessons that promote self-preservation and/or personal reward is not 'morals'.

Thursday, 14 August 2014

Critical Thinking: an Art we traded for Agriculture.

In the years before agriculture, man lived in smaller groups where skills were most likely not divided amongst members. This would mean that an individual had to have a complete skill set to survive, and be able to process and overcome the never-ending variables that nature threw at them: this was critical thinking. Humans then had a simple choice: use it or die.

Neuroscience has recently shown us that the brain is pre-wired (though still developing through adolescence) so that any cortex neurone has the potential to connect to any other in the brain, directly or indirectly, at any distance. We would not have this nerve structure if evolution hadn't promoted it as a 'successful' model.

Below I will try to explain how we used to use our brain, and compare that with how we use it today.

This essay contains some references to neuroscience, so here is a short description of how neurones and neurone networks work.

A neurone resembles a cell with coral-like 'arms': multiple 'receiver' arms, the dendrites, project at all angles from most of its circumference, while a single slender 'sender' arm, the axon, usually much longer than the dendrites, extends to connect to other neurones' dendrites through terminal branches of its own. Most axons are very short, but even if one extends past a neurone it would like to connect to, it can sprout a terminal 'branch' anywhere along its length.

Basic structure of the human neurone
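As a rough data-structure sketch of the description above - many dendrites in, one axon out, with the axon able to branch into several terminals - purely illustrative, not a biological simulation:

```python
# Illustrative neurone structure: many incoming dendrites, one outgoing
# axon that may sprout several terminal branches.
class Neurone:
    def __init__(self, name):
        self.name = name
        self.dendrites = []        # incoming connections (many)
        self.axon_terminals = []   # branches of the single outgoing axon

    def connect_to(self, other):
        # the one axon can sprout a terminal branch anywhere along its
        # length to reach another neurone's dendrites
        self.axon_terminals.append(other)
        other.dendrites.append(self)

a, b, c = Neurone("a"), Neurone("b"), Neurone("c")
a.connect_to(b)
a.connect_to(c)   # a second terminal branch from the same axon
print(len(a.axon_terminals), len(b.dendrites))  # 2 1
```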

Most of our cortex neurones reside in an outer layer that we call grey matter, and they are immobile once 'placed' during brain development. Below this (towards the interior of the brain) is a layer devoted to carrying the longer axons that carry signals between neurones in different parts of the brain: these axons have a myelin sheath that is thin and almost transparent if the neurone is unused, but grows thick to better protect and strengthen the neurone's signal when it is used often; the myelin's whitish colour gives this area of the brain its name, white matter.

fMRI scan of White Matter (axon connections) between distant neurones in the human brain

Only very recently was it discovered that these axon arms connecting different regions of the brain extend, in a pattern much resembling a map of Manhattan, to all extremities of the brain, allowing almost any cortex neurone the possibility of connecting to another even distant one (and other deeper centres of the brain).



The brain is 'wired' for critical thought from birth. Thanks to recent (f)MRI and PET brain-scan technologies, we can see how different regions of the brain are linked together, meaning that any neurone in our cerebral cortex has the potential to connect, either directly or indirectly (through other neurones), to any other.

The question we have to ask ourselves is: Why are our brains that way? If we had never had use for the extensive neurone-connecting abilities of the human brain, why did evolution promote that model as 'successful'?

If we look back at our evolution, we'd see that we spent most of it, hundreds of thousands of years at least, as hunter-gatherers. Hunter-gatherers lived in small groups, moved with animal migration, and lived in caves and temporary shelters; they were practically one with nature. I would imagine that each individual had to have a complete survival skill set, as tasks don't seem to have been divided between community members as trades in those days - perhaps between the sexes, but there is little proof supporting that idea either.

Anyhow, these skills had to be taught to younger humans: hunting and gathering for everyday survival, as well as the dangers that wild animals represented; I'm sure these lessons were quite strict, as any deviation from them would be a threat to tribe survival. Medicinal knowledge and theories about the origins of the elements and other natural phenomena were probably practised and passed along by a select few tribe members. In all, the 'unexplainable' aside, the methods they passed through the generations were tried and true - almost a science in those times.

A human that must fend for itself against nature to provide sustenance for itself (and eventually its family) would at least have to be close to maturity in body, so we can assume that the time until then was spent on education. Yet this education would be worthless if the young human didn't break the bond with the rest of the tribe and forage out for himself for the first time; while in his group of 'trusted teacher' tribe members (and he probably, by instinct, feared anyone else), he depended on them for approval or disapproval of his imitations of their methods, but eventually he would have to test them against nature with only his survival as judge. The 'walkabout' still existing in Australian aborigine tradition is an example of this: take what you've learned (from your elders and ancestors) and use it to fend for yourself, or die. The switch to critical (independent) thinking was not a choice then; it was a matter of survival.

Yet before that initiation, if a piece of information has only been tested through imitation against a trusted member's expression of approval or disapproval, the human brain can only categorise it (with other similar information) with a link to the emotion generated when that judgement was given (by the trusted member, and most likely linked to (emotional) information connected to that trusted tribe member themself); this is completely at odds with the context of a real-life situation.

Here's an exercise for the sake of example: consider a task that you repeat so often that it has even become mundane. Can you remember who taught it to you? Now consider another subject that you learned but have had little-to-no experience actually using. I'm sure you still remember its teacher quite well.

When a young human sees an animal in a 'real-life' hunting situation for the first time, it becomes an actual goal (and a means of survival), and everything about his lessons changes. Place yourself in the situation of the young hunter: you're about to embark on your first one-on-one with an animal, and your lessons have to relate to ~it~ (not dear teacher), so thoughts about your education process are not the first thing on your mind. What you are experiencing now is ~yours~ (and you may at first feel afraid, which will only heighten your senses and accelerate your processing): see how the animal parts the bushes as it runs into the forest; your brain will make a direct link from that observation to the size and direction of the animal (amongst other conditions), and when you enter the forest in the right direction and actually see the animal again, that earlier connection will be labelled 'success' and the 'teacher approval' filter will be needed no more. Did you lose the animal again when you entered a clearing? Notice that smell, note the wind direction and follow it; again, if successful, your connection will be rewarded and 'confirmed'.

That night when you dream, you will re-enact those events, making the new connections that 'worked' even stronger (added axon terminal arms, dendrite arms, and myelination), and should you encounter the same situation the next day (using those neurones again), the connection will become stronger still. Any slight variation to those circumstances will add additional information to the established links, making a 'hunting' neurone network that is your very own creation, your tested experience alone. One can imagine that with a lifetime of experiences such as these (in all methods of survival), the complexity of our neural networks must have grown great indeed.
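The 'use strengthens the connection' idea above can be sketched as a Hebbian-style update: every successful use (or dreamed rehearsal) of a link nudges its strength towards a ceiling. The rate constant and starting weight are arbitrary.

```python
# Minimal sketch of repetition-driven strengthening: each successful
# use pushes the connection weight part of the way towards a ceiling
# of 1.0, so early repetitions matter most and gains taper off.
def reinforce(weight, times, rate=0.5):
    for _ in range(times):
        weight += rate * (1.0 - weight)
    return weight

w = 0.1                 # a fresh, thinly myelinated connection
w = reinforce(w, 1)     # the first real-life success
w = reinforce(w, 2)     # dream re-enactment, then the next day's hunt
print(w)                # close to, but never quite reaching, 1.0
```

This also sketches why the 'muting' mentioned in the first post happens: the marginal effect of each further repetition shrinks.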


Enter agriculture. This invention, only 12,000 years old, flew in the face of some 200,000 years of evolution and tradition. No longer was a human at odds with nature, as it needed to stray no further than the boundaries of its habitat to collect its nourishment; many 'old' lessons about nature and survival were no longer needed, no longer given, and no longer tested. Community size grew, and the work required by agriculture was divided between its members; a full survival skill set was no longer required to earn one's sustenance. Repeated tool use in a sedentary environment requires much less critical thinking than the ever-changing circumstances of nature.

So, even though agriculture reduced the skill requirements for survival, the human brain was still 'wired' to handle them. And even though the human brain was wired to make direct inter-neurone 'conclusion' connections of its own (as a requirement for survival), it no longer encountered the circumstances nor the motivation to do so.

Yet because of our evolution and instincts, even in village (agricultural community) life, central leader role models remained, and were promoted to important places in society. Fewer were trained to brave the dangers 'outside the village' (and these often became leaders), and even fewer had occasion to test those skills. So from then on, rumoured dangers, being untested, remained in the 'feared unknown' category, directly linked in the brain to the 'authoritative' person who spoke of them. I imagine that over the years those stories, being untested or untestable, grew increasingly fanciful, and that the person telling them became an increasingly central village figure. This is probably how religion began.

So let's fast-forward to today, a mere 12,000 years later. This period is next to nothing on the scale of our evolution, so our modern brains are practically unchanged from the hunter-gatherer model evolution favoured; yet with our cities protecting us from both the elements and nature, we are even ~less~ required and motivated to make the transition to the independent critical thinking that 200,000 years of evolution prepared our brains for.

The timing of that transition can change, too. In pre-agriculture days we had no choice but to remain in a protective environment with our untested learnings until we were physically strong enough to confront nature on our own; but today, thanks to information technology and the (non-dangerous) nature of the things we learn, we can test any idea or piece of information at any time in our lives... if we want to.

Wednesday, 29 January 2014

Understanding the Theist Mindset.

This is perhaps already obvious to many of you, but I had a bit of a 'release' revelation a few days ago; I'm much less daunted by theist discussion thanks to it. Sorry if this sounds pompous, but I'd like to share.

'Pigeon chess' is the best analogy I've heard so far for an atheist/theist discussion, but I kept wondering ~how~ a theist manages to deny or dismiss fact and evidence even when it's right in front of them.

If you think back to our education, we spend the first part of our lives building our minds by mimicking a few trusted 'authority figures' who are supposed to show us what's good and what's dangerous in the world (while being doubtful of, or in fear of, anyone and anything else), but eventually we gain enough experience to start making conclusions of our own from what we experience around us.

Religious people are simply people who have never left the first stage. Just like children, they focus on their 'authority figures' for (emotional) reward and punishment; they simply 'blank' any information from any other source as 'wrong' or 'bad'.

I can almost compare a 'follower' education to training a lab rat: the reward is food if it does the 'good' thing, the punishment an electric shock if it does the 'bad'. Eventually the rat will grow to fear certain things and appreciate others, even without an actual reward being given. When released into the world, it will regard with incomprehension (and perhaps fear) anything different from the environment it was trained in, and run back to its 'education environment' for safety if it can. Trained rats together will behave the same way, but as a pack.

The key here is emotion: the 'reward' for a theist comes directly from a leader approving a followed behavioural pattern, whereas the 'reward' for a thinker comes (first) from ~himself~ when he achieves understanding and uses it to a successful result/conclusion.

So, for a theist, any information from outside certain sources, or outside their programmed behavioural pattern, ~doesn't even register~ unless it is approved by their leader or fellow followers - much the same way 'god' doesn't register for atheists.

For religious leaders, all that matters is that their followers continue to focus on them for education, reward and punishment; one could even argue that the content of the doctrine used to establish and maintain this dependency system... isn't even important.

Sunday, 26 January 2014

Everything is light and time - what about the 'other side' ?

If my earlier idea were true, it would mean that every 'up' quark would exist as a 'down' quark in the spacetime-construct direction opposite to ours (its 'opposite dimension'). This really bothered me, as it would mean that, in the dimension opposite ours, our world would be perfectly mirrored in antimatter.

Until I considered the 'zero point' between the two dimensions. It's a 'zero point', right? It may be possible that anything originating from that point in our dimension could appear somewhere else entirely in the other:

I also doubt that the 'zero points'' axes are 'aligned' between them: imagine a cloud of striped billiard balls rotating in all directions with no synchronisation at all. All that matters is that the dimension 'sides' are directly opposed to each other.

Antimatter exists and has been produced, and it has been demonstrated that antimatter annihilates matter... but if the above were true, a fermion annihilated in our dimension would also be annihilated in the other. Once the energy maintaining the time-space rip is gone, the opposing time-constructs will annihilate each other as well.

The above idea describes two dimensions that would 'zero out' in all their aspects if all their matter/energy were destroyed, meaning a return to a 'perfect state' of nothing, but it is... disturbing, to say the least.

Saturday, 25 January 2014

Religion vs. Rationality - a simple exercise to demonstrate why we can't communicate.

Theist/rationalist debates have always been a source of frustration. The reason for this: our value systems are completely different, and the 'horizons' we use to orient ourselves are not even comparable. Consider the following two diagrams: try to take one item from one diagram and place it in the other. It's a difficult task... more than likely, a theist would group all the 'science items' at the same level, whereas a rationalist would group all of the 'faith-dependent' items at the same level on his graph.

Perhaps my bias shows in the choice of items on each chart, but all I wanted to do was show the 'horizon' of our respective value systems. It would be an interesting exercise to take all the items from both charts, place them in a box, then ask an interviewee to place them all on one chart, then the other. I'm sure that a theist would place things like 'bible veracity' on 'demonstrable' even if it isn't - but that would only further highlight our value differences, wouldn't it?

Thursday, 23 January 2014

Everything is light... and time?

Around one in the morning last night I watched around ten minutes of a (rather stupid) time-travel movie, and went to bed with that in my head... spending around an hour mulling 'brain-only' things usually helps me sleep. Anyway, I was thinking about the conditions that would have to be met for time travel to be possible: either somehow re-creating the entire universe at the desired point in time (not), or somehow 'reversing' the action-reaction of particles (not), which in any case would 'travel' no faster in the opposite direction than our present sense of time (not not), and that again for the entire universe (not ∞)... so I then thought about manipulating a limited area of space-time.

If I were to 'reverse' a certain number of particles (I had tossed the 'speed' factor for the time being), I would have to reverse not only the particles themselves, but the very essence of each particle, down to the quarks themselves. But this would make them... antimatter. Isolating that would be... more plausible, but again 'not'.

But then I got to thinking about 'particle reversal' and particle-antiparticle annihilation, and asked myself... do antiparticles travel backwards in time? The draw between a particle and an antiparticle is enormous, so much so that it is near impossible for us (today) to isolate an antiparticle from any particle (I digress), but anyway, in a collision between the two, imagine that their impact point is also a 'zero point' between two different 'directions' of energy AND time. This 'zero point', or 'perfect state' as I called it in an entry here three years ago (but in another context), could be what all matter in the universe is trying to attain.

I discarded my 'light bending' (into quarks) idea months ago, but I retained the conviction that something happens to EMW energy above the gamma level... what if it tore a hole in that 'zero state' (meaning it penetrated slightly into the side 'opposite to ours') and became locked into running rings around the rip's lip?

What if things happened the other way around? That is to say, with super-gamma-level energies being the ~source~ closest to the 'zero point', and all EMWs below them being residue, projectiles and smoke if you will, left over from an explosion that contained energies great enough to be 'rip maintainers' (mass-creators)? EMWs were probably (at their origin) energies emanating from that 'zero point' into 'our side'.

Anyhow, getting back to the 'energy/spacetime rip' struggle: it would probably take an enormous amount of energy to 'dislodge' that energy from its struggle between our two... dimensions (but why only two dimensions? why not dimensions in every possible direction? and must the struggle be diametrically opposed?).

To continue this line of thought, if any EMW energy greater than gamma levels is enough to create a rip in spacetime, this would mean that every point in our spacetime can be a 'potential rip'. Is this dark energy/matter?

****** intermission music ******

Now let's play with this spacetime-rip idea a bit. If we imagine that an initial explosion in all directions in space and time managed to create rips in all directions... for simplicity's sake, let's just take two diametrically-opposed dimensions.

An explosion towards our 'construct direction' in spacetime (called 'dimension' hereon for simplicity's sake) would create a rip-ring (again, 'quark' for simplicity) whose energy field lies more towards our dimension, and an explosion in the opposite direction would create the opposite (in fact, the exact opposite could be true (an explosion into our dimension leaving the majority of its energy on the 'other side'), but the result would be the same). A 'positive' quark would be a rip whose energy amplitude extends in its majority into our dimension (let's say by 2/3 for simplicity), and a negative quark, the opposite.

These rips, without any 'energy controller' maintaining them open, would just close. A rip-energy combination, or quark, would remain stable as long as it wasn't approached by another. When two positively-charged quarks approach each other, their respective 'rip' cores would be drawn to each other (much like two whirlpools), but probably not at a very high rate/strength (could this be the 'weak force'?), and they would be kept apart by their similar energy amplitudes extending into our/our opposite dimension (their charges would not 'draw across' or 'zero out' across the spacetime rip). Yet should two oppositely-charged quarks approach each other, their charges ~would~ zero out across the spacetime rip, and they would annihilate each other. The process is probably much like the effect of two magnets approaching each other: the closer they are, the stronger their pull towards each other, a pull that probably grows steeply, like the inverse square of the distance between them.
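For scale, the real-world analogue of that mutual pull between opposite charges is Coulomb's law, an inverse-square relation (standard physics, not part of my rip model):

```latex
% Coulomb's law: attraction between charges q_1 and q_2 separated by r
F = k_e \, \frac{|q_1 q_2|}{r^2}, \qquad k_e \approx 8.99 \times 10^9 \ \mathrm{N\,m^2\,C^{-2}}
```

Halving the separation quadruples the pull, which matches the 'closer = much stronger' intuition above.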

If it were this simple, the universe would appear and annihilate itself in an immeasurable length of time. Perhaps most of it did. But hadrons are composed of ~three~ quarks, two of one kind and one of the opposite: two positive ('up') quarks eternally trying to annihilate a negative ('down') quark make a proton, and two negative ('down') quarks and one positive ('up') quark make a neutron. Did I even have to outline this?
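The standard quark charges make the arithmetic work out (these are the accepted Standard Model values, in units of the elementary charge e):

```latex
q_p = 2\left(+\tfrac{2}{3}\right) + \left(-\tfrac{1}{3}\right) = +1 \qquad \text{(proton, uud)}
q_n = 2\left(-\tfrac{1}{3}\right) + \left(+\tfrac{2}{3}\right) = 0 \qquad \text{(neutron, udd)}
```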

Okay, to represent an 'up' quark, let's draw a symbolic horizontal EMW wave, with a horizontal line across it leaving 1/3 of its amplitude below and 2/3 above. The line is the 'zero point', and everything above it is 'our dimension'.

There are two things to notice here: although most of the wave amplitude (energy) is in our universe, the 'draw' from the other side is weaker. If we look at the present Standard Model of elementary particles, we see that 'up' quarks have more charge and less mass. If we move the line up by 1/3 to represent a 'down' quark, less of the wave sits in our dimension and more on the 'other side'; the same table will show you that 'down' quarks have less charge but more mass.
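For reference, the measured Standard Model values behind that comparison (approximate Particle Data Group figures; the 2/3 vs. 1/3 amplitude split above is my own picture, not theirs):

```latex
\text{up:}\quad q_u = +\tfrac{2}{3}\,e, \qquad m_u \approx 2.2\ \mathrm{MeV}/c^2
\text{down:}\quad q_d = -\tfrac{1}{3}\,e, \qquad m_d \approx 4.7\ \mathrm{MeV}/c^2
```

So the 'down' quark does indeed carry less charge (in magnitude) and roughly twice the mass.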

My earlier 'the oscillation of a looping EMW = gravity' idea makes more sense when it is placed as a ring around/inside/outside a spacetime rip, because part of that lateral action is taking place in a spacetime direction opposite to ours.

So, in summary, the idea I describe above is an 'extreme-frequency-between-spacetimes-oscillating ring of energy', an elementary particle that has charge, mass, gravity and the weak force.