Monday 30 September 2019

Those Who Would Let Other Humans Think 'for' Them.

The Programmable Human

Even before anything we learn in life, survival is the most basic function/instinct of the 'human machine': every move we make, even scratching our noses, is in the interest of bodily well-being. Our emotions, or our trainable 'subconscious value judgement' system, are the brain's reward or discomfort response to any given situation: without this basic system, we wouldn't be motivated to move, or to do or think anything at all.

From our very basic, perhaps even hard-wired, emotional responses to everything 'familiar' to us in early life, that is to say things like 'mother', 'milk' and 'warmth', we learn to expand our sphere of acceptance to other 'trusted' useful-for-survival tools shown to us by our also-expanding circle of 'trusted protectors', people usually presented to us by already-accepted trusted protectors. Through this we 're-train' our initial fear response to those people, animals and things unfamiliar to us, and before long in life our brains will have established a library of 'recognised' entities that no longer incite any sense of fear and/or revulsion. Even further on in life, we are able to identify the 'type' of person or thing that our peers and protectors obey/use, and accept those into our sphere of fear-free acceptance and trust as well. To the pre-adolescent, anything that has become part of this sphere is their 'trusted normal'2, and they will still have a fear response (to varying degrees) to anything outside of it.

Here we should also consider how the brain works on a subconscious and conscious level: what we call our 'consciousness' only seems to 'see' a small percentage of what our senses perceive, and the content of the relayed information seems to be dictated by whatever our subconscious deems 'important' to it (or its survival): this 'importance' is dictated by all the '(what is) safe bubble' training described above. Two people in the exact same situation may 'see' different things: if one has developed an affection for, say, a red ball, and they are placed in a warehouse full of jumbled toys, they will 'see' red balls everywhere (and have a positive 'reward' emotional response upon the sight of one), whereas someone else without that experience may not notice them at all. So, not only does our early-life experience determine what and who we trust (and what we fear outside of that), it can even determine how we perceive the world around us.

In 'learning' through the above imitated example and empirical experience, there is rarely (if ever) any call for us to make a personal assessment of any 'lesson' given3: if the 'trusted' human showing us the example is part of what we 'know', and the result of whatever lesson they give doesn't affect whatever notion of comfort we've developed until then, we have more or less the tendency to simply accept it as 'good' (for our survival). In fact, I would like to propose that, at this stage, the very definition of 'good' and 'bad', outside of physical pain or discomfort, is how familiar whatever is being proposed to us is.

Some would like to call our early-life experience 'education', but if what is affecting our internal brain function is the direct result of our environment or outside, imitated-without-question example, programming would be a better descriptive term.

It remains to note that, for humanity, in times where we were still faced with the challenges of nature, nature was just as much, if not more, our education than the examples our protectors set for us: our emotional reactions to all that was dangerous (or unfamiliar) to us most likely determined our chances of survival. As humanity began to gather in greater numbers, and thus protect itself from and distance itself from the tests of nature, the dangers in the world around us became less 'real' (almost distant threats, scary tales, really), but the emotional responses that were a defence against these remained quite intact; it's not for nothing that many of us still get a 'thrill' out of horror films and ghost stories today.

But to not digress, through controlling the environment, clan members, culture, knowledge, and customs of any given settlement, it became possible to 'homogenise' the early-life experience of its younger members, that is to say, impregnate their minds with a 'sameness' with each other, and also impregnate their minds with a fear of all those not 'like' them.

Our around-adolescence 'Switch' to critical thought: a tool no longer needed.

When we come to the point in our lives when the brain gains the ability to discover and analyse (aka 'critical thought'), we are suddenly able to, instead of learning through simple imitation and obedience, question and examine everything we've learned to that point, should those lessons instil emotions of doubt and/or discomfort, and this analysis can even extend to those who were the source of these lessons.1 I think it's important to mention the latter because, from our emergence from nature, our greatest teacher was no longer nature, but other humans.

Yet in that time when we lived in competition with creatures of other species, and nature itself, we were often obliged to test those early-life lessons empirically, and, using our critical thought abilities, eliminate, modify and/or improve those found wanting, and this also became essential to our survival.  Again (from earlier posts), the Australian aboriginal 'walkabout' is a still-existing 'coming of age' tradition that is a perfect example of this: either the adolescent practically uses/tests all they've learned until then, or the result would almost certainly be death.

But once humans became more sedentary in greater numbers, the 'need' for critical thought waned: agriculture and animal husbandry techniques could be passed down, unchallenged and unchanged, through the generations, and distributed roles in any given settlement meant that a single human was no longer required to learn a full survival skill set. Critical thought seems to have been reserved for those distributing roles and setting the rules, but where 'tradition' became a concept and/or rule, it most likely became possible to propagate knowledge and techniques, through simple example and imitation, through the generations.

Critical Thought and Ambition: 'Blocking The Switch'.

Yet if a lesser-able human wanted to 'rise above' a survive-through-imitation (largely non-critically-thinking) settlement whose hierarchy was dictated by age or ability/strength, they had little choice but to resort to critical thought to dominate non-critical thinkers: a situation, not without irony, much like that of an earlier human hoping to emerge victorious from a competition with creatures more agile or stronger than they. And all one had to do was transition to, and develop, critical thought enough to outwit or manipulate those higher up in the food chain, or convince/manipulate enough humans to create an army of their own against the same.

And once in their desired position, it is obvious that many, if not most, of history's leaders of all calibres saw that a maintained state of non-critically-thinking, childlike survival-dependent mentality in an adult population would create a faithful, dependent, unquestioning, conformist, thus controllable, following. The most useful tool to this end was transposing the child's protector-dependent 'rule-based' (or punishment) environment onto an adult population, thus convincing any child in that society that that childhood state was perpetual, or in other words, convincing them that there was nothing to transition to, that there was no other state of being in which it is possible to survive, which meant that, to the follower-believers, everything outside that 'conform/obey-or-else' environment became a great, fear-inducing 'unknown'.

Through history, the forms this tool took were many: some of history's leaders simply jailed or eliminated all those who would 'dare' question, counter or ignore their authority and dictate (thus reinforcing the no-example-to-transition-to state), and yet others found it useful to hide behind psychology-manipulating concept-tools that tapped into the immature human's fear of separation from their 'protector-provider' (and the 'known, same' following who obeyed their dictate), their fear of punishment, and their most innate and unthinkingly instinctive fear of death (promise of immortality, etc.). No matter the tool used to get them there, the definition of 'good' for a human in this arrested state amounts to 'same', that is to say, 'same' as whatever they (and others 'like them' following/thought-dependent on the same 'leader') were programmed with until then.

And adult humans in this state are very, very, manipulable and corruptible: throw a few scraps from the leader's follower-fed table to a few 'chosen' (-by-the-leader) followers, and they eerily almost instantaneously transform from unquestioning followers of the leader's dictate into enforcers of the same: again, no matter if their form is superstitious-threat or demonstrable-threat based, examples of the resulting three-level dictatorial hierarchy model (see: of Shepherds, Sheep-dogs and Sheep) can be seen all through history.

The Above transposed onto Modern Society.

Many in more education-and-technology developed societies would like to think themselves exempt from, or immune to, the above dictatorial systems, but they seem strangely blind to the existence of the same, in the form of sub-cultures, in their own would-be democracies: if at least a majority of a population that would call itself a democracy isn't thinking for themselves, it isn't one.

Some of the earlier-described switch-blocking tools have proven so effective over the millennia that, even in this post-enlightenment, dwindling-superstition, information-laden world we live in today, a few quite unworthy-of-leadership (or even consultation) humans are desperately trying to hang onto them through attempting to even further intellectually cripple future would-be followers (while setting things up for an easier elimination of future dissenters). And this seems to be the state of the things in the U.S. today.

But things have evolved a bit further than that: those who would shape society through hiding behind imaginary proxies (while shifting attention, responsibility and accountability onto the same) have provided those dictating today's economy with a useful example: since Reagan (and some would say earlier), fear-of-other-spreading politicians have served as very effective distractions from those who are really doing the decision-making, those who decide which products we consume, all while fighting amongst themselves to be the one to control the whole of the cash-cow that is our thoughtless complacency: even those consumers aware of this situation are guilty of supporting it to some degree, but in today's world, it has become nearly impossible to find any alternative to it. But the battle for a real consumer awareness (thus a change to the status quo) has only just begun.

One would think that the advent of the internet would have facilitated the dissemination of rational, educated, demonstrable thoughts and ideas to the world, but it has also made it easier for would-be dictators (and their followers) to spread disinformation (fear), bigotry (fear), and irrational fear-of-'other'(-than-followed-dictate) ideas (also fear), and experience has shown us that those who 'need' to make the most noise are often those least deserving of our attention. To one seeing our networks and screens monopolised by this desperate brouhaha, it may seem that our world is dominated by it, but a closer examination of the declining-criminality-and-war real state of things shows that this is not so.

Filling the Void.

Many would-be dictators disparage the loss of the 'community' aspect that their respective regimes used to bring, and it is true that, at least for the time being, there is not much on the horizon to fill the void, but, in this author's humble opinion, this is largely due to the demoralising effect of their respective noise-machines (which makes their complaints disingenuous to the core). And the answer to this noise, at least for the time being, seems to be something best described as a disparate, too-multi-faceted (and distracting-from-real-problem) utopian fog of ambiguity, because, yes, although seemingly well-intentioned, many who would like to make a safe place for themselves in society are not (critically-)thinking beyond the survive-by-imitation bubble of their own 'identity' (sense of comfort, 'self'), either.

So, against a 'united in sameness' (and fear-of-different-from-that) voter bloc, what do we have to counter it? For the time being, all we have is a largely silent 'meh' (non-)voter bloc peppered with small-in-comparison 'identity' groups. Concerning the latter, the focus should be on the non-rational fear-of-different survive-through-imitation(-panderers) causing the exclusion, not the excluded. Already, a 'united against all forms of bigotry'4 force would be one to reckon with.

The 'meh' (non-)voter bloc seems to feel that their voice doesn't count, that their voice doesn't matter... but are most of us not living in a democracy? What if we replaced the centuries-old 'tradition' of weekly irrational-and-indemonstrable-superstition-and-fear-themed meetings with others that are places to make our thoughts as individuals heard and recorded, to compare, discuss, and morph our individual thoughts into consensus?5 If such a thing were organised around administrative communities from a grassroots level, and the results published to recorded history (online) where others can see and compare (and think about!) the results, hell, I'd participate. And that also could be a force for the ignorance-exploiters-and-panderers-that-be to reckon with.

In short, while the 'follow-minded' go to rallies organised 'for' them by those who dictate 'for' them what's 'good' or 'bad' 'for' them, we who 'dare' think for ourselves would better organise meetings where we can decide, between ourselves, what's good or bad for ourselves.




1 - In any case, it has been widely demonstrated that the brain undergoes a 'pruning' process around adolescence.
2 - This in itself is complex: a child knowing nothing but squalor might not perceive this state as 'uncomfortable'.
3 - Emotions such as empathy (a sense of sharing, and the brain 'rewards' thereof) may come into play here, but are omitted for simplicity's sake.
4 - No, 'thwarting our promotion of bigotry' is not bigotry.
5 - Does this remind anyone else of anything Classical Greece taught us?

Thursday 13 June 2019

Independence of Mind without Resources

I come from a position of disadvantage. I had no family fortune (or hardly any help at all, for that matter) to get me going in life. I am doing trades that have nothing to do with my education (an education whose promised end result was a ('successful') 'being like everyone else', and one I did not understand or even see the point of at the time), so I became entirely dependent on 'networking' to find work (as there are few 'traditional' companies who would take my five-page-long multi-trade CV seriously). In fact, in all my 47 years, I have held two full-time salary jobs: the first convinced me to get the hell out of that 'jumping through hoops for rewards' rut and do something for myself, and the second... well, was so 'easy' that it temporarily corrupted my results-based work ethic (and I doubt that I'll ever see an opportunity like that again).

So when one is without resources, they have only their willingness to work to count on, and intelligence (education, experience, imagination) comes into play, too. But when confronted with real-world situations, we run into a problem: (comfortable) humans with resources are, paradoxically, often those with the least will to work and imagination. So when I, seeking resources, show up with my ideas and willingness to work, the resource-provider has the option of just taking the former... and, if my situation of precarity (foreigner without resources) becomes too evident, they have the additional option of making me do all the work, then reneging on their side of the deal. It was often like this until I became less trusting of 'the better angels of our nature' (but I still fail there from time to time).

But my road to this understanding was a long one. I began from a place of utter naiveté (my childhood was fairly devoid of 'normal' human interaction), a (childhood) lack-of-affection-generated over-eagerness-to-please, and a total disability when it came to dealing with dishonesty (I tended to wax credulous in reaction to even outrageously dishonest claims, or blame-and-responsibility displacement (onto me)). All of this tended to lend value to the existence of others around me, and none to my own. And a childhood-instilled lack of confidence in myself added to the mix: until recently, I had a hard time demanding a decent wage for my work (because I (somehow) felt that I didn't 'deserve so much'). Also figuring prominently was my (also childhood-instilled) credulity towards - and fear of - authority: only through direct work with such supposed 'adults' was I able to dispel that misconception, because many 'authority figures', most all of them in places of comfort, are actually lesser beings (utility-and-survival-wise) than the average worker, with less imagination, too.

Am I laying blame for all that? It's hard to, because everyone involved was most likely convinced that they were doing the 'right thing' (at the time they were doing it). And humans with no value-judgement abilities (or desire or will to learn them, or to accept the responsibility for making such judgements) will repeat the same patterns as long as it 'works' for them (meaning: as long as it doesn't put their survival (comfort) in jeopardy). Some concerned actors probably still don't understand the error of their ways even today. When considering such things ('fault'), it's hugely important to consider their motivation, and whether they were knowingly doing damage/taking advantage... and that's often hard to determine, as feigned indignation is a common 'defence' in situations of idea-reality discord/dishonesty, too.

The curse is triple when one considers that, with that understanding, not only will a resource(-or-safety-net)-free person be sure to be exploited, they will often be obliged to accept that exploitation with a full understanding of the imbalance of it all... or retire from society completely. But how can one do that without any resources of one's own and survive?

Monday 3 June 2019

Revising Copyright: Quality Control + the Attribution System.

Already I'm dismayed at seeing those who have done no work benefit from the invention/work of others. Only the morally bankrupt (see: a socio/psycho-path) could ever do this. Damn Edison for creating the 'model' of the investor (they who have already profited from/exploited the work of others) getting the credit and profit from an inventor's innovation and work, and not the inventor. Who actually invented the lightbulb? You probably still have no idea.

But with that little rant out of the way, how do we treat copyrighted material in this internet age?

The powers-that(-would)-be seem to be clinging, 'all or nothing', desperately to an old-world copyright system, and it is failing them, as it is impossible to locate and control all points of data exchange. Not only do their vain attempts to locate, remove, paywall or monetise copyrighted material fail, but their efforts can become an incentive to piracy.

It goes beyond there: especially annoying is the 'copyright paranoia' reigning on one of the world's principal sources of information, Wikipedia: magazine and album cover-image use is restricted to an article about that magazine or album, making it impossible to use such art for articles on a band member or book author. As a demonstration of this last point, I am at present working on the article about Camera magazine editor Allan Porter, and I cannot use any images of the books he is the author of or worked on. Even the portraits of him (given to me by the man himself) are under strict control, and cannot be above a certain pixel dimension. I do understand the reasoning behind this, but this tongue-tied practice is only kowtowing to (thus enforcing) the existing 'system' without doing anything at all to change it.

It's about the quality, stupid.

I thought this even back in Napster days, when the music industry moguls were doing their all to track down and remove/paywall any instances of 'their' product. The irony is that the solution to their dilemma already existed in the quality standards of online music: 128 kbit/s, a quality comparable to a radio transmission, is palpably better in quality than the 96 kbit/s some 'sharers' used to save bandwidth on a still-slow internet. Yet who would want to listen to the latter on their hi-fi stereo system? It might be interesting to consider a system where only the free distribution of music above a certain bitrate is considered piracy.
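To make the idea concrete, here's a minimal sketch (in Python, using the mutagen tagging library) of what such a bitrate-gated rule could look like; the 160 kbit/s cutoff and the 'shared_music' folder are purely illustrative assumptions, not a proposed standard:

```python
# Hypothetical bitrate-threshold check: files above the cutoff would be
# 'restricted', anything below would be free to share. Sketch only.
from pathlib import Path

from mutagen.mp3 import MP3  # reads MP3 headers, including bitrate

CUTOFF_BPS = 160_000  # illustrative threshold, in bits per second


def above_cutoff(path: Path) -> bool:
    """Return True if the MP3's bitrate exceeds the chosen cutoff."""
    return MP3(path).info.bitrate > CUTOFF_BPS


for mp3 in Path("shared_music").glob("*.mp3"):
    status = "restricted" if above_cutoff(mp3) else "free to share"
    print(f"{mp3.name}: {status}")
```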

The same goes for images: even from my photographer's point of view, I consider any image I 'put out there' as 'lost' (that it will be freely exchanged and used), and it is for that reason that I am very careful to only publish images below a certain pixel dimension online.

Automatic Attribution

It would even seem that a free distribution of low-quality media would benefit its authors from an advertising standpoint, but... it is still rare to see an attribution on any web-published media even today. So how can we easily attribute a work to its author?

I think the solution lies in something similar to the EXIF data attached to most modern digital images: were this sort of 'source' info attached to all file-format data circulating on the web, we would have no more need to add/reference (often ignored, and still-rudimentary) license data, and our website applications could read it and attribute, perhaps with a link-accreditation (an overlay for images, a notification for music, for example), automatically... and this would demonstrably be a boon to media authors.
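As a rough illustration of how simple the reading side could be, here is a sketch using Pillow's EXIF reader; the Artist (315) and Copyright (33432) tag numbers are standard EXIF fields, while the file name and output format are invented for the example:

```python
# Sketch: build an attribution line from an image's embedded EXIF data.
from PIL import Image

ARTIST_TAG = 315       # standard EXIF 'Artist' tag
COPYRIGHT_TAG = 33432  # standard EXIF 'Copyright' tag


def attribution_line(path: str) -> str:
    """Return a credit string from the image's EXIF data, if any."""
    exif = Image.open(path).getexif()
    parts = [exif.get(ARTIST_TAG), exif.get(COPYRIGHT_TAG)]
    found = [p for p in parts if p]
    return " / ".join(found) if found else "(no embedded attribution)"


# a site's image-display code could overlay this automatically:
print(attribution_line("photo.jpg"))  # e.g. 'Jane Doe / CC BY-SA 4.0'
```

A browser or site application doing this for every displayed image would make the attribution as unavoidable as the image itself.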

And it doesn't end there: this ties into the RDF 'claim attribution' system I am developing, as this add-on would allow the media itself to be perfectly integrated into the 'data-cloud' that would be any event at any given point in time... but, once again, I digress.

Monday 29 April 2019

ANN (Artificial Neural Network) OCR: A likely dead-end method. Considering a new approach.

In my recent dives into AI-dependent endeavours, I've been presented with the gargantuan task of extracting data from countless pages of printed, and often ancient, text, and in every case, I've run up against the same obstacle: the limitations of Artificial Neural Network (henceforth 'ANN')-dependent OCR.

For starters, ANN is but an exercise in comparison: it contains none of the logic or other processes that the human brain uses to differentiate text from background, identify text as text, or identify character forms (why is an 'a' not a 'd', and what characteristics does each have?). Instead, it 'remembers' through a library of labelled 'samples' (images of 'a's named 'a') and 'recognises' by detecting these patterns in any given input image... and in many OCR applications, the analysis stops there. What's more, ANN is a 'black box': we know what's in the sample library, and we know what the output is, but we don't know at all what the computer 'sees' and retains as a 'positive' match. Of course it would be possible to capture this (save the output of every network step), but I do not think this would remedy the shortcomings just mentioned.
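To illustrate how little 'understanding' is involved, here is a deliberately crude sketch of recognition-as-comparison; the 8x8 glyph arrays are stand-ins for real character images, and a real network only refines this kind of matching, it does not add logic to it:

```python
# Toy 'recognition by comparison': match an input glyph against labelled
# samples by raw pixel distance. Nothing here knows *why* an 'a' is an 'a'.
import numpy as np

rng = np.random.default_rng(0)

# label -> 8x8 binary glyph image (placeholder 'labelled samples')
library = {ch: rng.integers(0, 2, (8, 8)) for ch in "abd"}


def recognise(glyph: np.ndarray) -> str:
    """Return the label of the closest stored sample (nearest neighbour)."""
    distances = {label: int(np.abs(glyph - sample).sum())
                 for label, sample in library.items()}
    return min(distances, key=distances.get)


print(recognise(library["a"]))  # trivially 'a'; an unseen font may not match
```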

The present logic-less method may also be subject to over-training: the larger the sample library, especially considering all the forms (serif, sans serif, italic, scripted, etc.) a letter may have, the greater the chance that the computer will find 'false positives'; the only way to avoid this is to do further training, and/or do a training specific to each document, a procedure which would limit the required library (character styles) and thus reduce error. But this, and further adaptation, requires human intervention, and still we have no means of intervening in or monitoring the 'recognition' process. Also absent from this system is a probability determination (one is present, but only as an 'accepted' threshold programmed into the application itself), and this would prove useful in further character and word analysis.
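The point about thresholds can be shown in a few lines; in this sketch (the raw scores are invented), the application's hard cutoff throws away exactly the distribution that later word-level analysis would need:

```python
# Sketch: a hard 'accepted' threshold discards the useful probabilities.
from math import exp

raw_scores = {"a": 2.1, "d": 1.9, "o": 0.3}  # stand-in network outputs

total = sum(exp(s) for s in raw_scores.values())
probs = {ch: exp(s) / total for ch, s in raw_scores.items()}  # softmax

THRESHOLD = 0.9  # the cutoff baked into many applications
best = max(probs, key=probs.get)

if probs[best] >= THRESHOLD:
    print("accepted:", best)    # every other candidate is discarded
else:
    print("uncertain:", probs)  # this is the information worth keeping
```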

And all the above is specific to plain text on an uncluttered background: what of text on maps, partly-tree-covered billboards, art, and multi-coloured (and overlapping) layouts? The human brain 'extracts' character data quite well in these conditions; there are other deduction/induction processes at work there that are absent from ANN as well.

Considering Human 'text recognition' as a model.

Like many other functions of the human brain, text recognition seems to function as an independent 'module' that contributes its output to the overall thought/analysis process of any given situation. It requires creation, then training, though: a dog, for example, might recognise text (or any other human-made entity) as 'not natural', but the analysis ends there, as it has not learned that certain forms have meaning (beyond 'function'), and may so ignore them completely; a human, when presented with text in a language they were not trained in, may recognise the characters as 'text' (and there are other ANN-absent rules of logic at work here), but that's about it.

What constitutes a 'recognised character'? Every alphabet has a logic to it: a 'b', for example, in most every circumstance, is a round-ish shape to the lower right of a vertical-ish line; stray from this, and the human brain won't recognise it as a 'b' anymore. In using the exact same forms, we can create p, a, d, and q as well... the only things differentiating them are position and size. In fact, in all, the Roman alphabet consists of fewer than a dozen 'logic shapes'.
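A toy encoding of that 'logic shapes' idea might look like the following sketch; the primitive names ('stem', 'bowl-lower-right', etc.) are invented for illustration, but they show how four letters fall out of two shapes and a position:

```python
# Toy 'logic shapes': the same stem-plus-bowl primitives distinguish
# b, d, p and q purely by the bowl's position relative to the stem.
SHAPES = {
    ("stem", "bowl-lower-right"): "b",
    ("stem", "bowl-lower-left"): "d",
    ("descending-stem", "bowl-upper-right"): "p",
    ("descending-stem", "bowl-upper-left"): "q",
}


def identify(stem: str, bowl_position: str) -> str:
    """Map a (stem, bowl-position) pair to a character, if the logic knows it."""
    return SHAPES.get((stem, bowl_position), "?")


print(identify("stem", "bowl-lower-left"))  # -> 'd'
```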



Not only can the human brain detect and identify these forms: it can also 'fill in the blanks' in situations like, say, a tree branch covering a billboard: the overall identification process seems to be an initial 'text / not text' separation, then the removal of 'not text' from the picture; then it seems to 'imagine' what the covered 'missing bits' would be, and this is submitted for further analysis.

But the same holds true in cases where a character is badly printed, super-stylised, missing bits, etc.: in fact, if a word is not instantly readable (and this is a highly-trained process in itself), the brain seems to 'dig down' a level to determine what the 'missing' character should be, and 'matches' against that, and this is a whole other level of analysis (absent from ANN). In fact, were we able to extract the probability of a character match for every given character of any given word and compare this to a dictionary, we would not only create another probability (the 'refined' chances of 'x' character being 'y'), we would also have a means for the computer to... train itself.
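Here is a minimal sketch of that dictionary-refinement step (the probabilities and word list are invented): each candidate word is scored by the joint probability of its characters, and the dictionary resolves the damaged glyph; the winning word could then be fed back as a new labelled sample, which is the self-training loop proposed above:

```python
# Sketch: refine ambiguous per-character probabilities against a dictionary.
from math import prod

DICTIONARY = {"cat", "car", "cab", "cut"}

# one {character: probability} map per position; the middle glyph is damaged
char_probs = [
    {"c": 0.95},
    {"a": 0.50, "u": 0.45},   # nearly a toss-up on its own
    {"t": 0.80, "r": 0.15},
]


def word_score(word: str) -> float:
    """Joint probability of the word under the per-character estimates."""
    return prod(pos.get(ch, 0.0) for pos, ch in zip(char_probs, word))


best = max(DICTIONARY, key=word_score)
print(best, round(word_score(best), 3))  # -> cat 0.38: 'a' wins via context
```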



Thursday 7 February 2019

Gravity and Light (energy) is Everything.

Everything is gravity. Everything is light. Everything is both, and they're indissociable. In fact, the two together are a constant in itself, a constant that represents the base, or the base function, if you will, of everything existing in our universe.

The Inverse Square law is omnipresent in both gravitational equations and magnetism (note: in an earlier post, I hypothesise that both are a variation of the same thing): the closer one measures to a given particle, the higher its gravitational pull, in a proportionately constant way. Concerning gravitation, Newton states:
"I ſay that a corpuſcle placed without the ſphærical ſuperficies is attracted towards the centre of the ſphere with a force reciprocally proportional to the ſquare of its diſtance from that centre."

- Isaac Newton
Or, 'translated' into modern equation form:

$F = G \frac{m_1 m_2}{r^2}$

For much of modern physics, this 'law' holds true until we reach a hard-to-observe (-and-never-observed) sub-atomic level, but 'goes west' when immersed in the sea of mathematical hypotheticae beyond. I'd like to maintain that Newton's observation holds true all the way down, but to do so I must explain my ideas on the dynamics that lead to this.

Every Particle: a Point Divided.

For the purpose of this article, 'particle' designates any non-construct manifestation of energy, that is to say any single photon, quark, electron, etc. (without considering the dynamics that will be explained later).

Where there is a particle, there is gravity. Every particle, isolated, is in a 'stable' state, an energy maintaining a constant 'resistance' against that gravitational force, in an action that could almost be considered an orbit. Gravity is considered to be a 'weak' force (not to be confused with the actual 'weak force' physics concept), as the gravity emanating from a single particle is almost immeasurable, and only combined (into a mass) does it begin to gain discernibility, but we think that only because we can only approach and observe said particle from outside a certain distance.

And it's that concept of 'distance' that has to change: it's only recently that we've begun to understand that it is quite possible, and quite normal, to observe and manipulate physics phenomena below the level of human observation; in the present day, we seem to have paused at the sub-atomic level, but, like a camera zooming in from a view of the entire earth to a single electron, the plunge can go far, far beyond the latter point.

We seem to apply the 'inverse square' observation without considering what may happen at that extreme depth (of observation). Already we know that if we take the idea of a hydrogen atom and 'blow up' its proton to the size of a pea, one would need a football stadium to contain the orbit of the electron around it. And already the 'binding strength' (that physics does not consider as gravity) is quite strong. So should we retain the distance-attraction-strength aspects of the above model (because the 'has mass' atom model itself is not what we're looking at, here), we see that, at that level, the electron (particle) is still quite 'far away' from the point attracting it. But what if we were to take this dynamic to an even deeper level, even closer to any given point of attraction, within a particle itself?

Before we go there, I'd like to return to the 'particle stability' dynamic. As has already been observed, different particles have different energy levels: I'd like to propose that that energy level is directly tied to the distance from the 'centre point' it is bound to, that is to say, the source of the gravitational force. The basic rule (here) is: to maintain particle stability, the closer an energy is to its source of gravity, the higher it has to be. In fact, if we can accurately measure the energy level of any given particle, then, using the inverse-square observation, we should be able to calculate that energy's distance from the centre of gravitational pull attracting it. So if we were to consider the gravity from the 'weak bind' hydrogen atom model, and increase that inverse-squared down to the level of a single particle, the gravitational pull there must be enormous indeed... thus so must be the energy levels required to maintain that state at that level, also.
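Put as a worked form (a sketch only, assuming the premise above that the binding force on a particle's energy follows an inverse-square law with some constant $k$, all symbols illustrative): if $F(r) = \frac{k}{r^2}$, then the energy needed to hold a 'stable' state at distance $r$ scales as $E(r) = \int_r^\infty \frac{k}{r'^2}\,dr' = \frac{k}{r}$, which rearranges to $r = \frac{k}{E}$. Halving the distance to the centre doubles the required energy, and a measured $E$ would then fix $r$, as proposed above.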

Side-note: Light (EMR) is a particle; everything above (in energy level) is two halves of one. 

To avoid referring readers to earlier posts, I'd like to summarise them briefly here: there I posit that any ElectroMagnetic Radiation (EMR) 'boosted' above a gamma-level energy will split into two 'halves', and that their 'forward' light-speed constant (in relation to their respective points of origin) will be no more. These two 'halves' will have what modern physics calls 'charge', and they will be opposite, that is to say, positive and negative. Consider a waveform with a line down the centre of its path: everything above will be 'positive', and everything below 'negative'. Yet, although separate, those two halves are still one unique entity. This should not be confused with Einstein's 'spooky action at a distance' because, although the effects of this phenomenon would be the same, his hypothesis (rather, his reflections on someone else's work) about the creation of that condition is quite different.

I suppose that I should also outline what might happen after that split: here I posit that the halves of a 'split fermion', since they are halves of a unique 'thing', will not be attracted to each other, and cannot annihilate each other, but opposing halves of two different particles can, or at least they'll try. Since the respective particle energies are close enough to their respective gravitational centres to allow another opposing particle half to get close enough to be captured by the enormous gravitational draw at that proximity, the two will attempt to bind, in a dynamic modern physics calls 'the strong force'. I won't get into the dynamics of 'particle(-half) binding' here, but the only 'stable' particle-half combination seems to be a trio, either two positives and one negative, or vice versa, locked in an eternal inter-annihilation struggle, and, depending on the polarity, the result is either a proton or a neutron hadron... and this brings us back up to the atomic-level physics we know.

Model conclusion

I've done my best to explain my ideas about an 'energy vs. gravity' dynamic of any given particle in the simplest way possible (and I hope I succeeded), but if any of this stands up to testing, another reality becomes true: all single particles, that is to say photons, quarks (all 'colours'), electrons (and positrons), neutrinos, etc., are but variations of the same thing.