There's really something to say about the Dunning-Kruger effect: I'm constantly fighting to overcome it, and because of it, I'm not even sure how successful I am to that end. I keep going on about the importance of critical thought, but because of the above, I don't think I'm even very particularly good at it: those notions of self-doubt ground into me through my entire childhood plague still my every thought process (albeit, today, to a lesser degree), and that probably causes me to miss options and certainly to doubt outcomes (second-checking), and that lengthens what should be a 'normal' thought process enormously. Re-routing around early-life indoctrination ('programming') is a long process and takes sometimes decades of work.
And I got into it in the worst possible way: something happened to me in early life that 'flipped the switch' for autonomous thought... but I only can speak for my own mind, so perhaps this happens to everyone, and it eventually gets ground out of them. Anyhow, it seemed early on in life essential to my (subconscious idea of) survival that I understand everything happening around me; if something (I was told to do/imitate) didn't make any sense, I simply couldn't do it. What made things even more confusing was the obvious evidence (that no-one seemed able to see) that the 'ideal' goal of 'simply obeying' was a stressful, unsatisfactory, unhappy, and unfulfilled life. And the goals everyone was striving to 'fulfil' were almost never their own.
And when presented with an 'example' to follow, I soon found that the very act of questioning was enough to destroy, interrupt, 'poison' what was supposed to be a 'normal' social process. Yet not only did I not understand this, I had no answer for it (once one knows enough to question, how can one not question (especially things obviously questionable)?). That was a divide that I was never able to overcome, and one I only recently began to understand.
The root of that difference is in the 'survival' I mentioned earlier. It sounds strange and almost cliché said like that, but that is actually how our subconscious works: every decision we make is rooted in, and depends upon, that survival instinct.
And there is a 'switch'. How does one 'survive'? Whether we are familiar with our environment or not, we have the options of a) learning about everything that environment contains, and making our own decisions about what's good and bad in it, or, b), should we observe others already 'surviving' in that environment, imitating them (based on their overall healthiness and happiness).
But again, to one that has always taken the second option, the first option does not even exist... or, at least, it would most likely not come to their mind as a choice of action.
And therein lies the divide. To one that relies on imitating others for survival, any deviation from the (group) 'survival model' is 'different', and this can even mean 'danger'. So questioning the survival model is, in itself, already enough for a survive-through-imitation-er (have to think up a term for this) to 'out-group' anyone doing it... and once out-grouped, a person so deemed will probably always be mistrusted at a deepest level... unless of course they make some display of total submission. And this 'same or not' pattern-matching comparison (that the non-critical-thinker seems condemned to), too, is a 'switch'.
So how is one to negotiate with one who knows (or cares) not to reason, but only to imitate?
The thing with people able to make their own value judgments (think critically) is that they're persuaded that everyone else can think critically, too. Yet to find the answer to the above question, they have only to think back to the time (probably their childhood) when they couldn't think critically, or were still new it... and this is a hard thing for some to do.
Convincing someone who survives through imitation to deviate from their 'chosen' (often 'programmed' by others) behaviour pattern is almost an exercise in manipulation: either we have to convince them that an option 'outside' their comfort zone (programming) was a) their idea, or that b) everyone else in their 'in-group' has already opted for it (making it look like they are 'behind' or 'different', and this would make them even eager to adopt the new model (to conform)). And if one needs evidence of this and evidence that some, if not many, are already aware of it, one has only to look at almost any and all advertising.
Even more maddening, since the survive-through-imitator can/will not judge the value of anything for themselves, is the fact that they will continue to refuse to change their minds even when buried in evidence; when one knows not to judge the value of something, how can the value of that evidence be determined? In short, it can't (and all that is left/that 'registers' is the 'default' comparison (to an existing survival programming/model)).
Ridicule doesn't work, either, unless their 'in group' somehow joins in against the targeted... but, again, any critical thinker with any moral values would hesitate to resort to that sort of manipulation.
(side note: therein lies a point of irony in, the sheer disingenuousity of those who deem themselves 'masters (programmers) of morals': if one doesn't measure the value of anything, no moral judgements are even possible, making those 'moral lessons', to those who lack the will/ability to understand them for themselves, nothing but dictate to imitate (or else!), so it's hard to believe the gall of those who call that sort of dictate-serving carrot-and-stick (or else!) manipulation 'morals'?).
And what to do in face of this sort of divide? There is a frustration to both sides of it: the survive-through-imitation-er feels frustration in not being able to get the critical thinker to simply conform (and, subconsciously, sees the same's behaviour as even a danger (to survival)), and the critical thinker feels frustration at the former's inability and unwillingness to reason, and apparent dishonesty.
And when communication doesn't work to overcome this divide, what remains? Yes: violence. Overcoming/squashing this (primal!) urge depends upon the duress of the situation and/or the education/programming of both parties, but the critical thinker has the distinct advantage of being able to rein in their emotional responses, whereas the non-critical-thinker has only their fear (strength comparison, strength in numbers, 'thou shalt not' (or else!) programming, etc.) to hold them back.
If that's not enough, even the concept of honesty seems lost on one with no value judgement abilities of their own: if the only means of determining value is comparison, then, in any given situation, only the options comparable (beneficial) to the survival model will be considered (and everything else, especially things countering or questioning the same, rejected). To the critical thinker, who quite often is used to assessing the maximum available elements in any situation before making a decision, this looks like 'cherrypicking' to the extreme, but they have to understand that, to the non-critical-thinker, the concept of 'cherrypicking' can't even exist.
So, to 'work' with a non-critical-thinker, only is it necessary for critical thinkers to mask their thought processes (which would (subconsciously, even) trigger an alarm in their interlocuteur), it is necessary to avoid all attempts at reason and ridicule. But since the critical thinker will almost certainly fail at one or all of these challenges, and will certainly become 'out-grouped', the only means remaining is a long, arduous, one-on-one building of trust (acceptance as a 'reliable survival model') before even lessons of (how to) reason can even begin to set in.
Oh, the genius of the immoral those who set up the world this way: once set into motion, the survive-through-imitation machine perpetuates, almost immutably, itself, and its masters (the shepherds) have only to program the in-group survival-model 'leaders' (sheep-dogs) to propagate change through the rest of the imitation-or-else society (sheep).
That has been the model almost since humanity began to gather in greater numbers; it has worked thus far because our inability to communicate over long distances has contained 'packages' of humans into managable, isolated 'in-groups'. Yet it is becomong increasingly hard to maintain these, and it will soon become necessary to cut a dictate-able (survive-through-imitation) group from any other (competing group) influence entirely... and it's hard to imagine that those who would opt for this could ever become a world majority (at least, in the near future).
So, how does a critical thinker survive in a non-critically-thinking world (if they are not already an immoral part of the sheep-dependant 'shepherd' clan)?
Find your own autonomous as-away-from-public-as-possible means of survival, keep your cool, and have patience.
Thursday, 27 September 2018
Friday, 27 July 2018
RDF (the 'Semantic Web') and the Human Brain
I was introduced to the RDF (Resource Description Framework) data model by the chairpeople (waving to Paul Rouet ; ) of the "Paris Time Machine" project; they were sorely in need of a 'tech guy' (and I was the only one on the 'team'), but it was the only computer-oriented thing on their 'cahier des charges' that I wasn't qualified for; not only had I no experience with RDF, but I was totally unaware of its existence until then. I'm wondering how I managed to miss it: it's been around since 1999, created almost in tandem with the XML format (that is only beginning to seem 'more value than noise' for me), but it never took off, and is still far from anything approaching a standard (use) today. Now that I've looked into it, its potential utility is, well, amazing, but it's going to require a lot of work to implement: either the whole of the web is going to have to be re-factored to accommodate it, or we're going to have to develop an AI that can reliably read and extract data from all forms of publication (print and web). I'm working towards, and vying for, the latter.
RDF at its base is not a complicated affair, and its syntax took only a couple days to master. Basically, each bit of data is a 'subject-predicate-object' "triplet", for example: "Bob=><-->last_name-->=>Smith', or "Bob=>address=>25 maple lane' or "Bob=>phone_number=>0 (145) 628-5400'. So if we were to do a search for (subject) 'Bob', we would get all the data 'attributed' to that subject: last_name, address, phone_number. Of course, in larger data collections, 'Bob' would be a bad 'central node/identifier' choice (because that what it becomes in this context), but I'm sure you get the picture: in this way, it would be possible to attribute any 'type' (dictated by the predicate) of information to that subject, without any limitation and possibility of conflict (Bob can have two phone numbers: both will have a 'Bob=>phone_number' subject-predicate, and a query for 'Bob=>phone_number' (or just 'Bob') will return both). Furthermore, one triplet's object (data) can be a subject with data of its own: for example: "0 (145) 628-5400=>phone_number_type=>land_line' would turn up as a 'second level' of data in an 'all about Bob' simple 'Bob' query. So with this method, data linked to data linked to data... the possibilities are endless.
But that's not what excited me about it: I've always been fascinated by neuroscience (basically: understanding my own (brain's) quirks), and as I learned more about RDF, my thoughts, with bells ringing, returned increasingly there: there are a lot of similarities between the workings of RDF and the human brain.
Granted, RDF is a step 'above' our 'fired-or-not' basically-binary synapses, but the organisation seems the same. If we were to think of 'Bob', our brain would return all the data it contained that could be attributed to that entity. Our brain 'identifies' "Bob" by a group of synapses ('identifier'), and that is where I thought the difference with RDF was, but if we were to examine a more complicated RDF dataset, easily-conflict-prone subjects such as 'Bob' would have to become unique identifiers as well (and 'that particular Bob's' first name would become, say:10001010=>first_name=>Bob (and 10001010<-->=>address-->=>25 maple lane, etc.). In reality, to avoid conflicts, most likely every 'thing' in existence should have a unique identifier (save, for example, our most fundamental elements (atom-types, fermion-types, etc.)... so if we reductio ad absurdam our computer's 'unique id', it will be a collection of 'on or off' binary values... the same as our brain's.
====
Just a footnote here to underline that this 'binary cocktail' outline most likely does not describe the entirety of the brain's thought-memory-recall process; probably other chemical 'filters' figure in there too (and this is how we give 'value' to retrieved memories (over others)). This is yet something else to explore (and perhaps even exploit, if it can be re-created technologically), but for the purposes of this what-is-supposed-to-be RDF perusal, going there would be but a distraction.
RDF at its base is not a complicated affair, and its syntax took only a couple days to master. Basically, each bit of data is a 'subject-predicate-object' "triplet", for example: "Bob=><-->last_name-->=>Smith', or "Bob=>address=>25 maple lane' or "Bob=>phone_number=>0 (145) 628-5400'. So if we were to do a search for (subject) 'Bob', we would get all the data 'attributed' to that subject: last_name, address, phone_number. Of course, in larger data collections, 'Bob' would be a bad 'central node/identifier' choice (because that what it becomes in this context), but I'm sure you get the picture: in this way, it would be possible to attribute any 'type' (dictated by the predicate) of information to that subject, without any limitation and possibility of conflict (Bob can have two phone numbers: both will have a 'Bob=>phone_number' subject-predicate, and a query for 'Bob=>phone_number' (or just 'Bob') will return both). Furthermore, one triplet's object (data) can be a subject with data of its own: for example: "0 (145) 628-5400=>phone_number_type=>land_line' would turn up as a 'second level' of data in an 'all about Bob' simple 'Bob' query. So with this method, data linked to data linked to data... the possibilities are endless.
But that's not what excited me about it: I've always been fascinated by neuroscience (basically: understanding my own (brain's) quirks), and as I learned more about RDF, my thoughts, with bells ringing, returned increasingly there: there are a lot of similarities between the workings of RDF and the human brain.
Granted, RDF is a step 'above' our 'fired-or-not' basically-binary synapses, but the organisation seems the same. If we were to think of 'Bob', our brain would return all the data it contained that could be attributed to that entity. Our brain 'identifies' "Bob" by a group of synapses ('identifier'), and that is where I thought the difference with RDF was, but if we were to examine a more complicated RDF dataset, easily-conflict-prone subjects such as 'Bob' would have to become unique identifiers as well (and 'that particular Bob's' first name would become, say:10001010=>first_name=>Bob (and 10001010<-->=>address-->=>25 maple lane, etc.). In reality, to avoid conflicts, most likely every 'thing' in existence should have a unique identifier (save, for example, our most fundamental elements (atom-types, fermion-types, etc.)... so if we reductio ad absurdam our computer's 'unique id', it will be a collection of 'on or off' binary values... the same as our brain's.
====
Just a footnote here to underline that this 'binary cocktail' outline most likely does not describe the entirety of the brain's thought-memory-recall process; probably other chemical 'filters' figure in there too (and this is how we give 'value' to retrieved memories (over others)). This is yet something else to explore (and perhaps even exploit, if it can be re-created technologically), but for the purposes of this what-is-supposed-to-be RDF perusal, going there would be but a distraction.
Friday, 20 April 2018
There are no Gluons, Bosons, Gravitons, Weak or Strong Force... or 'magnetism', while we're at it.
...Not as individual 'things', anyway. All of these are the effects of the energy-gravity battle between particles at different energy levels... the varying degree and state of every particle (super-gamma split, sub-gamma photon, polarised-bound (quark) or unpolarised (lepton) make for the different 'effects'.
Magnetism is simply 'synchronised gravity'... of atoms whose outermost electrons are in that 'sweet spot' that is just to the inside of the overall atom 'event horizon': while being retained by the atom itself, that outermost electron(s) can 'broadcast' its gravitational signature while being affected by other similar atoms; the gravitational effects of electrons deeper within an atom's sphere of influence tend to be 'stifled' (negated) by the atom itself.
The centre of both gravity and magnetism (and this is no coincidence), 'negative squared' rules everything. Weak and strong forces are simply the 'proximity factor' of two particles, or the difference between two (bound) particles and another.
Magnetism is simply 'synchronised gravity'... of atoms whose outermost electrons are in that 'sweet spot' that is just to the inside of the overall atom 'event horizon': while being retained by the atom itself, that outermost electron(s) can 'broadcast' its gravitational signature while being affected by other similar atoms; the gravitational effects of electrons deeper within an atom's sphere of influence tend to be 'stifled' (negated) by the atom itself.
The centre of both gravity and magnetism (and this is no coincidence), 'negative squared' rules everything. Weak and strong forces are simply the 'proximity factor' of two particles, or the difference between two (bound) particles and another.
Saturday, 5 August 2017
Of Sheep Dogs, Shepherds and Sheep
I've expressed in past posts my persuasion that it is critical thought abilities that divide humanity, but since then I've tried to refine and revise that somewhat, and attempt to apply it to the social workings of modern society. Definition of terms: by 'modern', I mean 'now'.
Through that I came up with the title's three behaviour patterns: critical thinking (or absence of the same) does indeed create two distinct behaviour-thought patterns, but attempting to achieve 'survival success' with these in modern society creates behaviour subdivisions.
One who thinks critically can seek 'autonomous success' (perhaps in mutual exchanges with other like-minded humans), or they can choose to use their critical-thought abilities as an advantage over non-critical thinkers (without any attempt to educate them): it is this latter type, the one I call 'the shepherd', that I'd like to talk about here.
Then we have the non-critical-thinkers ('survive-through-imitation-ers'): I used to (rather pejoratively) refer to them as 'the sheep', but that would mean that anyone in a 'management' position would be a critical thinker; this is hardly the case. It would also require that every indoctrinating religious leader be aware that what they are spreading is bunk, but this is hardly true, as many are genuine 'believers'.
This latter case puzzled me until realising that there is a fine line, almost a behaviour 'switch', between those who seek to imitate a survival model and those who enforce it. All that changes there is a reference to, and exercise, of authority, but the 'survive through imitation' behaviour remains the same. Thus the 'sheep dog' ('survival-model authority') and 'sheep' ('survival-model-imitator') classifications.
So, if the sheep depend on the sheep dogs (and 'in-group' comparison) for their survival model, where do the sheep-dogs get theirs from? The shepherds, of course.
The genius of this system is that, to 'enact' a behaviour change, all the (often behind-the-scenes) shepherds have to do is give orders to their sheep-dogs (with a healthy share of scraps as 'reward'), and they will introduce that to whatever in-group behaviour pattern of whatever in-group they lead... not only will the sheep dogs 'police' that behaviour, but the sheep will police themselves; in the surive-by-imitation thought-behaviour mindset, imitation is the only known survival method, and everything outside of it is a 'threat' (to be 'defended against' through various levels of denial/dismissal/ostracisation/violence), and simple observation shows that this is how many people behave today.
The religious example led me to that realisation, but the pattern extends well beyond it: in your local supermarket, for example, consumers may think they have the 'freedom' to choose whatever product they like, but few think to why those products are there (and not others) and to who decides this 'for' them: the store manager stocks the shelves, but they take the choice of products from (often central) management, and when it comes to chain stores, this is pretty high up in the hierarchy.
In this pattern I've observed that the shepherds will often put forward a 'believer' sheep-dog as a 'patsy-authority' while they themselves remain anonymous: we see this both in politics (many of the presidents of the United States since Eisenhower have been this (and the present one is an uncontrollable, failed attempt at this)) and religion (it's the cardinals that pull the strings, not the Pope). But yet other shepherds don't 'require' this as an accountability-deflecting distraction, and content themselves with controlling what products appear on the shelves: the Koch brothers and Exxon mobil (we've had the technology for electric highways and electric most everything since the 1970s, for goodness' sake) are good examples of this.
And this behaviour pattern is echoed in modern class-divisions and our economy: the top 1% are people we rarely see in public, and the more-visible shepherd-connected 'rewarded sheep-dog' levels fill the tiers below that; as this system saps the lower levels, there is a sharp drop-off (at the former 'middle class') when we get to the 'sheep'.
And in this system, autonomous thinkers who don't seek to exploit their advantage have difficulty fitting in: we have seen rare examples of honest, autonomous success (such as Tesla and Elon Musk), but many, to survive, must rely on shepherds (grants, research financing) for their existence, making them almost... well rewarded slaves (anything they produce will earn their 'benefactors' much more than they will ever see), condemned to all the social ills that being an 'out-group' entails, because, in a survive-by-imitation society, even though those 'not-sheep or sheep-dogs' do the demonstrable, overly-obvious 'right' of producing all the comforts sheep enjoy, since they don't 'behave the same way', the sheep will always (instinctively) think them 'wrong' in some way... that they themselves can't even describe.
Through that I came up with the title's three behaviour patterns: critical thinking (or absence of the same) does indeed create two distinct behaviour-thought patterns, but attempting to achieve 'survival success' with these in modern society creates behaviour subdivisions.
One who thinks critically can seek 'autonomous success' (perhaps in mutual exchanges with other like-minded humans), or they can choose to use their critical-thought abilities as an advantage over non-critical thinkers (without any attempt to educate them): it is this latter type, the one I call 'the shepherd', that I'd like to talk about here.
Then we have the non-critical-thinkers ('survive-through-imitation-ers'): I used to (rather pejoratively) refer to them as 'the sheep', but that would mean that anyone in a 'management' position would be a critical thinker; this is hardly the case. It would also require that every indoctrinating religious leader be aware that what they are spreading is bunk, but this is hardly true, as many are genuine 'believers'.
This latter case puzzled me until realising that there is a fine line, almost a behaviour 'switch', between those who seek to imitate a survival model and those who enforce it. All that changes there is a reference to, and exercise, of authority, but the 'survive through imitation' behaviour remains the same. Thus the 'sheep dog' ('survival-model authority') and 'sheep' ('survival-model-imitator') classifications.
So, if the sheep depend on the sheep dogs (and 'in-group' comparison) for their survival model, where do the sheep-dogs get theirs from? The shepherds, of course.
The genius of this system is that, to 'enact' a behaviour change, all the (often behind-the-scenes) shepherds have to do is give orders to their sheep-dogs (with a healthy share of scraps as 'reward'), and they will introduce that to whatever in-group behaviour pattern of whatever in-group they lead... not only will the sheep dogs 'police' that behaviour, but the sheep will police themselves; in the surive-by-imitation thought-behaviour mindset, imitation is the only known survival method, and everything outside of it is a 'threat' (to be 'defended against' through various levels of denial/dismissal/ostracisation/violence), and simple observation shows that this is how many people behave today.
The religious example led me to that realisation, but the pattern extends well beyond it: in your local supermarket, for example, consumers may think they have the 'freedom' to choose whatever product they like, but few think to why those products are there (and not others) and to who decides this 'for' them: the store manager stocks the shelves, but they take the choice of products from (often central) management, and when it comes to chain stores, this is pretty high up in the hierarchy.
In this pattern I've observed that the shepherds will often put forward a 'believer' sheep-dog as a 'patsy-authority' while they themselves remain anonymous: we see this both in politics (many of the presidents of the United States since Eisenhower have been this (and the present one is an uncontrollable, failed attempt at this)) and religion (it's the cardinals that pull the strings, not the Pope). But yet other shepherds don't 'require' this as an accountability-deflecting distraction, and content themselves with controlling what products appear on the shelves: the Koch brothers and Exxon mobil (we've had the technology for electric highways and electric most everything since the 1970s, for goodness' sake) are good examples of this.
And this behaviour pattern is echoed in modern class-divisions and our economy: the top 1% are people we rarely see in public, and the more-visible shepherd-connected 'rewarded sheep-dog' levels fill the tiers below that; as this system saps the lower levels, there is a sharp drop-off (at the former 'middle class') when we get to the 'sheep'.
And in this system, autonomous thinkers who don't seek to exploit their advantage have difficulty fitting in: we have seen rare examples of honest, autonomous success (such as Tesla and Elon Musk), but many, to survive, must rely on shepherds (grants, research financing) for their existence, making them almost... well rewarded slaves (anything they produce will earn their 'benefactors' much more than they will ever see), condemned to all the social ills that being an 'out-group' entails, because, in a survive-by-imitation society, even though those 'not-sheep or sheep-dogs' do the demonstrable, overly-obvious 'right' of producing all the comforts sheep enjoy, since they don't 'behave the same way', the sheep will always (instinctively) think them 'wrong' in some way... that they themselves can't even describe.
Monday, 24 July 2017
A facebook suite to my last post (energy + gravity = 'light' (thus mass))
First off, quantum physics isn't as hard as most make out: there's a few base movements and interactions and the 'complicated part' is the math (expressing and predicting that interaction)... and that's partly due to the inefficiency of our 'traditional' base-ten number system.
Anyhow, a lot of present 'knowledge' (a lot of which has never been demonstrated) is based on hypothesis dating back to the early 19th century: the most cited of these is Maxwell's equations, themselves based on earlier observations of 'electromagnetic activity'.
Then and since then, they've taken the observed atomic (electron) behaviour and used it as a description of the actual content of an electromagnetic particle. I question this.
Because all that is based on observations of the behaviour of a -few- atom-types whose electrons occupy a 'sweet spot' that nears the 'event horizon' of its host atom's repelling force with other atoms (a sum of its parts' total 'mass' and charge), meaning that two 'sweet spot' atoms can in fact come 'closer' than other atoms whose outer electrons are further away from any neigbouring electron, meaning that they cannot affect each other's inner workings (but more about that later).
All of these 'sweet spot' atoms, because of their 'almost touching' outer electrons, easily transfer energy (heat) between them, and in some cases their outer electrons are held so weakly that a neighbouring unbalanced-charge atom will 'leech' them away... this is the base behaviour of electricity.
But those 'sweet spot' atoms whose electrons are near enough to their 'event horizon' to be affected by neighbouring atoms, but not close enough to it to be 'leeched', can be -synchronised- by a field of constant polarity... and this (imho) is the base behaviour of 'magnetism'.
Because if we add gravity (instead of 'magnetism') to the 'base elementary particle', what we have in the latter case is -synchronised gravity- (fulfilling 'magnetic behaviour').
Because if we observe gravity, we see that it becomes exponentially stronger towards its point of origin; in the above atoms whose atoms are 'almost touching', even though the actual 'size' of the electron may be small, the gravity must be great at that proximity (as would the energy it is retaining).
And gravity seems to extend to 'infinity' from that point of origin, but already at a short distance away it is next to 'nil'... but acumulate points of origin, and the combined 'pull' will add up, and if that 'pull' is synchronised (all electrons 'pointing in the same direction at the same time'), even more so, perhaps even exponentially (combined 'wavelengths'... observable even in ocean waves).
Anyhow, a lot of present 'knowledge' (a lot of which has never been demonstrated) is based on hypothesis dating back to the early 19th century: the most cited of these is Maxwell's equations, themselves based on earlier observations of 'electromagnetic activity'.
Then and since then, they've taken the observed atomic (electron) behaviour and used it as a description of the actual content of an electromagnetic particle. I question this.
Because all that is based on observations of the behaviour of a -few- atom-types whose electrons occupy a 'sweet spot' that nears the 'event horizon' of its host atom's repelling force with other atoms (a sum of its parts' total 'mass' and charge), meaning that two 'sweet spot' atoms can in fact come 'closer' than other atoms whose outer electrons are further away from any neigbouring electron, meaning that they cannot affect each other's inner workings (but more about that later).
All of these 'sweet spot' atoms, because of their 'almost touching' outer electrons, easily transfer energy (heat) between them, and in some cases their outer electrons are held so weakly that a neighbouring unbalanced-charge atom will 'leech' them away... this is the base behaviour of electricity.
But those 'sweet spot' atoms whose electrons are near enough to their 'event horizon' to be affected by neighbouring atoms, but not close enough to it to be 'leeched', can be -synchronised- by a field of constant polarity... and this (imho) is the base behaviour of 'magnetism'.
Because if we add gravity (instead of 'magnetism') to the 'base elementary particle', what we have in the latter case is -synchronised gravity- (fulfilling 'magnetic behaviour').
Because if we observe gravity, we see that it becomes exponentially stronger towards its point of origin; in the above atoms whose atoms are 'almost touching', even though the actual 'size' of the electron may be small, the gravity must be great at that proximity (as would the energy it is retaining).
And gravity seems to extend to 'infinity' from that point of origin, but already at a short distance away it is next to 'nil'... but acumulate points of origin, and the combined 'pull' will add up, and if that 'pull' is synchronised (all electrons 'pointing in the same direction at the same time'), even more so, perhaps even exponentially (combined 'wavelengths'... observable even in ocean waves).
So if we have a point of origin, an energy, and an exponentially-stronger-towards-point-of-origin force that is that energy trying to get 'back' to it (gravity), in order to resist this force, the energy would have to be exponentially stronger/weaker with distance too: yet even stable, that energy is -there- orbiting the point of origin, and it's that constant gravity-energy 'difference' that is the origin of the constant 'c'.
The rest is 'consequential behaviour'... 'light' is a low-energy 'complete' particle that is 'chasing' its energy excess, with an oscillating 'polarity', in the direction it was thrown in: super-gamma-energy particle energy has (somehow) 'split' between polarities (as described in fermion pair (creation/annihilation)) that pull at -each other- with a force beyond gravity, stopping their forward motion.
That would make mass (in the classical sense) 'mismatched particle pairs', and I've already written about that extensively elsewhere.
In any case, the 'gravity vs. energy' model ties everything together, or 'clicks'.
But please, shoot me down.
In any case, the 'gravity vs. energy' model ties everything together, or 'clicks'.
But please, shoot me down.
Wednesday, 24 May 2017
Electromagnetic radiation is probably neither electric nor magnetic at all... and the Pandora's box that idea opens.
I'm about to commit physics blasphemy, but it's only me here, and I don't mind at all being wrong. These are just my conclusions after years of trying to fit the 'demonstrated' pieces of evidence together, and my ignorance in the subject may have even helped me try thought-experiment methods that may not have been considered before. My research tends to be pretty linear (following one hypothesis rabbit-hole down to its demonstrable fact-bottom), so I tend to hear about existing similar hypothesises only after I've reached a conclusion of my own. Anyway, with this I have yet to find any hypothesis similar to this one, but I'm sure that there's one out there, somewhere.
What set me off towards this blog-entry's title conclusion was my research into the demonstrable aspects of electromagnetic radiation theory: I was wondering if the mechanics of light had ever been observed. Even before going there I was quite aware that observing electromagnetic radiation (henceforth 'EMR') is well-neigh impossible, due to the conditions that Schrodinger outlined so succinctly, but I was wondering if some 'non-classical' testing methods had ever been devised, especially since CERN entered operation.
But all I got in response to my questions (even to some professors of prestigious universities) were (often condescending) references to Maxwell's theorems, themselves based on Farraday's findings before him. I parsed these from every angle for some insight into the inner workings of EMR, but these, in spite of them being presented as 'accepted fact', are but theory, as, to date, the inner workings and mechanics of EMR has not been observed. And there is obviously something wrong with these observations, as phenomena such as those observed in the double-slit experiment still have no answer, although many are doing their damnedest to make those observations 'fit' Maxwell's theorems mathematically.
And there I noticed that most all experiments 'demonstrating' EMR's inner workings were using at least one of the 'conductor' elements, that is to say an atom that has a 'sweet spot' created by a rather weak 'charge surplus' that would be the sum of the nucleus and inner electrons; any electron attracted to this would have an orbit towards the outer 'reaches' of the atom's overall charge-reach, meaning that the electrons of two neighbouring same-element 'conductor' atoms would be 'closer' than it would be possible in any other atom; this is why and how they transfer energy (e.g., heat) and electrons (electricity) so readily.
This behaviour is more than obvious in experiments leading to inventions such as the electric motor, and it has been demonstrated that any intense-enough form of EMR will generate electricity (and resulting 'magnetic' waves) in a conductor element, but again, this behaviour is only that of those conductor elements. Yet here we have taken this behaviour and projected it into the workings of the EMR itself, and there's something fallaciously wrong with this: it's like shining a flashlight on someone in a dark room and, should they react, declaring that the the mechanics directing the behaviour of the person reacting must be the 'same' as the light from the flashlight because 'they reacted'. Hm, I have to come up with a better analogy than that.
Anyhow, underlying all this were the questions about 'matter' I have mulled over in earlier entries, and the one non-answer prevailing from all these is gravity; it's in applying my thoughts to the gravitational constant (simply put, that the gravitational 'draw' between two mass-elements grows exponentially with proximity) and the proximity of the electrons in the above 'conductor' atoms when things began to 'click'.
I won't get into the math here, but when one keeps the 'exponential draw with proximity' of gravity in mind, and considers the proximity of a 'conductor' electron to its neighbouring atom, the potential draw between the two must be great indeed. But some conditions have to be met for that 'enhanced' attraction to happen: if we take two 'synchronised' iron atoms, the draw between the two will be the greatest when the outermost electron of one is farthest away from that of the other (and vice versa), that is to say, when the electron of one is 'most drawn' by the overall charge of an atom whose electron is as far as possible away from another atom whose outermost electron is closest to it. Repeat this behaviour across thousands or more synchronised electrons, and we see have a 'wave' effect when all of the electrons are pointing to the same 'side' simultaneously. This would also explain polarisation. And although this would happen in 'waves', the rapidity of an electron orbit is so extreme that this attraction would seem, to us slow humans at least, constant.
That's all fine and well on its own, but of course, if I'm going to remap that part of the model, I'll have to remap the rest. Of course, I came to the 'EMR is neither electric nor magnetic' conclusion only after remapping the rest, but, well, it all fits together.
Anyhow, a fermion of any type seems to be a 'rip' in this fabric that caused the energy to separate from its binding force, and gravity is that 'zero state' trying to draw its energy back to it. The gravitational constant probably still holds true there, that is to say, the closer one gets to that 'zero point', the more energy it would take to resist its draw; I would like to (again) propose that both EMR and matter are that energy 'orbiting' around that zero point, an 'orbit' that becomes increasingly tighter as the energy 'resisting' it increases. I'm not sure what form this 'orbiting' takes (do the 'zero point' and the energy resist each other equally, orbiting each other (like a two equally-sized balls at the end of a string), or is the 'point of origin' zero point a fixed one?), but they are interlocked around one point in spacetime.
Just to avoid referencing earlier entries: I hypothesise that an EMW and a fermion pair are the same thing at two different energy levels; above a given energy level, the EMW wave 'splits' into positive and negative 'arcs' of the same waveform to become a quark and antiquark, or electron and positron depending on energy level, and that hadrons are formed by 'mismatched' halves of EMWs of different origins.
I would surmise that a rip in spacetime would release a gravity 'draw' that extends to infinity (becoming infinitely weak with distance) and would not be different in 'size' from any other (the energy resisting it being the variable here); if we imagine an electron, its infinitesimal size would create an extremely weak draw at any given distance, but its infinitesimal size also means that it is possible to approach the 'zero point' to such an extent that, towards it, the draw would be enormous.
What set me off towards this blog-entry's title conclusion was my research into the demonstrable aspects of electromagnetic radiation theory: I was wondering if the mechanics of light had ever been observed. Even before going there I was quite aware that observing electromagnetic radiation (henceforth 'EMR') is well-neigh impossible, due to the conditions that Schrodinger outlined so succinctly, but I was wondering if some 'non-classical' testing methods had ever been devised, especially since CERN entered operation.
But all I got in response to my questions (even to some professors of prestigious universities) were (often condescending) references to Maxwell's theorems, themselves based on Farraday's findings before him. I parsed these from every angle for some insight into the inner workings of EMR, but these, in spite of them being presented as 'accepted fact', are but theory, as, to date, the inner workings and mechanics of EMR has not been observed. And there is obviously something wrong with these observations, as phenomena such as those observed in the double-slit experiment still have no answer, although many are doing their damnedest to make those observations 'fit' Maxwell's theorems mathematically.
And there I noticed that most all experiments 'demonstrating' EMR's inner workings were using at least one of the 'conductor' elements, that is to say an atom that has a 'sweet spot' created by a rather weak 'charge surplus' that would be the sum of the nucleus and inner electrons; any electron attracted to this would have an orbit towards the outer 'reaches' of the atom's overall charge-reach, meaning that the electrons of two neighbouring same-element 'conductor' atoms would be 'closer' than it would be possible in any other atom; this is why and how they transfer energy (e.g., heat) and electrons (electricity) so readily.
This behaviour is more than obvious in experiments leading to inventions such as the electric motor, and it has been demonstrated that any intense-enough form of EMR will generate electricity (and resulting 'magnetic' waves) in a conductor element, but again, this behaviour is only that of those conductor elements. Yet here we have taken this behaviour and projected it into the workings of the EMR itself, and there's something fallaciously wrong with this: it's like shining a flashlight on someone in a dark room and, should they react, declaring that the the mechanics directing the behaviour of the person reacting must be the 'same' as the light from the flashlight because 'they reacted'. Hm, I have to come up with a better analogy than that.
Anyhow, underlying all this were the questions about 'matter' I have mulled over in earlier entries, and the one non-answer prevailing from all these is gravity; it's in applying my thoughts to the gravitational constant (simply put, that the gravitational 'draw' between two mass-elements grows exponentially with proximity) and the proximity of the electrons in the above 'conductor' atoms when things began to 'click'.
'Magnetism' is Gravity.
In short, I'd like to propose that what we call 'magnetism' or 'magnetic fields' is in fact 'synchronised gravity', or in other words, instead of a 'separate' force, just a different behaviour of an 'existing' one.I won't get into the math here, but when one keeps the 'exponential draw with proximity' of gravity in mind, and considers the proximity of a 'conductor' electron to its neighbouring atom, the potential draw between the two must be great indeed. But some conditions have to be met for that 'enhanced' attraction to happen: if we take two 'synchronised' iron atoms, the draw between the two will be the greatest when the outermost electron of one is farthest away from that of the other (and vice versa), that is to say, when the electron of one is 'most drawn' by the overall charge of an atom whose electron is as far as possible away from another atom whose outermost electron is closest to it. Repeat this behaviour across thousands or more synchronised electrons, and we see have a 'wave' effect when all of the electrons are pointing to the same 'side' simultaneously. This would also explain polarisation. And although this would happen in 'waves', the rapidity of an electron orbit is so extreme that this attraction would seem, to us slow humans at least, constant.
That's all fine and well on its own, but of course, if I'm going to remap that part of the model, I'll have to remap the rest. Of course, I came to the 'EMR is neither electric nor magnetic' conclusion only after remapping the rest, but, well, it all fits together.
So what is gravity?
As I outlined in earlier entries, I think that gravity seems to be one half of a whole that has a 'zero state'; the 'level' of that zero state is not important for now, but it could be a perfectly intertwined (and indistinguishable) gravity and energy; this could even be the 'fabric' of our universe behind the 'dark energy' (and 'dark matter') hypothesis.Anyhow, a fermion of any type seems to be a 'rip' in this fabric that caused the energy to separate from its binding force, and gravity is that 'zero state' trying to draw its energy back to it. The gravitational constant probably still holds true there, that is to say, the closer one gets to that 'zero point', the more energy it would take to resist its draw; I would like to (again) propose that both EMR and matter are that energy 'orbiting' around that zero point, an 'orbit' that becomes increasingly tighter as the energy 'resisting' it increases. I'm not sure what form this 'orbiting' takes (do the 'zero point' and the energy resist each other equally, orbiting each other (like a two equally-sized balls at the end of a string), or is the 'point of origin' zero point a fixed one?), but they are interlocked around one point in spacetime.
Just to avoid referencing earlier entries: I hypothesise that an EMW and a fermion pair are the same thing at two different energy levels; above a given energy level, the EMW wave 'splits' into positive and negative 'arcs' of the same waveform to become a quark and antiquark, or electron and positron depending on energy level, and that hadrons are formed by 'mismatched' halves of EMWs of different origins.
I would surmise that a rip in spacetime would release a gravity 'draw' that extends to infinity (becoming infinitely weak with distance) and would not be different in 'size' from any other (the energy resisting it being the variable here); if we imagine an electron, its infinitesimal size would create an extremely weak draw at any given distance, but its infinitesimal size also means that it is possible to approach the 'zero point' to such an extent that, towards it, the draw would be enormous.
So how does all this work together?
The 'towards newtonian' model remains essentially the same. Since both EMR and particle pairs are 'one thing', still-joined positive-and-negative EMR 'halves' would remain a 'neutral' whole that, outside of direct contact with another particle, would be affected only by gravity; particle 'pairs', or the positive and negative 'halves' of the same 'thing', would maintain their state, with their respective energies resisting the gravity of its own and all other particle 'gravity rip'... irrelevant of their polarity, or so my thoughts go so far, but I'm still mulling that one over for the time being.Sunday, 25 December 2016
Matter = bonded 'mismatched' halves of a same thing.
I'm going to have to revise a few diagrams I made earlier to illustrate this (I was misguided/wrong about a couple things in them), but the recent observation that anti-atoms emit light, too, was... encouraging.
I still see sense in my prediction that mass is mismatched particle pairs (which I will get to in a second), but I was most likely wrong about it being gravity that makes them bond to each other, as it would make more sense that it is their polarity that makes them try to annihilate each other... but mismatched quarks would have to (practically) touch for this to happen. Maybe.
To sum up my idea this far: a photon and a particle pair are essentially the same thing at different energy levels. Not 'electromagnetic' at all, I hypothesise that a 'light wave' (photon) is, in fact, a 'balance' of energy and a force that we would call gravity. Like a bucket swung on a string, the more energy one puts into the rotation (the faster the rotation), the higher the sensation of gravity; the two, in essence, are a tug-of-war balance against each other, and this is the source of the 'constant' that is 'light-speed' C.
It works this way for a forward c-travelling energy wave, with the 'gravity' being the force that causes the energy to oscillate across polarities; yet above a certain energy level, the oscillation will 'overcome' polarity axis and forward motion, and the wave will 'split' into its positive and negative polarities... we would call these 'quarks' (but henceforth 'particles', here). And, as they are perfectly matched (they are, essentially, halves of the same thing), they should annihilate each other perfectly.
But if, after particle creation, the particles are estranged from each other, they may meet another not-same-energy-level one, and, should they touch/near (again, the 'how' of this is still not clear to me), they will annihilate each other, 'creating' either a photon or smaller particle equalling the energy difference between the two.
But should three particles, one of one charge and the other two of opposing charge, meet each other instantaneously, they would try desperately to annihilate each other, but the two same-charge particles would prevent that from happening (while being 'bonded' across the opposite-charge particle): this is called a 'hadron'.
But, in particle physics, there are two forms of hadron: 'neutral charge' neutrons and positively-charged protons. I still have a lot of questions about how this comes to be (are neutrons really so 'neutral', or perhaps is this distribution an 'outcome' of further particle interaction), but going there would digress from what I'm trying to address here.
But the 'bonding' I describe above only involves one half of a particle (pair): what happens to the other 'estranged' half?
This model, if it is demonstrable, would explain both radioactive decay (nuclear half-life) and Einstein's "spooky action at a distance", as, since both particles of a pair are one half of a same thing, something affecting one of them would also affect the other. For example, were an electron annihilated by a positron, their 'opposing twins' would be affected, too, and one or both of them are 'bonded' in some way to other particles in a stable manner, their disparition would make the bonding unstable, causing it to be affected by surrounding particles, or, in other words, decay. And it would make sense that the 'distance' separating twin particle-pairs doesn't matter; any change to one would instantly affect the other.
Were this true, the implications and possibilities (instantanious communication, etc.) would be myriad: this is becoming almost exciting.
I still see sense in my prediction that mass is mismatched particle pairs (which I will get to in a second), but I was most likely wrong about it being gravity that makes them bond to each other, as it would make more sense that it is their polarity that makes them try to annihilate each other... but mismatched quarks would have to (practically) touch for this to happen. Maybe.
To sum up my idea this far: a photon and a particle pair are essentially the same thing at different energy levels. Not 'electromagnetic' at all, I hypothesise that a 'light wave' (photon) is, in fact, a 'balance' of energy and a force that we would call gravity. Like a bucket swung on a string, the more energy one puts into the rotation (the faster the rotation), the higher the sensation of gravity; the two, in essence, are a tug-of-war balance against each other, and this is the source of the 'constant' that is 'light-speed' C.
It works this way for a forward c-travelling energy wave, with the 'gravity' being the force that causes the energy to oscillate across polarities; yet above a certain energy level, the oscillation will 'overcome' polarity axis and forward motion, and the wave will 'split' into its positive and negative polarities... we would call these 'quarks' (but henceforth 'particles', here). And, as they are perfectly matched (they are, essentially, halves of the same thing), they should annihilate each other perfectly.
But if, after particle creation, the particles are estranged from each other, they may meet another not-same-energy-level one, and, should they touch/near (again, the 'how' of this is still not clear to me), they will annihilate each other, 'creating' either a photon or smaller particle equalling the energy difference between the two.
But should three particles, one of one charge and the other two of opposing charge, meet each other instantaneously, they would try desperately to annihilate each other, but the two same-charge particles would prevent that from happening (while being 'bonded' across the opposite-charge particle): this is called a 'hadron'.
But, in particle physics, there are two forms of hadron: 'neutral charge' neutrons and positively-charged protons. I still have a lot of questions about how this comes to be (are neutrons really so 'neutral', or perhaps is this distribution an 'outcome' of further particle interaction), but going there would digress from what I'm trying to address here.
But the 'bonding' I describe above only involves one half of a particle (pair): what happens to the other 'estranged' half?
This model, if it is demonstrable, would explain both radioactive decay (nuclear half-life) and Einstein's "spooky action at a distance", as, since both particles of a pair are one half of a same thing, something affecting one of them would also affect the other. For example, were an electron annihilated by a positron, their 'opposing twins' would be affected, too, and one or both of them are 'bonded' in some way to other particles in a stable manner, their disparition would make the bonding unstable, causing it to be affected by surrounding particles, or, in other words, decay. And it would make sense that the 'distance' separating twin particle-pairs doesn't matter; any change to one would instantly affect the other.
Were this true, the implications and possibilities (instantanious communication, etc.) would be myriad: this is becoming almost exciting.
Saturday, 22 October 2016
Time Dilation at high velocities... ?
I keep running into this hurdle. I can understand that it would take 'infinite energy' to accelerate a particle to light speed using fields sharing the same time frame of the point of origin of said particle... but I also don't see how this could ever happen. Yet this seems to be the base of most relativity spacetime calculations... along with 'c is constant in every frame of reference'. I don't see how that is possible, unless there's a new 'aether' (a 'speed-limiting substance' hypothesis disproven between 1881~1887)... yet it seems that they're trying to make the boson seem this. And this is, in turn, a base for a 'time dilation at high velocities' theory (hypothesis? I've never seen record of any non-mathematical demonstration of this - I do not consider mathematical theory 'theory' in the scientific sense of the term). In short, this reasoning raises more questions than it 'answers', for me.
If two particles travelling in opposite directions at 99.9~% c collide, their relative velocity would be ~199.87% c. Even before their collision, it doesn't matter if one or both particles are moving: their velocity relative to each other is this.
It's around here that I'm accused of 'thinking Newtonian' and referred to 'sacred' relativity (and its 'nothing can travel faster than c in any frame of reference' math-apologetics)... but quantum mechanics seems to work just fine 'Newtonianly' if particles are 'allowed' to travel at velocities above c.
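For reference, here is a minimal Python sketch of the two arithmetics being argued over: the simple Newtonian sum of the two lab-frame speeds versus the standard relativistic velocity-addition formula, along with the Lorentz factor behind the 'infinite energy' statement. It only restates textbook formulas for comparison; the 0.99935 c figure is just the example number used above.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_closing_speed(v1, v2):
    """Simple sum of the two lab-frame speeds (the arithmetic used above)."""
    return v1 + v2

def relativistic_relative_speed(v1, v2):
    """Standard special-relativity velocity addition: the speed of one
    particle as measured in the other particle's rest frame."""
    return (v1 + v2) / (1.0 + (v1 * v2) / C**2)

def lorentz_factor(v):
    """Gamma factor; kinetic energy (gamma - 1)*m*c^2 diverges as v -> c,
    which is where the 'infinite energy to reach c' statement comes from."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.99935 * C  # roughly the 99.9~% c figure used above
print("Newtonian sum:       %.5f c" % (newtonian_closing_speed(v, v) / C))
print("Relativistic value:  %.5f c" % (relativistic_relative_speed(v, v) / C))
print("Lorentz factor at v: %.1f" % lorentz_factor(v))
```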
Tuesday, 11 October 2016
Everything is light - a brief brainfart.
Just recording a thought for posterity: I'm increasingly persuaded that all of our most basic fermions are but 'one half' of a super-gamma-level electromagnetic wave (with only energy levels differentiating their 'types'); the binding force (that makes them oscillate) that binds them to their 'axis' is gravity. In EMW form, that gravity is spread out along the wave's length, so it is practically undetectable (though it is entirely demonstrable that energy generates gravity, and that light (EMWs) is affected by gravity), but when an EMW gains enough energy to 'split', that gravity is concentrated as the axis becomes a 'loop' on itself.
The dynamics of this 'splitting' seem to work (in my mind) like this: the EMW's wavelength becomes so compact and rapid that its 'photon' collides with itself, disrupting its forward momentum; it still tries (vainly) to continue a forward motion (away from itself), and the constant collision course makes it deviate at (I don't know what) angles, making it 'loop' around one point.
This would explain both 'spooky action at a distance' and nuclear half-life: not only does an alteration to one fermion of a pair affect its opposite 'twin', but should that fermion be absorbed, its twin elsewhere in the universe would disappear also - or it would become a lower-energy form - and the resulting atomic instability would be a matter of course.
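As a point of reference for the 'super-gamma-level' scale invoked above, here is a minimal Python sketch computing the frequency and wavelength at which a single photon carries the rest energy of an electron-positron pair (the standard pair-production threshold). Only standard constants are used; nothing in it depends on this post's splitting mechanism.

```python
# Reference numbers for the 'super-gamma-level' scale mentioned above.
# Standard constants only; the splitting mechanism itself is this post's idea.

H = 6.62607015e-34        # Planck constant, J*s
C = 299_792_458.0         # speed of light, m/s
M_E = 9.1093837015e-31    # electron mass, kg

pair_energy_joules = 2 * M_E * C**2           # rest energy of an e-/e+ pair
threshold_frequency = pair_energy_joules / H  # from E = h*f
threshold_wavelength = C / threshold_frequency

print("pair rest energy : %.3e J (~1.022 MeV)" % pair_energy_joules)
print("photon frequency : %.3e Hz" % threshold_frequency)
print("photon wavelength: %.3e m (about 1.2 picometres)" % threshold_wavelength)
```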
Tuesday, 4 October 2016
Critical thought or 'not': the real divider of humanity.
I suppose this post is long enough to merit a preamble. I have framed the following around critical thought, but this term could just as well be switched with 'autonomous survival': there are humans who have emerged from their 'sheltered' live-by-imitation childhood mode into one where they take on the responsibility of making value judgements for themselves (and the responsibility for the decisions that result from those judgements), and there are those who have not: the former have a clear advantage over the latter. Most of today's problems stem from a few critical thinkers' exploitation of the greater non-critically-thinking populace (and its 'leader-protector-dependent' state), and from the conflicts they create with other non-critical-thinker-exploiting leaderships, conflicts with no rational solution possible because both sides are guilty of the same thing. Here, I try to draw a line from the origins of critical thought, and the purpose it once served, through to the way it is used (or not) today.
I'd like to start presenting my case by trying to improve on an earlier 'hunter in the forest' analogy, because it is the best demonstration I have about the origins, and original use, of critical thought.
'Human against nature' is how we spent millions of years of our evolution, and the resulting 'wiring' is still quite present today. Our emergence from that state (after the dawn of agriculture only some 15,000 years ago) into a safer, 'shelter in larger numbers' environment made that tool more of an option than a necessity, but, as I will try to explain later, because of the advent of technology, if we are to survive past the next few centuries, it is essential that we all learn to use it.
Let's look at the 'knowledgeable hunter' once again. Thanks to experience combined with the tested lessons of whoever taught them, they are quite familiar with their environment, and thanks to their always-developing critical thought, they can judge the value and potential (change, danger, movement, possible source of sustenance, etc.) of every detail in it, even new ones. The hunter has all the knowledge and skills they need to survive, and feels quite confident in their autonomous role.
Now enter the hunter's children (or any group of children, for that matter): they are new to this world, and have neither the critical thought skills (their prefrontal cortex has yet to mature) nor the experience to survive on their own. Should they want to venture out into the forest, they have to do so in the company of a 'protector' (the hunter), and while on such an outing, because their survival depends on the hunter's proximity and knowledge about what is dangerous or not, their focus is on the hunter more than on the forest around them: they will unquestioningly imitate the hunter's every move, and as long as they are not in danger, this is their definition of 'good (for survival)', and they will judge their own (and their peers') behaviour 'good' or 'bad' by comparing it to the hunter's.
With time, the children will grow into adults, and in the process become very good at imitating their hunter-protector, but they have still yet to venture out into the forest alone. Their prefrontal cortex has almost matured, but they have not yet begun to use it to test and compare their learned lessons, let alone use these to devise new methods or tools of their own. Their hunter-protector is still their key to survival, and as long as the hunter is around, their 'value comparison' reference (for 'survival value') is the hunter, and every detail of their environment is compared to that 'what would the hunter do' reference, too.
They all have the same physical abilities at that point, but the hunter has a clear advantage over their protégés. The hunter has two options: they can send them out into the forest to test their skills alone, obliging them to 'activate' their critical thinking skills and progressively replace their comparison-to-hunter (emotional) value programming with emotion-overriding (partly critical-thought) values of their own, confirmed by autonomous experience (the result of actually needing those lessons for actual survival); or they can continue allowing their protégés to depend on them, and their presence, for 'hunter-compared' survival decisions for the rest of their lives (or until the protégés decide to make the switch to autonomy on their own).
Fast-forward to the dawn of agriculture and animal husbandry. When humans began to gather around it in greater numbers, they no longer had the need, nor the occasion, to make the formerly essential-to-autonomous-survival 'switch': the techniques developed over time (largely by sporadic trial-and-error discoveries mixed with the contributions of some individual critical thought) could simply be imitation-passed from generation to generation; there was no need to develop new ones as long as the existing ones 'worked'. Thus, while we still had the ability to survive autonomously, a larger part of the population no longer used it, and spent their lives determining the 'value' of all they did by comparing it to the lessons taught to them by their leader-protectors and elders, as well as by the peers who imitated those teachers.
Soon some critically-thinking clan-leaders, who before were obliged to maintain their leadership role through their entirely demonstrable superior knowledge, strength and experience, learned to exert their advantage over emerging generations of 'new' non-critical thinkers, and to extend their rule to entire trade cities and dependencies of those unquestioning humans. At a lower level, critically-thinking heads of production began to exploit non-critical-thinkers, trained into a life of repetitive non-thought toil, through slavery (although a few of this 'new class' became educated, critical thinkers themselves). When rivalry between competing city-leaders began, their subjects would seek the leaders' protection, just as they would have with the hunter of times before, but this time around, as non-critically-thinking able adults, they could man their leader's armies, too. But this class-separation system wasn't 'stable', as there was nothing preventing people from learning to think (survive) for themselves.
Enter religion. Kings would tap into existing superstition, ignorance and legends to invent a 'higher authority' to 'enforce' and 'validate' their own rule; their priests eventually rose in stature and political importance and began to impose rules of their own. Soon, some non-powers-that-be critical thinkers wanting in on the game understood that anyone who claimed to be a messenger of a 'higher power' could gain a non-critical, dependent following and earn a livelihood, and because of this, the religions, gods and messiahs became myriad... but even this system wasn't stable, as followers would change gods (thus religious leaders) at whim, and were potential prey for any proselytisation that 'sounded good'. Monotheism was the solution to this: one 'god', one set of rules.
Early humanity was indeed unruly: it was the result of a merging of sometimes very different cultures (tribes), and if most of the populace wasn't able to reason ('my way/clan or the highway'), the result was often... messy. So many religions also began as a social dictate for a given tribe, and their 'god' inventions served more as 'enforcers' than anything. Even that system was unstable, but add indoctrination on top of it, in a culture made 'distinct' (with no other option allowed, or else!) through religious dictate, and any hapless child unfortunate enough to be born into it was pretty well sealed into a life of critical-thought-free subservience to religious leaders and their largely self-serving dictates and (often other-dominating) goals. So whether they ruled through fear of might or fear of 'might', these critical thinkers made a system in which they could maintain their advantage, and it worked so well that it lasted for the better part of two millennia. The 'marriage' between the two forms of dictate was rarely a happy one, though: while one form held power over the bodies of its followers, the other held their minds, and each would often have to negotiate with the other to advance its own goals, or try to eliminate it altogether.
Re-purposing critical thought to ends other than survival, namely through the Greek invention of logic and reason, was still a fledgling idea when Rome put an end to its spread with its 'rule by might' over the known world, and it was all too easy for religion to step in and fill the critical-thinking leadership void at Rome's fall. Enter the Enlightenment, then the re-birth of democratic ideology: this resulted in, again, an unstable marriage, as, although on the surface the people had the freedom to 'determine' what was best for society as a whole, their still largely un-critically-thinking minds remained bound by and dependent upon the limited options imposed by religious dictate.
Religion is in decline in much of the western world, but, to varying degrees in different countries depending on the accessibility of education, other 'power-entities' (namely corporations) controlling commodities and services have stepped in to exploit the critical-thinking void: they, too, are rigging the system to maintain their advantage, but this time from behind the scenes, by influencing political 'democratic' power into making the rules as advantageous as possible for them, all while making it as difficult as possible for anyone to become educated or independent-minded enough to ask questions about the mysterious entities that 'provide' for them, entities few think about beyond the logo-and-jingle facade presented to them.
Adding technology to the mix only makes things worse: it is one thing to invent a pitchfork, and quite another to simply use it (or imitate someone else doing the same). As long as the critical-thinking divide remains, society's inventors are not only the only ones holding the 'secrets' of their technology (secrets that would be accessible to anyone with a bit of research), but that technology also becomes an element of control: as long as the non-critical-thinking populace is 'comfortable', they will not think beyond the sphere of the 'survival (now 'comfort') tool'-filled environment made for them, leaving the critical thinkers to do pretty well anything they please, unchallenged.
And that is the scenario today. Our information age has led to a huge rise in the number of those who have learned to think (survive) for themselves, people who 'dare' question society's workings and would like to propose new solutions; unfortunately, at the opposite end of the spectrum, the die-hard 'live through imitation' clan-minders (and those they imitate) are becoming increasingly radical and active; and, a product of our (surveillance) technology, the emergence of overly-sheltered, 'helicopter-parent'-raised children into active society isn't helping things either, because instead of assuming life's responsibilities and challenges, they would rather hand their life-governance to a 'nanny state' (and other governing entities) to have it shut up anything that could risk 'offending' them.
There has to be a conversation between the critical-thought (or not) extremes, but this seems almost impossible, because it's almost as though they're speaking separate languages:
A critical thinker trying to rationally propose a new solution to existing problems to a non-critical-thinker will most likely be met with rejection if it doesn't match an existing method or is not endorsed by some authority-figure (and it will be rejected with a 'rationale' consisting of comparisons to things other clan-members or 'respected' authority-figures have done and said);
An insult to a clan-minded non-critical-thinker is an affront to their (illusion of) position in their particular clan-scale, and, instinctively, the threat of being humiliated and excluded from their 'clan-circle' (by those unable to judge the real value of the insult) is a threat to their very life, whereas an insult to a critical thinker will most often be met with consideration, followed by a rejection (if the insult is unfounded) or acceptance (because errors are bad for survival-knowledge, and corrections are good for it), with, at most, a 'you didn't have to be rude about it' as an expression of offense;
Words, even, don’t have the same use and meaning: a critical thinker will choose words best-suited to making sure what they are trying to communicate about reality is understood; the non-critical-thinker will try to make reality ‘match’ their programmed authority-given ‘definitions’ of words (as part of their ‘survival success example to imitate’)… especially towards ‘not-their-clan’ people (“my survival depends on what I was told about you, so you are what my protectors say you are, otherwise I don’t know how to deal with you.” It’s almost as though they think their words do affect reality). This ‘out-group-non-processing’ is also responsible for all forms of irrational (to a critically-thinking person) bigotry;
If a critical thinker and a non-critical thinker have a debate in which the critical thinker totally destroys their opponent's position through reason and logic and the non-critical-thinker gets their opponent to show hesitation, anger, doubt, frustration or exasperation, the audience supporters of each will walk away thinking that 'their guy' won.
I could go on about the actual effects of critical thought in dampening/overriding/cancelling the emotions (that most non-critical-thinkers have to rely on as a reaction) as well, but I think I have already written extensively about this elsewhere in this blog.
Let me close by saying that it's clear to me that, in this technological age (and with all the high-speed trade and destructive power it brings), if we don't all begin thinking critically soon, we will be leaving the few who do an exponentially-increasing, exploitable advantage over those who don't (look at the wealth divide, already...)... and, as we saw before, conflicts between overlapping non-critical-thinker-exploiter systems almost always end in war, and as time and technology march on without us filling the 'critical thinking void', the potential consequences of such conflict grow exponentially as well.
Tuesday, 13 September 2016
Atheism is not about 'atheism' at all.
I've been hearing a lot of noise lately about how infighting and splinter-movement differences are 'splitting' the 'atheist community' apart. If we stick to 'traditional' status-quo definitions and categorisations, this seems almost inexplicable, yet quite distressing, but if we really look at it without all that, the observed 'differing opinion' (and attitude) makes perfect sense.
What is 'atheism'? It is but a theist-leader (thus follower) term that describes a group of people who don't adhere to their (or any similar) belief system... 'those (dangerous!) people outside our bubble', in other words.
Remove that 'faux' wrapper, and what have you? A lot of different people doing and thinking different things in very different ways. It's only normal that they have differences between themselves, and it seems almost inane to try to group them under... a dictate's self-serving purposely-ignorant hate-generating label for 'dissenters', and it's even more inane when 'atheists' try to do this themselves (and complain when it doesn't work).
But it is important to put up a facade of an 'atheist community': for many (if not most) indoctrinees, the thought of not living in a 'protective' community inspires fear, and that 'there is another community' facade is almost required to make them think twice (or once!) about considering options other than the one already chosen for them. It is also important to stand and be counted as an 'atheist': although an 'appeal to popularity' is a logical fallacy, for many indoctrinees, it is the strongest argument one can make.
So, although we should accept the 'atheist' label from theists, anyone without religion should really be naming themselves and each other for what they are (and the result will be myriad), not by someone else's 'what they aren't' description.
Saturday, 2 April 2016
The Accelerating Universe?
I'm not making any declarations or anything, just consider this as a bit of a 'hiccup' in fitting an accelerating universe into my still-solidifying understanding-model of said universe.
What gives me pause is the relation between 'explosion mechanics' and gravity.
Even in a high-gravity environment such as ours, at the 'epiforce' of an explosion, where the outward expansion of whatever fuel has either just reached its 'maximum combustion' (where the most material is 'lit' at one point in time) or overcome whatever contained it, it will project any contained or proximate material at the highest speed; thereafter, the energy will drop and projectiles will be ejected at slower and slower speeds until the explosive engine's fuel is exhausted. Those 'epiforce' projectiles will, of course, travel the farthest from the explosion epicentre.
Now take the same model and transpose it into a 'no-gravity' environment.
Again, the 'epiforce' projectiles will attain the highest velocities, and those projected after them, slower ones (et cetera)... but this time, there is nothing to slow those projectiles down (well, there is, but I'll get to that in a second). So the 'outermost' projectiles will be travelling at a much faster velocity than the later 'inner' ones, and, as the velocity of each projectile is constant (without considering other later factors), the distance between them will grow over time. Already we have a model where, from the perspective of the innermost projectiles (pretend that they are standing still), the farther out a projectile is, the faster it recedes, a pattern easily mistaken for acceleration.
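To make the toy model above concrete, here is a minimal Python sketch of the post's no-gravity, ballistic picture: projectiles launched from one point at different (arbitrary, made-up) speeds, viewed some time later from the innermost one. It only shows that, with every projectile coasting at constant speed, the recession speed seen from the inner projectile grows in proportion to distance, a Hubble-law-like pattern, without anything actually accelerating.

```python
# Toy version of the ballistic, no-gravity expansion described above.
# All launch speeds and the observation time are arbitrary example numbers.

launch_speeds = [1.0, 2.0, 3.0, 4.0, 5.0]  # arbitrary units; fastest = 'epiforce' ejecta
t = 10.0                                    # time since the 'explosion'

observer_speed = launch_speeds[0]           # sit on the innermost (slowest) projectile
observer_position = observer_speed * t

print(" distance from observer | recession speed | speed/distance")
for v in launch_speeds[1:]:
    position = v * t
    distance = position - observer_position
    recession_speed = v - observer_speed    # constant in time: no real acceleration
    print(f" {distance:22.1f} | {recession_speed:15.1f} | {recession_speed / distance:14.3f}")

# The last column is the same for every projectile: recession speed is
# proportional to distance, the Hubble-law-like pattern, even though
# nothing in this model is accelerating.
```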
That's fine enough on its own as a 'small' model, but the universe is hardly that, and there's the enormous gravitational forces that it contains to factor in.
It would be the slower 'projectiles' closer to the 'big bang' explosion epicentre that would be the first to succumb to mutual gravitational attraction and form stars (then planets). There's also the density of the ejected matter to consider at each point in the explosion (though I have neither the math nor the engineering/mechanics knowledge for that), but I would think that, even if the 'rate of explosion' were constant (which it most likely was not), the faster-velocity material on the outer rim would also be more dispersed (over a wider circumference), and thus slower (and less likely) to accumulate into larger masses.
So, one way or another, toward the epicentre of the (former) explosion, we would have a 'core' that would be increasingly dense and have a higher gravitational mass, and, logically, a centre of gravity for the whole.
Now factor this into those outward-travelling 'projectiles'. The universe's gravitational pull on these, I would assume, would follow the usual inverse-square law (though one pulling increasingly in a single direction, back toward the core, as the projectiles travel further away); the math here, again, is complicated (for me), as one would have to factor in velocity, the gravitational force relative to it, and its gradual diminishing over time (as the projectile grows more distant). Yet, all the same, in all cases, we would have a model where the projectiles towards the explosion epicentre would slow each other down much more quickly than those towards the outer rim. So, here, the centre of the universe is slowing at a much faster rate than the outer rim, which may give the illusion that the universe's expansion is accelerating when, in fact, it isn't.