Saturday, 13 April 2024

Singularity = Return to a 'dark energy' state?

More than a month of convalescence has given me time to think many things over, and some of that involved my earlier physics mullings and my apparent inability to convey my conclusions from the same in any coherent way. At the end of the first week of my return to work, after a discussion with a friend and colleague there, I found I was able to do so: the problem with my earlier posts on this topic was that I was trying to relate my ideas using existing concepts, when I should have just begun by explaining things as simply as possible, and applied existing concepts and methods to them after the fact, almost as analogies. Anyhow.

The beginning of our discussion was a joke around 'what happens at the centre of black holes' (and what should go there), followed by a 'mv / /dev/null' joke: I tried to clarify my "that nothing may be more than what we think" reply with something like: "In physics, we tend to think of ourselves, or 'what we see', as the centre of everything, like we once thought the sun revolved around us." And at that I just sort of stopped and blinked, because I realised that, in a few words, I had just created the key that would make my former reams of words comprehensible.

In going way back into my posts, one can find my quite naive 'perfect state' concept: I'm not sure if I was comparing it with the 'dark energy' of today, but I should have been. In short, that 'perfect state' (a term I will no longer use) is a state in which energy is no longer 'visible' to us... but that doesn't mean that it's not there. In fact, I would argue that it is the super-high-energy centre of everything, and that we are a mere side product of it, 'orbiting' around it. 'Orbiting' is of course not the right word, as it is hard to imagine orbiting around something akin to a ne'er-ending blanket that encompasses everything.

Now to apply that to that black hole joke: what if the 'singularity' we hypothesise as the centre of each was in fact a 'point of return' to that high-energy 'invisible blanket' state, a state that we may or may not call 'dark energy'?

Tuesday, 15 August 2023

Religion is a Symptom, not the Ailment.

First off, I would like to call into question the 'atheist' tendency to target the religion most familiar to them as the cause of many of society's problems, and, secondly, I would like to propose an examination of the human behavioural tendencies which permit religion (and many other of society's 'ruling' elements) to do all it does.

But before even going there, as we will be reasoning from a critical-thought perspective, I think it would be useful to dispense with the notion of 'atheism' altogether, at least for this essay: 'atheism' is a religious concept, from a religious language and a religious point of view, a concept designed by religion to bring attention to itself more than anything. But in spite of its insistent claims, religion is not the centre of everything: it is but a phenomenon allowed through the exploitation of what seems to be a core human trait, or weakness if you will.

As I have repeated several times through my last essays, society comprises two types of human: those who have learned to measure the world around them through their own experiences, thoughts and conclusions, and those who rely on the example of others to make life decisions for them. I'm sure that in reality things are not so black and white (aka the 'compartmentalisation' of survive-by-imitation to certain subjects), but, for the sake of clarity in this discussion, it would be useful to maintain a distinct line between the two.

A term that comes up often in circles questioning religion is "suspension of disbelief": as a description of a blind acceptance of another human's claims (and obedience to their commands based thereon), it is spot-on as far as religion is concerned, but I would like to propose that the phenomenon, at least in many humans, goes far beyond that: some suspend all thought altogether in favour of the comforting certitude expressed by many types of authority figure; it is not a phenomenon reserved to religion alone.

Critical thought is not the only cognitive process separating the behaviour of these two states: emotion is just as, if not more, important in dictating the reaction of a human in any given situation, but between the two states, emotion becomes a tool used in very different ways. At its base, emotion is a reaction that attributes a 'value' to a given situation or thought process: fear and anger, notably, are reactions to elements that pose a potential threat to one's existence.

It is this notion of 'threat' that creates the sharpest divide between those who think for themselves and those who survive by imitation: if one has no capacity (or willingness) to judge a given situation for themselves, how can they determine whether it poses a threat or not? Such a decision would be limited to the scope of what they 'know' (have experienced thus far), which is why their initial knee-jerk reaction to anything outside of that is often a mix of condescension, fear, anger or hate.

Just as important, to one limited to the same 'reaction' thought processes, are the positive emotions attributed to all that they deem 'safe' and 'sure': affection and acceptance are reserved for the 'like-minded', meaning those who share the same 'knowledge', or those would-be 'leaders' (introduced, or even imposed, as 'safe leaders' by parents, kin, etc.) who dictate the same.

It still amazes me that most humans seem to operate in that mode, but in an economy where actual work was rewarded, this seemed to work just fine: as many of our parents taught us, if we but obey the dictates of our leaders, teachers, bosses, etc., we will be thrown scraps enough from the table above to lead comfortable lives. Unfortunately, especially in the decades following the second world war, this promise has become an increasingly empty one.

Yet our economy still depends on a larger suspension of autonomous thought: our would-be 'leaders' who tell us what to buy and what to think (how to react) have, instead of using their critical thinking and technological advances to better educate all of humanity, (ab)used their advantage to increase profits (technology means lower production costs, and even an elimination of the need for human labour altogether), and when protest to that arose, their answer was to make education increasingly inaccessible. The result: a dumbed-down, but disgruntled, driven-to-consume (to imitate the status quo) population unable to understand the simple 'complexities' of their own plight.

Emotion in a critical thinker is another animal altogether, often subdued (through neuroplasticity): it only takes a nanosecond of thought to realise that someone sexually 'different' from oneself poses no threat to either's existence... especially in this world suffering from overpopulation. It only takes a second of questioning thought to determine the veracity of the 'threats' our leaders would like us to die for: gone, too, would be the 'blind believer' armies indoctrinated (most often with falsehoods) into fulfilling the often undisclosed real goals of an 'elite' few. The irony in those armies, past and present, is that the goal of those leaders was most often not riches or resources directly, but the control of, and a cut of, an even larger economy of complacent, labouring, non-critically-thinking 'believers'. And in creating this sort of mental climate, religion has, indeed, to those would-be leaders, always been 'useful'.

In times before, especially in western Europe, religion was the ultimate pyramid scheme, with 10% of every income going into the pockets of a few, but today the means are myriad for an 'elite' few to draw a profit from each and every one of us: the petrol industry we are, by design, still locked into, health care, insurance, computer technology... religious manipulations pale beside the actions of the corporate, often faceless (as far as the news and the public are concerned), shareholders who dictate the 'rules' of modern economies.

Yet I underline again the fact that, no matter the form of the autonomous-thought-inhibiting machination, its existence depends on a general lack of will or ability to think critically, to take the responsibility (thus accountability) for one's own life decisions upon oneself. We are all capable of critical thought, and tend naturally towards it around adolescence: it is the practice of 'breaking' this transition that must end, as it is the cause of most of what ails modern, and past, societies.

Thursday, 3 March 2022

"Critical Thought or Not" revised: Auto-Determination or Not.

I've been expounding my ideas on how critical thought can benefit society for more than two decades now, and the fact that I stop at that in my altercations may suggest that I found it to be an end-all explanation, but this has never been the case: aside from my usual social awkwardness, my difficulty in relating those ideas came from a halting, hesitating thought, almost knowledge, that there was more to it than that.

Indeed, critical thought can be used just as well for constructive reasoning as it can for a reality-dissecting selective rationalisation: in a situation where one is confronted with an element or situation of reality that is outside one's sphere of experience, there is something very deep-rooted in us motivating which tool we choose to confront it with.

It's not about Intelligence, it's about Responsibility.

My awkward "Hunter in the Forest" analogy was almost there. What I was trying to convey was the hunter's option of either a) exploiting their young charges' reliance upon them, or b) transmitting to them knowledge enough to allow them to survive on their own, but there's one more step: even with all the hunter's knowledge transmitted, the young charges have to accept the responsibility for their own survival. If they don't, they will perhaps become talented imitators, but they will still be relying upon the Hunter to decide what is good or bad for the party's survival. That is the real line that divides today's society, and the world is a very different place for those who have accepted the responsibility of interpreting reality for themselves than for those who haven't.


Prelude: Survival by Imitation

Of course our first years, before our brains are developed enough to create, adapt to, or even comprehend any concept, are spent in a fog of sense-related empirical experiences. What we do eventually begin to understand is a behavioural path around these sensations: touching something hot results in pain, closeness with another human results in warmth, etc., and the complexity of this 'behavioural map' will increase with age.

When we are old enough to look beyond our self-absorbed world of senses to other humans, our first interactions seem to be around comparing our behaviour patterns to those of others in order to determine what they do to attain or avoid the sensations we know: how do they get to the sensations of warmth, sweetness, etc. that we so desire, and what do they do to avoid the sensations of loneliness (aka 'helplessness') we so fear?

From there we tend to begin to recognise the behaviour of those who most care and provide for us as 'successful', or something to imitate if we hope to attain the same trappings as they do, then the same in those our main care-givers defer to, and again in those who display a widely-accepted 'higher' social status: imitating this behaviour would seem, without thought, the easiest path to (clan) 'acceptance' and comfort.

Around adolescence, our brains undergo a neural reorganisation or 'remapping' process that neuroscientists call 'pruning'. The form this remapping takes probably depends on which path we have chosen, or been trained, to take.

It's important to note at this point that thought patterns in the human brain are actual, physical things: those neural pathways we develop are axon-to-dendrite neural connections, 'reinforced' over time (if they are used often enough) with a myelin sheathing that insulates them from other synapses, thus strengthening their signal. Whichever path we choose in life, changing from it takes actual, physical work: in order to change our ways, we have to construct and strengthen new neural connections before we can 'forget' the old ones (as, if unused, that myelin sheathing will thin with time, and eventually the connection may fade or 'break' altogether).

But to accomplish this, a brain has to be able and willing to 'correct' the inefficient and inaccurate concepts of reality it had until then, and before even that can happen, one has to be willing to understand and accept the faults in their ways: but how would this be possible if one survives simply by comparison to behaviour patterns in their given group, instead of understanding and dealing with reality itself?


Those who have accepted the burden of Responsibility of making Value Judgements (thus the responsibility of their own survival) for themselves...

I can't read minds, but the ease with which one who has decided to use their own judgement to interpret the world around them faces things must depend on their level of preparedness: to one transitioning before they are equipped with all the knowledge tools required for their survival, the world must be a scary and confusing place.* In any case, once we begin to rely on these tools to make value judgements for ourselves about all that we see around us, our brain will test any new experience against its existing collection of tools and information, and if an adjustment or an addition is required, it will 'remap' existing neural networks accordingly; this can be described as neuroplasticity.

The most important word to extract from the above is test, that is to say, a real examination of any claim, object, or situation, to see how it fits into our brain's so-far-constructed map of reality. In all this, the latter word is most important: the more the content of our brain matches reality, the better the chances of our survival. And this should be the goal of any honest hunter/parent/teacher passing their knowledge on to younger generations: they should expect not only that their charges will be testing the teacher's claims against reality, but that this testing will be a test of the teacher themselves.

The teacher should have no problem with this, if they have indeed understood and accepted the burden of self-determination and self-governance: should the student find demonstrable fault with their body of knowledge about the world around them, they should be commended, as, again, a demonstrably accurate map of reality is of benefit to all.**

It is only here that critical thought comes into play: most of our brain runs in the 'routines' learned thus far through imitation and empirical experience, but the prefrontal cortex, our centre of critical thought, is there to 'correct' these routines should they prove to be inadequate. In short, think of a computer and a programmer: the computer runs its repetitive automated tasks, and the programmer is there to correct the code should it prove faulty, or if a more efficient routine is found.

And here it is the programmer who takes it upon themselves to judge the efficiency of any other code suggestion or idea (imagination comes into play here too, and that, too, requires critical thought), testing it against reality and noting the results. Of course, even here the critical thinker may lend bias towards their former behaviour patterns (routines), but quite often they adopt the better solution in the end.

So, for one who has transitioned to a mode where they take decision-making responsibilities upon themselves (at their own 'risk and peril'), the world becomes a very different place: gone (or at least diminished) are the notions of 'same as successful = best', as, instead of following such models thoughtlessly (aka 'blindly'), every idea, fact or societal status claim becomes a proposition to be considered and tested by the critical thinker. An Anne Rice novel can be just as much a source of ideas and information as any religious book, as it is the idea or claim that is being tested, not the person (supposedly) presenting it, unlike in the 'imitation' years of before. Of course the quality and continuity of the information passed will reflect upon the quality of the source, but even the worst humans have to be given credit where credit is due, if one of their discoveries reveals a yet-not-understood facet of reality that is of benefit to greater society. But, again, this acceptance of an idea from a 'bad person' is not a blanket, blind acceptance of every claim made by that source.

Diminished or gone, too, are notions of social coercion and rejection: "The Joneses already have two cars, why do they need a third one? That person who insulted me (for not having three cars): well, they're partly right, but not for the reasons they think." In short, such social displays and interactions no longer have the emotional magnetism that they had before the transition to autonomous thought.

This is a rather black-and-white depiction. I'm sure that quite often the transition takes time (it did for me: unable to understand the behaviour of the societally more 'successful' around me, I lost years, even decades, trying to 'fit in'), and I'm sure that, in this survive-by-imitation world, some may hide, or even forego, their critical thought capacities in order to survive in it.

* I am persuaded that a large part of what many call 'autism' today is in fact someone switching to autonomous thought (self-governance) before most of us do, that is to say, without a sufficient knowledge toolset - there is nothing of 'mental illness' in it at all.

** I perhaps digress here, but I think it's worth noting that this very basic goal of making one's body of knowledge match reality as closely as possible is much akin to, if not exactly reflected by, what we call 'the scientific method'. Science, often treated as though it's a religion or icon (to brandish at others), is nothing more than a shared body of knowledge about the universe we know of. Anyone of any stature can take from or contribute to it at will, and if a contribution passes the test of reality, then the result, an even more accurate map of reality, will of course benefit our survival.


...and Those who have Not.

Our present education system (including school, traditions and religion), for the most part, teaches our young that if we "learn X behaviour patterns, we will get Y reward". There is no incentive to test, or often even question, and god forbid add to, any teachings, and this sort of environment is not at all one that would encourage critical or autonomous thought. Some would say this is almost by design.

So if the greater majority of our population is 'trained' in this 'survive by imitation' mode, it is most often not by any fault of their own. And more often than not they will get even further entrenched in that 'mode de vie' when they enter the workplace: attaining the X goals of one's employer (who may or may not become, in the mind of the worker, an authoritative or even 'provider' figure) will reap the material Y rewards necessary to survive, and the worker will become further entrenched still when they gain a family that is dependent upon them.

So if one is unable/unwilling to make value judgements (interpret reality through their own cognitive faculties) by themselves, who could they defer this task to? 'Parents' is a first obvious choice, but next in line would be the 'authority figures' (clan 'leaders') that they defer to. Depending on the circles (clans) they gravitate to, their 'leaders' may evolve through time.

As for the behaviour of an un-autonomous individual, the main consequence of their inability/unwillingness to accept the burden of interpreting reality for themselves is a lack of ability to empathise with others: if one can't interpret or understand one's own thoughts and actions, how can one understand the same of others? Instead, in the 'survive by imitation' social makeup, with the 'clan' defined by all others who also defer to the same leaders, clan-follower behaviour would be dictated first by the leader, then by the clan-follower's 'sameness' to that of all other, mainly 'successful' (most accepted by the clan leader), clan-followers. In this state, the 'necessity' of adjusting one's behaviour comes from a rejection-fear caused by a difference with the clan 'social norm', not from any rational ('how my behaviour affects others and reality') cognitive conclusion.

Both the top-down behavioural dictate and the 'survive by imitation'-er's total lack of empathy (let alone thought) for others are easily demonstrated in their claim that "without X('s guidance), what point is there to life?". Not only does this demonstrate a total lack of understanding (thought) of what survival (or life itself) is, it demonstrates a total lack of thought for anyone but themselves: if one were to take the burden of responsibility for one's survival upon themselves, they would very quickly understand the reality-based requirements of survival, and very quickly understand our dependence on other humans and the dependence of other humans on us for that survival and comfort, but for one unable or unwilling to evaluate anything with their own cognitive faculties, these basic observations don't even register.

Without an ability to reality-evaluate situations, gone too is anything resembling what anyone could call 'morals': unable/unwilling to judge the effect one's own behaviour has on their environment, they replace this instead with a reactive-comparative goal of attaining 'sameness' with their clan members and clan thought-leaders, hoping for the rewards promised in return for this obedience; even a base kindness to fellow clan members may be partly, if not entirely, made with this reward as a goal. This is not 'morals' at all, but a mix of greed, fear, and an obeying of dictate.

This utter lack of morals and even thought is again displayed when it comes to how they relate to their clan leaders and those 'outside' their fold: for the former, they will not hold those leaders to the very rules they set as long as they retain their leadership ('trusted' status), and for the latter, there seem to be... no rules at all, anything goes, especially if it is for the 'good' of their clan. And most often, the 'good of the clan' benefits most the clan leaders.

And one would think that, since the clan leaders are making all the decisions, they would be held accountable for them, but as this sort of leader-dependent-follower relation is all too prone to abuse by those doing all the 'thinking', the result is usually a 'leadership' that, in addition to using the work of the following to boost their personal position and income, not only won't accept accountability, but uses its following as a human shield and/or hostages against any reckoning ever happening. And, quite often, even in a highly abused state, that following all-too-willingly continues to support their very abusers, to the point of committing often quite horrible crimes against 'other' humans in that 'leader's' name; ironically, in doing this, those followers are also shirking accountability... or so they (don't) think.


All this in the state of things today

In all, as those who have foregone making their own reality-based life decisions count on 'higher-ups' to dictate to them the 'realities' of reality, the only form in which a survive-by-imitation society can survive is in 'layers of authority', where the level below follows the rules set by the level above for (a promised) 'reward' of the scraps that fall from the table above. Of course this system is wide open to abuse, and abused it is today; one only has to look to the economy's wealth disparity as evidence of this. In spite of the obviousness of this situation, those 'below' continue complacently to produce for their 'leader-provider-deciders', as all the complacent have, as a reference for judging their own state of being, is their own level of comfort and a comparison to that of the other followers in the same situation as they: if everyone else is suffering, the non-questioner may consider this state 'normal' and not complain, that is until their personal discomfort becomes too much to bear or becomes a threat to their own survival.

I can't remember what triggered my transition to autonomous thought so early in life (and for this early transition, most psychologists would probably put me on the 'autistic' scale), but when I was too young to even grasp the workings of this system, already I could observe obviously unhappy 'adults' telling me to obey and 'follow the rules' unquestioningly (and often with 'because I said so' as a sole explanation to my 'why' queries), and that if I should do so, my 'reward' would be becoming... just like them. My answer, of course, was a resounding "Fuck You". And when I tried to get others around me to question what was making them so miserable, I was confused by their automatic citing of 'authorities' and everyone else's behaviour to explain their own behaviour, instead of actually partaking in any self-examination, or examination or testing of the realities of their situation. This is the largest communication problem between those who have transitioned and those who haven't: both assume that the other party 'thinks just like them', and have difficulty thinking or reacting in any other way.

It is for this reason that it took me close to thirty years to finally accept the fact that most 'adults' today aren't, in fact, adults at all. If one depends upon another to dictate to them the 'right' way to think and behave, it is all too easy to hide this shirking of responsibility for one's own survival (onto that 'higher' personality) behind a mask of seemingly authoritative and confident certitude in the 'rightness' of their ways; this attitude seems to be that of what many would define as 'adult behaviour' today.

It was a while ago that I said that "A democracy without voters who think for themselves is not one.": at best, its structure is a democratic one, but if its voters are divided into survive-by-imitation clan-behaviour groups, any resulting vote will echo the interests (dictate) of each respective clan (leader), and not be any rational decide-for-myself-what-behaviour-benefits-both-myself-and-others-then-consensus-with-others process... only the latter thought-fed-awareness in voters can result in anything one can call a real democracy.

In times past the greater survival-by-imitation behaviour was exploited by the 'thinking' nobility and religious leaders, but today corporations (and their faceless shareholders) have largely taken over, and are using every tool at their disposal to both map and 'herd' our imitation-behaviour to increase their profits.

Profit is taking more than one gives, yet even this simple fact seems to escape most in today's society. What makes it worse is that many, if not most, of any economy's actors, even in its lowest ranks, dream of being 'that guy' who takes (undeservedly) the product of everyone else's work. And to those actually holding the decision-making reins of an economy, this society-dividing 'dream' is what keeps their machine rolling, all while distracting attention from the very cause of the ills rampaging through society: themselves.

But, like I said earlier, once an individual is 'set' in their ways, it takes a lot of work to even convince them that there's something damaging (to society) in those ways, let alone motivate them to actually go through the work of finding, testing and accepting alternate solutions. This is why I'm pinning my hopes on the next generation: all I can do is try to make as much real, (scientific) consensus-led information about our environment and history available to our future youth in the most coherent and testable way possible. I'm almost certain that I'll never see those who caused this generation the most damage held accountable for their actions, let alone see them repair that damage, but here's hoping that a better-informed, thinking future will place those bad actors in their proper place in history.

Thursday, 13 May 2021

RDF Explained (hopefully) the Right Way

I had intended to create a few illustrations for my last blog entry (as when I wrote it I had no access to my usual graphic-design tools), but instead I think it would be more useful to go through the different elements of RDF to explain them in both concept and functionality, then with that we can better understand how it all works together. If any of this sounds too obvious or condescending (to those who know better), please keep in mind that its goal is to better educate the ignorant me that existed only a few years ago.


Individuals

The most basic RDF concept is the "individual": consider it to be a 'cell' in a spreadsheet or relational database. Yet here it is free from those constraints: at its most basic level, an 'individual' is neither a column nor row, nor is it attributed to any 'table', and the only thing that counts is that we maintain its 'individuality', or make it distinguishable from other 'individuals'. If we want to create a new 'individual', all we have to do is name it: in fact, naming it is what brings it into existence. To better relate this to our relational-database experience, consider it as a cell with only an 'id' attribute.
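
As a rough sketch of what that looks like in practice (here in Turtle notation, with a made-up 'ex:' namespace, so none of these names are anything official), 'creating' an individual really is just naming it:

    @prefix ex:  <http://example.org/ontology#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # Naming ex:i_0001 is what brings it into existence: no table, no column,
    # just an identifier that keeps it distinct from every other individual.
    ex:i_0001 a owl:NamedIndividual .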




An individual can be an 'empty' named shell, and as such it can be used to demonstrate relations between different individuals (more on this later). But an individual can also 'contain' data: in our relational-database minds, we could consider attributing data to this individual to be 'filling its cell', but the data attributes for a single individual can be many: we could consider these multiple data attributes as 'column names', but that would complicate things for our later understanding. Let's consider, instead, that the data attribute is in fact another type of 'unnamed individual' (a cell with neither an id nor a column 'identifier'), and that the only thing linking it to the individual is the declared data-attribute relation.
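
In the same hypothetical Turtle sketch (the 'data:' namespace and property names below are mine, purely for illustration), a data attribute is just a value hanging off the individual, tied to it by nothing more than the declared relation:

    @prefix ex:   <http://example.org/ontology#> .
    @prefix data: <http://example.org/data#> .

    # The quoted values have no identifiers of their own; only the
    # data-attribute relation (the predicate) links each one to ex:i_0001.
    ex:i_0001 data:name      "Bob" ;
              data:birthdate "1950-04-02" ;
              data:address   "1 Elm Street" .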

Most RDF tutorials go straight to organising 'classes' (the next part of this), but I think this opposite-way-round approach is easier on the mind as far as understanding RDF is concerned: data is, after all, what we ultimately will be extracting from our RDF database, so it's best we understand that first. So when thinking of RDF database structure, I find that it's best to first think of the individuals we will be structuring, and think of the data attributes that each should have.

So, conceptually speaking, we have created an individual with a 'name', 'birthdate' and an 'address' attribution. What are we describing here? Most instinctively we would put something that had a name, birthdate and address in a 'person' box or category... and this brings us to 'classes'.


Classes

'Classes' are basically boxes in which we can group individuals of the same 'type': these could be any individuals with any data attributes, and it's only our putting them in the class that makes them 'of' that class.

So let's create a few 'people' individuals (with name, birthdate and address attributes), then create a 'people' class to put them in. In a way, in thinking from our relational-database experience, we have created a 'people' table with several 'people' (with different rows (id) and columns (data attributes)) in it. At its most basic, an individual belonging to a class has an 'is a' relation.
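
A sketch of that in the same illustrative notation (again, the class and property names are mine): the 'is a' relation is written 'a' in Turtle, shorthand for rdf:type.

    @prefix ex:   <http://example.org/ontology#> .
    @prefix data: <http://example.org/data#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex:People a rdfs:Class .

    # Two 'people' individuals, put 'in' the class via the 'is a' relation:
    ex:p_0001 a ex:People ;
              data:name      "Bob" ;
              data:birthdate "1950-04-02" ;
              data:address   "1 Elm Street" .

    ex:p_0002 a ex:People ;
              data:name      "Alice" ;
              data:birthdate "1948-11-30" ;
              data:address   "3 Elm Street" .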

Pierrick trigger alert

So now we have a bunch of individuals grouped within a "people" class. Let's go through the procedure again, but this time let's create "house" individuals, and group them in a "house" class. What has a house? Let's say that they're between adjoining streets: each house would have a "data:number" and "data:streetname" attribute, and just for fun, a "data:floors" attribute.
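
Continuing the same sketch (still with invented names), our 'house' class and its individuals might look like this:

    @prefix ex:   <http://example.org/ontology#> .
    @prefix data: <http://example.org/data#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex:House a rdfs:Class .

    ex:h_0003 a ex:House ;
              data:number     1 ;
              data:streetname "Elm Street" ;
              data:floors     2 .

    ex:h_0004 a ex:House ;
              data:number     3 ;
              data:streetname "Elm Street" ;
              data:floors     1 .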

But hey, won't the "data:address" attribute of our "people"-class individuals conflict, or become redundant, with the "data:streetname" and "data:number" attributes of the individuals in our "house" class? Yes, and this is why it is important to think down to the data attributes of each individual when creating a database scheme or structure.

So now that we have individuals in 'house' and 'people' groups, who lives in which house? If we want to indicate that "Bob" (0001) lives in house h_0003, we have to create a new connection between the two individuals: "person_livesIn_house" (the naming scheme, as mentioned in my earlier post, doesn't really matter, but let's call it such to make our understanding easier). So, if we interconnect our individuals like so:
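
Here is a sketch of that interconnection, in the same illustrative Turtle notation as above:

    @prefix ex: <http://example.org/ontology#> .

    # Bob (ex:p_0001) lives in the house at 1 Elm Street (ex:h_0003);
    # one triple per person is the whole link between the two groups.
    ex:p_0001 ex:person_livesIn_house ex:h_0003 .
    ex:p_0002 ex:person_livesIn_house ex:h_0004 .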
...we see that we have two different 'groups' of individuals with links between them.

Were we to apply this model to a typical MySQL database (in the most memory-economic way possible), we would have to have a) one table for 'houses', b) another table for 'people', c) each would have to have an 'id' column, and between them a 'connecting' column on which to make 'joins' (something like 'living_in_house_id'), and if, say, we wanted to know who lived in the house at "1 Elm Street", our SQL query would look like such:
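
Something along these lines (a sketch only; the table and column names are the hypothetical ones just described):

    SELECT people.name
    FROM people
    JOIN houses ON people.living_in_house_id = houses.id
    WHERE houses.streetname = 'Elm Street'
      AND houses.number = 1;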


...whereas, in RDF, our query would be:
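
Something like the following SPARQL sketch (using the same invented 'ex:' and 'data:' prefixes as above): no joins, just the pattern of relations we want to match.

    PREFIX ex:   <http://example.org/ontology#>
    PREFIX data: <http://example.org/data#>

    SELECT ?name WHERE {
      ?house  data:number     1 ;
              data:streetname "Elm Street" .
      ?person ex:person_livesIn_house ?house ;
              data:name       ?name .
    }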


...and we can even do more complex, abstract queries such as "who lives in a house with more than 1 floor?" as such:
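
A sketch of that (same assumed prefixes):

    PREFIX ex:   <http://example.org/ontology#>
    PREFIX data: <http://example.org/data#>

    SELECT ?name WHERE {
      ?house  data:floors ?floors .
      FILTER(?floors > 1)
      ?person ex:person_livesIn_house ?house ;
              data:name   ?name .
    }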
 

...and this is just a database with two classes ('tables').

We can also add unlimited classes and individuals (e.g. 'pets' (not the French word), 'cars', 'trades'... whatever!) to our own ontology (nota: 'ontology' is the preferred RDF terminology for a 'dataset' or 'database' that is a single catalogue of RDF triples). So, for example, were we to add another 'class' and a set of individuals therein, say, 'lottery ticket wins' (and the individuals that are lottery winners), we could then ask things like "the names and addresses of people living in a one-floor house who won the lottery before August 19, 1957". Try doing that in a relational database.
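
For what it's worth, a SPARQL sketch of that question might look like the following; the 'LotteryWin' class and its 'person_won_lottery' and 'windate' properties are pure inventions for the sake of the example:

    PREFIX ex:   <http://example.org/ontology#>
    PREFIX data: <http://example.org/data#>
    PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

    SELECT ?name ?number ?streetname WHERE {
      ?house  data:floors     1 ;
              data:number     ?number ;
              data:streetname ?streetname .
      ?person ex:person_livesIn_house ?house ;
              data:name       ?name ;
              ex:person_won_lottery ?win .   # hypothetical property
      ?win    a ex:LotteryWin ;              # hypothetical class
              data:windate    ?date .
      FILTER(?date < "1957-08-19"^^xsd:date)
    }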

Not only can we add new classes and individuals to our own ontology, but we can import others' data as well: were we to import, say, the yellow pages, we would suddenly have a huge database of names, addresses, and phone numbers. All we would have to do is ensure (in our ontology) that the imported data's relational properties are understood by ours: for example, if their ontology's relation between a 'human' individual and a 'residence' is 'humanLivesAt', we would have to declare that their 'human' class is the same as our 'person' class, their 'residence' is the same as our 'house' class, and that their 'humanLivesAt' property is the same as our 'person_livesIn_house' property; then, should we import their data, we can query both datasets as one using our own ontology language (see the sketch below). Another method, although one that makes our queries a bit more complex, is to access another dataset remotely (as every ontology has a URL (URI) exactly for this) by querying both their database (with their terminology) and ours.
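
For the first of those two approaches, the 'bridging' itself is just a few more triples in our own ontology; a sketch (the 'yp:' prefix and its terms stand in for whatever the imported dataset actually uses, and are not real yellow-pages vocabulary):

    @prefix ex:  <http://example.org/ontology#> .
    @prefix yp:  <http://example.org/yellowpages#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # Declare that their terms mean the same thing as ours; a reasoning-aware
    # query engine can then treat both datasets as one.
    yp:Human        owl:equivalentClass    ex:People .
    yp:Residence    owl:equivalentClass    ex:House .
    yp:humanLivesAt owl:equivalentProperty ex:person_livesIn_house .

(For the second, remote approach, SPARQL's SERVICE keyword lets a query reach out to another dataset's endpoint directly, at the cost of a somewhat wordier query.)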

The next step, to avoid overlaps or errors, to improve query efficiency, and even give the query engine the ability to reason, is structuring our classes and applying rules to them, but we've gone quite far enough for one day.


Saturday, 1 May 2021

Why it took me Three Years to 'Get' RDF

I was introduced to RDF around five years ago by Paul Rouet, the former digital technologies director of the APUR (Atelier Parisien d'URbanisme), while in a meeting about the creation of what is now the "Paris Time Machine" HumaNum project. RDF is a data-modelling technology that has, much to my amazement, been around since the late 1990s, and Paul had proposed using it as a base for all the historical data we planned to accumulate, but in the end this point was left unaddressed, as no-one in the meeting (including me) knew the first thing about it.

We're all used to relational databases: lines, most usually made distinct from others with unique IDs, divided up into 'columns' of data-cells: every line in a relational-database table is an 'individual' set of data, a cell therein is a 'type' of data, and the table itself is a sort of 'context'. This is all well and fine, but should we want to add a new data-type for each individual, we would have to add an extra column to our table, or create another table entirely. When our data-modelling is over, and it comes time to actually query our data, if the database setup is not organised, or the information we are looking for is deep-rooted, things can get messy pretty quickly because of all the 'JOIN's required. Yet we learn to adapt to these limitations, and I did, to a point where I became even 'fluent' in them (the limitations didn't seem such anymore, and became 'part of the process' in my mind). This is probably why it was so hard for me to 'let go' of these methods, and why most of my past five years toying with RDF was spent trying to apply relational-database 'think' to it... which was exactly why I wasn't 'getting' it. I ended up leaving it to the side around a year ago.

What made me return to it was (another) side-project researching a 13th-century Burgundy fort: over time I had accumulated a huge amount of historical, political and genealogical data about it, and when it came time to actually write a résumé of my findings, providing citations for every claim in my writ became a huge obstacle: were we to create a new entry in, say, a genealogical chart, every element of that 'being' would require a citation: their name (and its spelling, and variations thereof), birthdate (and place thereof), reign (over what period), titles, marriages, children, death (and place thereof, and conflicting records, etc.), etc., etc.. Already the spelling of a certain person's name would require a table in itself (or adding extra columns for each new variation == bad practice), another table for marriages (as there were often more than one), another one for birth dates (as different sources often cite different ones), another for titles (to account for many, plus other conflicting source claims), etc. etc.. And on top of all that, we have to catalogue everything we can about each individual source any citation points to. In all, the idea of setting up a relational database that could deal with all that, plus writing queries for the same, seemed pretty daunting.

So I returned to RDF once again as a possible solution for this problem. There is a lot 'out there' on the web about it, but since RDF is a technical solution made by technicians, and it is largely unused by the public, I could find very little out there that was understandable by anyone not already having knowledge of it.

Already the most-often-found terminology used to describe it is an obstacle: the first thing we will most likely read about RDF is its "subject-predicate-object" structure of data, but this, especially to one with a long relational-database experience behind them, is misleading, as we might (as I did) try to project a 'line-column-data' structure onto it, which is completely missing the point of RDF entirely. In fact, and this is a point most often left out of RDF explanations, the 'subject' and 'object' are perfectly interchangeable, and are in fact but 'individual instances', or 'bits of data'. The only thing that makes RDF what it is is the structure, or relations, between this data. So if there's one thing to retain in understanding RDF, it's 'predicates (relations) are everything'.

Any 'individual data' ('subject' as per common-tutorial parlance, but henceforth 'individual') can have an unlimited number of any other 'individual data' ('objects', ad idem) attached to it, and the 'type' of that data is dictated by the relation (predicate) linking the two together. If I were to apply this to my case, a given person (individual) could have an unlimited number of spelling variants (also 'individual data'), without the constraint of extra tables or columns, and one type of relation (say: 'hasName') linking them. One thing to retain here: thus far, as far as the database is concerned, all of the 'individual data' is of the same 'type' (only that they be identifiable as separate 'individuals' is important at this stage).
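
A minimal Turtle sketch of that (the 'ex:' names are invented; rdf:value is used here only as a convenient generic property for the raw spelling):

    @prefix ex:  <http://example.org/ontology#> .
    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

    # One person, any number of spelling variants, all attached
    # through one and the same relation type:
    ex:pers_0001 ex:hasName ex:nm_0001 ,
                            ex:nm_0002 .

    ex:nm_0001 rdf:value "Saint-Verain" .
    ex:nm_0002 rdf:value "Saint-Vérain" .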

(nota: will add diagram at a later date)

Yet consider the actual 'data-organisation' angle of this setup: if every bit of data is an 'individual', each linked to one another through 'relation-types' (predicates), we would quite quickly have a) a basket of 'individuals' and b) a basket of relations (relation-types, or predicates). Telling relations apart from one another is fairly easy if we 'name' them right (example: hasName, in the form 'individual data' => hasName => 'individual data'), but how to differentiate the 'individuals' from one another? In my first primitive attempts with RDF, I had tried naming every individual as I would a relational-database column -and- row name (e.g.: (individual) "HuguesStVerain" => hasName => (individual) "Saint-Verain"). Already, in creating three individuals with multiple spelling variants for each with this method, my database was a mess. And when we consider that I was giving the bit of data that was an individual (a human, in this case) the 'name' (unique ID) it was supposed to -point to-, this all seems (now) quite stupid. In fact, and this was perhaps the hardest part of RDF to grasp, what the 'individual' was 'named' in our database didn't matter: the only element of importance in RDF reasoning is that one individual bit of data (no matter what 'type' it is) not have the same 'ID' as any other (and an individual could be an identifier with no data at all (but the identifier itself)). But if individual data-bits were 'targeted' by unique IDs (say: 0001, 0002, etc.), this would make database management and queries a nightmare.

And it's probably for this very reason that 'classes' were invented, and it's only here that an explanation about classes should come into any tutorial on the subject.

So it's possible to categorise 'individuals' into 'classes' to better organise things. If we take the above example, the 'individual' that in reality is a 'person' could be labelled with a 'person' class, and the various 'individuals' that are the various spellings of a name could be labelled with a 'name' class. Primitive RDF achieved this with an 'rdf:type' label (that we would see in the .xml code), but nowadays we use the 'rdfs' vocabulary (itself a labelling system written in RDF as an 'extension' to RDF) to this end. Without getting into too much detail here, RDFS not only makes it possible to classify individual data, but to label it and add more 'relation-types', or 'relation-rules', between them. But let's stay with its ability to 'class'-ify individual data for now. So with our ability to classify data, we can more easily manage our data-model: a 'person' individual can have multiple 'name' individuals, each pointing (or not) to, say, 'source' individuals. This looks like exactly what our 'genealogy' situation requires.
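
A sketch of what that classification adds (again, the 'ex:' names and the label are only illustrative):

    @prefix ex:   <http://example.org/ontology#> .
    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex:Person a rdfs:Class .
    ex:Name   a rdfs:Class .

    # rdf:type (written 'a' in Turtle) puts an individual in a class;
    # rdfs adds human-readable labels (and, later, rules) on top of that.
    ex:pers_0001 rdf:type   ex:Person ;
                 rdfs:label "Hugues de Saint-Verain" ;
                 ex:hasName ex:nm_0001 .

    ex:nm_0001 rdf:type  ex:Name ;
               rdf:value "Saint-Verain" .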

(nota: a diagram would be helpful here, as well)

But here I had to take a step back and ask myself: if the 'spelling' of the identifier of any bit of individual data 'doesn't matter' (and I must add at this point that the same holds true for class identifiers), and only 'which (class of) data is related to which' is important, what exactly is going on here? In this simple model, [individual identifier 'huguesStVerain' ('Person')] => [predicate identifier 'hasName'] => [individual identifier 'HuSaintVerain' ('name'), data: 'Saint Verain'] works, but [individual identifier 'pers_0001' ('c_001')] => [predicate identifier 'rel_0001'] => [individual identifier 'nm_0005' ('c_002'), data: 'Saint Verain'] works just as well!

If we were to apply this to other situations: if any given person has a certain number of other people in their entourage, we could 'call' any of these people anything at all, and that would change nothing in the relations between them. We could describe 'a rock sitting on a table' in any language or terminology we want, and that would change nothing in the fact that the... rock is sitting on the table.

Conclusion: RDF was not designed as a 'data manager', but designed to represent things exactly as they are in reality.

That's when the flash of understanding came: if we were to dig down using that model, we'd see that people are not only related with people, but people are related with animals (pets) as well, and cells are related with cells, and viruses are related with cells, and molecules are related with molecules, atoms are related with atoms, all the way down to gravity's relation with energy.

That seems to be going a bit far, but it should be kept in mind whenever constructing an RDF schema, and this seems to be exactly what most have not done when designing theirs. Everywhere I see RDF ontologies (that's the word for an RDF database) structured on 'what we call things' or 'how we categorise things' in almost complete ignorance of reality itself: it's we who apply our names, categorisations and classifications to reality, and when in our data models we try to make reality 'fit' those concepts (instead of the other way around), the result in RDF will always be a schema that not only won't work, but won't fit with anyone else's. Yet the 'shared knowledge' principle was the very reason for RDF's invention.

So I had to rethink my model yet again. This time I was careful to separate our concepts ('names', 'classification', etc.) from reality itself (an 'entity' class that has 'object' (itself with 'construction', 'implantation', 'machine', 'composition' (as in 'molecular' or 'atomic') and 'organism' subclasses) and 'phenomenon' subclasses)... and to that, the element of 'time'. I may have to revisit this again. But in any case, we have a model that separates matter, concepts and time.
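
As a rough Turtle skeleton of that model (the class names here are just my paraphrase of the description above, arranged with rdfs:subClassOf):

    @prefix ex:   <http://example.org/ontology#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Reality ('entity'), our concepts about it, and time, kept apart:
    ex:Entity         a rdfs:Class .
    ex:Object         rdfs:subClassOf ex:Entity .
    ex:Construction   rdfs:subClassOf ex:Object .
    ex:Implantation   rdfs:subClassOf ex:Object .
    ex:Machine        rdfs:subClassOf ex:Object .
    ex:Composition    rdfs:subClassOf ex:Object .
    ex:Organism       rdfs:subClassOf ex:Object .
    ex:Phenomenon     rdfs:subClassOf ex:Entity .

    ex:Concept        a rdfs:Class .
    ex:Name           rdfs:subClassOf ex:Concept .
    ex:Classification rdfs:subClassOf ex:Concept .

    ex:Time           a rdfs:Class .
    ex:Occurrence     rdfs:subClassOf ex:Time .
    ex:Event          rdfs:subClassOf ex:Occurrence .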

And therein I could describe relations between entities ('organisms') and concepts ('name') without affecting anything in the rest of the model; what's more, therein I gained the ability to import other databases into my model. For example, when I imported a speciation ontology into the 'concept' => 'classification' branch, I was suddenly able to classify my 'organisms' (as 'homo sapiens', subclass of 'homo', subclass of... and so on and so on) without changing anything in my own model (other than adding a link between any given 'organism' and its 'species' class). I could also do the same with geographical data and link my model's 'placename' to a specific geographical location (and elevation!), and when one adds the element of time for, say, an 'event' (a subclass of 'occurrence', itself a subclass of 'time'), we get even more information. And if I were to remove all those additional data sources, my model would still work, apart from the broken links (had the data sources been imported or referred to internally).

But that last point is super-important in the RDF scheme of things: if the data sources remain constant, and our references to them (in our own RDF models) are not internal copies but links to their remote data, then, save for the exception of having no internet access, our model should never break.

But to return from my digression (though one hopefully useful to this presentation), and to conclude, the prime element separating me from an understanding of how RDF works seems to have been... my misunderstanding, or misuse, if you will, of the methods we have at our disposal to perceive, interpret and communicate reality.


Sunday, 8 November 2020

The Web Today: a Sea of (mis-)information.

Aside from a few neurological-aspect details (that I'm still mulling over), I've had little to add over the past year to my earlier posts, but what has changed in that time is the methods our 'leader-decider-providers' use for dealing with what seems to be the larger population's state of dependency upon them.

The internet today is an accumulation of around twenty years of data. In the earliest days, everyone had their own homepage, blog, or some other form of content, but it didn't seem to be human habit to date any of this data (unless the CMS, like this one, did it for us), and even media and software developers seemed to share the same oversight. Google does have a 'tool' that allows us to search for results within a certain time span, but it is not settable by default (please correct me on this if I am wrong). Add to this the increasingly corporate and for-pay-friendly search results returned to us by Google, and the result is: a) a layer of for-pay content/wares that may only vaguely concern our searches, and b) a second layer of results deemed 'most relevant' that have origins anywhere in time. Add again to this Google's recent seeming change to their algorithm that 'culls' results they deem 'not relevant': in short, any search query today about anything not in the 'most popular' or 'most sold' (or 'most paid for') category will result in an incomprehensible mess.

And all this in spite of today's (would-be) AI technology: the only thing to reasonably conclude from this continued trend is that Google must like it that way.

What inspired this rant: a weekend of mostly-lost time researching a means to encapsulate Drupal variables. Some of the methods I found turned out to be, only after dozens of pages of reading (across several websites, because the Drupal documentation itself, in addition to being largely un-dated (in any obvious way), is mostly user-contributed, thus hopelessly incomplete), completely inapplicable to any recent version of Drupal. Searching for other, non-Drupal methods of capturing PHP variables buried under a layer of Twig resulted in more of the same. In the end, what I was looking for turned out to be hiding in plain sight: an extension dating back to Drupal 6 (we are on version 9 today) that has been maintained all along (in spite of what Google results told me by not turning up the recent (un-dated) additions to the module's page), and a plug-in to the IDE (that is seemingly not, at first sight, an IDE) that I'm already using (for free!).

And yet again add to this the efforts by the least-well-intentioned parts of humanity to 'drown' any reasoned or fact-based searches in pointless, credence-seeking 'noise': with Google judging the value of any content 'for' us, and there being no other well-performing alternative to it on the market, anyone looking for anything corresponding to reality (and not 'feelings' or 'concepts' or 'popularity') is, without a LOT of effort (and just as many bullshit-detecting abilities), almost just as much in the dark today as they were thirty years ago.

Monday, 30 September 2019

Those Who Would Let Other Humans Think 'for' Them.

The Programmable Human

Even before anything we learn in life, survival is the most basic function/instinct of the 'human machine': every move we make, even scratching our noses, is in the interest of bodily well-being, and our emotions, or our trainable 'sub-conscious value judgement' system, are the brain's reward or discomfort response to any given situation: without this basic system, we wouldn't be motivated to move or do/think anything at all.

From our very basic, perhaps even hard-wired, emotional responses to everything 'familiar' to us in early life, that is to say things like 'mother', 'milk' and 'warmth', we learn to expand our sphere of acceptance to other 'trusted' useful-for-survival tools shown to us by our also-expanding circle of 'trusted protectors', people usually presented to us by already-accepted trusted protectors. Through this we 're-train' our initial fear response to those people, animals and things unfamiliar to us, and before long in life our brains will have established a library of 'recognised' entities that no longer incite any sense of fear and/or revulsion. Even further on in life, we are able to identify the 'type' of person or thing that our peers and protectors obey/use, and accept those into our sphere of fear-free acceptance and trust as well. To the pre-adolescent, anything that has become part of this sphere is their 'trusted normal',2 and they will still have a fear response (to varying degrees) of anything outside of it.

Here we should also consider how the brain works on a subconscious and conscious level: what we call our 'consciousness' only seems to 'see' a small percentage of what our senses perceive, and the content of the relayed information seems to be dictated by whatever our subconscious deems 'important' to it (or its survival): this 'importance' is dictated by all the '(what is) safe bubble' training described above. Two people in the exact same situation may 'see' different things: if one has developed an affection for, say, a red ball, and they are placed in a warehouse full of jumbled toys, they will 'see' red balls everywhere (and have a positive 'reward' emotional response upon the sight of one), whereas someone else without that experience may not notice them at all. So, not only does our early-life experience determine what and who we trust (and what we fear outside of that), it can even determine how we perceive the world around us.

In 'learning' through the above imitated example and empirical experience, there is rarely (if ever) any call for us to make a personal assessment of any 'lesson' given3: if the 'trusted' human showing us the example is part of what we 'know', and the result of whatever lesson they give doesn't affect whatever notion of comfort we've developed until then, we have more or less the tendency to simply accept it as 'good' (for our survival). In fact, I would like to propose that, at this stage, the very definition of 'good' and 'bad', outside of physical pain or discomfort, is how familiar whatever is being proposed to us is.

Some would like to call our early-life experience 'education', but if what is affecting our internal brain function is the direct result of our environment or outside, imitated-without-question example, programming would be a better descriptive term.

It remains to note that, for humanity, in times where we were still faced with the challenges of nature, nature was just as much, if not more, our education than the examples our protectors set for us: our emotional reactions to all that was dangerous (or unfamiliar) to us most likely determined our chances of survival. As humanity began to gather in greater numbers, and thus protect itself from and distance itself from the tests of nature, the dangers in the world around us became less 'real' (almost distant threats, scary tales, really), but the emotional responses that were a defence against these remained quite intact; it's not for nothing that many of us still get a 'thrill' out of horror films and ghost stories today.

But not to digress: through controlling the environment, clan members, culture, knowledge, and customs of any given settlement, it became possible to 'homogenise' the early-life experience of its younger members, that is to say, imbue their minds with a 'sameness' with each other, and also instil in them a fear of all those not 'like' them.

Our around-adolescence 'Switch' to critical thought: a tool no longer needed.

When we come to the point in our lives when the brain gains the ability to discover and analyse (aka 'critical thought'), we are suddenly able to, instead of learning through simple imitation and obedience, question and examine everything we've learned to that point, should those lessons instil emotions of doubt and/or discomfort, and this analysis can even extend to those who were the source of these lessons.1 I think it's important to mention the latter because, from our emergence from nature, our greatest teacher was no longer nature, but other humans.

Yet in that time when we lived in competition with creatures of other species, and nature itself, we were often obliged to test those early-life lessons empirically, and, using our critical thought abilities, eliminate, modify and/or improve those found wanting, and this also became essential to our survival.  Again (from earlier posts), the Australian aboriginal 'walkabout' is a still-existing 'coming of age' tradition that is a perfect example of this: either the adolescent practically uses/tests all they've learned until then, or the result would almost certainly be death.

But once humans became more sedentary in greater numbers, the 'need' for critical thought waned: agriculture and animal husbandry techniques could be passed down, unchallenged and unchanged, through the generations, and distributed roles in any given settlement meant that a single human was no longer required to learn a full survival skill set. Critical thought seems to have been reserved for those distributing roles and setting the rules, but where 'tradition' became a concept and/or rule, it most likely became possible to propagate knowledge and techniques, through simple example and imitation, through the generations.

Critical Thought and Ambition: 'Blocking The Switch'.

Yet if a lesser-able human wanted to 'rise above' a survive-through-imitation (largely non-critically-thinking) settlement whose hierarchy was dictated by age or ability/strength, they had little choice but to resort to critical thought to dominate non-critical thinkers, a situation which was, not without irony, much like that of a human in earlier days who hoped to emerge victorious from a competition with creatures more agile or stronger than they. All one had to do was transition to, and develop, critical thought enough to outwit or manipulate those higher up in the feeding chain, or convince/manipulate enough humans to create an army of their own against the same.

And once in their desired position, it is obvious that many, if not most, of history's leaders of all calibres saw that a maintained state of non-critically-thinking, childlike, survival-dependent mentality in an adult population would create a faithful, dependent, unquestioning, conformist, thus controllable, following. The most useful tool to this end was transposing the child's protector-dependent, 'rule-based' (or punishment-based) environment onto an adult population, thus convincing any child in that society that that childhood state was perpetual; in other words, convincing them that there was nothing to transition to, that there was no other state of being in which it is possible to survive, which meant that, to the follower-believers, everything outside that 'conform/obey-or-else' environment became a great, fear-inducing 'unknown'.

Through history, the forms this tool took were many: some of history's leaders simply jailed or eliminated all those who would 'dare' question, counter or ignore their authority and dictate (thus reinforcing the no-example-to-transition-to state), and yet others found it useful to hide behind psychology-manipulating concept-tools that tapped into the immature human's fear of separation from their 'protector-provider' (and from the 'known, same' following who obeyed their dictate), their fear of punishment, and their most innate and unthinkingly instinctive fear of death (promises of immortality, etc.). No matter the tool used to get them there, the definition of 'good' for a human in this arrested state amounts to little more than 'same', that is to say, 'same' as whatever they (and others 'like them', following/thought-dependent on the same 'leader') were programmed with until then.

And adult humans in this state are very, very manipulable and corruptible: throw a few scraps from the leader's follower-fed table to a few 'chosen'(-by-the-leader) followers, and they almost instantaneously (and eerily) transform from unquestioning followers of the leader's dictate into enforcers of the same: again, no matter whether their form is superstitious-threat or demonstrable-threat based, examples of the resulting three-level dictatorial hierarchy model (see: of Shepherds, Sheep-dogs and Sheep) can be found all through history.

The Above transposed onto Modern Society.

Many in more education-and-technology-developed societies would like to think themselves exempt from, or immune to, the above dictatorial systems, but they seem strangely blind to the existence of the same, in the form of sub-cultures, in their own would-be democracies: if at least a majority of a population that would call itself a democracy isn't thinking for itself, it isn't one.

Some of the earlier-described switch-blocking tools have proven so effective over the millennia that, even in the post-enlightenment, dwindling-superstition, information-laden world we live in today, a few quite unworthy-of-leadership (or even of consultation) humans are desperately trying to hang onto them by attempting to intellectually cripple future would-be followers even further (while setting things up for an easier elimination of future dissenters). And this seems to be the state of things in the U.S. today.

But things have evolved a bit further than that: those who would shape society through hiding behind imaginary proxies (while shifting attention, responsibility and accountability onto the same) have provided those dictating today's economy with a useful example. Since Reagan (and some would say earlier), fear-of-other-spreading politicians have served as very effective distractions from those who are really doing the decision-making, those who decide which products we consume, all while fighting amongst themselves to be the one to control the whole of the cash-cow that is our thoughtless complacency. Even those consumers aware of this situation are guilty of supporting it to some degree, but in today's world it has become near impossible to find any alternative to it. Still, the battle for a real consumer awareness (and thus a change to the status quo) has only just begun.

One would think that the advent of the internet would have facilitated the dissemination of rational, educated, demonstrable thoughts and ideas to the world, but it has also made it easier for would-be dictators (and their followers) to spread disinformation (fear), bigotry (fear), and irrational fear-of-'other'(-than-followed-dictate) ideas (also fear), and experience has shown us that those who 'need' to make the most noise are often those least deserving of our attention. To one seeing our networks and screens monopolised by this desperate brouhaha, it may seem that our world is dominated by it, but a closer examination of the real, declining-criminality-and-war state of things shows that this is not so.

Filling the Void.

Many would-be dictators disparage the loss of the 'community' aspect that their respective regimes used to bring, and it is true that, at least for the time being, there is not much on the horizon to fill the void, but, in this author's humble opinion, this is largely due to the demoralising effect of their respective noise-machines (which makes their complaints disingenuous to the core). And the answer to this noise, at least for the time being, seems to be something best described as a disparate, too-multi-faceted (and distracting-from-the-real-problem) utopic fog of ambiguity, because, yes, although seemingly well-intentioned, many who would like to make a safe place for themselves in society are not (critically-)thinking beyond the survive-by-imitation bubble of their own 'identity' (sense of comfort, 'self'), either.

So what do we have to counter a 'united in sameness' (and fear-of-different-from-that) voter bloc? For the time being, all we have is a largely silent 'meh' (non-)voter bloc peppered with small-in-comparison 'identity' groups. Concerning the latter, the focus should be on the non-rational, fear-of-different survive-through-imitation(-panderers) causing the exclusion, not on the excluded. Already, a 'united against all forms of bigotry'4 force would be one to reckon with.

The 'meh' (non-)voter bloc seems to feel that their voice doesn't count, that it doesn't matter... but are most of us not living in a democracy? What if we replaced the centuries-old 'tradition' of weekly irrational-and-indemonstrable-superstition-and-fear-themed meetings with others that are places to make our thoughts as individuals heard and recorded, to compare, discuss, and morph our individual thoughts into consensus?5 If such a thing were organised around administrative communities from a grassroots level, and the results published to recorded history (online) where others can see, compare (and think about!) them, hell, I'd participate. And that, too, could be a force for the ignorance-exploiters-and-panderers-that-be to reckon with.

In short, while the 'follow-minded' go to rallies organised 'for' them by those who dictate 'for' them what's 'good' or 'bad' 'for' them, we who 'dare' think for ourselves would do better to organise meetings where we can decide, between ourselves, what's good or bad for ourselves.




1 - In any case, it has been widely demonstrated that the brain undergoes a 'pruning' process around adolescence.
2 - This in itself is complex: a child knowing nothing but squalor might not perceive this state as 'uncomfortable'.
3 - Emotions such as empathy (sense of sharing, and the brain 'rewards' thereof), may come into play here, but omitted for simplicity's sake.
4 - No, 'thwarting our promotion of bigotry' is not bigotry.
5 - Does this remind anyone else of anything Classical Greece taught us?

Thursday, 13 June 2019

Independence of Mind without Resources

I come from a position of disadvantage. I had no family fortune (or hardly any help at all, for that matter) to get me going in life. I am doing trades that have nothing to do with my education (an education that, as its promised end result was a ('successful') 'being like everyone else', I did not understand or even see the point of at the time), so I became entirely dependent on 'networking' to find work (as there are few 'traditional' companies who would take my five-page-long multi-trade CV seriously). In fact, in all my 47 years, I have held two full-time salaried jobs: the first convinced me to get the hell out of that 'jumping through hoops for rewards' rut and do something for myself, and the second... well, was so 'easy' that it temporarily corrupted my results-based work ethic (and I doubt that I'll ever see an opportunity like that again).

So when one is without resources, they have only their willingness to work to count on, though intelligence (education, experience, imagination) comes into play, too. But when confronted with real-world situations, we run into a problem: (comfortable) humans with resources are, paradoxically, often those with the least will to work and the least imagination. So when I, seeking resources, show up with my ideas and willingness to work, the resource-provider has the option of just taking the former... and, if my situation of precarity (a foreigner without resources) becomes too evident, they have the additional option of making me do all the work, then reneging on their side of the deal. It was often like this until I became less trusting of 'the better angels of our nature' (but I still fail there from time to time).

But my road to this understanding was a long one. I began from a place of utter naiveté (my childhood was fairly devoid of 'normal' human interaction), a (childhood) lack-of-affection-generated too-eagerness-to-please, and a total disability when it came to dealing with dishonesty (I tended to wax credulous in reaction to even outrageously dishonest claims or blame/responsibility-displacement (onto me)). All of this tended to lend value to the existence of others around me, and none to my own. And a childhood-instilled lack of confidence in myself added to the mix: until recently, I had a hard time demanding a decent wage for my work (because I (somehow) felt that I didn't 'deserve so much'). Also figuring prominently was my (also childhood-instilled) credulity towards - and fear of - authority: only through direct work with such supposed 'adults' was I able to dispel that misconception, because many 'authority figures', most all of them in places of comfort, are actually lesser beings (utility- and survival-wise) than the average worker, with less imagination, too.

Am I laying blame for all that? It's hard to, because everyone involved was most likely convinced that they were doing the 'right thing' (at the time they were doing it). And humans with no value-judgement abilities (or desire or will to learn them, or to accept the responsibility for making such judgements) will repeat the same patterns as long as it 'works' for them (meaning: as long as it doesn't put their survival (comfort) in jeopardy). Some concerned actors probably still don't understand the error of their ways even today. When considering such things ('fault'), it's hugely important to consider motivation, and whether the actor was knowingly doing damage/taking advantage... and that's often hard to determine, as feigned indignation is a common 'defence' in situations of idea-reality discord/dishonesty, too.

The curse is triple when one considers that, with that understanding, not only will a resource(-or-safety-net)-free person be sure to be exploited, they will often be obliged to accept that exploitation with a full understanding of the imbalance of it all... or retire from society completely. But how can one do that without any resources of one's own and survive?

Monday, 3 June 2019

Revising Copyright: Quality Control + the Attribution System.

Already I'm dismayed at seeing those who have done no work benefit from the invention/work of others. Only the morally bankrupt (see: a socio/psycho-path) could ever do this. Damn Edison for creating the 'model' of the investor (they who have already profited from/exploited the work of others) getting the credit and profit from an inventor's innovation and work, and not the inventor. Who actually invented the lightbulb? You probably still have no idea.

But with that little rant out of the way, how do we treat copyrighted material in this internet age?

The powers-that(-would)-be seem to be desperately clinging, 'all or nothing', to an old-world copyright system, and it is failing them, as it is impossible to locate and control all points of data exchange. Not only do their vain attempts to locate, remove, paywall or monetise copyrighted material fail, but their efforts can become an incentive to piracy.

It goes beyond there: especially annoying is the 'copyright paranoia' reigning over one of the world's principal sources of information, Wikipedia: magazine and album cover-image use is restricted to an article about that magazine or album, making it impossible to use such art for articles on a band member or book author. As a demonstration of this last point, I am at present working on the article about Camera magazine editor Allan Porter, and I cannot use any images of the books he authored or worked on. Even portraits of him (given to me by the man himself) are under strict control, and cannot be above a certain pixel dimension. I do understand the reasoning behind this, but this tongue-tied practice is only kowtowing to (thus enforcing) the existing 'system' without doing anything at all to change it.

It's about the quality, stupid.

I thought this even back in Napster days, when the music industry moguls were doing their all to track down and remove/paywall any instance of 'their' product. The irony is that the solution to their dilemma already existed in the quality standards of online music: 128kb/s, a quality comparable to a radio transmission, is palpably better than the 96kb/s some 'sharers' used to save bandwidth on a still-slow internet. Yet who would want to listen to the latter on their hi-fi stereo system? It might be interesting to consider a system where only the free distribution of music above a certain bitrate is considered piracy.
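
A minimal sketch of that last idea, assuming the mutagen library (the file name and the 128kb/s cut-off are purely illustrative): anything below the threshold would be treated as freely shareable 'radio quality', anything at or above it as 'distribution quality'.

    # Sketch only: the threshold and file name are illustrative, not a proposal of exact values.
    from mutagen.mp3 import MP3

    THRESHOLD_BPS = 128_000   # bits per second

    def freely_shareable(path: str) -> bool:
        # mutagen reports an MP3's bitrate in bits per second
        return MP3(path).info.bitrate < THRESHOLD_BPS

    print(freely_shareable("some_track.mp3"))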

The same goes for images: even from my photographer's point of view, I consider any image I 'put out there' as 'lost' (that is, that it will be freely exchanged and used), and it is for that reason that I am very careful to only publish images below a certain pixel dimension online.

Automatic Attribution

It would even seem that the free distribution of low-quality media would benefit its authors from an advertising standpoint, but... it is still rare to see an attribution on any web-published media, even today. So how can we easily attribute a work to its author?

I think the solution lies in something similar to the EXIF data attached to most modern digital images: were this sort of 'source' info attached to all file-format data circulating on the web, we would have no more need to add/reference (often ignored, and still-rudimentary) licence data, and our website applications could read it and attribute a link-accreditation (an overlay for images, a notification for music, for example) automatically... and this would demonstrably be a boon to media authors.
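
As a minimal sketch of the above (assuming the Pillow imaging library, and using only attribution fields that already exist in the EXIF standard - tag 315, 'Artist', and tag 33432, 'Copyright'), a web application could read the embedded info and generate an attribution overlay automatically; the HTML class name here is of course just an illustration.

    from PIL import Image

    def attribution_overlay(image_path: str) -> str:
        exif = Image.open(image_path).getexif()
        artist = exif.get(315)     # EXIF 'Artist'
        notice = exif.get(33432)   # EXIF 'Copyright'
        parts = [p for p in (artist, notice) if p]
        if not parts:
            return "<!-- no attribution data embedded -->"
        return '<div class="attribution">' + ", ".join(parts) + "</div>"

    print(attribution_overlay("example.jpg"))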

And it doesn't end there: this ties into the RDF 'claim attribution' system I am developing, as this add-on would allow the media itself to be perfectly integrated into the 'data-cloud' that would be any event at any given point in time... but, once again, I digress.

Monday, 29 April 2019

ANN (Artificial Neural Network) OCR: A likely dead-end method. Considering a new approach.

In my recent dives into AI-dependent endeavours, I've been presented with the gargantuan task of extracting data from countless pages of printed, and often ancient, text, and in every case, I've run up against the same obstacle: the limitations of Artificial Neural Network (henceforth 'ANN')-dependent OCR.

For starters, ANN is but an exercise in comparison: it contains none of the logic or other processes that the human brain uses to differentiate text from background, identify text as text, or identify character forms (why is an 'a' not a 'd', and what characteristics does each have?). Instead, it 'remembers' through a library of labelled 'samples' (images of 'a's named 'a') and 'recognises' by detecting these patterns in any given input image... and in many OCR applications, the analysis stops there. What's more, ANN is a 'black box': we know what's in the sample library, and we know what the output is, but we don't know at all what the computer 'sees' and retains as a 'positive' match. Of course it would be possible to capture this (save the output of every network step), but I do not think this would remedy the shortcomings just mentioned.
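
To make the 'exercise in comparison' point concrete, here is a toy sketch (invented 5x5 bitmaps, nothing to do with any real ANN implementation) of what recognition-by-similarity amounts to: the 'recognised' character is simply the stored sample with the smallest pixel difference, with no notion of strokes, curves or letter structure anywhere in the process.

    SAMPLES = {
        'I': ["..#..", "..#..", "..#..", "..#..", "..#.."],
        'L': ["#....", "#....", "#....", "#....", "#####"],
        'T': ["#####", "..#..", "..#..", "..#..", "..#.."],
    }

    def pixel_distance(a, b):
        # count the cells where the two bitmaps disagree
        return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

    def recognise(bitmap):
        # lower score = closer match; that is the whole 'analysis'
        return min(((ch, pixel_distance(bitmap, s)) for ch, s in SAMPLES.items()),
                   key=lambda t: t[1])

    damaged_L = ["#....", "#....", "#....", "#....", "####."]
    print(recognise(damaged_L))   # ('L', 1)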

The present logic-less method may also be subject to over-training: the larger the sample library, especially considering all the forms (serif, sans-serif, italic, scripted, etc.) a letter may take, the greater the chance that the computer will find 'false positives'; the only way to avoid this is to do further training, and/or training specific to each document, a procedure which would limit the required library (character styles) and thus reduce error. But this, and any further adaptation, requires human intervention, and still we have no means of intervening in or monitoring the 'recognition' process. Also absent from this system is a probability determination (it is present, but only as an 'accepted' threshold programmed into the application itself), and this would prove useful in further character and word analysis.

And all the above is specific to plain text on an uncluttered background: what of text on maps, partly-tree-covered billboards, art, and multi-coloured (and overlapping) layouts? The human brain 'extracts' character data quite well in these conditions; therein lie other deduction/induction processes absent from ANN as well.

Considering Human 'text recognition' as a model.

Like many other functions of the human brain, text recognition seems to function as an independent 'module' that contributes its output to the overall thought/analysis process of any given situation. It requires creation, then training, though: a dog, for example, might recognise text (or any other human-made entity) as 'not natural', but the analysis ends there, as it has not learned that certain forms have meaning (beyond 'function'), and may thus ignore them completely; a human, when presented with text in a language they were not trained in, may recognise the characters as 'text' (and there are other ANN-absent rules of logic at work here), but that's about it.

What constitutes a 'recognised character'? Every alphabet has a logic to it: a 'b', for example, in most every circumstance, is a round-ish shape to the lower right of a vertical-ish line; stray from this, and the human brain won't recognise it as a 'b' anymore. Using the exact same forms, we can create 'p', 'a', 'd', and 'q' as well... the only things differentiating them are position and size. In fact, in all, the Roman alphabet consists of fewer than a dozen 'logic shapes'.
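
A toy illustration of that 'logic shapes' idea (the labels are my own, purely for illustration): four letters described as the same two primitives, a vertical stroke and a round 'bowl', distinguished only by where the bowl sits.

    LETTER_LOGIC = {
        'b': {'stroke': 'vertical', 'bowl_at': 'lower-right'},
        'd': {'stroke': 'vertical', 'bowl_at': 'lower-left'},
        'p': {'stroke': 'vertical', 'bowl_at': 'upper-right'},
        'q': {'stroke': 'vertical', 'bowl_at': 'upper-left'},
    }

    def distinguishing_features(x, y):
        # the keys on which two letters actually differ
        a, b = LETTER_LOGIC[x], LETTER_LOGIC[y]
        return [k for k in a if a[k] != b[k]]

    print(distinguishing_features('b', 'd'))   # ['bowl_at']: same shapes, different placement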



Not only can the human brain detect and identify these forms: it can also 'fill in the blanks' in situations like, say, a tree branch covering a billboard: the overall identification process seems to be an initial 'text, not text' separation, followed by the removal of 'not text' from the picture; the brain then seems to 'imagine' what the covered 'missing bits' would be, and this is submitted for further analysis.

But the same holds true in cases where a character is badly printed, super-stylised, missing bits, etc.: in fact, if a word is not instantly readable (and this is a highly-trained process in itself), the brain seems to 'dig down' a level to determine what the 'missing' character should be, and 'matches' against that, and this is a whole other level of analysis (absent from ANN). In fact, were we able to extract the probability of a character match for every given character of any given word and compare this to a dictionary, we would not only create another probability (the 'refined' chances of character 'x' being 'y'), we would also have a means for the computer to... train itself.
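
A minimal sketch of that last 'dig down a level' step (the per-character probabilities here are invented, purely for illustration): score each dictionary word as the product of its letters' probabilities and keep the best; the winning word could then be fed back as new labelled training material.

    from math import prod

    char_probs = [                      # one candidate distribution per character position
        {'c': 0.70, 'e': 0.20, 'o': 0.10},
        {'a': 0.55, 'o': 0.40, 'e': 0.05},
        {'t': 0.60, 'l': 0.30, 'i': 0.10},
    ]
    dictionary = ['cat', 'cot', 'eat', 'col', 'oil']

    def word_score(word):
        if len(word) != len(char_probs):
            return 0.0
        return prod(p.get(ch, 0.0) for p, ch in zip(char_probs, word))

    best = max(dictionary, key=word_score)
    print(best, word_score(best))       # 'cat': 0.70 * 0.55 * 0.60 = 0.231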



Thursday, 7 February 2019

Gravity and Light (energy) is Everything.

Everything is gravity. Everything is light. Everything is both, and they're indissociable. In fact, the two together form a constant in itself, a constant that represents the base, or the base function, if you will, of everything existing in our universe.

The inverse-square law is omnipresent in both gravitational equations and magnetism (note: in an earlier post, I hypothesise that both are a variation of the same thing): the closer one measures to a given particle, the higher its gravitational pull, in a proportionately constant way. Concerning gravitation, Newton states:
"I ſay that a corpuſcle placed without the ſphærical ſuperficies is attracted towards the centre of the ſphere with a force reciprocally proportional to the ſquare of its diſtance from that centre."

- Isaac Newton
Or, 'translated' into modern equation form:

$F = G \frac{m_1 m_2}{r^2}$

For much of modern physics, this 'law' holds true until we reach a hard-to-observe (and never-observed) sub-atomic level, but 'goes west' when immersed in the sea of mathematical hypotheticals beyond. I'd like to maintain that Newton's observation holds true all the way down, but to do so I must explain my ideas on the dynamics that lead to this.

Every Particle: a Point Divided.

For the purpose of this article, 'particle' designates any non-construct manifestation of energy, that is to say any single photon, quark, electron, etc. (without considering the dynamics that will be explained later).

Where there is a particle, there is gravity. Every particle, isolated, is in a 'stable' state, an energy maintaining a constant 'resistance' against that gravitational force, in an action that could almost be considered an orbit. Gravity is considered to be a 'weak' force (not to be confused with the actual 'weak force' of physics), as the gravity emanating from a single particle is almost immeasurable, and only combined (into a mass) does it begin to gain discernibility; but we think that only because we can only approach and observe said particle from outside a certain distance.

And it's that concept of 'distance' that has to change: it's only recently that we've begun to understand that it is quite possible, and quite normal, to observe and manipulate physics phenomena below the level of human observation; in the present day, we seem to have paused at the sub-atomic level, but, like a camera zooming in from a view of the entire Earth to a single electron, the plunge can go far, far beyond the latter point.

We seem to apply the 'inverse square' observation without considering what may happen at that extreme depth (of observation). Already we know that if we take the idea of a hydrogen atom and 'blow up' its proton to the size of a pea, one would need a football stadium to contain the orbit of the electron around it. And already that 'binding strength' (which physics does not consider to be gravity) is quite strong. So, should we retain the distance-attraction-strength aspects of the above model (because the 'has mass' atom model itself is not what we're looking at here), we see that, at that level, the electron (particle) is still quite 'far away' from the point attracting it. But what if we were to take this dynamic to an even deeper level, even closer to any given point of attraction, within a particle itself?
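
(A quick back-of-the-envelope check of that scale, using round figures: with a proton charge radius of roughly $0.84 \times 10^{-15}$ m and a Bohr radius of roughly $5.3 \times 10^{-11}$ m, the ratio is about $63\,000$, so a 'proton' pea of about 5 mm radius would put the electron some $63\,000 \times 5$ mm $\approx 315$ m away... very much a football-stadium order of magnitude.)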

Before we go there, I'd like to return to the 'particle stability' dynamic. As has already been observed, different particles have different energy levels: I'd like to propose that that energy level is directly tied to the distance from the 'centre point' it is bound to, that is to say, the source of the gravitational force. The basic rule (here) is: to maintain particle stability, the closer an energy is to its source of gravity, the higher it has to be. In fact, if we could measure accurately the energy level of any given particle, then, using the inverse-square observation, we should be able to calculate that energy's distance from the centre of the gravitational pull attracting it. So if we were to consider the gravity of the 'weak bind' hydrogen atom model, and scale it by that inverse square down to the level of a single particle, the gravitational pull there must be enormous indeed... and thus so must be the energy level required to maintain a stable state at that level.
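
To put that proposal in the simplest possible form (the proportionality constant $k$ is just a placeholder of mine, not a claim about its value): if the stability condition is written as

$E = \frac{k}{r^2}$

then an accurately measured energy level would give the distance to the centre of attraction as

$r = \sqrt{\frac{k}{E}}$

and, read the other way, halving that distance would demand four times the energy to maintain the 'stable' state.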

Side-note: Light (EMR) is a particle; everything above (in energy level) is two halves of one. 

To avoid referring readers to earlier posts, I'd like to summarise these briefly here: there I posit that any ElectroMagnetic Radiation (EMR) 'boosted' above a gamma-level energy will split into two 'halves', and that their constant 'forward' light-speed (in relation to their respective points of origin) will be no more. These two 'halves' will have what modern physics calls 'charge', and they will be opposite, that is to say, positive and negative. Consider a waveform with a line down the centre of its path: everything above will be 'positive', and everything below 'negative'. Yet, although separate, those two halves are still one unique entity. This should not be confused with Einstein's 'spooky action at a distance' because, although the effects of this phenomenon would be the same, his hypothesis (rather, his reflections on someone else's work) about the creation of that condition is quite different.

I suppose that I should also outline what might happen after that split: here I posit that the halves of a 'split fermion', since they are halves of a unique 'thing', will not be attracted to each other, and cannot annihilate each other, but the opposing halves of two different particles can, or at least they'll try. Since the respective particle energies are close enough to their respective gravitational centres to allow another, opposing particle-half to get close enough to be captured by the enormous gravitational draw at that proximity, the two will attempt to bind, in a dynamic modern physics calls 'the strong force'. I won't get into the dynamics of 'particle(-half) binding' here, but the only 'stable' particle-half combination seems to be a trio, either two positives and one negative or vice versa, locked in an eternal inter-annihilation struggle, and, depending on the polarity, the result is either a proton or a neutron hadron... and this brings us back up to the atomic-level physics we know.

Model conclusion

I've done my best to explain my ideas about an 'energy vs. gravity' dynamic of any given particle in the simplest way possible (and I hope I've succeeded), but if any of this stands up to testing, another reality becomes true: all single particles, that is to say photons, quarks (of all 'colours'), electrons (and positrons), neutrinos, etc., are but variations of the same thing.

Wednesday, 10 October 2018

Dealing with Depression

I try to make my public profile a 'progressive, positive' one, but sometimes my frustrations with eking out a living (while maintaining as much integrity as I can) in an increasingly shallow, between-(programmed)-classes-exploitative world get the better of me: we should be helping each other overcome our weaknesses, not using them as leverage to gain control over each other and everything else (and all the ill-gotten 'reward' that that brings).

And in this 'survive-through-imitation (so jump through these hoops, or else!)' society (henceforth STI), I tend to be out of even the out-group; my seeking to understand in-group behaviour itself seems to be enough for any STI to put me into the out-group, and I have never learned to just 'fake it' (and since I can't read minds, that is probably an impossible task). And when I explain to my clients the solution to a relatively simple problem (so they won't have to call me next time), I get looked at like I'm from another planet, but my message in doing that is simple: 'Anyone can do this, so save your money for real problems'... but when things get tight, I've learned to do that less (and I always feel bad about it).

My underlying integrity also saddles me with an inability to compete with others in the market: I still 'fondly' remember the day when some 'gung-ho' new employee in one of my client companies would rather call someone else (less aware?) than face up to their fucking up a system I had built for said client company... not only did I eventually have to witness that sort of manipulation, but I had to witness the lie-filled play-acting of 'the other guy'... and had to watch my clients falling for it (because, as they knew nothing about 'how' it worked (only that it did, then didn't), how could they tell the difference?). And from then on the gung-ho grudge-holder tried pinning every ensuing problem on me (when it was really a chain reaction rooted in their own fuck-up)... I have no weapons against that sort of dishonesty, so I ended up just dropping them from my client list (which is what gung-ho wanted anyway).

But by doing that, I'm short-changing myself in another way: the brain needs 'reward' situations for motivation, and normally taking home a paycheck should be one of those, but through my added level of thought (understanding), I tend to cancel that out, or 'sabotage' it, as some may say... but once one spreads their net of awareness about the effect they have on others around them over an even wider area, there's no going back.

So, even though today my survival-experience has given me abilities far above any that an education could provide, I find myself unable to be 'taken seriously' because of my 'not jumping through the same hoops, the same way, as everyone else'... and I say that because I even thought to seek refuge in supposedly high-minded, rational academia (where I initially felt very fearful and small), but that presupposition turned out to be an illusion, too.

So, at present I'm in a very, very, very reward-less world. Not only that, but I'm also blocked from exiting that dilemma by... 'powers' beyond my control (a single person taking advantage of their legal status (and mine being dependent on theirs) to make me pay for their work-less, ignorant (and expensive!) space-filling ways). I was already in a depression before this supposedly-'new start' apartment (and my ex-drug-pusher-more-than-anything psychiatrist thought that I've always been depressed; I'm supposedly unable to even process 'reward' (meaning that I have no experience with it), meaning that the drugs they prescribed would never work, either (they didn't), yet they continued to prescribe them (€192 every two weeks, on average)... this makes no rational sense, but I digress).

I used to depend on alcohol to 'boot' me out of a depressive state, or to make me numb enough to ignore those reward-killing 'details', but even that stopped working once I found a better understanding of what it does to the brain (something I learned while doing research into why the antidepressants were fucking me up so much) and saw it for the 'fake reward' it is. And there, too, there's no going back (side note: so why do alcoholism 'support' programs never include the critical-thought (lessons) required to overcome/re-program our 'default' reactions and instincts?). All the same, I would sometimes resort to my 'knock-out remedy' when a depressive bout was particularly bad, but even that stopped working, to the point where drinking itself seemed... a sad and pointless exercise. I can drink socially again, but I can't really say that I like it anymore... in any case, it's not what it was before. I feel about alcohol today the way I always felt about weed before: one (rational) part of my brain being concerned about another part not functioning 'correctly'... not a positive experience, and it's like watching myself twice (in everything I think and say while 'influenced'), in a way.

So, today I have neither alcohol nor antidepressants to 'help me through' depression, and my understanding of my state seems to be just another obstacle; ignorance could be bliss, but only in an ignorance-exploiting world (like today's). My only remedy is to make my own rewards, and that solution seems, for me, to be to cut myself off from the world entirely and do some sort of humanities-service task that will remain in human memory. I would also like to develop my AI research (and I think I'm onto something there, but good luck convincing anyone to finance non-tech-educated me) and my RDF development (my at-once utterly simple and horribly complex 'fact engine'), but I just can't support myself that way.

At this point I find solace in working with both my mind and my hands (and this apartment was that, at one point - I did get a new kitchen-countertop technique invention out of it), but what I really need is to invest myself in something that I've evidently not yet had in my life: something for me.