Monday 3 June 2019

Revising Copyright: Quality Control + the Attribution System.

Already I'm dismayed at seeing those who have done no work benefit from the invention and work of others. Only the morally bankrupt (see: a sociopath or psychopath) could ever do this. Damn Edison for creating the 'model' in which the investor (one who has already profited from exploiting the work of others), and not the inventor, gets the credit and profit from an inventor's innovation and work. Who actually invented the lightbulb? You probably still have no idea.

But with that little rant out of the way, how do we treat copyrighted material in this internet age?

The powers-that(-would)-be seem to be clinging desperately, all or nothing, to an old-world copyright system, and it is failing them, as it is impossible to locate and control every point of data exchange. Not only do their vain attempts to locate, remove, paywall or monetise copyrighted material fail, but their efforts can become an incentive to piracy.

It goes beyond that: especially annoying is the 'copyright paranoia' reigning over one of the world's principal sources of information, Wikipedia, where the use of magazine and album cover images is restricted to the article about that magazine or album, making it impossible to use such art in articles on a band member or book author. As a demonstration of this last point: I am at present working on the article about Camera magazine editor Allan Porter, and I cannot use any images of the books he wrote or worked on. Even portraits of him (given to me by Porter himself) are under strict control, and cannot exceed a certain pixel dimension. I do understand the reasoning behind this, but this tongue-tied practice only kowtows to (and thus enforces) the existing 'system' without doing anything at all to change it.

It's about the quality, stupid.

I thought this even back in the Napster days, when the music industry moguls were doing their utmost to track down and remove or paywall any instance of 'their' product. The irony is that the solution to their dilemma already existed in the quality standards of online music: 128 kb/s, a quality comparable to a radio transmission, is palpably better than the 96 kb/s some 'sharers' used to save bandwidth on a still-slow internet. Yet who would want to listen to the latter on their hi-fi stereo system? It might be interesting to consider a system where only the free distribution of music above a certain bitrate is considered piracy.
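
To make the idea concrete, here is a minimal sketch of such a bitrate-threshold rule, assuming Python's mutagen library; the 128 kb/s cutoff is purely illustrative, not a proposal for the exact number:

```python
# A sketch of the hypothetical bitrate-threshold rule described above.
# Assumes the `mutagen` library; the cutoff value is illustrative only.
from mutagen.mp3 import MP3

FREE_DISTRIBUTION_CEILING = 128_000  # bits per second (hypothetical cutoff)

def is_freely_distributable(path: str) -> bool:
    """True if the MP3's bitrate is at or below the hypothetical ceiling."""
    return MP3(path).info.bitrate <= FREE_DISTRIBUTION_CEILING
```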

The same goes for images: even from my point of view as a photographer, I consider any image I 'put out there' as 'lost' (that is, that it will be freely exchanged and used), and it is for that reason that I am very careful to publish only images below a certain pixel dimension online.
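
That habit is easy to automate; a quick sketch using Python's Pillow library (the 1200-pixel ceiling is an arbitrary example, not my actual limit):

```python
# A sketch of the publish-small habit: downscale an image before putting
# it online. Uses Pillow; the pixel ceiling is just an example value.
from PIL import Image

MAX_DIMENSION = 1200  # pixels (example ceiling)

def prepare_for_web(src: str, dst: str) -> None:
    img = Image.open(src)
    img.thumbnail((MAX_DIMENSION, MAX_DIMENSION))  # keeps aspect ratio; never upscales
    img.save(dst)
```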

Automatic Attribution

It would even seem that the free distribution of low-quality media would benefit its authors from an advertising standpoint, but... even today it is still rare to see an attribution on any web-published media. So how can we easily attribute a work to its author?

I think the solution lies in something similar to the EXIF data attached to most modern digital images: were this sort of 'source' info attached to every file format circulating on the web, we would have no more need to add or reference (often ignored, and still rudimentary) license data; our web applications could read it and attribute the work automatically, perhaps as a link accreditation (an overlay for images, a notification for music, for example)... and this would be a demonstrable boon to media authors.
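
To sketch the idea, here is what such automatic attribution might look like for images, assuming Python's Pillow library and using the standard EXIF 'Artist' and 'Copyright' tags as stand-ins for richer source data:

```python
# A sketch of automatic attribution: read authorship metadata already
# embedded in an image (the standard EXIF Artist/Copyright tags, via
# Pillow) and build the line a site could overlay on or link under it.
from typing import Optional
from PIL import Image, ExifTags

def attribution_line(path: str) -> Optional[str]:
    exif = Image.open(path).getexif()
    # Translate numeric EXIF tag IDs (e.g. 315) into names (e.g. "Artist")
    named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
    parts = [p for p in (named.get("Artist"), named.get("Copyright")) if p]
    return " / ".join(parts) if parts else None
```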

And it doesn't end there: this ties into the RDF 'claim attribution' system I am developing, as this add-on would allow the media itself to be integrated seamlessly into the 'data-cloud' describing any event at any given point in time... but, once again, I digress.
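
For the curious, a toy example of what one such attribution 'claim' might look like as an RDF triple, assuming Python's rdflib; the URI and the name are purely illustrative:

```python
# A toy triple in the spirit of the 'claim attribution' idea: the media
# file as subject, its author as a Dublin Core `creator` claim.
# Uses rdflib; the URI and name below are purely illustrative.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()
photo = URIRef("https://example.org/media/photo-001.jpg")
g.add((photo, DC.creator, Literal("Jane Photographer")))
print(g.serialize(format="turtle"))
```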
