Computers have the distinct ability to keep information without losing a tiny bit (or byte) of it. At least of what they can record, or recognize. A photo is unique when printed on a piece of paper, or on film. It carries way more information than any single computer could save in a file. But most of it doesn't matter to our eyes anyway, so just a small part of it is enough to store it, reproduce it, and keep it recognizable for us.
To do what I believe they do best, which is storing information for fast and fine-tuned retrieval, computers need to copy files. Make backups, spread the same info all over to make it redundant and therefore safe from being lost.
This post is a question for every developer out there: why keep building copy applications based on anything other than rsync?
Still today, decades after computers started doing their thing, copying files is a hassle! It's slow, flaky, and untrustworthy: whenever I try to copy anything that takes more than 10 seconds from any given point A to point B, be it inside my own hard disk, to a pen drive, over the air, or through the internet, I will eventually get a corrupted file, a broken connection, or just unexplained slowness where there should be none.
None of that has ever happened when I've used rsync.
And it can be used for any kind of file copying. If a connection is broken, you can just continue from where it stopped, auto-magically. It never slows down for no reason; in fact, you can choose to compress files on the fly just to get through a slow connection a lot faster, using processing power on both sides, which is a breeze for any PC/Mac nowadays.
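As a rough sketch of what I mean (the paths and host here are made up for illustration), resuming and compression come down to a couple of flags:

    rsync -av -z --partial --progress /home/me/photos/ user@server:/backup/photos/

-a keeps permissions and timestamps, -z compresses data in transit, and --partial keeps half-transferred files around, so if the connection drops you just run the same command again and it picks up roughly where it left off instead of starting over.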
There's just one problem... There's no simple and good interface for it yet! The only one that works fine is the command line, and it's more limited than it could be. It gets hard when you want to copy more than just one file or directory/folder, as the examples below show.
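To be fair, the command line can do it, it's just not friendly. You can pass several sources at once, or feed it a list of paths from a file (again, these paths are placeholders, not a recipe):

    rsync -av docs/ photos/ notes.txt user@server:/backup/
    rsync -av -r --files-from=list.txt /home/me/ user@server:/backup/

--files-from reads the relative paths to copy from a plain text file, which helps when the selection is more complicated than a couple of directories, but it's hardly something you'd expect a regular user to type out.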
So why, oh why, even today, is it still so hard to just copy a file? Seems like computers may still have a long way to go.