The project is at GAMMA stage, meaning it needs to be heavily checked. In practice it is just shell configuration machinery around the rsync command, and it already has indexing facilities.
What you will be able to do (at present all shell! no GUI yet, sorry):
- Sync a directory, e.g. your data, to an external disk
- See the files changed since the last “sync”; they are saved in dated directories
- Look up an index for fast search of files in the rsync or backup folders
Adding a cron command will make it pretty useful. We will develop it as soon as possible.
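Until that cron support exists, a manual crontab entry can stand in for it. The script name, path, and schedule below are placeholders, not part of BMU:

```shell
# Hypothetical crontab entry (edit with: crontab -e).
# Runs a nightly sync at 02:30; "bmu_sync.sh" is a placeholder name.
30 2 * * * /usr/local/bin/bmu_sync.sh >> "$HOME/.bmu/cron.log" 2>&1
```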
The sections below are just ideas; the code implements them only where supported by additional scripts or code.
The target: A nice backup machinery
There is Time Machine under macOS, and there is the classical incremental + full backup done by most backup systems. In principle they are not comparable if we are backup experts, but in practice they solve the same problem for the end user: getting back old data.
A backup system has this little addition: I get my data back even if a tornado hits both my computer and my central administration.
So why not put them together? I mean Time Machine and the backup?
The BMU project creates a local, time-dependent backup system which can itself be backed up, to achieve a consistent incremental + full, tornado-proof system.
BMU is a first step, made possible by the very low price of data storage devices.
Basics: à la Time Machine
The simple basic usage is to replace Time Machine(tm), in particular for network or networked devices. macOS makes it very hard to use Time Machine(tm) if your network disk is not connected to an Apple device: a Time Capsule or the like.
On Windows you have an almost equivalent service, but it is still a bit hidden.
On Linux, I have not seen one yet, most probably because there are other tools you can use.
Please note: IF YOU create a cron job (working on it soon), BMU works just as you would expect:
- It makes a mirror copy
- It makes dated backups of changed files
- It has a fast, indexed facility to search for files
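The indexed search in the last point can be approximated with a flat file index: dump all paths once, then grep the index instead of walking the disk on every lookup. The index path and format here are guesses for illustration, not BMU's actual on-disk format:

```shell
#!/bin/sh
# Sketch of an index facility (illustrative, not BMU's real format):
# store one path per line in a flat index file, then search with grep.
set -e
ROOT="$(mktemp -d)"                    # hypothetical backup root
INDEX="$ROOT/.bmu_index"

mkdir -p "$ROOT/docs"
echo x > "$ROOT/docs/budget-2021.txt"  # sample file for the sketch

# (Re)build the index: list every file except the index itself.
find "$ROOT" -type f ! -name .bmu_index > "$INDEX"

# Lookup: grepping one text file is far faster than running find
# over a large backup disk for every search.
grep -i "budget" "$INDEX"
```

The index only needs to be rebuilt after a sync, so searches in between cost a single grep.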
What BMU doesn’t have yet:
- A graphical interface, a GUI (help needed here)
- A restore facility
- A cron definition and plan
- A cache for when the backup disk is not present
Advanced: à la versioning system
A problem I had was saving partial work before committing to an actual versioning system like git or svn. While the versioning system keeps versions within the project, nobody was keeping my tests or partial code. Before, I used to tar the full folder. Now I can use BMU for the project, save every step of my work, and decide whether I want to keep it, back it up via the company's or another backup system, or simply remove it forever. Once I like the code, I commit it. But I know I have all the steps saved.
Advanced: Group history
This is not yet in place due to permissions and authentication, but if you work on the same “device” it might already work.
There are tons of collaborative tools: Google, Atlassian, and many more. But when you do not have any, and you work on the same files, you can use BMU. It will keep the history of the changes for you.
A backup for the group projects.
Call for Help
The BMU code at present is a set of shell scripts. They work nicely. But the project needs:
- A GUI
- A time related cron
- Many tests
- Users/Groups definition within the OS
If any of you readers is interested, the link is: