The project is at GAMMA stage, meaning it needs to be heavily checked. In practice it is just shell configuration machinery around the rsync command. It already has indexing facilities.
What you will be able to do (at present all shell, no GUI yet, sorry):
- Sync a directory, e.g. your data, to an external disk
- See files changed since the last “sync”, which are saved in dated directories
- Look up an index for fast searching of files in the rsync or backup folders
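The indexing idea in the last point can be sketched with a common shell pattern: cache a `find` listing of the backup tree once, then search the cached listing instead of re-walking the disk. This is only an illustration under that assumption; the project's actual index format and script names are not shown here, and all paths below are throwaway demo paths.

```shell
#!/bin/sh
# Hypothetical sketch of an index for a backup tree (not BMU's actual code).
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/backup/2024-01-01"
touch "$WORK/backup/2024-01-01/report.txt"

INDEX="$WORK/index.txt"
# Build the index once: one file path per line.
find "$WORK/backup" -type f > "$INDEX"

# Later searches hit the cached listing, not the (possibly slow) backup disk.
grep "report" "$INDEX"
```

A real setup would rebuild the index after each sync run, so lookups always reflect the latest dated directories.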
Adding a cron command will make it pretty useful. We will develop it as soon as possible.
The points below are just ideas; the code implements them only where supported by additional scripts or code.
The target: A nice backup machinery
There is Time Machine under macOS, and there is the classical incremental + full backup done by most backup systems. In principle they are not comparable if we are backup specialists, but in practice they solve the same problem for the end user: getting back old data.
A backup system has this little addition: I can get back the data even if a tornado hits my computer and my central administration.
So why not put them together? I mean Time Machine and the backup?
The BMU project creates a local, time-dependent backup system which can itself be backed up, targeting a consistent incremental + full, tornado-proof system.
BMU is a first step, made possible by the very low price of data storage devices.
Basics: a la time machine
The simple basic usage is to replace Time Machine(tm), in particular for network or networked devices. macOS makes it very hard to use Time Machine(tm) if your network disk is not connected to an Apple device: a Time Capsule or the like.
On Windows you have an almost equivalent service, but it is still a bit hidden.
On Linux, I haven’t seen one yet. Most probably because there are other tools you can use.
Please note: IF YOU create a cron job (working on it soon), BMU works just as you would expect.
- It makes a mirror copy
- It makes dated backups of changed files
- It has a fast index facility to search for files
What BMU doesn’t have yet:
- A graphical interface, a GUI (help needed here)
- A restore facility
- A cron definition and plan
- A cache for when the backup disk is not present
Advanced: a la versioning system
A problem I had was saving partial work before committing to an actual versioning system like git or svn. While the versioning system keeps versions within the project, nobody was keeping my tests or partial code. Before, I used to tar the full folder. Now I can use BMU for the project, save every step of my work, and decide whether to keep it, back it up via the company’s or another backup system, or simply remove it forever. Once I like the code, I commit it. But I know I have all the steps saved.
Advanced: Group history
This is not yet in place due to permissions and authentication, but if you work on the same “device” it might work already.
There are tons of collaborative tools: Google, Atlassian, and many more. But when you do not have any and you work on the same files, you can use BMU. It will keep the history of the changes for you.
A backup for the group projects.
Call for Help
The BMU code at present is a set of shell scripts. They work nicely. But the project needs:
- A GUI
- A time related cron
- Many tests
- Users/Groups definition within the OS
If any of you readers are interested, the link is:
I have been struggling with backups forever, I assume just like anybody. For my personal machines I used rsync in combination with my script taritdate.sh. Then I kept reading online about using rsync for incremental backups, but I could not find any simple example. So I wrote this little script:
The ID. The IDentification Number (or code) is used everywhere, and most of the time it is crucial for software applications and for business; much of the time it is also crucial for personal identification.
I would start by noting that the email address is indeed a major example of an ID that is simply understood by most people. It is unique across the Internet.
I needed a simple script to quickly change the input data for a command-line run. The interesting part is at the end: it is an initial script for handling big data.
So this is a first attempt (let’s call the script run_praat.sh):