
Story: Do Automated Cross-Platform Network Backups The Easy Way (Part 2)
Total Replies: 7
gemlog

Jun 12, 2006
3:04 PM EDT
I thought I tried to promote this notion here before (but maybe not):

http://www.mikerubel.org/computers/rsync_snapshots/

I've been using a modified version of this for at least a couple of years now. In combination with ssh pub keys it's great. I normally have hourly, daily and weekly snapshots. Just 24 hours' worth -- not 24x7x5 :-)
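
For anyone who hasn't seen Mike's page, the core of the idea is roughly this (just a sketch, not his actual script; the paths and the number of snapshots kept are made up):

    #!/bin/bash
    # Rotating hourly snapshots with hardlinks -- a sketch of the idea only.
    # SRC, DEST and the retention of four snapshots are examples, not a real setup.
    SRC=/home
    DEST=/backup/home

    # Drop the oldest snapshot and shift the others down by one.
    rm -rf "$DEST/hourly.3"
    [ -d "$DEST/hourly.2" ] && mv "$DEST/hourly.2" "$DEST/hourly.3"
    [ -d "$DEST/hourly.1" ] && mv "$DEST/hourly.1" "$DEST/hourly.2"

    # Hardlink-copy the newest snapshot: nearly free, since only directory
    # entries are duplicated, not file data.
    [ -d "$DEST/hourly.0" ] && cp -al "$DEST/hourly.0" "$DEST/hourly.1"

    # Sync live data into hourly.0. rsync replaces changed files with fresh
    # copies instead of writing into them in place, so the older snapshots
    # keep the versions they had.
    rsync -a --delete "$SRC/" "$DEST/hourly.0/"

Each hourly.N directory looks like a complete copy, but unchanged files share their data across all of them, which is where the low cost comes from.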

Actual 'cost' storage-wise depends on how big your files are and how often they get deleted and created, of course. I seem to hover around 20% extra storage for all those snapshots.

I know Mike backs up a large Uni department with it (papers, grades, theses, etc.). Me, I use it on everything from normal users to clients. You can easily get at the ms boxen via samba.

Maybe your backuppc uses hardlinks too, dunno.
grouch

Jun 12, 2006
4:51 PM EDT
gemlog:

You did point that out in a thread some time back, but it sure doesn't hurt to link it again. There are several excellent ideas implemented there.
gemlog

Jun 12, 2006
10:42 PM EDT
Oops. I thought I might have.

I evangelize the notion as much as possible -- it's just so damn clever. When I originally read it (O.K., maybe 2nd or 3rd pass...), I thought "How cool is that?". Really wish I'd thought of it :-) Saves me a lot of time supporting people too.

A significant but non-obvious benefit is that users can restore their own backups without the intervention of IT -- permissions are retained on all the backups. And if you make access read-only, they can't mess those up either; if they pull out the 07h00 version when they meant the 09h00 version, or Tuesday instead of Wednesday, why, they can just go grab the correct one. Win, win.

tuxchick2

Jun 14, 2006
8:06 AM EDT
Anyone who read the actual articles linked here would have learned that BackupPC permits user-controlled backups and restores, and is more efficient than Mike Rubel's excellent script because it creates a hardlinked pool of common files, rather than making multiple copies of the same file. On one of my customer's systems we have nearly a terabyte of raw data occupying about 470 gigabytes of backup storage. Pretty slick!
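
To make the hardlink-pool point concrete, the space saving works like this (an illustration of the principle only, not BackupPC's actual on-disk layout or file naming):

    # Illustration only -- not BackupPC's real pool layout. Several backup trees
    # can all point at one stored copy of a file they have in common.
    mkdir -p pool pc/host1 pc/host2
    echo "identical report" > pool/c0ffee      # the one copy in the pool
    ln pool/c0ffee pc/host1/report.doc         # host1's backup links to it
    ln pool/c0ffee pc/host2/report.doc         # host2's backup links to it
    du -sh pool pc                             # the data is only counted once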

Additionally, BackupPC is designed specifically for network backups and administration, and also backs up Windows and Mac clients without needing special client software.

But again, anyone who read the articles would know this.

gemlog

Jun 14, 2006
11:04 AM EDT
Well, if you'd read Mike's script you'd know it uses hardlinks too :-) His uses cp's hardlink option and my copy uses the rsync one, as it's faster (but he pointed that out to me).
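
For the curious, the two flavours look roughly like this (only a sketch; the directory names are examples):

    # 1. GNU cp, as on Mike's page: hardlink-copy the previous snapshot,
    #    then let rsync update it.
    cp -al /backup/daily.1 /backup/daily.0
    rsync -a --delete /home/ /backup/daily.0/

    # 2. rsync on its own: --link-dest hardlinks unchanged files against the
    #    previous snapshot and copies only what changed, in one pass.
    rsync -a --delete --link-dest=/backup/daily.1 /home/ /backup/daily.0/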

Mike does have a terabyte on his version. I only have one client with that much storage, and it's all recoverable from removable media, so I can't speak to that scale. I only use it to back up work performed on the data at that site.

I did allow in my original post that backuppc might use hardlinks, because I knew I hadn't had the time to pore over your article. It's too good an idea not to have currency.

After all this time, I doubt very much that Mike's working copy looks at all like the copy on his website. I know mine resembles it only superficially. The key thing, the important thing, is the insight into using hardlinks for backups. I like the simplicity of a short bash script and cron -- it works for me, friends and clients. Set and forget. But I'm not running GM either. Maybe something more complicated is needed in other environments. I'll worry about that when I come to it. Maybe then I'll reach for backuppc. Or maybe something else. No matter, it's the concept that's important.
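
For the record, the 'set and forget' part is nothing more than crontab entries along these lines (the script name and the times are only an example):

    # Illustrative crontab entries -- the script path and schedule are made up.
    0 * * * *    /usr/local/bin/make_snapshot.sh hourly
    30 2 * * *   /usr/local/bin/make_snapshot.sh daily
    45 3 * * 0   /usr/local/bin/make_snapshot.sh weekly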

tuxchick2

Jun 14, 2006
11:10 AM EDT
Why comment on articles you haven't read? Excuse my grumpiness on the subject, but it happens enough to make me rather impatient with it.
gemlog

Jun 14, 2006
11:33 AM EDT
> more efficient than Mike Rubel's excellent script because it creates a
> hardlinked pool of common files, rather than making multiple copies

I learn by example :-)

Really tuxchick, the reason I was responding to an old post was that I hadn't even had time to read lxer for *days*. I hate it when people don't read my README texts. Or my articles. Or the 3rd warning dialog I put up warning them that disaster is nigh.

But, of course, they continue like that anyhow. And so do I.

And so do you.

We all have a heck of a lot to read each and every day just to keep up with security etc. Never mind reams of docs and manuals. And articles. Some get a detailed study, some get skimmed depending on need.

Believe it or not I posted to be helpful -- not to bruise your ego.

Also, the day I posted that, I recognized that I was rushed and forced myself to go back and just skim through looking for a reference to hard links. Even clicked on part one.

Not enough due diligence to rescue me from this thread though.

I guess you can tell by now that I'm having one of my (rare) non-grumpy days.

Procrastinating between jobs really. Guess I'd better get at 'er before I make you any more upset.

Cheer up. Some days us readers/users/clients are just more irritating than others. This is one of those days :-)
tuxchick2

Jun 14, 2006
12:11 PM EDT
Um, no, I don't comment on things I haven't read. Sometimes I misunderstand what I read, but I don't comment on posts or articles without reading them. If you want to be helpful, make an informed comparison.
