In addition to January 1 being a time for new beginnings, it’s a Tuesday, and I just found a cool new tool!

2012 was a strange year, full of medical situations and job changes for both Bettie and me.  In the midst of the running hither and yon, some of the smaller things in life were missed.  Usually, Bettie is our photo curator.  She will download pictures from the camera to her computer (since we aren’t geeky enough to use a wireless SD card, even though they are available), group them by subject matter (a day could have multiple subjects), rename the folder to make for easy Picasa searching, and then transfer them to my computer.  In 2012, that didn’t happen.

What I did get was a big data dump of every picture she had, as a way of safekeeping (belt and suspenders: the best backup is a duplicated one).  That gave me a large chunk of pictures I already had – as in tens of gigabytes of duplicated images.  I’m still using spinning disks – I haven’t yet made the jump to solid-state drives – but even with a terabyte hard drive, there’s only about 25G left.

Sure that I had some duplicate files in there, I went looking for a duplicate file finder.  After a couple of missteps, I ended up with Auslogics’ Duplicate File Finder.  This free software comes from Sydney, and works easily: download it, install it, run it.  You can tell it to ignore the file name, and to ignore the file date and time (I did both).  DFF uses an MD5 hash of each file’s contents to verify that the files are indeed duplicates.  Pick which drives or directories you want checked, and let it go to work.  It can look at images only, or at all files.  I created a known duplicate and let it locate it – and it did.  I backed up a couple of directory levels (the programs I tried that didn’t work couldn’t find matches that deep), and it still worked.  Crank it up to looking in My Documents for all pictures, and it found lots.  Many, many lots.
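If you’re curious what that MD5 check amounts to under the hood, here’s a rough Python sketch of the same idea (not DFF’s actual code, just an illustration): hash each file’s contents and group files whose digests match, which ignores names and timestamps entirely.  The starting folder is made up.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def md5_of(path, chunk_size=1 << 20):
    """Hash a file's contents in chunks so big photos don't eat all the memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(root):
    """Group files under root by the MD5 of their contents; names and dates don't matter."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            groups[md5_of(path)].append(path)
        except OSError:
            pass  # skip files we can't read (locked, permissions, etc.)
    # Only digests shared by two or more files are duplicates.
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    dupes = find_duplicates(r"C:\Users\Me\Documents")  # hypothetical folder
    for digest, paths in dupes.items():
        print(digest, *paths, sep="\n  ")
```

A real tool would presumably bucket files by size first and only hash the ones whose sizes match, which is what keeps a whole-drive scan down to hours instead of days.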

So it was time for the big test.  Look at my entire 1T C: drive, starting at the root, and looking for any duplicate files.  If I was going to be serious about this clean-out, I wanted to do a good job.  Then I went to a New Year’s Eve party, so I can’t report on how long this took.  It was done when I returned at about 2AM, so somewhere less than 6 or 7 hours.  And did it find duplicates?

DID IT EVER!

[Screenshot: Auslogics DFF results]

If you don’t want to click through to the pic, that’s almost 88,000 duplicates, totaling over 130G.

I now have some choices to make.  The files are listed in descending size order, so you get your heavy hitters up top.  You also have a choice of how to deal with the duplicates: delete them to the Recycle Bin, remove them directly, or send them to DFF’s Rescue Center (their own, non-deleting version of the Recycle Bin).  That last is the option I chose – and I haven’t finished my efforts.  I would prefer to work at the directory level rather than the file level, and use a bigger machete to cut through all the duplicate brush, so I will be running this multiple times to see what the biggest offenders are.
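Since DFF reports at the file level, that directory-level view is something I’d have to cobble together myself.  As a hypothetical sketch (leaning on the find_duplicates() helper from earlier), this just tallies the wasted bytes per folder so the biggest offenders float to the top.

```python
from collections import Counter

def wasted_bytes_by_directory(duplicate_groups):
    """Charge each redundant copy's size to the folder that holds it.

    duplicate_groups is the dict returned by find_duplicates() above:
    MD5 digest -> list of Paths with identical contents.
    """
    wasted = Counter()
    for paths in duplicate_groups.values():
        size = paths[0].stat().st_size
        for extra in paths[1:]:              # keep one copy, charge the rest
            wasted[extra.parent] += size
    return wasted

# Top twenty folders by duplicated data:
# for folder, nbytes in wasted_bytes_by_directory(dupes).most_common(20):
#     print(f"{nbytes / 2**30:6.2f} GiB  {folder}")
```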

My one item for the wish list is the ability to export the results.  Auslogics has put in some nice selection options – select all, deselect all, invert selection, pick one file per “duplicate group”, pick all duplicates (leaving one unchecked) – and at the end, you can delete whatever is selected.  But my preference is to create a big batch file and do the work myself.  Of course, I’m not your average bear, nor their target audience.
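Until something like an export shows up, that batch file would have to come from my own scan rather than from DFF.  Here’s a hypothetical sketch, again building on find_duplicates() above: keep the first copy in each group and write a Windows move command for every other copy, shoving it into a quarantine folder (the C:\DupeQuarantine path is made up) instead of deleting anything outright.

```python
def write_quarantine_batch(duplicate_groups, batch_path="dedupe.bat",
                           quarantine=r"C:\DupeQuarantine"):
    """Write a .bat that moves every duplicate except the first copy to a holding folder."""
    with open(batch_path, "w") as bat:
        bat.write(f'if not exist "{quarantine}" mkdir "{quarantine}"\n')
        for n, paths in enumerate(duplicate_groups.values()):
            for extra in paths[1:]:                       # the first copy stays put
                dest = f"{quarantine}\\{n}_{extra.name}"  # prefix avoids name collisions
                bat.write(f'move "{extra}" "{dest}"\n')

# Read dedupe.bat over by hand, then run it from a command prompt once you trust it.
```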

Auslogics’ Duplicate File Finder – a good program, for which I haven’t found a duplicate.
