Git: Complex Simplicity.

I may be totally off on some or all of these points, but I thought I’d share a few tidbits I learned during tonight’s deep dive into Git:

Git is simpler than it looks.

Coming from a background in Subversion, I expected to have to jump through a bunch of hoops to get a repository configured, then get a server configured, etc. It took me most of the night to realize that you really don’t need anything other than the git binaries and a place to put your repository (local or remote).

If you do want to use a remote server to coordinate your repository, try just creating a bare repository on a remote server you can access via SSH, and “git clone” from there. Check out this Stack Overflow post for a great example: http://stackoverflow.com/questions/4948190/git-repository-sync-between-computers-when-moving-around
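As a minimal sketch of that workflow (all paths and names here are made-up examples), this uses a local path to stand in for the SSH remote, but a `user@server:path` URL works the same way:

```shell
# Clean up from any previous run (demo paths only).
rm -rf /tmp/demo-remote.git /tmp/demo-clone

# Create a bare repository to act as the shared "remote".
# On a real server you'd run this over SSH, e.g.:
#   ssh user@server 'git init --bare ~/repos/project.git'
git init --bare /tmp/demo-remote.git

# Clone it. Over SSH this would be:
#   git clone user@server:repos/project.git
git clone /tmp/demo-remote.git /tmp/demo-clone
cd /tmp/demo-clone

# Make a commit and push it back to the bare repository.
echo "hello" > README
git add README
git -c user.name="Demo" -c user.email="demo@example.com" commit -m "first commit"
git push origin HEAD
```

That bare repository never has a working copy of its own; it exists purely so clones have a common place to push to and pull from.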

If you’re coming from Subversion, start by abandoning the concept of a partial checkout.

This concept kept me from making progress with Git longer than any other misconception I had. If you get caught up in trying to recreate your Subversion workflow in Git, you’ll get frustrated. If you embrace the concept of lots of small repositories that represent the folders/projects that you’d selectively check out from a master repository, then you’ll get Git right away. (FWIW, I did read about git submodules, but for my own purposes, fully separate repositories work best.)

The best way to learn is to experiment!

The best advice I can give is to just get your feet wet. Once you have a local version of Git installed, just start creating repositories and experiment with clones, commits, pushes, and pulls. If you do plan to work with a team and/or a remote repository, I highly suggest signing up for a GitHub account – it’s free for public repositories and pretty cheap ($7/mo starting) for private repositories.
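A throwaway sandbox like this one (every name below is made up) is a cheap way to try branching and merging without touching anything you care about:

```shell
# Start from a clean slate in a scratch directory.
rm -rf /tmp/git-sandbox
git init /tmp/git-sandbox
cd /tmp/git-sandbox

# A first commit on the default branch.
echo "v1" > notes.txt
git add notes.txt
git -c user.name="Demo" -c user.email="demo@example.com" commit -m "start notes"

# Branch, change, and merge back; cheap to try, cheap to throw away.
git checkout -b experiment
echo "v2" > notes.txt
git -c user.name="Demo" -c user.email="demo@example.com" commit -am "try an idea"
git checkout -
git merge experiment
git log --oneline
```

If the experiment goes badly, you can just delete the branch (or the whole directory) and start over.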

There’s tons of help out there…

Speaking of GitHub, they also have a great site to help you get started using both Git and GitHub: http://help.github.com/

Besides the guide on GitHub, here are some of the best guides I’ve found so far:


Aperture, I am disappoint.

After wrestling with Aperture’s Flickr integration for weeks, I thought it might be worthwhile to explain why my photostream has been so erratic lately. =P

As I sorted and processed photos I had taken during my trip to China, I used the Aperture Flickr export/upload tool to begin transferring my photos to Flickr, and I had assumed that it would upload the photos, in order, to the sets I had created for each album… this assumption was incorrect.

Nearly every time I attempted an upload, Aperture would hang partway through, and I’d have to force quit. When I checked my Flickr stream afterward, I found that some of the sets hadn’t been created (not a big deal), that most photos had been uploaded to my stream at least two or three times each (big deal), and that all of my photos had been uploaded out of order (infuriating).

I heard that a recent Aperture patch had fixed this issue, so I removed all of my previously uploaded photos (some of which had already been commented on and shared with others, unfortunately), installed the patch, and re-attempted the upload.

This time, two sets uploaded in the correct order, so I thought it would work with a larger album (about 400 photos from Shanghai). During the upload, I noticed that Aperture had frozen, again, and I force-quit the application, again.

And, again, my Flickr stream was mangled. Lots of duplicates, most photos out of order, no set created, and when I opened Aperture the next time, it attempted to “finish” the last sync session…which mangled my stream even more.

At this point, I’ve given up on Aperture’s Flickr “integration” (which is giving it too much credit), and I’m going to upload the rest manually through Flickr Uploadr.

I’ll give Aperture credit for being an otherwise solid photo management application, but this experience makes me wish I’d chosen Lightroom instead. =(

(tl;dr – Don’t use Aperture Flickr sync, it’s buggy as hell and will screw up your stream.)

Amazon S3 Access with ASP.NET/C#

Overview

I’ve recently had the pleasure of working with Amazon S3’s .NET API while investigating ways to offload our existing content delivery model to the cloud. I must say that I’m quite impressed with Amazon’s documentation and examples provided in the SDK, but it still took me a little time to develop the functionality I needed, so I thought I’d share my experiences and hopefully save other devs a bit of time and effort. =)

Getting Started

If you’re planning to integrate S3 into your existing ASP.NET website, you’ll need to start by getting yourself an AWS account here: http://aws.amazon.com/. Luckily, Amazon now provides a “Free Usage Tier” for development/testing purposes, which allows you to use “5 GB of Amazon S3 standard storage, 20,000 Get Requests, and 2,000 Put Requests” free for 12 months from your first sign-up. Once you’ve signed up for an AWS account, you’ll want to download the AWS SDK for .NET, which includes Visual Studio templates and a few excellent sample projects to help you get started.
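To give a taste of the API, here’s a rough sketch of a basic upload using the version-1 .NET SDK’s fluent `With*` style; the bucket name, object key, and credential strings are all placeholders you’d replace with your own:

```csharp
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class S3UploadExample
{
    static void Main()
    {
        // Keys come from the Security Credentials page of your AWS account.
        // These values are placeholders, not real credentials.
        string accessKey = "YOUR_ACCESS_KEY";
        string secretKey = "YOUR_SECRET_KEY";

        using (AmazonS3 client = AWSClientFactory.CreateAmazonS3Client(accessKey, secretKey))
        {
            // Describe what to upload and where it should live in S3.
            PutObjectRequest request = new PutObjectRequest();
            request.WithBucketName("my-example-bucket")
                   .WithKey("uploads/hello.txt")
                   .WithContentBody("Hello from ASP.NET!");

            // PutObject throws AmazonS3Exception on failure
            // (bad credentials, missing bucket, etc.).
            client.PutObject(request);
        }
    }
}
```

Running this requires a live AWS account and a bucket you own, so treat it as a starting point rather than something to paste in verbatim.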

(This is a work in progress; I’ll be updating this post regularly over the next couple of days. Be sure to check back often for more info!)

Fragmentation

Despite my best intentions (and despite having a large backlog of things to write about), I keep forgetting to add new posts to this blog.

However, I am staying quite active on the interwebs, and thought I might share with you where you can find more up-to-date posts:

Honestly, I’m on pretty much every social news/post/aggregation/photo site that I can find; you can search for me by name, since no one else has it. 😉

Xmarks Blog » End of the Road for Xmarks

This sucks. I’ve been a huge fan of Xmarks (and Foxmarks, as they used to be called) since they came out. I’m using them currently to sync bookmarks between Firefox and Chrome, and now I’ll be stuck with the built-in sync. =P

As I write this, it’s a typical Sunday here at Xmarks. The synchronization service continues operating quietly, the servers chugging along syncing browser data for our 2 million users across their 5 million desktops. The day isn’t over yet, but we’re on track to add just under 3000 new accounts today.

Tomorrow, however, will hardly be anything but typical, for tomorrow one of our engineers will start a script that will email each of our users to notify them that we’ll be ceasing operations in around 90 days.

(via Xmarks Blog » End of the Road for Xmarks.)

Mind = Blown.


IE9 Beta - Acid3 Score: 95%
Firefox 3.6 - Acid3 Score: 94%

Internet Explorer beat Firefox on the Acid3 test? I think it must be a cold day in hell. 😉

(IE9 is actually pretty badass; I’m very impressed with the work Microsoft has done to step it up in this release. Feels like an actual competitor to Chrome/Firefox, not a ball and chain like previous IE releases.)

Status Update

I just realized that I’ve been totally slacking on blog posts here, partly because the content I want to post is relatively time-consuming (development articles, photography articles, etc.), and partly because it’s just so much easier to post random tidbits on Twitter or Tumblr.

If you have a moment, please follow/add me on both services:

Thanks for reading! =D