Fun question for all the CS nerdz out there:
What is the simplest and/or most efficient function you can write to determine whether a given string has balanced brackets or not?
- Balanced: “(This is balanced)”
- Not Balanced: “(((This is *not*) balanced.”
bool stringHasBalancedBrackets(string inputString)
The function should return true if the brackets are balanced, otherwise false.
(BTW, there is a great solution on Stack Overflow; I’m looking to see what everyone can come up with off the top of their heads.)
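For what it’s worth, here’s an off-the-top-of-my-head sketch in plain shell (my own naming, not the Stack Overflow answer); it only handles parentheses, like the examples above, since a single bracket type needs nothing more than a depth counter:

```shell
# Sketch: a depth counter suffices for one bracket type; multiple
# bracket types ((), [], {}) would need an actual stack.
string_has_balanced_brackets() {
  s=$1
  depth=0
  while [ -n "$s" ]; do
    c=${s%"${s#?}"}    # first character of $s
    s=${s#?}           # rest of $s
    case $c in
      "(") depth=$((depth + 1)) ;;
      ")") depth=$((depth - 1))
           if [ "$depth" -lt 0 ]; then
             return 1  # a ")" appeared before its "("
           fi ;;
    esac
  done
  [ "$depth" -eq 0 ]   # true only if every "(" was closed
}

string_has_balanced_brackets "(This is balanced)" && echo "balanced"
string_has_balanced_brackets "(((This is *not*) balanced." || echo "not balanced"
```

To support multiple bracket types, you’d swap the counter for a stack that pushes each opener and pops (and matches) on each closer.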
Looks familiar, eh? 😉
I may be totally off on some or all of these points, but I thought I’d share a few tidbits I picked up during tonight’s deep dive into Git:
Git is deceptively simple.
Coming from a background in Subversion, I expected to have to jump through a bunch of hoops to get a repository configured, then get a server configured, etc. It took me most of the night to realize that you really don’t need anything other than the git binaries and a place to put your repository (local or remote).
If you do want to use a remote server to coordinate your repository, try just creating a bare repository on a remote server you can access via SSH, and “git clone” from there. Check out this Stack Overflow post for a great example: http://stackoverflow.com/questions/4948190/git-repository-sync-between-computers-when-moving-around
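The whole setup really is just a few commands. Everything below uses placeholder /tmp paths so you can try it locally; with a real server, you’d replace the local path with an SSH URL such as you@yourserver:repos/project.git:

```shell
# Placeholder paths and names throughout -- substitute your own.
rm -rf /tmp/git-demo && mkdir /tmp/git-demo && cd /tmp/git-demo

git init --bare project.git          # bare repo: history only, no working copy

git clone project.git work           # "git clone" works the same with an SSH URL
cd work
git checkout -b scratch              # explicit branch name, just for this demo
git config user.name "You" && git config user.email "you@example.com"
echo "hello" > readme.txt
git add readme.txt
git commit -m "First commit"
git push origin scratch              # publish through the bare repository
```

That’s the entire “server”: a bare directory you can reach over SSH, no daemon or special configuration required.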
If you’re coming from Subversion, start by abandoning the concept of a partial checkout.
This misconception held up my progress with Git longer than any other. If you get caught up in trying to recreate your Subversion workflow in Git, you’ll get frustrated. If you instead embrace lots of small repositories, one for each folder/project you would have selectively checked out from a master repository, Git will click right away. (FWIW, I did read about git submodules, but for my own purposes, fully separate repositories work best.)
The best way to learn is to experiment!
The best advice I can give is to just get your feet wet. Once you have a local version of Git installed, just start creating repositories and experiment with clones, commits, pushes, and pulls. If you do plan to work with a team and/or a remote repository, I highly suggest signing up for a GitHub account – it’s free for public repositories and pretty cheap ($7/mo starting) for private repositories.
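To make that advice concrete, here’s a disposable sandbox (all paths and names below are placeholders) that plays both sides of the push/pull cycle using two clones of one shared repository:

```shell
# One shared bare repo, two clones standing in for two machines
# (or two teammates). All names and /tmp paths are placeholders.
rm -rf /tmp/git-sandbox && mkdir /tmp/git-sandbox && cd /tmp/git-sandbox
git init --bare hub.git              # stands in for your remote/GitHub repo
git clone hub.git alice
git clone hub.git bob

cd alice
git checkout -b scratch              # explicit branch name for the demo
git config user.name "Alice" && git config user.email "alice@example.com"
echo "hello from alice" > notes.txt
git add notes.txt
git commit -m "Add notes"
git push origin scratch              # publish through the shared repo

cd ../bob
git pull origin scratch              # bob picks up alice's commit
cat notes.txt                        # prints: hello from alice
```

Tear it down and rebuild it as often as you like; breaking throwaway repositories is the fastest way to learn what the commands actually do.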
There’s tons of help out there…
Speaking of GitHub, they also have a great site to help you get started using both Git and GitHub: http://help.github.com/
Besides the guide on GitHub, here are some of the best guides I’ve found so far:
- Pro Git – Great free book on Git usage and configuration.
- GitSvnCrashCourse – Git concepts for Subversion users.
- Git on Wikibooks
- Git User’s Manual: Git Quick Reference – A tl;dr for the rest of the manual.
After spending the night playing with Git and trying to wrap my head around the best way to migrate our large Subversion repository to Git (or Mercurial), I realized that I was really trying to solve a core issue with our Subversion merge workflow at the office. It seemed like it might help to post the issue on Stack Overflow for more input from the community.
So, here’s the question I posted to Stack Overflow, along with a link to the post there:
Before I explain the core issue, let me say that I’m actually quite interested in migrating our source control from Subversion to Git/Mercurial if it really is a better solution for our issues, but I’m really looking for the best solution without causing a lot of unnecessary stress on the team. (In other words, I’m not looking for the “dump Subversion altogether and move to Git” answer, since that involves a lot of thrashing and a steep learning curve.)
Now that that’s out of the way, here’s our core issue:
My development team is working with a relatively large Subversion repository, where all development used to be done directly on the Trunk. A request from above for a faster release cycle led us to split our work into separate branches, with each branch containing a mirror of Trunk at the time the branch was created and sub-teams working in parallel on each branch. The new cycle is to release a specific branch to production, then merge the new changes into trunk, and merge trunk changes into each of the other branches.
Unfortunately, this has become a very painful and error-prone process, and we need to find a better way to perform our merges, one that also accounts for superficial differences between branches, such as code reformatting (some of us run “cleanup code” on our source files, some don’t).
To summarize, we need help figuring out a better way to merge that doesn’t require one or more of our developers to spend an entire day manually resolving conflicts.
(Sorry if that’s a little vague or rambling; I’ll be happy to clarify or provide more details upon request.)
Thanks in advance for any help you can provide!
After wrestling with Aperture’s Flickr integration for weeks, I thought it might be worthwhile to explain why my photostream has been so erratic lately. =P
As I sorted and processed photos I had taken during my trip to China, I used the Aperture Flickr export/upload tool to begin transferring my photos to Flickr, and I had assumed that it would upload the photos, in order, to the sets I had created for each album… this assumption was incorrect.
Every few upload attempts, Aperture would hang mid-upload, and I’d have to force quit. When I investigated my Flickr stream, I found that some of the sets hadn’t been created (not a big deal), that most photos had been uploaded to my stream at least two or three times each (big deal), and that all of my photos had been uploaded out of order (infuriating).
I heard that a recent Aperture patch had fixed this issue, so I removed all of my previously uploaded photos (some of which had already been commented on and shared with others, unfortunately), installed the patch, and re-attempted the upload.
This time, two sets uploaded in the correct order, so I thought it would work with a larger album (about 400 photos from Shanghai). During the upload, I noticed that Aperture had frozen, again, and I force-quit the application, again.
And, again, my Flickr stream was mangled. Lots of duplicates, most photos out of order, no set created, and when I opened Aperture the next time, it attempted to “finish” the last sync session…which mangled my stream even more.
At this point, I’ve given up on Aperture’s Flickr “integration” (which is giving it too much credit), and I’m going to upload the rest manually through Flickr Uploadr.
I’ll give Aperture credit for being an otherwise solid photo management application, but this experience makes me wish I’d chosen Lightroom instead. =(
(tl;dr – Don’t use Aperture Flickr sync, it’s buggy as hell and will screw up your stream.)
I just started using jsFiddle today at work. Best web development tool since Firebug. Seriously, check it out. http://www.jsfiddle.net
First off, I haven’t forgotten about finishing the Amazon S3 post; I’ve just been sidetracked by a rather frustrating problem that I’m hoping someone can help me with:
Does anyone know how to work around Internet Explorer Protected Mode limitations without requiring the end-user to add our site to the Trusted Sites list?
The problem is that if we enable SSL logins for our site, users can then only access SSL-served pages. IE prevents our non-SSL pages from accessing the cookie created during the SSL session, so we can either serve everything via SSL (very expensive/resource-intensive) or find some way to set both an SSL *and* a non-SSL cookie during the login process.
For what it’s worth, I’ve also posted this question (in a much less verbose form) to my Twitter feed here: http://twitter.com/#!/willwm/status/90588135175626752 — feel free to reply to my Twitter post, or this blog post.
Update #1: I’ve also posted this question to Stack Overflow:
Update #2: A friend of mine shared these links, hopefully they’ll help:
I’ve recently had the pleasure of working with Amazon S3’s .NET API while investigating ways to offload our existing content delivery model to the cloud. I must say that I’m quite impressed with Amazon’s documentation and examples provided in the SDK, but it still took me a little time to develop the functionality I needed, so I thought I’d share my experiences and hopefully save other devs a bit of time and effort. =)
If you’re planning to integrate S3 into your existing ASP.NET website, you’ll need to start by getting yourself an AWS account here: http://aws.amazon.com/. Luckily, Amazon now provides a “Free Usage Tier” for development/testing purposes, which lets you use “5 GB of Amazon S3 standard storage, 20,000 Get Requests, and 2,000 Put Requests” free for 12 months from your first sign-up. Once you’ve signed up for an AWS account, you’ll want to download the AWS SDK for .NET, which includes Visual Studio templates and a few excellent sample projects to help you get started.
(This is a work in progress; I’ll be updating this post regularly over the next couple of days. Be sure to check back often for more info!)
Despite my best intentions (and despite having a large backlog of things to write about), I keep forgetting to add new posts to this blog.
However, I am staying quite active on the interwebs, and thought I might share with you where you can find more up-to-date posts:
- Stack Overflow: (My technical posts are more frequently getting posted on StackOverflow these days, as opposed to this blog…)
- Twitter: (My twitter account gets updated most frequently, sometimes automatically by my Tumblr account.)
- Tumblr: (The things that entertain me on the web get posted here, which cross-posts to my Twitter account.)
- FriendFeed: (I rarely use FriendFeed anymore, but it’s still set to aggregate my content posted elsewhere, like Tumblr and here.)
- (I lurk here often, but don’t post as much as elsewhere.)
Honestly, I’m on pretty much every social news/posting/aggregation/photo site that I can find. You can search for me by name; no one else has it. 😉
This song rocks. KEEP PORTLAND WEIRD! 😉
My CS300 professor at Portland State University just wrote a fantastic blog post about things he commonly sees in student source code – it’s a great read and helpful for reflecting on your own code and best practices:
I’ve read quite a lot of student code over the years. Some of it is quite good. A lot of it is full of predictable badnesses. These notes attempt to enumerate some of this badness, so that you may avoid it.
This is pretty C-centric in its current form. However, many of its lessons are multilingual. This is a living document, so don’t expect it to be the same next time you look at it. In particular, the numbering may change if something gets inserted in the middle. It is revisioned automatically by Drupal, so feel free to point at a particular revision if you care.
Update: Just found some great tutorials and articles on the MVP wiki here: http://wiki.webformsmvp.com/index.php?title=Spread_the_Word
ASP.NET MVC might be the new kid on the block, but there are still a host of compelling advantages to ASP.NET Web Forms.
The ASP.NET Web Forms MVP project is about bringing the love back to Web Forms through a renewed approach to using it – an approach that facilitates separation of concerns and testability whilst maintaining the rapid development that Web Forms was built to deliver.
This is really cool stuff – we’ve just started using it in our development, and I can already see the benefits over plain Web Forms or MVC. Granted, if you’ve already developed a pure MVC site, this probably won’t be useful to you, but if (like most of us, I assume) you have an existing ASP.NET Web Forms site and want to try the features of MVC without completely rewriting your framework, I highly suggest checking this out.
I’ll try to see if I can abstract out some of our internal examples for a future blog post. =)
This sucks. I’ve been a huge fan of Xmarks (and Foxmarks, as they used to be called) since they came out. I’m using them currently to sync bookmarks between Firefox and Chrome, and now I’ll be stuck with the built-in sync. =P
As I write this, it’s a typical Sunday here at Xmarks. The synchronization service continues operating quietly, the servers chugging along syncing browser data for our 2 million users across their 5 million desktops. The day isn’t over yet, but we’re on track to add just under 3000 new accounts today.
Tomorrow, however, will hardly be anything but typical, for tomorrow one of our engineers will start a script that will email each of our users to notify them that we’ll be ceasing operations in around 90 days.
(…And so does Opera; I just don’t have a screenshot of it at the moment. 😉)
Internet Explorer beat Firefox on the Acid3 test? I think it must be a cold day in hell. 😉
(IE9 is actually pretty badass; I’m very impressed with the work Microsoft has done to step it up in this release. Feels like an actual competitor to Chrome/Firefox, not a ball and chain like previous IE releases.)