Thursday, November 6, 2008

.NET and PHP Encryption

For a project I'm currently researching at work, we are going to be doing some encrypted communication with a third party. Normally, if both companies were using .NET, that would be simple; however, this company uses PHP on Linux, which complicates things a bit.

I talked with their developer, and he was planning on using the PHP functions openssl_public_encrypt() and openssl_private_decrypt() for the encryption and decryption. I did some research, and found this page that described the .NET equivalents. That got me started, but the next problem was that he sent his public key in PEM format, which looks like this:
-----BEGIN PUBLIC KEY-----
(base-64 encoded data here)
-----END PUBLIC KEY-----
From what I could gather, .NET can't read this format, at least not with the built-in classes. I did more research and found this tool which converts between PEM and .NET (and other) formats. When run on the PEM-format public key, this generates a file that looks like this:

<RSAKeyValue>
  <Modulus>(base-64 encoded data)</Modulus>
  <Exponent>(data)</Exponent>
</RSAKeyValue>

This is directly usable by the .NET class RSACryptoServiceProvider using the FromXmlString() method. The same tool can be used to convert the private key to an XML file, which can be read in using the same class and method. Once the keys are read in, you can use the Encrypt() and Decrypt() methods on your data.
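
Here's a minimal sketch of the encryption side in .NET (the key data is elided, and the message is just an example). One thing to watch for: the second parameter to Encrypt() selects OAEP padding, and PHP's openssl_public_encrypt() defaults to PKCS#1 v1.5, so pass false there unless both sides agree on OAEP:

using System;
using System.Text;
using System.Security.Cryptography;

class RsaEncryptSketch
{
    static void Main()
    {
        // Public key converted from PEM to the .NET XML format (contents elided).
        string publicKeyXml =
            "<RSAKeyValue><Modulus>...</Modulus><Exponent>...</Exponent></RSAKeyValue>";

        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        rsa.FromXmlString(publicKeyXml);

        byte[] plaintext = Encoding.UTF8.GetBytes("some secret data");

        // false = PKCS#1 v1.5 padding, matching openssl_public_encrypt()'s default.
        byte[] ciphertext = rsa.Encrypt(plaintext, false);

        Console.WriteLine(Convert.ToBase64String(ciphertext));
    }
}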

Friday, October 3, 2008

Are Certifications Worth It?

For our training this year, because of budget limitations, we were restricted from traveling, and ended up all getting CBTs that should enable us to get some Microsoft certifications. I'm about to start my training, and I've started wondering if it will be worth it.

On the one hand, doing the study necessary to earn it will definitely expose me to more of the .NET Framework and the C# language than I encounter in my day-to-day programming. Of course, a lot of it may be things that I don't really have a need to know, since certification exams tend to ask about such a wide range of topics.

On the other hand, it's been my experience in the past that getting a certification doesn't usually buy you anything at your current employer; they're more useful when looking for something new. I'm not currently looking for a job, nor am I expecting to (though you never know, especially with the economy going the way it is).

I'm not going to turn down the chance to take a week and study, and I'll definitely take the exams and get the certifications, but I'm still not sure exactly how much benefit I will gain from this.


Tuesday, September 16, 2008

ORMs and Generated Code

At my job, we've been using a code generator for our main applications since we originally started writing them over 5 years ago. It implements Object-Relational Mapping (ORM) and generates most of the code necessary to read and write objects from the database, even including some rudimentary business logic. Since we started, we've also extended it to generate some things that I didn't think could even be generated, like Windows dropdowns with enumerations and a custom stored procedure for keeping a permission table up to date.

Overall, it has been a good system, and it has certainly saved us a lot of time over the years. It is very nice to be able to put a bunch of XML into a file, run the code generator, and get virtually everything we need to work with a new table or set of tables.

Still, it has some limitations, and some other issues that have made me reconsider whether using it has been a good idea 100% of the time.

First, the tool we use to generate the code was written by a consulting company that our company used for many years before the IT department became as fully staffed as it is now. Unfortunately, a couple of years ago we severed ties with that company, which means we no longer get updated versions of the tool. It was written in .NET 1.1, and the C# parser still only understands 1.1 constructs. That means if we use anything introduced in 2.0 or later, like generics, LINQ, etc., we get errors when we run the tool. We have source for most of the tool, but we are missing it for the C# parser, which is exactly what we would need to modify to fix this.

Next, I feel that in some ways it has kept me from learning everything I could about ADO.NET. Unless we have some custom stuff we need to do in the database that the tool can't generate, we just end up calling generated methods to read and write from the database. Now, I do understand how it works, and in fact, I extended it to include the .NET 2.0 transaction model, but I still wonder if I would know more about ADO.NET if it weren't for this tool.

Finally, it has introduced somewhat of a learning curve as we have brought new developers onto the project, especially if they are not really experienced developers. Since it generates so much of the business logic and data access layers, developers have to be trained to not just jump in and start writing or changing code that touches those areas. They need to understand that some of their changes may be wiped out by the tool if they don't do them correctly. So far, most have been positive about this, and have caught on quickly, but it is still something different than most people are used to.

So, looking back, would I have done anything differently if I could have? It's hard to say. Probably the biggest change I would have liked to make would be to somehow keep the generated code separate from any custom code we wrote. If we had had .NET 2.0 back then, we could have used partial classes (and in fact, if I had the source to the C# parser, that would be the first thing I would add). As it is, the code is all mixed together, and it can sometimes be hard to tell what is generated and what is hand-coded. Even so, I would not have chosen to write everything by hand; I still believe generating code like this is beneficial.
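
For illustration, here's roughly what that separation would look like with partial classes (class and file names are hypothetical):

// Customer.generated.cs -- owned by the code generator; any hand edits
// here would be wiped out on the next generation run.
public partial class Customer
{
    private int id;

    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    public void Save()
    {
        // generated data access code
    }
}

// Customer.cs -- hand-written; the generator never touches this file.
public partial class Customer
{
    public bool IsPreferred()
    {
        // custom business logic
        return false;
    }
}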

Friday, September 5, 2008

ASP.NET MVC First Impressions

There's been a lot of discussion online lately about Microsoft's new ASP.NET MVC framework. If you don't know what it is, it's a new, alternative framework for web development. It isn't meant to replace ASP.NET web forms, but instead to be another choice.

I've been reading about it and playing with the preview releases a bit, so I thought I would write up my brief first impressions. Later on, after I've had a chance to work with it a bit more, and maybe build something useful with it, I'll come back and write something else.

Things I like:

  • The separation of logic. I like how it breaks up the presentation (View), logic (Controller) and data (Model) into separate files. It would make it easy to have alternate views of the same data. For example, if you were writing a blog engine, you could have one view be the normal text view, and another be the RSS feed (see the sketch after this list).

  • URL handling. I wrote recently about URL rewriting using ASP.NET web forms. If we were using ASP.NET MVC, we would have been able to have more friendly URLs with no "tricky" code to intercept the calls and rewrite them to what we already had.

  • Cleaner HTML. One of the things I've run into with web forms that has bugged me is the way it renames form elements to things like "ctl00$Leftnavigation1$productSearch". Since you're now responsible for generating form elements yourself, you no longer get this.

  • Testing. Since the model and controller logic are separated from the view, you can now write unit tests against them.

  • Lots of community support. As I mentioned above, a lot of people are using this and talking about it, so before long, there will be plenty of places to go for answers to questions.
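
To make that blog example concrete, here's a rough sketch of what a controller might look like. This uses Preview 5-era syntax and hypothetical names (Post, GetRecentPosts), so take it as illustrative only:

using System.Collections.Generic;
using System.Web.Mvc;

public class Post
{
    public string Title;
    public string Body;
}

public class BlogController : Controller
{
    public ActionResult Index()
    {
        // The controller gathers the data (stubbed out here)...
        List<Post> posts = GetRecentPosts();

        // ...and the same model could be rendered by different views:
        return View("Index", posts);  // the normal HTML view
        // return View("Rss", posts); // or an RSS feed of the same posts
    }

    private List<Post> GetRecentPosts()
    {
        // Stubbed out for the sketch.
        return new List<Post>();
    }
}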


Things I'm not wild about:

  • Tag soup. I think there are other ways to do this, but most of the examples I've seen involve putting C# or VB code right in the .aspx file. It's like a return to the bad old days of classic ASP.

  • Lack of view state. Again, I'm not sure if this is the only way, but so far everything I've seen indicates that you have to manually repopulate form fields, just like in classic ASP. That said, this is actually getting better: I noticed in Scott Guthrie's recent post that Preview 5 now automatically repopulates fields in an error condition.

  • It is a completely different model from what I (and the other devs on my team) are used to. This is obviously not a huge complaint, since I enjoy learning new things, but if we decide to use this, it will take some time to get everyone up to speed.

  • Sparse official documentation. I realize it is still in preview stage, so hopefully this will get better over time.


Overall, this is an interesting framework, and it will be nice to have a choice when developing new projects. Having said that, I don't think we will be rushing to rewrite all our existing code into MVC. We just have far too much time and knowledge invested in what we already have.

Wednesday, August 27, 2008

Using URL Rewriting For More Friendly Links

As part of an application we're developing at work, I recently did some research on whether it was possible to make the links to items, categories, etc., more friendly to search engines. The links we currently have are something like this: /website/productdetails.aspx?productid=123456. Basically, we pass in a product ID to the productdetails.aspx page. However, it would be nicer to have something more like /website/item/123456/rewritten-item-description.aspx.

I did some research and came up with some code that will work, and not be too intrusive to the rest of the code, or require any changes in IIS. First, we'll add a property to the items that returns a link in the correct format, so that we don't have to duplicate that code in the various places we show the link. To convert the item description to something that looks like a filename, I came up with the following method:

// Requires a using directive for System.Text.RegularExpressions.
private string CreateLink(string prefix, string id, string description)
{
    string format = "{0}/{1}/{2}.aspx";

    // Replace spaces and slashes with dashes, and ampersands with "and".
    string fixedDescription = description.Trim().Replace(" ", "-").Replace("&", "and").Replace("/", "-");

    // Strip out any remaining non-word characters (dashes are kept).
    Regex nonWord = new Regex(@"[^\w\-]", RegexOptions.IgnoreCase);
    fixedDescription = nonWord.Replace(fixedDescription, "");

    // Convert accented characters to their non-accented counterparts.
    fixedDescription = DeAccentCharacters(fixedDescription);

    return String.Format(format, prefix, id, fixedDescription);
}

This replaces spaces and slashes with dashes and ampersands with the word "and", strips out any remaining non-word characters, and converts accented characters to their non-accented counterparts (that last part is done in DeAccentCharacters(), which I won't post, since it just uses a pre-made lookup table and isn't that interesting). It then prefixes the name with the prefix and ID, producing what I showed above.
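
For example, given a hypothetical item description:

CreateLink("Item", "123456", "Red & Blue Widget")

returns:

Item/123456/Red-and-Blue-Widget.aspx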

Now that we have links in the correct format, we need to interpret them. Note that it looks like we have a couple of directories there ("item" and "123456") that don't actually exist. We need to intercept the request for this page before ASP.NET has a chance to complain about it not existing. To do that, we add some code in the Global.cs class in the App_Code directory. First, in the Application_Start() method, add this, anywhere in the method:

SetupRewriteRules();

This will set up a list of URL rewriting rules, using regular expressions. Then, in Application_BeginRequest(), add this as the first line, before anything else you may be doing:

RewriteURL();

This calls a new method that interprets the rules created in SetupRewriteRules() and rewrites the URL in the request accordingly.

Next, add this class to the file (at the end is preferable):

private class RewriteRule
{
    private Regex rule;
    private String rewrite;

    public RewriteRule(String ruleRegex, String rewriteText)
    {
        rule = new Regex(ruleRegex, RegexOptions.Compiled | RegexOptions.IgnoreCase);
        rewrite = rewriteText;
    }

    public String Process(String path)
    {
        Match match = rule.Match(path);

        if (match.Success)
        {
            return rule.Replace(path, rewrite);
        }

        return string.Empty;
    }
}

Next, add a static list to hold these and the SetupRewriteRules() method:

private static List<RewriteRule> rules = new List<RewriteRule>();

private void SetupRewriteRules()
{
    rules.Add(new RewriteRule("Item/([^/]*)/(.*).aspx", "/ProductDetails.aspx?productID=$1"));
}

We have more rules, but I'll just show this one. Note that the first parameter is a regular expression that matches "Item/item ID/filename.aspx". The second parameter is what to replace that with. In this case, it takes the first match (the item ID, enclosed in the first set of parentheses), and puts it where the "$1" is in the URL.
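
So, for example, a request whose path contains:

Item/123456/Red-and-Blue-Widget.aspx

is rewritten to:

/ProductDetails.aspx?productID=123456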

Finally, add the RewriteURL() method:

private void RewriteURL()
{
    foreach (RewriteRule rule in rules)
    {
        String subst = rule.Process(HttpContext.Current.Request.Path);
        if (subst.Length > 0)
        {
            HttpContext.Current.RewritePath(subst);
            break;
        }
    }
}

This code iterates through each rule, and if it finds one that matches, it calls RewritePath() using the rewritten URL. This is what actually translates the "friendly" URL into something that works with our application. The great part is that it is totally transparent to the user; they never see the rewritten URL. Postbacks still work fine as well.

I realize there are other ways to do this, like ASP.NET MVC, but our application is pretty much already written, and I'm not wild about going back and redoing it in a totally new technology. This can be retrofitted onto the app without too much pain, and could even be turned off or on with a config file setting if needed.

Wednesday, August 20, 2008

Red Gate Buys .NET Reflector

In a post on his blog today, Lutz Roeder announced that he has sold .NET Reflector to Red Gate Software. This is an indispensable tool for any .NET developer (I can't believe I left it off my list earlier!). I hope Red Gate will continue to support it, especially the free version. We use several of their products at work, and they are a good company, so I have high hopes for it.

Tuesday, August 12, 2008

Favorite Computer Books

I believe all good developers should keep up to date with technology, and one good way to do this is to read technical books. I realize there is a lot of information available on the internet these days, and many people think they can get everything they need from there. However, for some topics, a well-written book is a much better option. With that, here's a list of books that have been useful and/or influential in my programming career:

Note that several of these books are not focused on a specific technology, and thus won't go out of date as soon as a lot of computer books will. I think that's important when deciding what books to buy, especially if you're on a budget. Try to find a book that will last more than the year or so that many tightly-focused books will.

Friday, August 1, 2008

Debugging ASP.Net AJAX Pages

If you've done any ASP.Net AJAX development, you've probably run into the following error message while debugging: "Sys.WebForms.PageRequestManagerTimeoutException: The server request timed out".

This happens because the default timeout for asynchronous processing is 90 seconds. This is a reasonable default, and really, nothing in an asynchronous request should take that long. However, if you're stepping through code of any complexity, you can easily exceed that limit. You can fix this by setting the timeout on your script manager, like this:

<asp:ScriptManager ID="MainScriptManager" runat="server" EnablePartialRendering="true" AsyncPostBackTimeout="3600" />

However, you probably don't want that in production. There is also a way to set this in your code-behind, and to do it conditionally, based on whether your pages are running in debug mode or not. If you've been doing ASP.Net development for any amount of time, you're probably aware that you shouldn't run your production site with debug enabled in the web.config file. So, how do you tell if that is enabled? In that link, Scott Guthrie mentions that you can use HttpContext.Current.IsDebuggingEnabled. So, the code to set this would look like this:

// If we're running in debug mode, increase the async timeout to 1 hour.
// This allows stepping through code without timing out on the front end.
if (HttpContext.Current.IsDebuggingEnabled)
{
    MainScriptManager.AsyncPostBackTimeout = 3600;
}

This only increases the timeout if you're running in debug mode. I've implemented this in one of our products, and it has worked great.

Wednesday, July 30, 2008

Too Slow or Too Fast

I had a conversation with a co-worker recently that got me thinking. He had been at a local .Net users' group, and had mentioned that we hadn't upgraded to .Net 3.5 yet. Everyone else was amazed that we weren't taking advantage of this yet.

As I read various blogs, I see lots of companies working on the cutting edge, using things like .Net 3.5, LINQ, the new MVC framework, etc. Sometimes I wonder how they are able to justify moving to these new technologies so quickly. I'm very much in favor of learning new things, and using them where it makes sense, but I think it is also wise to sometimes wait a bit before rushing to upgrade.

We have a product (actually several products, but we consider them one system) with around a million lines of C# code. It currently supports around 400 internal users, as well as 50,000 external users. Our main focus has to be to keep it running, while adding new features and fixing bugs as needed. Without being able to show a tangible benefit of upgrading, we would have a difficult time telling the business that we needed several weeks to convert this system to the latest and greatest offering.

When we made the move to .Net 2.0, we were able to sell them on the fact that it gave us a new web deployment model that would allow us to react to their demands more quickly, as well as the possibility of moving to 64-bit code eventually. At the start of this year, I was given tentative approval to upgrade to .Net 3.5, but since then, we've been so busy that I haven't had a chance. At this point, I'm not sure I would be able to convince anyone that it was worth our time and effort.

Maybe some of the other companies I see adopting the latest thing do so with smaller applications, or with totally new products. Unfortunately, we don't often have the opportunity to do that, since my group's main focus is the large system I mentioned earlier. It does disappoint me somewhat, since I would love to be learning the new technologies, in case I need to find a new job.

Still, it has made me wonder if we are the ones moving too slowly or everyone else is moving too quickly. I know I really don't want to be the one debugging Microsoft's latest code. I've been involved in that before with other products, and it was no fun. We had a product that was delayed over a year while we worked with a vendor to squash all the memory leaks in their DLL.

So, the way I see it, we can either stay with what works until we see a pressing reason to upgrade, or jump in with both feet and deal with the consequences. One is good for business, the other one may be better for my career options.

Wednesday, July 23, 2008

Working From Home

A few months ago, my company made the decision to allow people in the IT department to choose to work from home at least a few days each week. This was partly to help with rising gas prices, and partly to help retain employees, since we've had a hard time filling vacancies recently. I gladly took them up on this, even though I only live about 4 miles from work.

In the time I've been telecommuting, I've learned a few things about how to make it work, at least for me.
  • Have somewhere private. If you have a spouse and/or kids at home, you must have someplace you can go to be totally isolated from them. Having them interrupt you just doesn't work. I'm lucky in that we just finished our basement, and I have a nice office to work from.
  • Have a comfortable chair and a good desk. Before I got my new office, both my chair and my desk were over 10 years old, and would not have made a good environment to do real work in. They were fine for casual use, but not 8 hours a day. I bought an all-new desk and chair, and have been happy with that ever since.
  • Have the right equipment. I already had a company-supplied laptop, and I supplement it at home by plugging in a keyboard and mouse, and attaching to my personal monitor, so I can have dual screens. I've had dual screens at work for several years now, and I don't think I could live without it anymore, so that was a must for me. (Some people may be lucky and have their company buy these, but part of the conditions for this was that no extra equipment, beyond a USB headset, would be purchased).
  • Treat going to work just like you would on a normal day where you actually leave home. I still get up at my normal time, shower, get dressed, have breakfast, etc. Then, when 8:00 comes around, I say goodbye to my wife and kids and retreat into my office. I do come out occasionally for breaks, and for lunch, but I try to keep that to a minimum. I know for some people, the attraction of telecommuting is that they can dress like a slob or not shower, but for me, that is part of getting into the right mindset to go to work.
  • Keep track of what you do each day. My manager requires a quick email at the end of the day listing what we accomplished that day. I think that's a good idea, because it keeps me focused on getting as much done as I can so I can show that I've been productive, even though nobody can see me.
  • Keep in touch with your co-workers. We have IP phones and a corporate instant message program in addition to email that helps us all communicate even when nobody's in the office. This is key, since you can't just holler over the cube wall when you need to talk to someone.
Telecommuting has worked out great for me. There are usually fewer distractions, and when I have something intense to work on, I can really focus and get it done. It isn't for everyone; some people may miss the daily interaction with co-workers. But if you can handle it, it can be a real productivity booster. I'm really grateful that my company has allowed us to do this.

Monday, July 21, 2008

Basics: No Warnings

Does your code compile with no warnings? Most developers wouldn't dream of checking in code that didn't compile without errors, but what about warnings? At other companies I've worked at, the code has been full of warning messages. Some of them were relatively harmless (I was working in legacy C code at the time, and as the compilers got better, more warnings were shown). Others were more critical, and really should have been addressed.

In my current job, the main product I work on has around 1 million lines of code, and I'm proud to say that we keep it warning-free. This is no small feat, since we use XML comments, and have the option turned on that generates a warning whenever one is missing from a public or protected member.
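
For example, with the XML documentation file option turned on, a sketch like this (hypothetical class) produces a warning for the undocumented member:

/// <summary>Represents a customer record.</summary>
public class Customer
{
    /// <summary>Saves the current record.</summary>
    public void Save() { }

    // warning CS1591: Missing XML comment for publicly visible
    // type or member 'Customer.Delete()'
    public void Delete() { }
}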

Some people would argue that a warning is just that: a warning. If it isn't an error, why bother fixing it? For one, it clutters up the output window and the error list in Visual Studio. It's no fun having to page through several dozen warnings just to find the error(s) you're looking for.

The most important reason, though, is that often, those warnings may be pointing to something more critical that could be wrong with your code. For example, every time we've upgraded the Infragistics library we use, we've had a number of warnings appear because they are planning on deprecating some methods or properties in a future build, and have marked them as such in the current build. Sure, the code still compiles and runs, but maybe it won't in the next build. Infragistics is giving us advance warning that our code may break. We always make it a point to fix those as part of the upgrade process.

Another warning that I've seen a few times is when a developer declares a method in a class with the same signature as one in the base class without realizing it. The compiler warns that you have hidden the base method, and you should either rename it or use the "new" keyword. If you don't clear up this warning, you may be breaking something without even realizing it.
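
Here's a trivial (hypothetical) example of what triggers that warning:

public class BaseForm
{
    public void Validate() { /* base validation */ }
}

public class CustomerForm : BaseForm
{
    // warning CS0108: 'CustomerForm.Validate()' hides inherited member
    // 'BaseForm.Validate()'. Use the new keyword if hiding was intended.
    public void Validate() { /* silently hides the base method */ }
}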

The next time you build the product you're working on, take a look at the build results. If you're getting warning messages, figure out why and get rid of them.

Tuesday, July 15, 2008

Code Branching and Merging

At work, we've recently started using multiple branches in our configuration management tool. We've had to do this to accommodate simultaneous development on two different major initiatives (one of which is currently in test and one in the early stages of development) along with maintenance releases that the business can't wait for anymore.

In the past, we've had a simpler branching model, where we do a major release off the trunk, and then create a branch for the inevitable "point releases" that we would do to fix things found in the first build. This allowed us to continue development on other projects while only releasing bug fixes in the point builds.

This has been quite a learning process for our small team. I'm actually glad our team is so small; otherwise I'm not sure this would work at all. Over the last week, we've been working on the branch that is already in production, while the QA group tests the most recent major build, which is one higher than the branch in production. As we've done that, we've had to check in our changes (once they test out), then merge them to two other branches, and test those. Finally, today, we did the build on that branch. At this point, we merged our changes from the past week into the trunk. In theory, everything should be relatively in sync, except for new development on the major initiative branch.

To get to the point of this post, here are a few things I've learned since we started doing this:
  • Communication is key. At first, some team members were confused over which branches to check their changes into. Luckily, this was quickly resolved with a couple of email exchanges.
  • Merging can be hard. One of our developers is not as familiar with branching and merging as the others. This developer initially just copied the changed files into the other branches, rather than using the merge tool. This led to code that didn't even build, let alone run correctly. Once it was explained how merging was supposed to work, this was resolved.
  • A good merge tool is required. In an earlier post I recommended WinMerge. That tool has made merging far less painful than what it might have been. Trying to merge files by hand or even using the tool provided with StarTeam is not something I would like to have to do.
  • Even a simple change is no longer simple. With this new model, we have to test our changes in the branch we're working in, sometimes for all the countries we support (we currently do business in 7 countries, with 4 different languages). Then, we have to merge the code to each branch and repeat those tests. This means even a one or two line change can easily eat up 2-3 hours of time.
  • Code separation can really save the day. During this iteration of changes, we were lucky that we had no merge conflicts. Part of this is due to the fact that our system is fairly large, so there are a lot of files. It also helps that we have the system architected in layers, so the business logic is separate from the presentation logic and the data access layer. That made it far less likely that two developers would be in the same section of code at the same time.
Branching and merging are difficult at times, but they eventually come up in any organization that releases software, even if it is only to internal users. The best thing to do is come up with a plan, document it, and then stick with it, at least as long as it makes sense. You should, of course, revisit it from time to time and make sure it is still working for you, and make adjustments if it isn't.

Tuesday, July 8, 2008

Firefox Extensions for Web Development

In my last post, I listed a number of tools I use daily to help me develop software. In this post, I'm going to cover some essential Firefox extensions for doing web development. Firefox is a great browser out of the box, but adding a few extensions (or "Add-ons" as they've started calling them) can make it even better. There are extensions for just about any purpose you can imagine, but I'm going to focus on the ones that help developing and debugging web applications.
  • Web Developer. The name pretty much says it all. This add-on adds a new toolbar with a ton of useful options, from outlining page elements, to viewing source in different ways, to viewing cookie information. Viewing cookies is something I've used quite a bit. When you select that option, it shows you a list of every cookie that was posted to the selected page, including their values, path and expiration. This is invaluable when troubleshooting cookie-related issues.
  • Firebug. Firebug is similar to the Web Developer add-on, but it adds even more functionality. It's like having a debugger built in to your browser. You can edit, debug and monitor CSS, HTML and Javascript live in any web page.
  • JSView. This add-on lets you view external Javascript and CSS files easily. It adds an option on the right-click menu, and also in the status bar, that lists all external Javascript and CSS files loaded by the current page. Selecting one brings it up either in the standard Firefox source viewer or in an external application of your choice. This is great for checking out how someone else's page is put together.
  • IE Tab. As much as I love Firefox, I still have to make my apps compatible with IE. Rather than loading a separate instance, IE Tab lets me view my page in IE on a tab within Firefox. You can also configure it to automatically load specific sites in an IE tab instance, which can be useful for sites like MSDN or Hotmail, which function better in IE than in Firefox.
  • YSlow. From the description on the linked page: "YSlow analyzes web pages and tells you why they're slow based on Yahoo's rules for high performance web sites." This one requires Firebug to be installed. It gives you a list of things to check for to make your pages load faster.
These are the development add-ons that I'm currently using. I'm sure I'll add more over time, but just adding these few has made Firefox an excellent web development tool.

Tuesday, July 1, 2008

Indispensable Tools

Every developer eventually builds up a set of tools that they can't imagine doing development without. These can be anything from preferred text editors to specialty debugging tools. I have a few that I've come to rely on over the years, and I'm going to list them here.
  • Metapad. This is a very lightweight Notepad replacement that has a few extra features. I like it because it loads fast and doesn't take much memory, but it still does more than the standard Windows Notepad. It hasn't been updated in a while, but it still works well for what I need. There are other Notepad replacements out there, so if you don't like Metapad, you can choose one of them.
  • CLCL. This is a clipboard extender. It lets me keep the last 30 items I copied to the clipboard available for use again. How many times have you copied something with the intention of pasting it into another file, and then accidentally hit Ctrl-C before you can make it to your destination, losing what you were going to paste? With CLCL installed, you don't have to worry, because it will still be available.
  • WinKey. I'm a big keyboard user, and this app allows me to start frequently used applications with a single keystroke. It lets you assign extra Windows Key shortcuts. Windows has several built in, like Win-E for Explorer or Win-R for Run command, but with WinKey installed, you can assign your own. I usually assign keys for starting my browser, Metapad, Paint.NET, and other things I use a lot.
  • SQL Prompt. If you do much SQL Server coding at all, this tool is a real timesaver. It is basically Intellisense for SQL Server Management Studio. In addition to knowing all the SQL syntax, it also discovers your database schema, and can autocomplete table and column names as you type. It can also auto-format your SQL as you type, and has a ton of preferences for setting up your preferred style. It isn't free, or even cheap, but it is well worth the money in my opinion.
  • Paint.NET. I don't do a lot of image manipulation, so Photoshop would be overkill for me. Paint.NET is a free image editor, written in C#, and it has evolved over time from a basic image editor to a very powerful one, including layers, unlimited undo, and a lot of special effects. If you don't need the power of Photoshop, Paint.NET is highly recommended.
  • Cygwin. As I said above, I'm a big keyboard user, and this extends to using the command prompt a lot as well. I got my professional start programming in a Unix environment, and really got spoiled by the rich command-line environment provided by Unix. Cygwin brings much of this to Windows. I used several of the included tools to create the build server we use at work.
  • WinMerge. We use StarTeam for our configuration management tool at work, and while it does what it needs to do well, I wasn't impressed with the diff/merge utility that comes bundled with it. Instead I use WinMerge, which is much more intuitive, and which integrates well with StarTeam. Here is a post on integrating WinMerge with StarTeam.
  • KeyTweak. One of the first things I do when I get a new system is fire up KeyTweak and disable the Caps Lock key and the Insert key. I'm constantly hitting these keys by accident and putting my system into a state I don't want it in. This app also lets you remap keys to do entirely different things if you want.
  • Fiddler. This tool lets you debug the HTTP stream between your browser and IIS. This has saved me multiple times over the years. When you're having a problem with a web page and you just can't seem to figure out what is going on, this can really shed a lot of light on things.

Wednesday, June 25, 2008

Windows Forms Standards

At work, we develop for both the web and Windows. One of the things we've had issues with over the years, as developers come and go from our team, is keeping the Windows forms (written in C#) consistent as far as usability goes. These are simple things to do, but sometimes hard to remember. I'm posting them here partly for me to remember and partly for others to maybe learn from. This list isn't meant to be exhaustive; it's just a few things that seem to pop up consistently.
  • The Escape key should trigger the Cancel or Close button on the form. A lot of people (me included) use the keyboard as much as possible, and this allows quickly closing the form you were using. To do this, just set the CancelButton property on the form to the ID of the appropriate button (see the sketch after this list).
  • If your form has an obvious default button, set that as well. This is done by setting the AcceptButton property. This allows the Enter key to trigger that button even when it doesn't have focus.
  • Tab order should be set. Our internal standard is left to right, top to bottom on the form, unless otherwise specified by requirements. It doesn't matter what standard you use, as long as you have one and stick to it on every form you write. Visual Studio has a great visual way of doing this. With the form selected, click on View, then "Tab Order". This will put little numbers beside each form element. Just click on each element in the order you want them to be tabbed through. When you're done, the Escape key will exit out of this mode.
  • Groupboxes should be used to separate functional areas on a form. For example, on a search form, the criteria would be in one groupbox, and the results grid would be in another. We use the Infragistics NetAdvantage suite, so we always use their UltraGroupBox control, but the standard Windows one works well too.
  • Going back to keyboard usage: menu items should have an accelerator key where possible. To do this, just put an ampersand ("&") before the letter in the menu caption that you want to be the accelerator. This allows those of us who prefer the keyboard to navigate around the application more quickly.
  • Buttons on a form should also have accelerator keys when possible. This is done the same way it is for menus: just put "&" before the letter you want to be the accelerator. The "Cancel" button does not need this, since it will always use Escape as its accelerator.
  • For internationalization, we also always put field labels above the textbox or other control they refer to. This normally allows for more space to put in foreign languages, since they can end up being much longer than the original English equivalent.
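
Here's a minimal sketch (a hypothetical search form) tying the first few of these together:

using System.Windows.Forms;

public class SearchForm : Form
{
    public SearchForm()
    {
        Button searchButton = new Button();
        searchButton.Text = "&Search";    // Alt+S accelerator

        Button cancelButton = new Button();
        cancelButton.Text = "Cancel";     // no accelerator needed; Escape covers it

        this.AcceptButton = searchButton; // Enter triggers Search
        this.CancelButton = cancelButton; // Escape triggers Cancel

        this.Controls.Add(searchButton);
        this.Controls.Add(cancelButton);
    }
}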

Monday, June 23, 2008

Basics: Step Through New Code

One of the basic programming techniques that I try to use regularly is to step through any new code I write in the debugger. A lot of developers tend to only use the debugger for finding problems with code, but by the time you have a bug, I believe you've waited too long. Instead, as soon as you've written some new code, set a breakpoint at the start of it, and then run your application. When you hit the breakpoint, step through the code, asking yourself the following questions:

  1. Is the flow correct for the conditions? That is, are all the conditional statements (if, while, for, etc) branching correctly based on the inputs? If not, why?
  2. Are your variables getting set correctly? Watch the values of all local variables and parameters and make sure they match what you expected when you wrote the new code.
  3. Are you able to make all (or almost all) branches of your code execute? Make sure you can hit all new lines of code, even if it takes several runs. There may be some error-handling or exception-catching code that you can't step into without some test code or other hoop-jumping, but do the best you can.
  4. If it is a new method, does the return value match what you were expecting? Make sure you test all conditions possible.
I've found that I produce better quality code when I do this often. A lot of people recommend unit tests, and those are great when you can use them, but for some types of code, like UI code, they are either difficult or impossible to implement. Stepping through with a debugger is easy and doesn't take much more time than running your code normally. I've discovered errors or omissions quite a few times by doing this.

Welcome

I'm starting a blog to write about writing code. I'm a software developer, and I've been writing code since I was in high school (over 20 years ago!). I've been employed professionally as a software developer since 1992, and I've written code in C, C++, Perl, Visual Basic and C#, among others. I was inspired to start this by Jeff Atwood's recent post, where he encourages every developer to write a blog in order to improve their skills. Hopefully I will update this regularly and in the process learn something. You may learn something too.