Monday 24 December 2007

Blogger Code Formatting

I like Blogger in general but it's probably not the standard geek's first choice, simply because the editor has no UI functionality to preserve the formatting of the code samples most geeks want to paste in. HTML, God bless it, is HTML and Blogger doesn't help you write anything more than basic HTML. Moreover, Blogger's line break "helper" apparently interferes with the old standby tricks like pre, code, and blockquote. I say apparently because either there are a lot of untested myths out there or Blogger has changed how it deals with these tags over time.

So, to get my code samples to paste in effortlessly and to set the story straight, I present below the way I learned to beat the Blogger editor and it's as simple as--wait for it--the trusty pre tag. To paste in the sample below, I simply dropped into HTML mode, pasted in the class definition, wrapped it in a pre tag, and fixed up a few line breaks. In my case, I've got the Convert line breaks option (Settings --> Formatting) set to Yes.

Definitely better than before, but pre does not preserve HTML and XML fragments and Blogger simply removes them from the post. The problem lies mainly with the bits that look like HTML--the less-than and greater-than brackets in particular. To get around this, I simply use an HTML encoder: I drop in my markup, encode it, and copy the output into Blogger. It's a pain but what can you do...


class Base
{
    public virtual void DoWork ()
    {
        Console.WriteLine ("Base.DoWork");
    }
}
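
As an aside, the HTML encoder mentioned above isn't doing anything magical--it just replaces angle brackets (and a few other special characters) with entities. Here's a quick, purely illustrative sketch of the same thing using .NET's HttpUtility class; the input markup is made up:

using System;
using System.Web;   // add a reference to System.Web.dll

class EncodeDemo
{
    static void Main ()
    {
        string markup = "<div class=\"sample\">Hello</div>";

        // Writes: &lt;div class=&quot;sample&quot;&gt;Hello&lt;/div&gt;
        Console.WriteLine (HttpUtility.HtmlEncode (markup));
    }
}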

Sunday 16 December 2007

Senior Developer Interview Questions

We had a couple of senior developer tenders go out at work recently and when it came time to interview the applicants I was asked to put together a test to flush out the wannabes. I've been on the receiving end of these tests in the past and, considering this test would be written in the twenty to thirty minutes before a three-panellist interview, I thought I wouldn't make it too hard. I'd say most of these questions are analyst programmer questions, if not junior dev questions, although I did throw in a few tricks to gauge experience and I also left some questions fairly open in an attempt to understand the applicant's thought process. If the applicant couldn't complete the test in the time period given, that would be okay too.

I've included the job criteria headers to help you figure out what I was looking for in each answer; not every question fits within a specific heading. The headings were printed on the test sheet.

The Perth .NET developer market has been on fire since I arrived two years ago and SharePoint work is going through the roof; as a result, we only had a single applicant put forward for the two or three tenders that went out. When and if we have more applicants sit this test I'll attempt to generalise the results and post an update on its effectiveness.

I haven't included the answers here because, like I said, some of them are fairly open and a lot of them should be fairly obvious if you've done an undergraduate CS course.

At least 5 years of software development experience and at least 2 years' experience developing MS .NET applications

  1. Describe the difference between the execution environments for C++, C#, and Javascript
  2. Is C# a weakly-typed language? Why or why not?
  3. When should performance optimisation occur?
  4. List three problems with this code. Bonus: how could it be improved if you were using .NET 3.5?

    private string someVar = 1234;

    public string SomeVar
    {
        get
        {
            if (string.IsNullOrEmpty (someVar))
            {
                someVar = “5678”;
            }

            return SomeVar;
        }
        set
        {
            someVar = val;
        }
    }

  5. When would you mark a class with the internal visibility modifier?
  6. What’s the difference between a struct and a class? Are structs allocated on the heap?
  7. The property invocation below may return a null result. Rewrite the code to prevent the second line from throwing.

    object result = Employee.Address;
    result.ToString ();

Working knowledge of full lifecycle development methodologies, process and standards and project management; sound knowledge of object oriented system design and development...

  1. List one section heading you might expect to find in each of the following documents:

    · A functional specification
    · A technical specification
    · A UAT test case
  2. List three tasks you would complete before checking in a new class to the Tourism WA source control system.
  3. Name each component in the following diagram and briefly explain the relationship between Vehicle and Door:
  4. What is a use case?
  5. Briefly describe the difference between a class, an interface, and a type
  6. Briefly describe a software design pattern you’ve used on a past project and indicate how it helped or hindered code maintenance. Alternatively, describe a design pattern employed by ASP.NET.
  7. After designing a new system, your project manager asks you to estimate construction time for yourself and a junior developer. This is the first time the department has built this type of system and you haven’t previously worked with the junior developer. List three techniques you would consider to ensure your estimate is as accurate as possible

Solid understanding of internet protocols, web development mark-up languages and web standards...

  1. Draw a simple diagram showing the location of a reverse proxy in relation to a database server, a web server, the internet, and a client
  2. Using only inline CSS and DIV tags, write the HTML to produce a three-column, one-row layout (don’t worry about borders). Don't use tables.
  3. Will the Hello World! text be rendered green, red, or blue by the browser?

    <style>
    span { color: green }
    #makeItRed { color: red; }
    .makeItBlue { color: blue; }
    </style>
    <span id="makeItRed" class="makeItBlue">Hello World!</span>
  4. List three key components of this schema fragment. How many ReservationRequest elements are allowed in an XML document validated against this schema?

<xs:element name="Control">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="ControlID" type="xs:int" minOccurs="0" />
      <xs:sequence minOccurs="0">
        <xs:element name="ReservationRequest">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="InDate" type="xs:dateTime" />
              <xs:element name="Period" type="xs:int" />
              <xs:element name="Adults" type="xs:int" />
              <xs:element name="Children" type="xs:int" />
              <xs:element name="Infants" type="xs:int" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:sequence>
  </xs:complexType>
</xs:element>

Sound knowledge of the Microsoft .NET framework class libraries, ASP.NET and Web Services

  1. What improvements does the .NET 2.0 System.Collections.Generic namespace offer over the System.Collections namespace from previous versions of the framework?
  2. List two key differences between a user control and a web control
  3. Compare the .NET application cache and the ASP.NET output cache
  4. Write a call to the Format ( ) method of the String class to return the string “Sam is 38 years old.” Assume you have a variable declared and initialised as follows:

    int age = 38;

    Give an example of a situation when you would use a StringBuilder instead

  5. Describe the difference between application state, session state, and view state. Discuss session state in relation to scalability and suggest an alternative.
  6. List two techniques to secure web service communication on the internet

Ability to perform unit and integration testing...

  1. What purpose do the following serve when debugging a section of code:

    · The F11 key on your keyboard
    · The Immediate window

  2. Briefly describe the concept of “regression testing”
 

Copying a SharePoint Document Library Programmatically

Based on the number of posts out there about copying the content of a list or document library, I'm willing to suggest SharePoint hasn't made this particular task easy. Whether it's through the various UIs or programmatically, this task seems like it's more difficult than it should be. As I recently found out, even clearing the content in an existing list is a hassle.

Before I go on, a bit of background. We were initially using the in-built variation tools to copy content from a source language site to a number of target English sites--in other words, www.westernaustralia.com/en was being copied to the /uk, /au, /nz, and /sg sites. I won't bore you with the details but the variation tool was deemed too blunt for our requirements and one of the developers on my team wrote a custom variation tool to do exactly what we wanted. The custom variation tool copies sub sites and pages but unfortunately it doesn't copy documents in document libraries; we don't make extensive use of the Documents library you get when you create a new publishing site but a few specific sites do contain PDFs that need to be pulled over.

Since a document library is a list at heart, I started by examining the SPList API, assuming it would provide me with everything I need to manage the list items. I also looked into the SPListItemCollection returned by the Items property of SPList, and the SPListItem class. SPList was pretty hopeless. SPListItemCollection was somewhat more helpful, exposing the standard Add, Delete, and Count methods, and SPListItem was really enticing with its CopyFrom and CopyTo methods. Of course this was nearly a complete waste of time as few of these methods and properties really helped out at all. CopyFrom and CopyTo failed at runtime, Delete works as advertised but SPListItemCollection does not overload the Delete method or provide a sister method to delete everything in the list, and Add only adds a new list item if you get the very confusing url parameter pointing at the right location (a quick hint: it's expecting the URL of the destination file in the case of a document library...).

When it was all said and done, I'd written my own ClearList helper method, cast my destination list to a SPDocumentLibrary, accessed the Files collection via the RootFolder property of said document library, and passed in the byte array representing the uploaded file.

Here's the code I ended up with to copy the contents of the Documents list in one sub site to the Documents list in another sub site within the same web application:

using (SPSite site = new SPSite ("http://dev-moss-mh:101/"))
{
    using (SPWeb sourceWeb = site.AllWebs ["Source_Site"])
    {
        using (SPWeb destinationWeb = site.AllWebs ["Destination_Site"])
        {
            SPList sourceDocuments = sourceWeb.Lists ["Documents"];
            SPList destinationDocuments = destinationWeb.Lists ["Documents"];

            if (sourceDocuments.ItemCount > 0)
            {
                ClearList (destinationDocuments);

                foreach (SPListItem currentSourceDocument in sourceDocuments.Items)
                {
                    Console.WriteLine ("Adding: {0}", currentSourceDocument.Name);

                    byte [] fileBytes = currentSourceDocument.File.OpenBinary ();

                    const bool OverwriteDestinationFile = true;
                    string relativeDestinationUrl =
                        destinationDocuments.RootFolder.Url +
                        "/" +
                        currentSourceDocument.File.Name;

                    SPFile destinationFile =
                        ((SPDocumentLibrary) destinationDocuments).RootFolder.Files.Add (
                            relativeDestinationUrl,
                            fileBytes,
                            OverwriteDestinationFile);
                }
            }
        }
    }
}


As you can tell by the variable name, the Add method requires a relative URL pointing within the context of the destination site. This seems odd to me since Add ( ) is called on the destination list itself--why it can't figure out the destination URL is beyond me.

My ClearList implementation is also mildly interesting: deleting items within a foreach loop is obviously a no-no since the foreach syntax in C# works against an IEnumerator, so my first attempt was to iterate over the list with a for loop, deleting list items from the zero index through to the final item in the list. This worked but only sporadically, occasionally leaving items behind. Calling ClearList a second time did the trick with a small list but that's just weird programming.

The solution I arrived at is detailed below and essentially comes down to the fact that the SharePoint list API must maintain a real-time (or part-time) connection with the database; in other words, deleting an item at index zero could mean SharePoint re-fetches the list content so by the time my for loop moves on to the next index, the list has effectively shuffled itself down so index zero is still populated. As you can see, I'm now simply iterating over the list and always deleting the item at index zero. I could have possibly used a while loop and the listToClear.Items.Count property directly but it's difficult to know whether SharePoint can be trusted in a case like this. I'll leave that up to you to try out...

private static void ClearList (SPList listToClear)
{
    int initialItemCount = listToClear.Items.Count;

    for (int counter = 0; counter < initialItemCount; counter++)
    {
        // Always delete the list item at index 0
        SPListItem listItemToClear = listToClear.Items [0];
        listItemToClear.Delete ();
    }

    listToClear.Update ();
}
 
 
 

Thursday 13 December 2007

LastModifiedIndicator Kind of Works

SharePoint includes a useful out-of-the box control that can be used to display the date and time a page was last modified. It's not without its hiccups, as I'll describe in a moment, but it's quick and simple to use.

The control in question can be dropped into a page layout or master page using the following syntax:

<PublishingWebControls:LastModifiedIndicator runat="server" />

When the page is served, my local dev server displayed the text below (I'm guessing this display format is relevant to the regional configuration of the server):

12/14/2007 2:24 AM

The LastModifiedIndicator class is derived from WebControl and exposes no additional properties or methods to configure the format declaratively. This would obviously be quite handy because the next best "quality" alternative is probably to implement your own LastModified web part. The format displayed above will be unsuitable in many cases; although I haven't played with the control's output at any length, a quick and easy hack might be to access the last modified string on the client side and use a bit of JavaScript to reformat its contents using the JavaScript date functions. Definitely not pretty but a little less heavy-handed than building your own control or getting in there with some server side code.
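
If you do end up rolling your own control, a minimal, untested sketch might look something like the following--the class name and format string are mine, not SharePoint's, and it assumes the current publishing page exposes the standard Modified field:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint;

public class FormattedLastModified : WebControl
{
    protected override void RenderContents (HtmlTextWriter writer)
    {
        // The list item behind the current publishing page
        SPListItem item = SPContext.Current.ListItem;

        if (item != null && item ["Modified"] != null)
        {
            DateTime modified = (DateTime) item ["Modified"];

            // Format to suit the site
            writer.Write (modified.ToString ("d MMMM yyyy h:mm tt"));
        }
    }
}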

You may have noticed a time of 2:24 AM in the sample output I listed above; no, I'm not a late night hacker! I actually modified the test page I was working on at 11:24 AM on 13 December 2007 but my dev server was telling me I modified it at the unwholesome hour of 2:24, one day in the future. I wish I could do that... or that SharePoint would allow me to, but no luck there so far. Anyway, I haven't tested this solution yet but the legendary Tania down the road from Tourism indicated she got the control to behave by "changing the regional settings of the site and all subsites to Australia and then changing to GMT." I'm not sure exactly what that means but I'm assuming she fiddled with the site settings.

Tuesday 27 November 2007

stsadm restore Results in Access Denied 0x80070005

Restoring a content database using stsadm -o restore usually works pretty well. We use this command to restore content backups from our authoring environment to our local development environments.
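
For reference, the restore command looks something like this (the URL and filename here are examples only):

stsadm -o restore -url http://dev-moss-mh:101/ -filename authoring-backup.dat -overwrite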

Because our authoring environment is administered by one of the operations guys, however, I frequently receive an Access Denied error from stsadm that reads like this:

Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

The 12 Hive log file offers up the full stack trace but reveals nothing of any real interest. Bear in mind, I'm logged in to my development server (W2k3) using my AD credentials and I'm a local administrator. An earlier log entry tells me stsadm.exe is dealing with a request from me so I know everything is pretty much okay. Of course it's not and I'm getting the above error.

The solution lies with the fact I'm not listed as a site collection administrator for the old site collection that's being replaced.

Luckily, I can change this quite easily. Fire up the Central Administration console and browse to the Application Management tab. Select the Site collection administrators link in the SharePoint Site Management section and configure yourself (or the relevant account name) as the Secondary Site Collection Administrator. Unfortunately you can't specify a group as either primary or secondary administrator; as a result I'll also have to do this again before the next restore since I'm not a site collection admin in authoring.

Update: Poolio (see comments) and I have both run into this problem all over again after the source site collection is locked before the backup is kicked off. To work around this, remove the lock:

stsadm -o setsitelock -url [url] -lock none 
 

Visual Studio 2008 Pros

Our team recently started development on a new MOSS-based shell for the five regional tourism operators in Western Australia. As per westernaustralia.com, the sites use custom branding and a number of custom web parts. Although we set out using Visual Studio 2005, a number of us in the development team are keen to move to Visual Studio 2008 now that it and .NET 3.5 have been officially released. We’re all excited about 2008 and it makes sense to migrate while the new sites are under development rather than post-release.

What follows is a list of ‘pros’ for moving forward with VS 2008.

Pricing
  • MSDN licenses with Visual Studio 2008 Professional are cheap ($2000USD retail and that gives you a lot more than just Visual Studio...)

Product Maturity and Future Migration

  • Visual Studio is a mature product and moving from one mature release (2005) to another mature release (2008) is expected to be low-risk. .NET 3.5 is also an "additive" release that builds on top of the .NET 2.0 Framework so the risk to an existing code base is very low.
  • The migration path from .NET 2.0 to .NET 3.5 is clear and involves none of the issues encountered migrating an ASP.NET 1.0/1.1 project to .NET 2.0.
  • As Microsoft Content Management Server 2002 moved from a minimum requirement for .NET 1.1 to .NET 2.0 between service packs, there is a small risk MOSS 2007 will take the same approach in a future release
  • Visual Studio 2003, 2005, and 2008 can be installed side-by-side if necessary

Developer Productivity, Maintenance, and Operations

  • Visual Studio 2008 and new C# language features offer productivity increases like Javascript Intellisense and debugging, enhanced CSS and master page support, automatic properties, object and collection initialisers, and extension methods. The new language features have the potential to reduce the amount of code to be written and maintained, thereby simplifying debugging, reducing the learning curve for new developers, and lowering maintenance costs (see the short sketch after this list)
  • AJAX and .NET 3.0 SP1 are built-in to .NET 3.5, minimising installation requirements on servers and local development environments
  • .NET 3.5 includes cumulative .NET Framework patches and service packs to ensure the operating environment is up-to-date and secure
  • Visual Studio 2008 will allow us to compile existing .NET 2.0 projects in the Visual Studio 2008 IDE if necessary
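
As a taste of those language features, here's a small, purely illustrative C# 3.0 sketch (the types and values are made up) showing automatic properties, object and collection initialisers, and an extension method:

using System;
using System.Collections.Generic;

public static class StringExtensions
{
    // Extension method: callable as someString.Shout ()
    public static string Shout (this string value)
    {
        return value.ToUpper () + "!";
    }
}

public class Operator
{
    // Automatic properties: the compiler generates the backing fields
    public string Name { get; set; }
    public string Region { get; set; }
}

public class LanguageFeatureDemo
{
    public static void Main ()
    {
        // Object and collection initialisers
        List<Operator> operators = new List<Operator>
        {
            new Operator { Name = "Australia's Coral Coast", Region = "Coral Coast" },
            new Operator { Name = "Australia's South West", Region = "South West" }
        };

        Console.WriteLine (operators [0].Name.Shout ());
    }
}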

Thursday 8 November 2007

Clear Your Compiler Warnings

There’s a minor problem with this code—at least as far as the compiler’s concerned. Can you spot it?
...
catch (Exception ex)
{
// Don’t do anything
}
The problem is this: the ex variable is declared but never used and the C# compiler will warn you of the fact.

warning CS0168: The variable 'ex' is declared but never used

Does this matter in terms of how the code runs? Of course not but what if everyone always did this and the explosion of compiler warnings masked a more important warning relating to something else? Building the wa.com solution spits out 56 warnings—42 of which have to do with variables never being used—and the entire warning mechanism provided by the compiler is therefore of very little use to us. If it weren’t for this mess, someone might have noticed this fairly useful warning before now:

warning CS0162: Unreachable code detected

There’s an easy way to get around this problem when it comes to exceptions... just don’t declare the exception variable. This example does exactly the same thing at runtime but compiles without the warning. If you need to inspect the exception while debugging, use $exception in a watch.
...
catch (Exception)
{
// Don’t do anything
}
For all other warnings to do with undeclared or unassigned variables, delete the offending declarations!

As for the example catch statement above, you all know this is bad, bad, bad (in all but the most extreme circumstances), right? Bugs disappear into black holes like this and a more specific exception, at the very least, should be caught instead. A better way would be to remove the try/catch statement, test thoroughly to flush out the defects in your code, and reserve exceptions for truly exceptional circumstances.

How to enable anonymous access for an existing SharePoint web application

It can be a little daunting if you're new to SharePoint and tasked with doing something you've never done before. Can it be done in SharePoint? Will doing it break your site or the entire installation? Is doing it so difficult it's not worth doing? Configuring anonymous access is one of those tasks because you're dealing with SharePoint (and ASP.NET indirectly), your site collection (and potentially your database indirectly), IIS, and occasionally the file system.

At the time of writing there are a number of sites and blog posts out there offering instructions on how to configure anonymous access. Some are extremely detailed--and depending on what you're trying to accomplish, unnecessarily so. Others are a bit vague. For my own sake I therefore thought it might be useful to file this under Things to Remember but I hope it helps you too...

What you'll find below is a detailed step-by-step set of instructions for setting up anonymous access for a fully branded web site like http://www.westernaustralia.com/. The anonymous access site gives internet users the ability to browse the site without having to log in and another site allows content editors to post content updates using their domain accounts.

A bit of background information

In brief, the steps below involve 'extending an existing web application' (that's a SharePoint concept) by creating a sister web app from an existing web app. The extended web app will use the same content database as the original and will be configured to support anonymous access. The top-level site of the database will also be configured to support anonymous access. As a final option, I'll show you how to disable all other types of non-anonymous access.

The following tasks should be completed by a server administrator and assume you have already created a web application the normal way (it might be a good idea to ensure it's working before you begin...)

1. Extend an existing web application

  • Open the Central Administration console and select the Application Management tab.
  • Select Create or extend Web application from the SharePoint Web Application Management section.
  • Select Extend an existing Web application on the next screen.
  • Select an existing web application to extend.
  • Modify the description and configure the port and, optionally, the host header.
  • Set Allow Anonymous to Yes.
  • Set the Load Balanced URL Zone to Internet (you may choose another zone here if you like but Internet generally means anonymous so it's the best option).

Once you've extended a web application, the new (i.e. extended) application seems to disappear from the Central Administration screens: it won't be listed as a web application and it doesn't appear as an option when selecting a web app. You will, however, get a new directory for the extended web app under inetpub\wwwroot\wss\virtualdirectories\ and a new IIS site; you can also remove the extended site from SharePoint if required.

2. Enable anonymous access on the site's corresponding site collection

Although the site collection will be shared by the existing web application and the anonymous web application, the following steps must be completed via the anonymous web application.
  • Browse to the home page of the extended web application
  • Select Site Settings --> Modify All Site Settings from the Site Actions drop-down menu.
  • Under Users and Permissions, select the Advanced permissions link.
  • Select Anonymous Access from the Settings menu.
  • Set Anonymous Access to Entire Web site.

Sites inherit the permissions of their parent by default so if you have any problems with a specific site you can ensure it's set to inherit permission from here as well (browse to the site settings screen for the relevant site first).

If you can’t see the Anonymous Access menu item, either the web app hasn’t been configured for anonymous access (see above or below) or you’re accessing the site via the default zone instead of the internet zone—you must access the site via the internet zone (at the extended URL).

3. Test
  • Browse to the anonymous site in Firefox (or turn off integrated windows authentication if you're using IE); the site should be rendered without the Site Actions menu and other SharePoint controls.
  • Browse to a SharePoint administration screen (eg. /_layouts/settings.aspx) and you should be prompted to supply login credentials.

At this point your site is set up to allow anonymous access but will also prompt you to log in as an administrator if you hit any of the SharePoint screens. This may be desirable but alternatively you may want to lock down external access to your public site; if that's the case, read on...

4. Remove integrated authentication from the anonymous web application (optional)
  • Open the Central Administration console and select the Application Management tab.
  • Select Authentication providers from the Application Security section.
  • Select the Internet zone (this is the zone specified when the anonymous application was extended).
  • Deselect Integrated Windows authentication.
  • Set Enable Client Integration to No.

5. Test

  • Browse to the anonymous site in Firefox (restart any open browser windows if you receive a 401 error immediately after completing step 4). The home page should appear as it did previously.
  • Browse to a SharePoint administration screen (eg. /_layouts/settings.aspx); you should receive a 401 UNAUTHORIZED HTTP error (which, in this case, is appropriate).

6. Troubleshooting

If you run into difficulties (mainly with 401s and 403s popping up where they shouldn't), these ideas may help.

  • Make sure the page you're trying to access is published. It's easy to forget this simple step in all the excitement but if a page (or image, etc) doesn't have at least one published version MOSS won't serve it up
  • Reset IIS--it's quick and easy ;-)
  • Grant the Read & Execute permission to the Authenticated Users group on the anonymous site's web.config and /bin directory (both can be found below Inetpub\wwwroot\wss\VirtualDirectories); do the same again for the authenticated site. Permissions on these files are reset every time the authentication method is changed in SharePoint.
  • Recognise extending a web app creates a new site in IIS and corresponding directory under wwwroot with its own web.config. Ensure the newly-created web.config in the extended site contains everything it needs to; ensure any virtual directories and applications are properly configured
  • Redeploy any solutions, features, etc to make sure everything’s where it needs to be (custom private assemblies in particular)
  • It's possible your custom code is doing something that requires elevated permissions. The Visual Studio debugger will help you locate the culprit. If you can't remove the offending code, you can wrap it using a delegate:

SPSecurity.CodeToRunElevated elevatedAction =
new SPSecurity.CodeToRunElevated(delegate() { /* dodgy stuff */ });
SPSecurity.RunWithElevatedPrivileges(elevatedAction);

  • If necessary, remove the extended web application using the Central Administration console (also remove the IIS site) and start again.

Saturday 15 September 2007

Potluck 'Round the Hearth

While discussing the issues of public space for resources in Peopleware, the authors quote from Christopher Alexander's A Pattern Language:

"Without communal eating, no human group can hold together. Give each [working group] a place where people can eat together. Make the common meal a regular event."

The authors go on to highlight the relationship between shared space in broader societal terms (the home, in particular) and the office workspace.

As so many of us spend a great deal of time and a large portion of our lives at work, I believe strongly in extending my definition of "family" to include the people I work with. This fits naturally with the hierarchical structure found so often in workplaces: my immediate circle of co-workers becomes my brothers and sisters; my managers become parents, grandparents, uncles, and aunts; other colleagues in the organisation become cousins and second cousins.

From this mindset (and, admittedly from a mindset that includes fun and enjoyment!), I introduced my team to the potluck lunch about a year and a half ago. The potluck concept was not my idea: I attended a party held outside work by a colleague from my previous employer and the party was themed around the potluck; the people and layout of the workspace of my new employer simply allowed me to suggest we bring the team together on a regular basis and everyone bring a single dish of food.

Although we don't measure many things at Tourism--let alone productivity increases--the potluck lunch concept has proven successful in general. At the very least, it's a great opportunity to sit back with my work family, indulge in new and interesting food, and have a chat... a scheduled group downtime. It's also been an interesting way to introduce new team members to others in the office and give them a sense of how we work and what it feels like to be a part of our team.


We aim for a monthly potluck lunch and usually go in for some kind of theme. When our first potluck was held, we had a very diverse team and everyone brought food representative of their home culture. We'll also hold a goodbye potluck when someone leaves the team.

The rules of potluck are few and simple:
  1. Each person only brings enough food to feed one to two people (or a single dish);
  2. Each person tries their utmost to make something at home the night before--food purchased the day of the potluck is usually a rush job and tends to be deep-fried;
  3. Alcohol is a suitable food substitute (but this works at Tourism).

We don't usually plan who's bringing what--it usually just works out. As we do have a few vegetarians about we try to cater for them and generally try to arrive at a balance of savoury vs sweet (dessert is always nice!).

Thursday 13 September 2007

Web 2.0 Graphic Design

Ever wonder what makes the latest breed of web site so attractive? http://www.webdesignfromscratch.com/ certainly has and the author has kindly produced a number of well-written articles on how to design a good looking, functional site.

There's a lot of material to go through on this site but it's all very well organised and worthwhile reading.

Burp. Excuse me.

I had a look at Burp proxy recently. If you haven't heard of Burp before, it's a debug proxy that has one unique advantage over the likes of Fiddler: you can manually intercept, modify, and forward individual requests and responses.


Burp is a little Java app and you don't need to install it in order to get up and running. Although the program worked as advertised, my biggest gripe is that you have to manually configure your browser proxy settings to use localhost:8080 (Fiddler just works by comparison).



As I'm on a corporate network, I also had to figure out where to configure my domain account/password. The server returned security errors without this. Once set, it's done but I'm naturally wary of supplying my password to a potentially "black" app like this (I run as Admin on my dev box...); our security policy also requires I change my password every thirty days so this is just one more location I need to update my password every month.

The proxy works as advertised, stopping at every request/response passing through and allowing you to modify it, drop it, or forward it on. You can exclude requests for certain media types and automatically modify other aspects of the headers or content. I'm primarily using the proxy to inject an X-Forwarded-For header.

Wednesday 12 September 2007

German site ist wunderbar!

The German version of the www.westernaustralia.com site will launch officially tomorrow via Minister Sheila McHale but it's live now: http://www.westernaustralia.com/de
This site is just a little bit of my handiwork here at Tourism Western Australia...

Tuesday 11 September 2007

Reach for the Light


So close... yet so far.

International Resource Identifier Support in .NET 3.5

The September 2007 issue of MSDN Magazine contains an interesting article about changes to the System.Net namespace. Among the discussion of sockets and other low-level changes, the authors discuss support for International Resource Identifiers (IRIs) and their benefits over URIs.

http://msdn.microsoft.com/msdnmag/issues/07/09/Networking/default.aspx

I didn't realise it was possible to use non-ASCII characters in a domain name and while many DNS servers don't support non-ASCII domain names, Punycode provides a mechanism to work around this limitation.

So in essence, you can now take a domain name containing Unicode characters (like this: http://微軟香.com) and work with it directly using the URI class. This is certainly a great thing for international visitors to our web sites but as I only read English and French, I was really clinging to the English URLs as the last remaining way to identify our pages in SharePoint! Ah well, modern times, 'tis a global world...
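
For the curious, here's a rough, untested sketch of the Punycode translation from .NET 3.5 using the System.Globalization.IdnMapping class; the domain is the one from the article and the rest is illustrative:

using System;
using System.Globalization;

class IdnDemo
{
    static void Main ()
    {
        // The Unicode (IRI) form of the host name
        string unicodeHost = "微軟香.com";

        // Translate to the ASCII (Punycode) form many DNS servers expect
        IdnMapping idn = new IdnMapping ();
        Console.WriteLine (idn.GetAscii (unicodeHost));

        // As the article explains, the Uri class can also work with the Unicode form directly
        Uri uri = new Uri ("http://" + unicodeHost + "/");
        Console.WriteLine (uri.Host);
    }
}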

Wednesday 29 August 2007

IE Renders Spurious '#text' Node as a Gap

Today I had the misfortune of discovering IE6/7 doesn't like to display relatively "normal" HTML.

Take a look at this code and the DIV wrapping the image in particular:

<html><body>
<style>
* { padding: 0px; margin: 0px; }
img { border-width: 0px; }
</style>
<div>
<img src="pic.gif" />
</div>
<div style="width: 285px; height: 100px; background-color: Green;">
<p>Other stuff</p>
</div>
</body></html>

This renders a lovely gap between the image in the top DIV and the bottom DIV:

Inspecting the DOM using the IE Developer Toolbar reveals IE is interpreting and rendering a spurious text node from the markup:

As you might expect, Firefox has no issues with this and renders the two DIVs one on top of the other with no gap between.

Despite average CSS skills, I was unable to style this into submission without completely mangling the existing code and CSS (simply adjusting the DIV's height had no effect); instead I managed to get around this by simply removing all whitespace between the opening and closing DIV tags:

<div><img src="pic.gif" /></div>

Since the image in this example is essentially functioning as a background image for the DIV, I could have alternatively set the DIV's background-image property to the URL of the image.

Tuesday 28 August 2007

Generating Public, Strongly-Typed Resource Classes with Visual Studio

Visual Studio 2005 does a great job of managing your .resx files and automatically generates strongly-typed classes exposing the contents of those files. If you create a new resource file and add it to your project you'll notice the Custom Tool property has a value of ResXFileCodeGenerator to support this behaviour.

This is generally all well and good but there is a catch: ResXFileCodeGenerator generates classes with members marked internal; in other words, you won't be able to access your resources using the generated class if you're working in another project (i.e. another assembly).

The resgen.exe tool does all the hard work behind the scenes and does have a flag called PublicClass that will override this behaviour--set this flag and your classes will be generated with public visibility. Unfortunately you can't have this tool run automatically until compile time, which means your resources won't be as conveniently accessible as they are by default; you'll also have to write a post-build script or use another method to do all the hard work moving your generated files around.
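
If you do go down that path, the build step would be a single resgen call along these lines (the namespace, class, and file names here are purely illustrative):

resgen Labels.resx /str:cs,MyCompany.Resources,Labels,Labels.Designer.cs /publicClass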

Luckily Visual Studio 2008 solves this problem by allowing you to set the Custom Tool property to PublicResXFileCodeGenerator. As the name suggests, the generated methods come out the other end marked as public and this all happens from within Visual Studio.

If you really can't wait for Visual Studio 2008 (and it's not far away), you may want to look into a handy little extension called ResXFileCodeGeneratorEx. In addition to allowing you to create publicly-accessible, strongly-typed classes for your resources, it also helps out if you're dealing with format strings in your resource files. The only downsides I can think of are the fact that Visual Studio 2008 will make this tool less necessary (apart from the format bit) and that every developer will need to install it on their machine. No biggie but the sort of thing that can cause headaches for new developers joining your team.

Wednesday 15 August 2007

IE Dev Toolbar Stops Working (IE7)

The IEDevToolbar is a great help for web-based development with Internet Explorer. I've been using it since it was in beta and while it generally does the job, the bugs have been ever-present in different forms.

One of the latest things I've noticed is how the toolbar seems to stop working (generally after refreshing a page that's changed at the server). The menu options are greyed out and clicking with the pointer refuses to select any page elements. Closing the toolbar and re-opening it again fixes the problem but there is a better way.

For some reason, the toolbar doesn't always automatically refresh itself. As a result the DOM tree represented in the toolbar doesn't match the DOM tree in the browser. Closing and re-opening the toolbar synchronises the toolbar with the page but this can also be accomplished by hitting the IE Dev Toolbar's Refresh button (one of several icons that don't make a lot of sense at first glance). The menus function once again and page elements are clickable. Why this doesn't always happen automatically is beyond me.

ebay Retailers That Suck

My wife bought an MP3 player a few weeks ago and after deciding she wanted an armband for it we hunted around and finally found a workable version on ebay from Accstation (www.accstation.com). We won the auction at $0.99 and after adding a few dollars shipping and handling, it looked like yet another successful online transaction. Then came the payment part.

Accstation uses a third-party company to process credit card payments (they also offer payment via PayPal). I opted to use the credit card payment method, completed the online forms, and clicked the submit button: transaction failed. Okay... I thought, probably just a temporary problem with their servers or a network issue, let's try again. Same error. Well, I thought, since I'm seeing this error, the transaction surely can't be reaching the payment gateway; let's start again from the beginning and double-check all my details. Transaction failed. Okay, I'm fed up now... one last time for the fun of it and I'll call my credit card company. Transaction failed.

At this point I give MasterCard a buzz to ensure my card hasn't been blocked and I have sufficient funds to pay the lousy $0.99 + S/H. The representative tells me everything is fine with the card and my account BUT four transactions just went through for the same amount. They haven't been approved yet but there are my four failed transactions. Blow gasket now.

I email the seller directly, I send the seller a note via ebay, I email the credit card processing company. Accstation's autoresponder autoresponds with a useless email message and the credit card processing company refuses to take any responsibility for this fiasco, despite their involvement in processing my payment four times over. A day passes while I wait for a response from Accstation and then another day and another day. I browse their web site and email their accounts department, their sales department, their customer service department, and their auctions department. MasterCard tells me they can't do anything until the transactions are approved.

Someone finally replies and asks me to email them back with my credit card number, expiry date, amount, etc and they'll get back to me within three to five days. There's no way I'm going to send a mysterious bot my credit card details via email and they shouldn't require that information anyway. I never hear back from "Tammy."

In the end, ebay notifies me I won the auction and must pay up before the week is out or I'll be stricken down by the Internet gods. The four payments finally disappear from my MasterCard account and I log in to Accstation's payment system to pay my $0.99 bill, this time via PayPal. The planets align and this time everything works... a week later and my wife has her arm band.

$0.99 plus shipping and handling works out to very little profit for Accstation but I did not hesitate to leave a negative feedback rating on ebay and there's no way I'll ever buy anything from this company again. For the minimal effort it takes to reply to an email from an upset customer, the end result could have been a win-win situation.

Friday 10 August 2007

The Final Effect

Friday morning and the cumulative effects of yet another change request set in...

Thursday 9 August 2007

Creating a Custom CultureInfo

The System.Globalization.CultureInfo class comes with a number of pre-defined cultures but thankfully Microsoft recognises it hasn't supplied all culture/language combinations (real or imagined) and will allow you to build your own. One way you can do this is using the System.Globalization.CultureAndRegionInfoBuilder class.

We ran into trouble while attempting to localise the westernaustralia.com English-language sites targeted at specific regions (we've got a "global" EN site, a "domestic" AU site, and UK, NZ, and SG variants). .NET 2.0 (and Windows XP/Server 2003) define CultureInfos and locales for all of our language/region combinations except for Singapore; the closest in-built option we could find was zh-SG (which is Chinese/Singapore). Although we could have cheated and used zh-SG, we're also running a number of foreign-language variations of the site; to be explicit, avoid confusion, future-proof this aspect of the site, and--most importantly--to make use of .NET's resource fallback mechanism (from en-SG to en), we decided to define a custom CultureInfo.

While creating a new CultureInfo isn't a difficult task, it's not as easy as supplying an "en-SG" string to the CultureInfo constructor or deriving a new class from the CultureInfo class (you can derive a new CultureInfo from an existing CultureInfo, however).

CultureInfo ci = new CultureInfo ("en-SG"); // This will fail at runtime
internal class MyCultureInfo : CultureInfo // There's an easier way...

MSDN provides a succinct article on building a custom CultureInfo class and trust me, it's really quite easy. The article fails to mention that you need to add a reference to the sysglobl assembly to gain access to the CultureAndRegionInfoBuilder class so as long as you remember that step you should be fine. The sample provided also prefixes the new CultureInfo with "x-" and I think this is a great idea: doing so should avoid any conflict when you move to the next version of the .NET Framework or a new platform. Vista and, presumably, Server 2008, include the en-SG locale so naming our new CultureInfo "x-en-SG" means we can anticipate a smooth transition if the existing wa.com code is ever moved to Windows Server 2008.

You don't need to create and register a new CultureInfo every time your application runs (and you probably don't want to since the CultureInfo is written to the filesystem when it's registered and invoking Register () again will fail) so we've built the create/register code into our deployment script. We simply try to unregister the existing CultureInfo, create x-en-SG from scratch based on the en-US CultureInfo and the SG RegionInfo, and register the new CultureInfo. Our English-Singapore .resx files reflect the new CultureInfo and are named as though x-en-SG were an in-built CultureInfo: *.x-en-SG.resx.
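
For reference, a minimal sketch of that unregister/create/register sequence looks something like the code below--this is the general shape rather than our exact deployment code, and it needs the sysglobl reference mentioned above:

using System;
using System.Globalization;   // CultureAndRegionInfoBuilder lives in the sysglobl assembly

class RegisterSingaporeCulture
{
    static void Main ()
    {
        // Remove any previous registration; ignore the error if none exists
        try
        {
            CultureAndRegionInfoBuilder.Unregister ("x-en-SG");
        }
        catch (Exception)
        {
        }

        // Build x-en-SG from the en-US culture and the SG region, then register it
        CultureAndRegionInfoBuilder builder =
            new CultureAndRegionInfoBuilder ("x-en-SG", CultureAndRegionModifiers.None);

        builder.LoadDataFromCultureInfo (new CultureInfo ("en-US"));
        builder.LoadDataFromRegionInfo (new RegionInfo ("SG"));

        builder.Register ();
    }
}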

Broken .resx files in Visual Studio 2005/.NET 2.0

Working with resources is so much easier in .NET 2.0 but things do occasionally go awry. Most of the time the problem is really easy to fix.

As discussed in another post, Visual Studio 2005 does a lot of work behind the scenes to surface your resources as strongly-typed objects; if you're not careful with your .resx files, however, you might end up in a situation where your .resx files aren't being compiled for you. As a result, you lose Intellisense for your resources, the ResourceManager may end up falling back to your default resource file when it shouldn't, or your default resource file might not load at all. Copying and renaming Visual SourceSafe-controlled .resx files is one little culprit that occasionally brings everything to a halt.

It's important to remember Visual Studio doesn't just "do" things for you--it must be told what to do and frequently relies on stand-alone tools included with .NET or sitting outside of the VS shell. A good example of this can be seen by inspecting the property sheet (in Visual Studio) for one of your .resx files.

When you create a new .resx file Visual Studio does all the right things by setting the Build Action to "Embedded Resource" and setting the Copy to Output Directory as "Do not copy". Just as importantly, Visual Studio also sets the Custom Tool property as "ResXFileCodeGenerator" and this particular setting can occasionally get stripped away when you're renaming or moving resource files. If in doubt and your resources are not being made available to your application, check this property; if it's not set, set it to "ResXFileCodeGenerator".

If the Custom Tool property on your .resx files is set correctly, Visual Studio will help you out by running the specified tool for you every so often to ensure your resources are available programmatically (this can actually be a pain in the neck sometimes so I recommend using a tool like Resourcer to edit your resource files...). If this isn't happening, you can simply right-click on your .resx file in Visual Studio and select Run Custom Tool.

Tuesday 7 August 2007

Visual Studio 2005 ASP.NET Development Server (Cassini) and HttpCachePolicy

The ASP.NET Development Server (aka Cassini) included with Visual Studio 2005 (the default web server for new web projects) doesn't honour cache headers emitted using the HttpCachePolicy class.

Inspecting the response headers from a page running in Cassini reveals the Cache-Control header is set to private despite instructions to make the response publicly cacheable:

context.Response.Cache.SetExpires (DateTime.Now.AddSeconds (30));
context.Response.Cache.SetCacheability (HttpCacheability.Public);
context.Response.Cache.SetValidUntilExpires (true);
context.Response.Cache.VaryByHeaders ["Accept-Language"] = true;


Just as importantly, no Vary header is sent down the line.

By contrast, running the same page in IIS sets the Cache-Control header to public and the Vary header to Accept-Language, as intended.

The ASP.NET Development Server is a great help--when it works; when it does stuff like this it really throws a spanner in the works.

Monday 9 July 2007

Creative Zen Neeon 2 (2GB)


My wife bought the Creative Zen Neeon 2 (2GB) on Saturday for a mere $99 (AUD) from MYER. Shopbot and MyShopping were listing the same player from $179 to $226 at various online retailers so it all seems like a great deal from a big department store! 2GB is by no means huge but for the price you can't go wrong.

The sound quality on this device using the supplied earbuds is quite good with a nice range. We had the bass boost turned on initially and had to turn it off because some songs were distorting slightly. Bass response was still great.

The user interface isn't bad once you get used to it (my wife isn't a technophile and she cottoned on in no time) and the thumb wheel works quite well. I don't have huge hands but my hand did start getting tired after playing with the thing for a while. The unit has volume controls and a record button on one side and a play/pause/power button on the other side with the thumb wheel. Neither of us has used the iPod wheel before so I guess we don't know what we're missing out on--and don't care!

The Zen plays MP3, WMA, and WAV formats. It also plays video but the video has to be transcoded using the supplied software. Images are viewable as JPGs. The radio reception was a little iffy and was coming in quite fuzzy while the FM radio on my mobile phone had crystal clear reception right beside the Zen. I haven't tried the line-in function yet. The display was nice and bright and the shiny black surface of the case didn't show too many fingerprints.

Without installing the software suite, we connected up to a USB 2.0 port off a Windows XP computer and were able to start copying files immediately. Transfer time wasn't exactly fast but nothing to complain about. The internal folder structure on the device is extremely logical and you could probably copy other files onto the player for moving to and fro. We also connected the device to a Windows Vista computer using slower USB 1.0 ports but the player wasn't recognised and we didn't persevere.

The unit came with earbuds, a USB cable, a line-in cable (1/8"), a lanyard, a DVD with a user manual and software, a printed user manual, and some stickers. The battery is built-in and presumably not user-serviceable. I'm pretty sure this thing has a 1-year warranty.

I don't think I'd pay full-price for a player of this size but at $99 the iPod and variants don't really compare.

Monday 2 July 2007

VPC MAC Address Conflict

After experiencing intermittent network issues with his VPC, one of my work colleagues was forced to contact our helpdesk extraordinaire to resolve the issue. We use a base VPC (.vmc + .vhd) as the foundation for our individual developer environments and the problem was traced back to the same MAC address being used by multiple VPCs. By repeatedly copying the same .vmc file, a number of us ended up with VPCs sharing a common MAC address.

A MAC address uniquely identifies a NIC at the hardware level (right down at the bottom of the network stack) and is set in the factory. The VPC world in which we live, however, means we're no longer dealing with physical hardware all the time and, as you probably know, multiple software NICs can be added to a VPC quite easily.

As a result, Microsoft Virtual PC generates a new MAC address whenever a new VPC is created and a software NIC is assigned. A VPC's MAC address is stored in the .vmc file, which is actually a valid XML file (open the file in Notepad or an XML editor). Once generated, the VPC's MAC address is stored in an element called "ethernet_card_address" and stays that way until it's either manually changed or the .vmc file is replaced.

Until we encountered this issue, we were saving both the .vmc and .vhd files as part of our development environment "base image"; had we stored only the .vhd file(s), we'd be forced to create a new .vmc file when creating a new environment. Creating a new .vmc file is a simple task and doing so would have avoided the problem encountered by my colleague. The alternative to deleting the .vmc file is to simply delete the value contained in the ethernet_card_address element of the .vmc file. Virtual PC will generate a new MAC address the next time the VPC is started.
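
For reference, the element in question looks roughly like this inside the .vmc file (the value shown is a made-up example and the surrounding elements are omitted):

<ethernet_card_address type="bytes">0003FF123456</ethernet_card_address>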

It should be noted that while our individual VPCs now employ NICs with unique MAC addresses, the base machine was part of a workgroup and was therefore not sysprep'd. We join our VPCs to our dev domain after copying the base image.

Tuesday 26 June 2007

Closing Tag Identifier

I often see HTML comments used to relate a closing tag to its opening tag; when a DIV or some other container tag contains enough content to make viewing the entire block without scrolling impossible, I think this makes perfect sense.

For example:

<div id="myTag">
</div> <!-- end myTag -->


This is also sometimes helpful with C-like languages that use braces to delimit blocks of code.

Although modern development tools highlight closing tags and braces, HTML code in particular isn't always read within one of these tools (View-->Source in IE opens the source code in Notepad, for instance).

To assist, I think it would be helpful to have the option of attributing a closing tag with the same id used by the opening tag:

<div id="myTag">
</div id="myTag">


Our fancy editors would keep the opening/closing ids in sync and browsers could safely ignore the additional attribute.

In practice, this works quite nicely with the caveat that a page built like this is invalid.

Monday 25 June 2007

My "Cubicle" Rocks!!!

I recently reconfigured things at work...

Thursday 21 June 2007

Must-Have Web Development Tools

ASP.NET Fragment Caching in MOSS

I've noticed some odd behaviour with fragment caching in MOSS 2007: a user control declared in a master page will not be cached, but when that same user control is wrapped in a second user control and the wrapper control is declared in the master page, the wrapped control caches as expected.

Things look like this:

Masterpage:

<%@ Register tagprefix="CustomControls" tagName="Container" src="~/UserControls/Container.ascx" %>
<%@ Register TagPrefix="CustomControls" TagName="ToCache" Src="~/UserControls/ToCache.ascx" %>
<CustomControls:Container id="container" runat="server"/>
<CustomControls:ToCache ID="toCache" runat="server"/>

Container.ascx:

<%@ Register tagprefix="CustomControls" tagName="ToCache" src="~/UserControls/ToCache.ascx" %>
<CustomControls:ToCache id="toCache" runat="server" />

The resulting HTML looks like this:

<body>
<div id=”container”>
<div id=”toCache”>... </div> // This caches
</div>
<div id=”toCache”>... </div> // This won’t cache
</body>

The ToCache.ascx user control sets a simple @OutputCache directive in the markup and I can't see anything that would limit output caching in the master page, the hosting .aspx page, or the web.config.
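
For completeness, the directive at the top of ToCache.ascx is something along these lines (the duration value here is illustrative):

<%@ OutputCache Duration="3600" VaryByParam="None" %>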

I haven't ripped this apart and tried it in a clean site but I'm definitely experiencing this behaviour within the context of the wa.com development environment. I know MOSS controls (and somehow enhances) output caching but I have yet to look into how this works--as far as I know, I'm using the default output caching configuration.

Update: I wonder if this has anything to do with the control being hosted in a master page instead of an aspx page layout. Slim chance...

Disabling the "Reply to All" email button

Gotta love it when the CEO sets the technology direction in your office... (the identities of those involved in this memo have been obscured but this is a genuine email). Guess that's the final nail in the coffin for Outlook on my desktop.

Surely there's a Dilbert strip for this?

From: xxx
Sent: Thursday, 21 June 2007 10:25 AM
To: All Staff
Subject: Disabling the "Reply to All" email button

The CEO has requested that Corporate IT disable the “Reply to All” button for all staff in order to assist with email and time management.

We will trial this for a few weeks and then I will seek feedback on how effective this has been and request executive directors to bring the feedback to the executive management team for discussion.

The process will happen progressively over the next few days.

Thanks


xxx
Executive Director



Wednesday 20 June 2007

Units of Time

One thousandth of a second = 1 millisecond (ms)
One millionth of a second = 1 microsecond (µs)
One billionth of a second = 1 nanosecond (ns)

See http://www.wilsonmar.com/1clocks.htm for a great discussion about time in a networked world.

Monday 18 June 2007

Visual Studio 2005 Debugger Won't Break

Ever encounter a situation where the Visual Studio 2005 debugger absolutely refuses to break? I ran up against this problem this afternoon (yet again) and it took me a moment to discover the reason why.

Here's the scenario:
  • A standard user control with a code-behind file is hosted in an insignificant ASPX page;
  • The user control code-behind is doing some work to render the control and also has a handler method subscribing to a button click event;
  • The relevant assemblies are built in debug mode and are deployed to a separate development server along with the associated markup files;
  • The assemblies on the server are the same version as those on the development workstation;
  • The application is configured as it should be in IIS and the debug attribute on the web.config's compilation element is set to true;
  • The VS 2005 remote debugger is installed and running on the dev server;
  • The VS 2005 "client" is attached to the remote server's ASP.NET worker process (w3wp.exe in this case);
  • Two breakpoints are set on the user control code: one on Page_Load and one on the button click handler;
  • Browsing to the host page doesn't cause the debugger to break (except intermittently, sometimes following an iisreset)...
Inspecting the two breakpoints reveals nothing out of the ordinary; symbols are loaded and the debugger is breaking every so often, just for kicks. The code itself is executing happily, it seems.

Here's the problem:

The user control in question had an OutputCache directive set to cache the control for sixty minutes. Removing this little devil resulted in a slap of the forehead and allowed the debugger to break as expected.

The OutputCache directive prevents the control being added to the control tree of the hosting page at runtime; ASP.NET loads an existing version from the cache instead of executing the control code.

...kind of a silly problem since the debugger gives you no indication the control is loading from the cache but it's all too easy to forget about this sort of thing!!! The golden rule I usually try to apply is to hold off on performance tuning until the very end of the development/testing process and this should generally include large-scale caching. Obviously this doesn't apply in a maintenance situation.

[Update: a few additional tips...
  • Double-check the deploy location of your assembly; if it's in the GAC you can deploy to the bin directory until the cows come home but ASP.NET will continually load the assembly from the GAC. Either remove the GAC'd assembly or deploy to the GAC (you can also deploy PDB files to the GAC but you need to drop them under gac_msil using the command line).
  • Once you've attached the debugger, bring up the Modules window (Debug -> Windows -> Modules); locate your assembly and verify whether symbols have been loaded and the location where the assembly was loaded.]