Sunday, 29 March 2009

Driving Silverlight Uptake – Playboy Archive

There’s nothing like pornography to drive technology adoption, and with the Playboy archive now freely available online there’s really no reason not to install Silverlight.

Deep Zoom takes on a whole new meaning in this context. Really though, it’ll be interesting to review Silverlight adoption rates when the masses stumble across this stuff.

If you’re reading this at work and don’t feel comfortable viewing porn (in the name of research) on company time but still want to experience Deep Zoom, check out the Hard Rock Cafe’s memorabilia site instead.

Saturday, 28 March 2009

SharePoint Abstraction

There seems to be a seismic shift afoot in the SharePoint development world as those of us working on the ground with SharePoint move forward with the platform. In my mind this is the SharePoint community maturing beyond our dark history: we’re moving past the pains of dealing with pure SharePoint and beginning to view it as just another component of the applications we build. Instead of building into SharePoint we’re building on top of it.

WTF? I’m talking about wrapping domain abstractions around the SharePoint API so developers can work more intuitively with the domain model and for the obvious performance and unit testing benefits. In essence, it’s a return to good up-front design and worshipping the gods of encapsulation.

When I first started out in the real world everyone was building frameworks. In the object-oriented world, design often starts with the problem domain and a class hierarchy is constructed to model the relevant entities. The framework becomes the programmatic representation of both the problem and solution. You would never force the developers using that API to, for example, deal with a list of Objects that (these were the days before generics) would have to be cast to a specific type; your list would derive from an existing list class or one of its parent classes, restricting it to a subset of types and exposing strongly-typed operations. Database access and whatnot would be similarly bubble-wrapped with a common, isolated, and tested mechanism to open a database connection, manipulate a specific object, and clean up.

The last few years of dealing with the minutiae of the SharePoint API have led us astray on the back of the SDK and the correct and incorrect code samples available online. Instead of taking these snippets at face value, as samples to be adapted to meet our needs, they’ve been viewed as the code to make SharePoint work at its most basic level. WSS 3.0/MOSS 2007 bugs aside, it’s been difficult bridging the gap for those of us raised in the cuddly worlds of Java and .NET. Simply put, the SharePoint API is large, comes complete with legacy baggage, and doesn’t always do things the way .NET developers expect. Fair enough, it’s a big application (er, platform) with a history of its own and a wealth of functionality… and it’s not .NET! The end result, however, is we’ve been too busy dealing with a steep learning curve to bother with applying good design practices and hiding away those aspects that are hard to do well.

The first inkling in my mind of a return to abstraction came from reading Eric Shupps’ post on test-driven development with SharePoint. I’m neither a TDD user nor a fan, but after seeing it in action at Jeremy Thake’s Perth User Group presentation last week, I now understand how abstracting away the countless SharePoint objects you would otherwise have to mock makes building unit tests a lot easier. But wait a minute: encapsulating access to SharePoint and refactoring an application to match the problem domain sounds familiar! Oh yeah, that’s because we used to do it all the time!!!

The Microsoft Patterns and Practices gang have been busy putting together the long-awaited SharePoint Guidance bits. It’s interesting to note the sample application they’ve built makes welcome use of abstractions based on common design patterns, all of which are clearly documented. Microsoft obviously has a lot of interaction with customers and the community, on top of product knowledge of both SharePoint and .NET, so the SPG will most likely prove an invaluable resource as we expand our understanding of SharePoint and real best practices become apparent.

One final note: my previous tech lead (who now works on SharePoint v14) is a beautiful Japanese fellow with no time for big, unwieldy frameworks and sprawling code. I would label him very much a follower of the minimalist style, an approach invaluable to IT as a whole (I believe we like building complex solutions because the challenge makes us feel good). So where does the argument for abstraction fit with simple, clean design principles? I believe the two go together hand in glove for the simple reason that abstraction leads to centralisation and therefore consistency, meeting the minimalist designer’s goal of less code. Instead of every developer handling list access in a different way with ten lines of code wrapped in three using statements, a developer calls a method in one line of code. But this is perhaps another discussion ;-)
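To make that last point concrete, here’s the sort of wrapper I have in mind. This is a sketch only: the SiteRepository and NewsItem names and the “News” list are invented for illustration, and it needs a reference to Microsoft.SharePoint.dll (and a SharePoint box) to compile.

```csharp
// Hypothetical example: SiteRepository, NewsItem, and the "News" list are mine,
// not from any real code base. The method wraps the standard SPSite/SPWeb/SPList
// dance, including the disposal every developer would otherwise write differently.
using System.Collections.Generic;
using Microsoft.SharePoint;

public class NewsItem
{
    public string Title { get; set; }
    public string Body { get; set; }
}

public static class SiteRepository
{
    // One strongly-typed call replaces the using/using/foreach boilerplate.
    public static IList<NewsItem> GetNewsItems(string siteUrl)
    {
        var items = new List<NewsItem>();
        using (SPSite site = new SPSite(siteUrl))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["News"];  // list name is an assumption
            foreach (SPListItem item in list.Items)
            {
                items.Add(new NewsItem
                {
                    Title = item.Title,
                    Body = item["Body"] as string
                });
            }
        }
        return items;
    }
}
```

The calling code shrinks to `var news = SiteRepository.GetNewsItems("http://myserver");` and, just as importantly, SPSite and SPWeb now get disposed the same way every time.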

Additional Reading

Clear the Flash Cache aka Local Storage

You’re probably used to the idea of session state and cookies and how to clear them using your favourite toolbar. Did you know the Flash plug-in also has its own client-side cache called Local Storage? Just like any other cache, stuff will occasionally get stuck in Local Storage and you may need to clear it out.

You can control your local storage settings by right-clicking any Flash movie and choosing Settings…, but as far as I know you can’t clear cached data using this approach. To comprehensively clear the Local Storage cache I use the web-based Settings Manager provided by Adobe.

Saturday, 21 March 2009

Putting Creative Agencies on Notice

Working across a number of highly-visual, rich, and interactive web sites, I’ve had the, ahem, delight of working with several of Australia’s big creative design firms over the years. I won’t name names.

These guys have variously been tasked with building good looking sites and funky widget things; their modus operandi involves mood boards, wireframes, and giving themselves stupid titles like Web Maven and Producer. They seem to rate their development ability on the basis of their ActionScript skills and their capability (I’m guessing) to offload—er, subcontract—work to an Indian sweatshop doing MOSS and AJAX and/or hire junior devs with just enough aptitude to fool most people. Naturally they’re happy to charge thousands of dollars for something like an email template—but they’re used to that because they all come from over East. Marketing people love them because they use the words “digital” and “creative” and have beards.

Well, Mr. Creativity, I’ve got news for you: your development skills suck and the recession economy means most organisations will be expecting full value for the outlandish buck you charge. In other words, you’ll need to start delivering more than spiel because I’m onto you.

Some of the experiences I could recount here would come across as unbelievable but here are a few classic examples nonetheless…

Creative Agency Blunder #1

We’ve been battling it out with a particular agency from Sydney for a year now just to get a reasonably simple data-driven application built that would integrate with our existing mapping solution (which the same firm built in 2007). When we first went back to these clowns the same guys who did most of the work back in 2007 were still around and picked up the new bits… before they resigned. Fair enough, people leave places. The only problem was the application was largely built in Flash and we’d already started integrating; you’ve already read my comments on the development abilities of these agencies (why can’t they just stick to being creative? They’re good at that!!) so needless to say the code was buggy and half the functionality requested hadn’t been implemented (spec? What spec? There never was a spec…).

Back and forth we went with the replacement creative team while they effectively rewrote the existing application in HTML and jQuery, retaining bits and pieces of the Flash here and there; deliveries kept coming, poor Luke in my team kept re-integrating, bugs kept getting found, and no one had the guts to can the project.

Naturally we had a small number of change requests as the process trundled along and we were happy to pay for those changes. But the real punch line was when we received a bill for bugs they themselves had introduced! No joke—and this after significant delays brought about only by their incompetence.

Creative Agency Blunder #2

Then there’s our MOSS developer friends, also out in Sydney. Initially they requested remote access to our MOSS environment so they could develop yet another flashy but very simple applet. I declined that one as we hadn’t worked with them previously at the technical level and there’s no way I could trust them to behave within our constantly moving code base—and that’s apart from figuring out how to expose TFS and all other bits of our environment externally.

These guys actually produced a “spec” for us, which was reassuring; it was one page, of course, but the five bullet points covered largely what would be required and I made the silly assumption Marketing, as the group managing the contract (don’t ask), would have a firm grasp on things from their end. The spec basically said “web part”, “rss feed”, “web service you’ll write”, and some mention about Flash/Silverlight/AJAX. The price tag would be 10k+ and we would write all of the backend functionality in-house so they couldn’t botch it up and we could reuse it for another part of the site. Put it this way: all they had to build was a web part with some basic animation that pulled a small amount of data from a web service we would expose—nothing more.

The first delivery—after they emailed me to figure out how to invoke one of our web services—included a user control and instructions on how to install Son of Smart Part. The control was also built to rely on session state, which I believe is a joke for any big web site (the widget would be going on our homepages… and we’re sitting behind two layers of caches, which was made clear to them). Finally, all the CSS and JavaScript was bunged into the control itself instead of sitting in external files—surely they can at least do that bit well?!?

I sent that version back before we even attempted to integrate and they responded a few days later with v2. Rewritten as a web part, I passed it along to Matt in our dev team for integration and he soon noticed basic functionality was missing—all of the key wiz bang, essentially. As Matt had already undertaken some work to integrate the thing, we sent it back with the request they work from our modified code base as much as possible, to which they agreed.

v3 came back to us the afternoon before launch day with none of our modifications but we progressed re-integration anyway—the turnaround time to raise this and wait for a reply of some form was unacceptable. Did I mention when they “tested” version 3 before sending it our way they forgot to switch over to calling our web service and seemed to be astounded at the old data they were seeing—from their servers!!! Lo and behold, this version incorrectly addressed what was missing from the previous version and broke another aspect of the control.

Sigh… by this point we’ve spent as much time attempting to integrate the thing as it took the agency to build it and it’s still nowhere near where it needs to be. Did I say how simple this was supposed to be?

My advice to these agencies is as follows:

  • Define a technical specification that you understand
  • Don’t deliver subpar code—test to make sure it meets the requirements and delivers basic functionality expected by any normal web user. Value add in this area.
  • Don’t charge your customers for your mistakes
  • Reduce your rates because you’re not worth what you’re charging
  • Get with the times: Flash sucks big time and Silverlight/AJAX will slowly kill it; do what you do best (design) and hire people with the skills to deliver in whatever technology set you’re selling—not inner-city Photoshop monkeys who think they’re developers because they can write bad JavaScript.
  • You’ve got to communicate to integrate
  • Stop giving yourselves stupid titles you can’t back up

Monday, 16 March 2009

Presenting at the Next SharePoint User Group Meeting

Just a reminder I’ll be delivering part two of my two-part presentation on how we do web content management on MOSS 2007 this Tuesday at 12:30pm. We had a great turnout for the first presentation so it will be interesting to see who comes back!! Honestly, it’s amazing how many of you SharePoint/MOSS guys I’ve met through the first presentation and I’m really looking forward to bumping into a few more of you—please come up and say hello!

The blurb on the user group site is a straight copy of the part one blurb so here’s what I’ll be talking about tomorrow: was one of the first public-facing MOSS 2007-based internet sites launched in Australia and is billed as the Western Australia Tourism Commission’s flagship web site. Two years on and thirty MCMS 2002-based tourism sites are now being migrated to the MOSS 2007 platform. In the second segment of this two part presentation, Michael Hanes, the Development Coordinator/Tech Lead at Tourism WA, talks about the backend MOSS environments. In this presentation Michael presents the existing and replacement hardware environments, virtualisation, environment structure, farm configuration, security, site collection structure and variations, performance, tooling, content delivery (Akamai), and content deployment.

Jeremy’s aiming to record the presentation again so, all being well, the part two webcast will be available after the event in case you’re unable to come along.

Here’s part one in case you missed it. See you down there for part two!!

[Update: a vodcast of the presentation is now available here with a PDF of the slides and notes here.]

[Update: the original PowerPoint deck is now available.]

Thursday, 12 March 2009

The Summary Paper

I’ve got a favourite new business document: I call it the summary paper and it reminds me a lot of the book report format from third grade. I’ll claim to have invented my specific version, but I’m sure there are different names for this document depending on the project management/software development/business model you’ve adopted; no matter…

So what are the summary paper’s defining features, you ask? The summary paper is:

  • About something really important: The summary paper exists to feed into a meeting; if someone’s called a meeting, hopefully the subject matter is sufficiently important to warrant having everyone in the same room not doing other work.
  • Broad: Include at least a brief context or history of the topic to bring readers to the same level and quickly address any knowledge gaps.
  • Specific: With the context out of the way, the remainder of the paper should be exactingly specific—as always, qualify and quantify wherever possible and prefer well-conceived charts, diagrams, and images that can stand on their own or at least significantly augment the text. Most importantly, the summary paper should be about one single thing and that subject must inform the entire report; if something isn’t relevant to the main subject, it belongs in a different summary paper. Specificity in many cases equates to authoritativeness.
  • Short: The whole point is to create something approachable and easy to digest between tasks or on the train ride home. No one really cares about what you do, but they care even less about that fifty-pager no one will ever read because they don’t have the time or interest. Shortness is goodness because a short summary paper is also easy to adapt to changing information; when it comes time to discuss the paper in a meeting it’s easy to say “let’s take five minutes to read through this document.” Simply put, a short paper is more likely to get read. Assume your readers already know something about the subject. Aim for one to four pages and print it double-sided.
  • Enticing: Print in colour and come up with a precise title that tells the reader exactly what the document is about.
  • Textual: Reserve bullet points and tables for short lists and data. The summary paper is meant to be read, so write sentences and paragraphs grouped under headings. Avoid complex tables, tables with lots of text, and bullet points in tables. Number and call out all figures in the text.
  • Fast: Write it quickly before you lose focus and forget key points.
  • Timely: Focus on the current state of play and how things are now—not how they used to be or how they might be in the future. As a snapshot in time, ensure the data presented is current to within a few days.
  • Decisive: State problems clearly and write in plain terms. Don’t dick around with fancy vocabulary, wordiness, and bad writing. Do use a spell checker and proofread, proofread, proofread. Use an appendix of terms and abbreviations (and even synonyms) if required. Use an appendix for references if required. Don’t bog down the first few pages with titles and logos and document controls and sign-offs and references and disclaimers; do keep it simple: include the author’s name(s), the date, and a simple version number (eg. v1.0, where the 1 is the major version and the 0 is the internal draft version). Include page numbers and numbered headings for ease of reference during discussions. Use consistent formatting to create meaning.
  • Open: Encourage feedback and be willing to incorporate ideas and suggestions—v1.0 should never be the final version. Incorporate feedback as early as possible.
  • Owned: Limit the number of authors (preferably to a single person) to encourage ownership, and clearly identify the person responsible for making updates. If the paper is eventually earmarked for distribution to senior management but was initiated by a junior, allow the junior to retain ownership—especially if continual feedback looping starts to happen. Ensure whoever initiates the paper has an existing understanding of the subject and remains engaged as discussion evolves.
  • Complete and accurate: Avoid TBDs. Exclude placeholder sections containing no content. If critical information is required for completeness, find and validate that information. If existing information is unclear, clarify it through research (speak to others, do some reading, experiment).
  • Actionable: List possible solutions—don’t wait for that to happen at some future meeting. Identify, by name, the individuals required to provide information or complete tasks.

I love these papers. My experience with this approach so far has been to say to someone arriving at my desk “here, read this” and they actually do read it (sometimes standing right there) and it results in the flow of ideas I can use to make the paper better. I’ve easily got ten to twenty individuals feeding into a single piece of work and these briefs are the fastest, most accurate way to turn discussion into action.

Tuesday, 10 March 2009

Approaches to updating content types and lists created via features

This is a bit of a placeholder article for now to bring together useful links about how to update content types and lists defined in a feature. Modifications to list instances—and propagating those changes to content created on the original instance—are notoriously difficult, but it seems it can be done.

Monday, 2 March 2009

Perth SharePoint User Group Webcast Q&A

Jeremy sent through a handful of questions from the February PSPUG webcast so I thought I would answer them here. Thanks to Joshua for these awesome questions!

Q. What decisions did you make early that you wish you could change now? Or what would you do differently if you could start over?

A. Where to begin?! We’ve been lucky, in one sense, having the option to rebuild the partner web sites on a separate platform. At that time, was up and running and we pretty much did start over.

On the side, we made a very conscious decision soon after launch to move from SharePoint Designer and master pages/layouts held in the content database to a solution/feature-based deployment situation. Moving away from SPD is one of the best things we’ve ever done. We’ve currently got one feature for deploying most things on the site but there are cases where it would have been useful to componentise some aspects.

The biggest issues we face currently, off the top of my head, are as follows:

  • Too many page layouts: I mentioned the concept of page layout sprawl in the presentation; we’ve locked ourselves into a situation where content editors love being able to easily create a new page with all the page components they need for that section of the site, at the cost of flexibility from their end and maintainability from the dev perspective. We’re revisiting this now and looking at ways a content editor can create a new page from a smaller selection of layouts and have the necessary web parts, specific to that subsite, added automatically. The Andrew Connell way of doing this through the AllUsersWebPart node in manifest.xml doesn’t work very well in this case as it’s too coarse and results in duplicate web parts every time the related features are activated. I imagine we’ll probably end up using the SPLimitedWebPartManager to do this but I’m not yet sure how!
  • Reliance on user controls over web parts: This very much ties in with my first bullet point. User controls are easier to develop because you’ve got real markup to play with and they help construct a tight, enforceable template, but you’re trading away configurability and the web part storage mechanism by going down that path. The partner sites are based solely on web parts and they work quite well in that environment, where different partner sites want web parts in different places. Writing web parts that are centrally configured is also an interesting twist to the web part approach.
  • Separate authoring farm: I’ll be speaking more about this in part two of my presentation, but there was a perceived need (on recommendation from MS and for records-keeping regulations) to create a separate farm for content editing activities; content would then flow happily from authoring to production using the content deployment tools. Yeah right. In reality we have no need for a separate farm and with a lot of new hardware going in shortly we’ll be looking to merge the authoring and production farms. We’re doing this successfully with the partner sites.
  • Two code bases: We maintain two separate Visual Studio solutions, one for and one for partners. The partner site development effort taught us how to really use the SharePoint platform as a site provisioning platform and while and the partner sites are quite different, I’d like to see the two solutions merged so becomes just another partner site, perhaps with its own special set of master pages/layouts.
  • Lists: Some things started out as lists on but we drifted away from lists for a while for performance and cacheability reasons. We’ve worked through those issues now and use lists successfully on the partner sites to simplify per-site, content editor-managed configuration, and could easily apply that knowledge back to
  • Abstraction: The SharePoint learning curve coupled with my predecessor’s minimalist approach to all things code means our code base doesn’t abstract access to the platform. As a result, it’s currently up to every developer to understand SharePoint integration and things like how to correctly dispose of unmanaged objects. Good stuff to know, no doubt, but it makes life more difficult for a new developer coming on board who hasn’t worked with MOSS, and means different devs end up doing things differently, which complicates maintenance.
  • Variations: The variations piece seemed appropriate at the time and the site structure reflects our variation structure. Variations actually work, but they work at too high a level for our specific needs. Some of our pages are not to be overwritten while others should be, and unless you’re using workflow to stop it happening, the variations mechanism will simply copy the entire site (AFAIK—it’s been a long time since we last looked at this one!). In reality, while our marketing division likes their language-based site structure, they’re not willing to maintain different content for each site, so it’s complete overkill.
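For what it’s worth, here’s a rough sketch of how SPLimitedWebPartManager might be used for the page layout problem in the first bullet. The page URL, zone ID, and choice of web part are placeholders rather than anything from our code base, and this obviously needs the SharePoint assemblies to compile.

```csharp
// Sketch only: one way SPLimitedWebPartManager might add a default web part
// to a newly provisioned page. "Pages/default.aspx" and "RightZone" are
// placeholders for a real page and a zone defined in its layout.
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;

public static class PageProvisioner
{
    public static void AddDefaultWebPart(string siteUrl)
    {
        using (SPSite site = new SPSite(siteUrl))
        using (SPWeb web = site.OpenWeb())
        {
            SPLimitedWebPartManager manager =
                web.GetLimitedWebPartManager("Pages/default.aspx", PersonalizationScope.Shared);
            ContentEditorWebPart webPart = new ContentEditorWebPart();
            webPart.Title = "Welcome";
            // Add to the named zone at index 0 and persist the shared change.
            manager.AddWebPart(webPart, "RightZone", 0);
            manager.SaveChanges(webPart);
        }
    }
}
```

Running something like this from a feature receiver, rather than relying on AllUsersWebPart, would at least make it possible to check whether the web part already exists before adding it.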

Q. Could you explain in more detail the search tool you built? Seems to have a nice combination of BDC [Business Data Catalog] and SQL features.

A. I was hoping to have our “search dude” along to speak on this subject but unfortunately he was too busy learning to fly—to give you an idea of his calibre. I’ll tell you what I know.

I may have briefly mentioned in the presentation that we started out, on Dimension Data’s recommendation, using MOSS search and the BDC. It’s worth pointing out we also used the SharePoint v2 search bits in conjunction with MCMS 2002 on the old sites so it seemed like a logical step forward. For the record, we’re indexing MOSS page content, tourism product in the ATDW database, and .pdfs.

With the weight of many different tourism operators and industry bodies seeking to influence their ranking in our search results, our search requirements are convoluted. As a rough example, we always display pages first, followed by accredited tourism operators (randomised), followed by non-accredited operators (randomised), followed, until recently, by non-members (randomised). In many cases, this resulted in the need to boost certain keywords, boost individual results in special circumstances, and “band” results. Then there’s things like B&B versus Bed and Breakfast, returning accurate results for “perth hotels five star”, and so on.
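As an illustration only (this is not our actual implementation, and the band names and Result type are invented for the example), the banding-plus-randomisation idea can be sketched in a few lines of C#:

```csharp
// Illustrative sketch of "banded" search results: order by band, then
// randomise within each band so no operator consistently outranks its peers.
// Pages keep their incoming (relevance) order because LINQ's OrderBy is stable.
using System;
using System.Collections.Generic;
using System.Linq;

public enum Band { Page = 0, AccreditedOperator = 1, NonAccreditedOperator = 2, NonMember = 3 }

public class Result
{
    public string Title { get; set; }
    public Band Band { get; set; }
}

public static class SearchBander
{
    public static List<Result> BandAndShuffle(IEnumerable<Result> results, Random rng)
    {
        return results
            .OrderBy(r => (int)r.Band)                           // pages first, then operator bands
            .ThenBy(r => r.Band == Band.Page ? 0 : rng.Next())   // shuffle within non-page bands
            .ToList();
    }
}
```

Keyword boosting and the special-case overrides mentioned above would sit on top of this, adjusting a result’s band or its position within the band.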

The guys working with the Beta 2 TR found it didn’t provide the customisation we needed and had some major memory management issues; despite assistance from the SharePoint Ranger Team, we were soon looking at alternatives. The Google search appliance was examined but we found even Google’s search algorithm wasn’t suited to our particular business domain. We haven’t revisited the BDC or Google since, so these comments should not be considered a reflection on the current products.

With the BDC bug out of our bonnet, it was time to look at a custom build and that was accomplished by following the BDC model and adjusting it to our needs. The solution we’ve got in place now is a three-part system comprising a crawler, a SQL Server 2005 database which serves as an index, and a presentation layer built with web parts. The crawler is a console application that uses the MOSS API to iterate over the SharePoint sites it’s configured to crawl; it also has separate functionality to crawl the ATDW database. The crawler can do full or incremental crawls and the latest build has been working flawlessly.

The crawler doesn’t duplicate page and product record data but collects specific, searchable data that is then stored in the database. The database structure was originally de-normalised but performance testing during the latest rebuild found performance was better when the data was restructured according to typical relational database design principles. The database tables are also full-text indexed and a number of stored procedures expose results to the application using the business logic discussed previously.

While this solution meets our specific needs, it must be stressed the database structures reflect the things we index: pages, tourism products, and files. In other words, this solution works really well for us but it’s not a rebuild of the BDC and pointing it to a different LOB database would require a significant amount of rework.

Q. Could you profile a typical content author / publisher? What training do they do?

A. We’ve got two different types of “typical” content editor.

On the side, content editors have full control over the site (the security model is admittedly weak) and have a marketing/digital marketing background. These guys are the business owners of the site and they’ve been working with the platform and the site for a couple of years. They’re pretty comfortable creating new pages from an overly large selection of layouts, checking out an existing page, adding reusable content and rich text, checking in/approving/publishing. Basically, they’re young, web-savvy types who call themselves geeks because they know how to cut basic HTML and use Flickr ;-)

We used to have a dedicated trainer in house (no more) and while some related training materials were prepared and delivered at the time, I think the marketing team now learn it as they go and share information between themselves.

On the partners side, the experience of these content editors varies widely. Some may have worked with MCMS previously so they’ve got some preconceived ideas about how content management systems work (and how responsive they should be!). These guys all work off-site but are supported via a 1800 number and our four-person Service Delivery and Content Management team—effectively a content helpdesk. The partners range from youngish types focused solely on maintaining content to older volunteers; they all have different levels of experience using the internet. SDCM therefore train all new partners, bringing them in and going through everything they need to know to maintain the content on their sites. This includes page basics as well as managing list content, accessing stats, and so on.

Hope this was useful… any additional questions posted as comments shall be addressed here.

Sunday, 1 March 2009

Unable to display this Web Part error and Alternate Access Mappings

I ran into an unusual little problem this afternoon while playing with an unusual environment configuration. I say the environment configuration is unusual because the host that contains my Basic MOSS installation isn’t a domain member; that means no DNS and local security accounts—ick. The problem itself is actually quite common but my solution, given my environment setup, is a little unusual, at least for me.

Before diving in, a bit of background: I’m running an x86 Windows 2008 VM on a x64 Windows Hyper-V host machine. The VM has a normal Windows computer name, MOSS is installed in its basic configuration (SQLExpress, using local accounts), and there’s no domain controller or DNS server in sight. I’ve got a simple web application created on port 30000 running as Network Service and extended into the Internet zone with anonymous access configured. A site collection was created before extending the web app and it’s using the Publishing Portal template. Everything is working fine across both sites within the VM.

The problem arose when I decided to run a quick sanity check and access both sites from another machine on the network. After allowing access to both ports in the Windows Firewall, I was able to browse most of the default page, although with no DNS in place and no hosts file entry I naturally had to browse to the VM using its IP address. I was surprised to get anything back, assuming SharePoint wouldn’t be happy with the IP address-based request. I say I was able to browse most of the page because the empty Press Releases content query web part was coming up with the ol’ favourite:

Unable to display this Web Part. To troubleshoot the problem, open this Web page in a Windows SharePoint Services-compatible HTML editor such as Microsoft Office SharePoint Designer. If the problem persists, contact your Web server administrator.

We normally get this very error at Tourism because deployment stuffs up file system permissions; to fix that we grant Authenticated Users Read & Execute permissions on the web.config file, the /bin directory, and everything in the /bin directory for the authenticated and anonymous sites. I tried that first; the problem went away for a minute and then came back as I reset IIS and fiddled with the firewall.
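Those grants can also be scripted rather than clicked through. A sketch using icacls on Windows 2008, where the paths assume the default virtual directory layout for a port 30000 web application and will differ per environment:

```shell
rem Sketch only: adjust paths to your web application's virtual directory.
rem Grant Read & Execute on web.config:
icacls "C:\inetpub\wwwroot\wss\VirtualDirectories\30000\web.config" /grant "Authenticated Users":(RX)

rem Grant Read & Execute on /bin and everything in it (OI/CI = inherit to files and folders):
icacls "C:\inetpub\wwwroot\wss\VirtualDirectories\30000\bin" /grant "Authenticated Users":(OI)(CI)(RX)
```

Repeat for each extended site’s virtual directory.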

Weirdness. I don’t like weirdness.

Remembering my original surprise, I recalled I was accessing the sites using an IP address instead of the computer name and this ended up as the cause of the problem; an error event in the Application event log confirmed this:

Invalid URL:  You may also need to update any alternate access mappings referring to http://MyComputerName:30000. 

To get things working, I dropped into Central Admin/Operations/Alternate Access Mappings and added a new internal URL for both zones:
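The same AAM change can be made from the command line with stsadm if you prefer scripting it. A sketch only; the IP address is a placeholder for the VM's actual address, and you'd repeat the operation against the extended web application's URL for the Internet zone:

```shell
REM Add the IP-based URL as an internal URL for the Default zone
REM (192.168.0.10 is a placeholder; substitute the VM's actual IP)
stsadm -o addalternatedomain -url http://MyComputerName:30000 -urlzone Default -incomingurl http://192.168.0.10:30000
```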

Et voila! Fixed!

Well, nearly. This "fix" now prompts me to authenticate twice when browsing the authenticated site off the VM: once for the IP address and again for the host name. From that point my browser's address bar also tells me I'm accessing the site via the host name. Not sure what magic the AAM setup is doing there, but it's a minor niggle.

It’s worth noting this could also be resolved with a hosts file entry on the client machine mapping the VM’s IP address to MyComputerName.
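On the client machine that entry would go in %SystemRoot%\System32\drivers\etc\hosts and look something like this (the IP address is a placeholder; use the VM's actual address):

```
192.168.0.10    MyComputerName
```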

And it’s also worth noting you’ll get this web part error for many other reasons; remember, my environment configuration today was unusual!


Opening a Port in the Windows 2008 Firewall

When doing quick and dirty MOSS development in my dev environment, I favour creating new web applications on a non-standard port over using port 80 and host headers. Why? I’ve been working that way for a long time and it’s just easy… no other reason, really!

Because I’m now running on a non-standard port, the Windows 2008 firewall won’t like me anymore. And to tell you the truth, while I like the idea of a strong, configurable firewall baked into the operating system, I’m not up to speed on its intricacies… so the feeling is mutual. Nevertheless, I’ve got to deal with this beast and expose access to my new web site. I bounce back and forth between my virtual server and other machines so having the site accessible is handy.

My previous encounter with this firewall was to enable echo (ping) requests, which I ended up doing through the command line:

netsh firewall set icmpsetting 8 enable

Don't forget to check out my post on enabling PING in Windows Server 2008 R2.

Nice—but today I want to play with the UI.

The Windows Firewall UI is accessible from the Control Panel menu as Windows Firewall and from the Administrative Tools menu as Windows Firewall with Advanced Security. The Control Panel applet is very similar to the regular old Windows Firewall applet (think XP) so choose the Administrative Tools version—it’s got all the goodies.

From that point it’s all about rules, and you’ll see Microsoft has taken the liberty of setting up lots and lots of them for you (paranoid?). You could potentially modify one of these existing rules, but while mucking around I noticed some are “predefined” and cannot be changed (in particular, see the Inbound Rules/World Wide Web Services (HTTP Traffic-In) rules; there are two of these, one for HTTP on port 80 and one for HTTPS on port 443).

So that’s the background, now here’s how to add a new rule for your new web application:

  1. On the Inbound Rules node, right-click and select New Rule… and the New Inbound Rule Wizard will fire up
  2. On the Rule Type screen, select Port
  3. On the Protocol and Ports screen, ensure TCP is selected and add your port(s) to the Specific local ports list
  4. On the Action screen, ensure Allow the connection is enabled
  5. On the Profile screen, select all or whichever profiles apply to your environment (Private is probably fine)
  6. On the Name screen, supply your new rule with a name and a description for future reference. Finish out the wizard.
Or make life easily repeatable and use the command line:

netsh firewall add portopening protocol=TCP port=30000 name="SharePoint 30000" mode=ENABLE scope=ALL profile=STANDARD
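As an aside, the old netsh firewall context is deprecated on Windows 2008 in favour of netsh advfirewall, which maps more directly onto the rule-based UI above. The equivalent rule there would look something like this (the rule name is arbitrary):

```shell
REM Allow inbound TCP traffic on port 30000 via the newer advfirewall context
netsh advfirewall firewall add rule name="SharePoint 30000" dir=in action=allow protocol=TCP localport=30000
```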