Project Tracking with Team Foundation Server 2010

November 23, 2009 3 comments

Microsoft recently released the second public beta of the 2010 release of its developer toolset, Visual Studio. A key plank in the Visual Studio 2010 story for development teams is their Application Lifecycle Management tool, Team Foundation Server (TFS).

With this release, Microsoft have made some significant changes to the way in which TFS supports project management, particularly with respect to Agile projects.

With TFS 2010, Microsoft have decided to place all the Process Guidance documentation online instead of embedding a copy of it in each individual project’s portal site. So, rather than just duplicate Microsoft’s content here, I’ll provide links to it from relevant points, and instead attempt to provide my own interpretation of what it means, and how to use the process.

I’m not a Project Manager, nor a process expert, so why am I writing about this?

Ok, so I might not be a Project Manager by trade, nor am I PMP or Scrum certified or anything like that. What I am is someone who has spent many years in software development, over a range of platforms, and leading projects both large and small. Over that time I’d like to think I’ve learnt a thing or two from my experiences.

This is my attempt to take what Microsoft are doing with Visual Studio 2010 and describe how I think it should be used. What I like, what I don’t like.

I’m learning as I go with these products, just like everyone else is. I’m hoping to gain a better understanding of the practical application of these new tools along the way, and that by sharing my experiences on this blog, with a bit of luck, I might just help you to use VS2010 and TFS better too.

What am I going to cover here?

Right, with that out of the way, what am I going to cover here? Well, let’s consider the make-up of a typical software development team. Over the years, I’ve worked on a range of projects varying in both team size and project scope. I once spent 18 months or so working on a project which only had 0.75 of a full-time person employed on it. Bizarrely, I wasn’t the only person working on that particular project!

I’ve also been involved in projects that have involved upwards of twenty people, spread across six separate locations, in three different countries. But the one thing they all have in common is that the roles on the project can be broadly categorised into about four groups.

  • Project management
  • Business representatives
  • Developers
  • Testers

That’s not to say that, for example, if you’re a tester you won’t be interested in the developer information. But I am going to focus on the day-to-day uses of TFS for each of these team members.

It’s worth noting again that I’m talking here about project tracking with TFS, so I’ll be writing specifically about those aspects of TFS that relate to each of these roles with respect to progress through a project. The features I’ll be discussing in this series will centre around work item tracking, reporting, and planning.

First up, what is there in TFS2010 that will mean that Project Management will want to use it?

Reboot!

November 17, 2009 Leave a comment

It’s been a while since I added to my series on VSTS 2010 in action, and a lot has happened since I posted Part 2.

For starters, VS2010 Beta 2 has been released. This refresh contains much more content around the Process Guidance for the MSF 5 process templates, and so I’ve taken a slight pause in transmission on this series in order to absorb this and to reboot, if you like, my series on VSTS2010. When I post again, I’ll be using, and referring to, the features in Beta 2.

This does mean that I’ll go back over what I’ve already covered too, and update it for the new bits. Conceptually it’s still the same, but it’s a chance for me to tidy up what I’ve said already so that it actually makes some sense! :D

As well as this, the company I work for, Optimation, has re-launched its web presence, and the new website includes a company blog, which I will be contributing to. In fact, these posts will be submitted to that blog as well, and cross-posted here. I can’t guarantee the order in which this will happen though, so if you’re really hanging out for a new fix of VSTS2010 goodness from me, I recommend you check both places.

Also, as I’m not the only person who’ll be contributing to the Optimation blog, a whole range of content will appear there, from some really smart cookies, so do go and check it out! :)

So, to summarise, I’m re-starting the series, and it’ll be cross-posted (hopefully – depends on approval from the “editorial team” at work) on the blog of the company I work for, Optimation.

VSTS 2010 in Action Part 2 – Project Planning

July 21, 2009 Leave a comment

Welcome to the latest instalment in my series on VSTS2010 in action. So far I’ve covered the process I’m using, which matches as much as possible the development process I use at work. I’ve also talked about the tools I’m currently using at work, and what I hope to discover about TFS 2010. Check out the previous post in the series here. There is also a full list of all posts in the series here.

This time I’m going to cover project planning. Brian Harry has a good overview of the Project Management improvements in TFS 2010 here; I’d suggest you have a read, since he covers some areas you may be interested in which I’m not going to go into, such as MS Project integration.

Our process begins with some project setup tasks, before development really begins. 

  1. Create backlog of user stories
  2. Add any predecessor/successor links
  3. Define areas and assign stories to each area
  4. Rank the stories
  5. Estimate the stories

Then, each iteration/sprint we take a selection of stories from the backlog and implement them. Sounds easy huh? :)

What is a user story? Well, Wikipedia’s definition can be found here, but I like to think of the user story as a way of documenting, from a user point of view, a particular functional requirement that the system must support. So, for example, “As a user I would like to log in to the site, so that I can manage my bank accounts”, might be a user story for an internet banking website. Microsoft’s documentation for MSF 5 is still being developed, but they have some good material on User Stories here.

One of the quirks of the MSF Agile process that came out of the box with earlier versions of TFS is that it used “Scenarios” instead of the better known “User Story” or “Use Case” for high-level requirement documentation. This meant that lazy (or busy!) teams would shy away from using it, because they didn’t have the time or inclination to implement their own custom work item type for user stories. Well, as of MSF Agile v5, the agile process shipping with TFS 2010, Microsoft have switched to the “User Story” approach. I’m not sure it’s actually much different in fact, other than the change of name for the work item type, but it’s more consistent with the terminology that agile teams expect.

Here’s what the User Story screen looks like in VS2010:

[Screenshot: the User Story work item form in VS2010]

Some points to note about the user story: the title is pretty much the only required field; the area and iteration path fields allow you to classify stories; there are Stack Rank, Story Points and Risk fields; you are encouraged to enter acceptance criteria with the description; and a number of other tabs support links to other work items such as test cases, tasks and other user stories. I’ll be coming back to this screen to cover my usage of some of these fields in more detail later, so I’ll leave it at that for now.

At this stage in the project, user stories should not be too highly detailed, since they are really intended to be a starting point, something to be elaborated upon down the track, when the time comes to implement them.

Now, usually the initial list of user stories will be defined in a workshop or series of workshops with the customer. Because of this, there will be a point at the beginning of the project where you’re going to have to enter a lot of user stories at the same time. You could do this by clicking the “Team” menu in Visual Studio, and selecting “Add New Work Item > User Story”, filling out the form above, then rinse and repeat for each story. But that’s not going to be very efficient when you’ve got a lot of stories to capture. Fortunately you can enter them in bulk through a number of other means, such as from an Excel spreadsheet, or via the project portal.

I’ve chosen to use Excel to enter the stories into my project backlog. Excel integration is not new in TFS 2010, so you’ll probably be aware of the Excel “Team” add-in that takes care of retrieval from, and publishing to, TFS. I find that the easiest way to establish the integration is to do an initial “push” from TFS to Excel, make some changes, then publish back to the server. With TFS 2010, there is a ready-made spreadsheet that helps you do this for your product backlog. To find it, open the Team Explorer tool window in Visual Studio, expand the tree under your project, and select “Documents > Shared Documents > Product Backlog.xlsm”. This will bring up the Product Backlog spreadsheet.

The worksheet is pretty self-explanatory really, so I won’t go into its usage in much detail. It also has a tab containing instructions for its use. My project’s product backlog, with user stories all entered, estimated and ranked, is shown in this screenshot.

[Screenshot: the product backlog spreadsheet with stories entered, estimated and ranked]

So the first step is to enter all the stories. I’ve gone through and entered not just the title, but also a description, containing the acceptance criteria for each story. It’s important to include this up front, since the acceptance criteria will ultimately drive your implementation and testing.

In order to be able to choose the stories for the first iteration, we have to have our project backlog sorted into some kind of priority order. Generally we’ll need to consider both the importance to the business and system dependencies or importance when we rank stories. So for example, the customer might consider it more important to have the ability for users to make an order in an e-commerce site than for their internal users to be able to update the product catalogue, but if it really makes sense from an implementation point of view to do the product catalogue first then that would likely change the order of precedence for these stories.

I must admit to being surprised that the User Story work item in TFS 2010 does not have a priority field (the backlog spreadsheet has this field, but it’s not present in the User Story work item… :/). So you’ll have to trust me that I did take the business priority of each story into account! :) The other major factors I considered when ranking the backlog were the risk involved in implementing each story, plus each story’s dependencies. One thing that also does not appear in this spreadsheet is the linkages. TFS 2010 supports a number of different types of link, but at this stage I am most interested in the predecessor/successor relationships.

So I’ve gone through all my stories, assigned a Risk rating, and established the links to predecessor and successor stories for each entry in my backlog. At this point I found it also made sense to assign an Area Path to each story, not just for categorisation, but to help me determine the relationships between the stories. You are able to use the backlog spreadsheet to do all of this, except establish the relationships. For this you need to use the Visual Studio tools.

You can run the “Open User Stories” query and use the query results pane to navigate between user stories, updating each story as you go in the lower half of the window. The query results screen has had a bit of a usability overhaul in VS2010 as well, with the addition of buttons allowing you to quickly show/hide the entire results pane or the entire work item detail pane, and to switch between the original over/under arrangement and side-by-side. Below is a screenshot of the results window with the links tab in focus for one of my user stories, showing the predecessors and successors for that story.

[Screenshot: the query results window, with the links tab showing predecessors and successors for a user story]
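As an aside, working out a development order from those predecessor/successor links is essentially a topological sort. Here’s a rough sketch of the idea; note that UserStory and its PredecessorIds list are my own in-memory stand-ins for illustration, not the actual TFS work item object model, and it assumes all predecessors are in the backlog and the links contain no cycles:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical in-memory stand-in for a user story and its
// predecessor links -- NOT the real TFS object model.
class UserStory
{
    public int Id;
    public string Title;
    public List<int> PredecessorIds = new List<int>();
}

static class BacklogOrdering
{
    // Returns the stories in an order where every predecessor
    // appears before the stories that depend on it.
    public static List<UserStory> OrderByDependencies(List<UserStory> backlog)
    {
        var byId = backlog.ToDictionary(s => s.Id);
        var visited = new HashSet<int>();
        var ordered = new List<UserStory>();

        Action<UserStory> visit = null;
        visit = story =>
        {
            if (!visited.Add(story.Id)) return; // already placed
            foreach (int predId in story.PredecessorIds)
                visit(byId[predId]);            // place predecessors first
            ordered.Add(story);
        };

        foreach (var story in backlog) visit(story);
        return ordered;
    }
}
```

In practice I did this ordering by eye using the links tab, but the principle is the same: dependencies first.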

TFS 2010 provides a field on the user story called “Stack Rank” which you use to represent the blended ranking of each story. Taking each story’s risk, area path and dependency graph into account I’ve assigned a development order to each item in my backlog, and used the Stack Rank field to store it. So what I end up with is my backlog sorted in the order I should be implementing each story.
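In code terms, once the blended ranking is stored in Stack Rank, development order is just an ascending sort on that one field. A minimal sketch, using a hypothetical in-memory type rather than the TFS API:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for a backlog entry.
class BacklogItem
{
    public string Title;
    public double StackRank; // lower value = implement sooner
}

static class Backlog
{
    // Stack Rank holds the blended ranking, so development order
    // is simply the backlog sorted by ascending Stack Rank.
    public static List<BacklogItem> InDevelopmentOrder(IEnumerable<BacklogItem> items)
    {
        return items.OrderBy(i => i.StackRank).ToList();
    }
}
```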

Now, I could move straight on to the next step, assigning stories to the first iteration. First, though, I’m going to assign relative sizes to each story using story points. I will explain this in a bit more detail in a future post, but the sizing is important because we can use it to give us a good overview of progress towards project completion, both in terms of burn down and earned value.

So I’ve also done estimates of the size of each story here, from a relative point of view. Note that user stories do not have a field to hold an effort estimate in hours, only the story points estimate. This is because you’ll typically track effort against tasks, rather than stories, and each story should have a number of tasks linked to it. Therefore you can calculate the total estimate hours for each story by adding the effort estimate for all the tasks linked to that story.
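That roll-up is simple to express in code. A sketch with hypothetical in-memory types (the real data lives in TFS work items and their links, of course):

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for a task work item linked to a story.
class TaskItem
{
    public int StoryId;           // the user story this task is linked to
    public double EstimateHours;  // effort estimate tracked on the task
}

static class EffortRollup
{
    // Total estimated hours for a story = the sum of the effort
    // estimates of all tasks linked to that story.
    public static double TotalHoursForStory(int storyId, IEnumerable<TaskItem> tasks)
    {
        return tasks.Where(t => t.StoryId == storyId)
                    .Sum(t => t.EstimateHours);
    }
}
```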

So, with that done, we now have our project backlog of user stories, with each story given a risk, links to any successor and predecessor stories, an area path classification, and a relative size estimate measured in story points.

My next post in this series will cover the next step, which is to assign stories to iterations, so we can begin on iteration 0.

VSTS 2010 in Action Part 1 – Introduction

July 14, 2009 1 comment

As mentioned, I’ve been playing with VS and TFS 2010 beta 1, and since I have a personal project that I can use them on, I’m going to cover my adventures in a series of posts.  

The way I plan to run my project is to follow as closely as possible the actual process we use at work, since one of my goals is to find out how well TFS 2010 fits with our current process out of the box, and to figure out where any gaps might exist. Once I’ve worked that out, I’ll also consider how we might close any gaps, be it through a custom process template, custom or modified work items, custom built reports, or whatever it might be.  

By way of introduction, it would be useful to first cover our current tools and our current process. This should provide some background on where I’m coming from, and hopefully some insight into a couple of the issues that I think are present in our existing toolset in particular.

Current tools

Currently we only use a subset of the functionality in TFS 2008, as we have other software that we use for things like project tracking and team collaboration. For better or worse, I’m going to work through my project in this series using the entire TFS toolset, since my personal belief is that by doing so we’ll get better value for money, as well as enable some key code quality and engineering practices that we currently cannot.

Our current toolset includes a Software as a Service project management/tracking tool called Rally, which has a really nice user interface and supports a variety of different approaches to agile project management. However, since Rally sits not just outside TFS but outside the firewall, there are some interesting problems we face with it.  

We are, in my opinion, also hamstrung by the fact that we cannot link user stories, defects, tests, etc to items within TFS source control, meaning that we lose the benefits of traceability that an integrated solution offers us. That said, Rally does have a good open API which we can, and do, use to integrate with TFS.  

However, these integrations have to be built by hand, which means someone taking the time to build and test them, and we just don’t have the time to do that. Add to this the fact that the API changes periodically, meaning we run the risk of our integrations suddenly breaking. Factor in also that any outage in connectivity means you lose access to the tool, plus the additional license cost (which is considerable with Rally, I’m told), and you begin to see why I’m keen to find out whether we can get by with TFS instead.

As I mentioned, we currently use TFS 2008, but only for Version Control and build. We have one or two projects that make use of the SharePoint integration for collaboration but by and large, projects instead use a Wiki for collaboration, with document storage done on the file system. The Wiki tool we use also incurs a license cost, so I’m planning to evaluate the experience of using the wiki pages built into the TFS SharePoint templates, whilst also using the project portals for document sharing.  

Our current process 

Without going to great lengths explaining our current development process, I’ll just say that it takes elements of Scrum, and throws in bits of a couple of other methodologies as well. We generally describe our requirements with user stories, and elaborate on these by adding tasks to them. Our PM practice lead is keen on two-week iterations, which I find a little short, but not too bad.

At the start of the project we’ll have a workshop with the customer to outline the user stories, so come iteration zero we will have some idea of what we’re aiming towards. We’ll also have the list done in some sort of priority order, so we also know which order we should be tackling the stories in.  

We’ll typically start with an iteration zero, which we would use to do project start up, as well as perhaps a prototype or proof of concept for the project. This is one area I’m looking at doing something a bit different with my project, but I’ll talk about that in a later post.  

Prior to each iteration we’ll do a planning session where we size the user stories using story points, which is a relative sizing method. Again, more on this in a later post. At the planning session we take the stories from the top of the backlog and estimate the amount of effort involved in doing each one. We then take the total number of available developer hours for the upcoming iteration and subtract each story’s hours from the total until we’ve used up all the hours we have available.
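The mechanics of that selection can be sketched like so. The types here are hypothetical, and real planning obviously involves judgement and conversation, not just arithmetic:

```csharp
using System.Collections.Generic;

// Illustrative stand-in for a sized, ranked story.
class PlannedStory
{
    public string Title;
    public double EstimatedHours; // summed from the story's tasks
}

static class IterationPlanning
{
    // Walk the ranked backlog from the top, taking stories until the
    // iteration's available developer hours are used up.
    public static List<PlannedStory> SelectForIteration(
        IEnumerable<PlannedStory> rankedBacklog, double availableHours)
    {
        var selected = new List<PlannedStory>();
        foreach (var story in rankedBacklog)
        {
            if (story.EstimatedHours > availableHours) break;
            selected.Add(story);
            availableHours -= story.EstimatedHours;
        }
        return selected;
    }
}
```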

Before we do the estimating for the next iteration, we first do a retrospective for the previous iteration. This gives us a chance to discuss what went well, what didn’t, and what we’re going to do about it.  

The Project 

My project is focused on building a website for a (currently!) fictional business that uses flight simulators to provide customers with multiplayer simulated air to air combat. The website will have a range of static content on it, but also include some content managed areas. Content management won’t be anything too fancy, though, since the aim of the project isn’t to build a CMS, or even a CM enabled website. The site will also include a blog, and an online store, so that people can purchase merchandise online, to help fund the business and get it off the ground. 

In my next post I’ll start actually talking about TFS 2010, beginning with getting my product backlog set up. Specifically, I’ll cover using VS2010 with Team Explorer, the Web Access tool and project portal, and finally Excel to create and manage the backlog.

TFS and VS2010 – play time!

July 14, 2009 1 comment

Well, I’ve finally got the chance to have a play with Visual Studio 2010, and so I thought I’d give it a decent working over.  

I’ve got myself a copy of the Beta 1 release of VS 2010 Team Suite edition, plus TFS2010. One of my main interest areas is in the use of TFS, and therefore I’m keen to explore the changes and (hopefully!) improvements in the 2010 release of this, as well as Visual Studio itself. Hopefully I can get a feel for the changes in store for us, since we’ve paid for Software Assurance with TFS 2008, meaning we get a “free” upgrade to TFS 2010 when it arrives.

I have a project that I’ll be working through using VS 2010 and .net 4, and I’ll be using TFS for source control, work item tracking/project management, and build services.  

So I’m off to have a bit of a play! :)

Service Factory Modelling Edition – part 2

July 14, 2008 4 comments

In my last post, I went over a number of issues that my project team and I had with the Service Factory Modelling Edition.

One of the issues was with respect to the generated Entity-to-DataContract translator classes, specifically the fact that the generated class contains no “collection” translation method.

Now, it may seem obvious to some that there is a straightforward way to leverage the generated methods to achieve translation of an entire collection of entities to data collection types, or vice-versa, but not everyone finds it so obvious (at least, some on my project team didn’t find it obvious!! :) )

Ok, let’s imagine we have generated a translator class for an Address entity and an AddressDC data contract type. You get the following two static methods (code removed for clarity):

public static BusinessEntities.Address TranslateAddressDCToAddress(DataContracts.AddressDC from)
{
    ...
}
public static DataContracts.AddressDC TranslateAddressToAddressDC(BusinessEntities.Address from)
{
    ...
}

Now, we also have a collection of address entities (stored as a List<Address>), that we want to translate into a collection in our data contract. The data contract has been designed using the modelling tools in the Service Factory, and our AddressDC collection has been modelled as a collection of type List also.

Basically, you loop through the collection you’re translating from and for each record, you call the appropriate translator method, like so:

// addressEntities is our List<Address> of entities to translate
DataContracts.AddressDCColl output = new DataContracts.AddressDCColl();

foreach (Address addressItem in addressEntities)
{
    output.Add(TranslateBetweenAddressAndAddressDC.TranslateAddressToAddressDC(addressItem));
}

Ok, so the other part of this equation is making sure that we factor this well, and don’t write this code out again everywhere we want to use it. The most logical place to put it is a new method on the existing translator class.

Of course the thing to remember with this, and this is the issue I really had with the translator generation in the Service Factory, is that the translator is not a partial class, so we have to add code to the existing code file, and because of that, we need to realise that this code will be “removed” again if the translator ever needs to be re-generated.

Obviously we’re likely to want collection translators that go both ways, so there will be two new hand-coded methods. Therefore our final translator would look something like this (again, some of the code has been removed for clarity):

public static BusinessEntities.Address TranslateAddressDCToAddress(DataContracts.AddressDC from)
{
    ...
} 
public static DataContracts.AddressDC TranslateAddressToAddressDC(BusinessEntities.Address from)
{
    ...
}
public static List<Address> TranslateAddressList(DataContracts.AddressDCColl from)
{
    List<Address> output = new List<Address>();
    foreach (AddressDC addressDCItem in from)
    {
        output.Add(TranslateBetweenAddressAndAddressDC.TranslateAddressDCToAddress(addressDCItem));
    }
    return output;
}
public static DataContracts.AddressDCColl TranslateAddressList(List<Address> from)
{
    DataContracts.AddressDCColl output = new DataContracts.AddressDCColl();
    foreach (Address addressItem in from)
    {
        output.Add(TranslateBetweenAddressAndAddressDC.TranslateAddressToAddressDC(addressItem));
    }
    return output;
}

Please note that this code has been coded off the cuff, and has not been tested, nor even compiled. It’s there just to give you a general idea of what to do.
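Incidentally, since the generated per-item translate methods are just functions, the looping can also be factored once into a generic helper rather than hand-written per type pair. A sketch, off the cuff like the above:

```csharp
using System;
using System.Collections.Generic;

static class TranslatorHelper
{
    // Translates every item in a collection using the supplied
    // per-item translator method (e.g. a generated Translate* method).
    public static List<TTo> TranslateList<TFrom, TTo>(
        IEnumerable<TFrom> from, Func<TFrom, TTo> translateItem)
    {
        var output = new List<TTo>();
        foreach (TFrom item in from)
            output.Add(translateItem(item));
        return output;
    }
}
```

Usage would then look something like `TranslatorHelper.TranslateList(addresses, TranslateBetweenAddressAndAddressDC.TranslateAddressToAddressDC)`, though you lose the strongly typed AddressDCColl return type, so the dedicated methods above may still be preferable.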

Mark

Service Factory Modelling Edition – Part 1

July 4, 2008 1 comment

In my last post I mentioned that I’d be writing a bit about the experiences I’ve had recently with some of the Microsoft Patterns and Practices Software Factories. On a recent project, I made the call to introduce the use of the Service Factory Modelling Edition, the Repository Factory (a.k.a. the Data Access Guidance Package), and the Smart Client Software Factory to my team.

To get the ball rolling, I’m going to begin with a bit of a discussion on some of the issues I found with the Service Factory Modelling Edition.

This is a pretty exhaustive list of the things that tripped us up during the course of the project. I’m not having a moan here, because overall I think it’s a pretty cool tool, and I think the P+P group did a damn fine job of the designers in particular, considering this is pretty much the first version of the tool to include them.

  1. The model designers don’t support cut-and-paste of sub-elements (such as primitive sub-types on a data contract). So if you have a datacontract type, that is used in two different places, you can’t copy and paste, you have to create the second one from scratch.
  2. The designers for data contract and service contract do not allow the user to manually enter type names that differ from the default “System.String” – you have to use the type selector GUI instead (an extreme level of user interface friction). This one caused me some aggravation, since in a lot of cases I wanted the type of a contract member to be something other than string.
  3. When using the WCF Extension, the DataContract designer should support auto-increment or auto-assignment of the “Order” property of sub-types rather than defaulting to 0 for every newly added subtype, since it is necessary to have a unique Order value for every member.
  4. There is no support for nullable types in translators. For whatever reason, the Entity to DataContract type translator codegen and wizard do not like nullable types. I had some other issues with the translators also, chiefly the facts that a) they are NOT generated as partial classes, and b) there is no ability to generate a translator for collections of a given Entity/DataContract type. I’ll cover how I got around this in a later post.
  5. No support for tidily removing generated artifacts – translators, DC’s, Services, etc. There is no easy way to undo a “generate” – you have to manually delete the generated code if you remove or rename an item using the designers.
  6. There is no out-of-the-box support for WCF bindings other than wsHttpBinding and basicHttpBinding. You need to implement an extension to provide this support. This was annoying, but was pretty easy to sort out. I’ll outline this in a later post also.
  7. There is no way to reference types from one data contract model in another data contract model. If you have a data contract type defined in a model and you want to use that same type in more than one place, you can only do so within that model. This leads to some of my models being overly large, with everything on the one model. Alternatively, if you do persist in splitting things up into separate models (which I think makes perfect sense), you end up with more than one copy of the type (one in each model), which is a real problem when it comes to code generation and keeping the two (or more!!) models in synch with each other.
  8. No support for generation of asynchronous client proxy code. At one point I wanted to be able to generate an asynchronous client proxy, but I found that this was not supported in the Service Factory Modelling Ed.
  9. The out-of-the-box service host project is implemented as an ASP.Net web site. There is no support within the service factory for self-hosting scenarios, such as hosting services in a windows service (at least, I never found any). I will also cover my solution to this in a later post.


Again, as I said, I’m not criticising the Service Factory. I think it really made things easier on this project in particular, as we had a very junior developer join the team mid-project (she was from the customer’s development team in fact) and I think that without this tool, she would probably still be trying to get her head around the concepts involved in service development.

Mark
