Automated UI testing of AngularJS applications using TFS – Part III

11 03 2015

In the first post we covered the theory of running E2E tests with protractor. In Part II we created the custom activity to start the automation of this process. This post will finish the automation by integrating the activity and creating a new build.

Configure the build controller

After building the custom activity library, the created *.dll needs to be stored inside the Team Foundation Server. The folder holding the *.dll with the activity must be added to the list of custom assemblies on the build controller; otherwise the build controller isn’t able to use the activity. To define the custom assemblies directory for a build controller you can use the “Team Explorer”. Go to “Builds”, click “Actions” and choose the “Manage Build Controllers…” entry.


The following dialog will be displayed:


Select your build controller and click on the “Properties” button. The following properties window will be displayed:


Here you need to define the version control path to custom assemblies by providing the folder holding the created activity.

Another thing we need to do on the build controller is the configuration of the PowerShell execution policy. By default the execution of PowerShell scripts isn’t enabled. To enable the execution of all PowerShell scripts you can call “Set-ExecutionPolicy Unrestricted” in PowerShell (administrator privileges are required). If you don’t want to enable all PowerShell scripts, have a look at the help about execution policies.

Create the custom build template

Start with the creation of a new build. This can be done with the help of the “Team Explorer”. Inside “Builds” click on “New Build Definition”.


As a start, download one of the existing build templates. This template will be customized and enhanced with the created activity. To download the template, click on “Process” to display the process section. The window shows the build process templates available on the Team Foundation Server. Click the “Show details” button to display the build process template details. The “Download” link should now be visible and you can download a template.


For the moment we can close the new build again (saving it isn’t necessary yet).

The customizing starts with the creation of an “Activity Library” project. Add the downloaded template to the newly created project. Open the file properties of the template inside Visual Studio and make sure that the build action is set to “Content”.


The next step is to add a reference to the *.dll holding our custom activity. Now the activity should be displayed inside the toolbox. In case the custom activity doesn’t show up, customize the toolbox: right-click the toolbox and select “Choose Items…”. In this dialog you can add the *.dll by clicking the “Browse…” button.


Add the custom activity to the build template at an appropriate position via drag and drop. The right position for the activity depends on the template being customized. After the activity has been added, you need to set its properties. The following picture shows a sample configuration. The path to the file depends on the folder structure of your application and needs to be adjusted to the existing folder structure. Provide the path relative to the source folder of the TFS build.


Create the new build for running protractor tests

Start with the creation of a new build. Inside the Team Explorer we click on “Builds” and on “New Build Definition”. In the opened view, in the general section, we provide a name for the new build definition.

Change to the “Build Defaults” section. Choose your build controller and set the drop location for your environment.


Let’s go on with the “Process” section. Open the details of the process template and click the “New…” button to add the customized build process template. Follow the wizard to add the created *.xaml file of the customized template. The template should now appear inside the dropdown for selecting it as the build process file. Depending on the customized build template, different things need to be configured. Inside the used template the project to build and the configurations need to be defined. The configuration is important so that the mstest call can publish to the correct build configuration.


All other options can be changed as you prefer. This shows only a minimal setup for getting everything up and running.

After you have saved the newly created build you can trigger it, and all your protractor tests will be executed (provided everything is configured correctly). To display the results of a protractor test run, click on the test results of the build and the results will be displayed inside the “Test Results” window of Visual Studio.


Note: All the described steps and screenshots are from TFS 2013 and Visual Studio Premium 2013. On other versions some differences may arise.

Conclusion: A couple of things are necessary to automate E2E tests with protractor, but as soon as the custom activity is created it is easy to integrate it into different projects that need automated UI tests for AngularJS. In case you are interested in automating JavaScript tests with Jasmine and Karma, my colleagues (Michael Lehman and Stefan Barthel) created a trx reporter for Karma, too. The process to integrate it is nearly the same; only some minor adjustments are necessary. The karma-trx-reporter can be found at GitHub and the npm package for the karma exporter can be found here.

Automated UI testing of AngularJS applications using TFS – Part II

11 03 2015

In the first article “Automated UI testing of AngularJS applications using TFS – Part I” we started with the theory of running E2E tests with protractor. This post and the third part will show the automation of this process. This part focuses on the creation of the custom activity.

Requirements for the automation of E2E-tests with protractor

Before we start with the creation of the activity, make sure the necessary requirements are fulfilled to get the E2E tests up and running with protractor on an installed Team Foundation Server:

  • On the build controller
    • Visual Studio Premium or higher
    • nodejs
  • For development
    • Visual Studio

Create a custom activity

We start with the creation of a custom build activity to automate E2E testing with protractor. Inside this activity we handle everything that is necessary to run the protractor tests and publish the test results. With the custom activity we are able to integrate the automated protractor tests very easily into different build templates, which greatly improves reusability.

Start with a new project and create an “Activity Library” in your preferred programming language.


Now we can start to create the custom activity. By default a new *.xaml file was created inside the new project. The first thing we drop from the toolbox onto the activity is a sequence. This is the container for all activities needed for running the protractor tests. The following picture shows the necessary arguments and variables which should be created at the level of the sequence.



To setup the arguments and variables additional references are needed. Please add / make sure that the following references are added to the project:

  • Microsoft.TeamFoundation.Build.Activities
  • Microsoft.TeamFoundation.Build.Client
  • Microsoft.TeamFoundation.Client
  • Microsoft.TeamFoundation.Build.Workflows
  • Microsoft.TeamFoundation.Deployment.Workflow
  • Microsoft.TeamFoundation.VersionControl.Client
  • Microsoft.TeamFoundation.VersionControl.Common
  • System.Drawing
  • System.Activities.Presentation
  • PresentationCore
  • PresentationFramework
  • WindowsBase

Note: This list depends on the version of your installed Visual Studio. The references listed above are necessary for Visual Studio Premium 2013. Make sure that the Visual Studio version installed on the build controller is the same as the one you use for creating the activity, to avoid version problems.

The next step is to retrieve the required values from the build to set up the variables. With the help of the “GetEnvironmentVariable”-Activity reading “WellKnownEnvironmentVariables.SourcesDirectory”, the value can be retrieved. The retrieved result should then be stored in the “SourcesDirectory” variable.


The “GetTeamProjectCollection”-Activity should store its data in the “TeamProjectCollection” variable. With the “GetBuildDetail”-Activity we receive the information we need to store inside the “BuildDetail” variable.

In case you cannot find these activities inside the toolbox, you need to customize it. Right-click into the toolbox pane and click the “Choose Items…” menu item. Inside the dialog you can add the Microsoft.TeamFoundation.Build.Workflow.dll. Depending on the Visual Studio version you are using, it can be found at a location similar to “C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies”.

Go on with the nodejs package manager and install all dependencies. For this step we create an “InvokeProcess”-Activity. This activity is used to trigger a PowerShell script which is responsible for installing the required dependencies. The “InvokeProcess”-Activity should be configured so that the standard output is written as a build message and all errors are written as build errors.


The properties of the “WriteBuildMessage”-Activity and the “WriteBuildError”-Activity should be set up as you can see in the following pictures.



In case you want to log the complete console output of the invoked process to the build log, change the “BuildMessageImportance” to high.

The next thing is the configuration of the step itself. A PowerShell script is executed which installs all dependencies (in case they aren’t already installed). Therefore the following properties should be set on the “InvokeProcess”-Activity.


string.Format(CultureInfo.InvariantCulture, @" ""& '{0}' '{1}' "" ", SourcesDirectory + InstallDependenciesScript, SourcesDirectory + ProtractorConfigFile);



The complete script which installs all dependencies can be found here.
The next process step is the start of the selenium server (all protractor tests need an up and running selenium server). Another “InvokeProcess”-Activity is needed to achieve this. Configure this activity as described before; only change the argument property to the following:

string.Format(CultureInfo.InvariantCulture, @" ""& '{0}' "" ", SourcesDirectory + WebdriverManagerScript);

This “InvokeProcess”-Activity calls a PowerShell script which starts the selenium server in the background through the webdriver-manager command. The complete PowerShell script can be downloaded here.

The next activity we need to add is a “Delay”. We need it because the selenium server takes some time to be up and running. A delay of 10 seconds should be enough.

At this point everything is installed and up and running. With another “InvokeProcess”-Activity the tests will be executed. Configure the activity as described before and change the arguments and filename properties.

"\"" + SourcesDirectory + ProtractorConfigFile + "\""



This activity calls protractor with the created protractor configuration file that should exist inside the source control of the TFS. Protractor is executed and runs all the tests provided through the protractor configuration file.

Maybe you need to adjust the path to protractor.cmd, depending on which user the build controller runs as and where npm installed the protractor package.

As described in the first post, the trx reporter is attached through the protractor configuration. Therefore we find a generated trx file with all results inside the working folder. With mstest the results can be published and associated with the current build. This can be done by another “InvokeProcess”-Activity with the following properties.


string.Format(CultureInfo.InvariantCulture, @"/publish:""{0}"" /publishresultsfile:""{1}"" /publishbuild:""{2}"" /teamproject:""{3}"" /platform:""{4}"" /flavor:""{5}""", TeamProjectCollection.Uri, SourcesDirectory + @"\ProtractorTestResults.trx", BuildDetail.BuildNumber, BuildDetail.TeamProject, Plattform, Configuration)


"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\MSTest.exe"

Note: Depending on the version of Visual Studio you have installed, mstest.exe has different arguments available. The publish arguments of mstest.exe are only available if you have installed Visual Studio Premium / Ultimate or Visual Studio Test Professional.

The next step is to read the test output from the trx file and evaluate the results. In case of an error while executing the tests, the build should fail. With an “Assign”-Activity the trx result output file is read and assigned to the “TestOutcome” variable. Set the following properties on the activity to achieve this.




System.IO.File.ReadAllText(SourcesDirectory + @"\ProtractorTestResults.trx")

To evaluate the “TestOutcome” variable an “If”-Activity is needed. If an error is written to the test results, a build error will be published; otherwise everything is fine and nothing needs to be done. The condition of the “If”-Activity should be the following:

!string.IsNullOrEmpty(TestOutcome) && !TestOutcome.ToUpper().Contains(@"RESULTSUMMARY OUTCOME=""PASSED""")
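The condition mirrors a simple check on the trx summary: the build only passes when the result summary reports the outcome “Passed”. As a plain-JavaScript sketch of the same logic (the sample trx fragment is illustrative):

```javascript
// Sketch of the check the "If"-Activity performs: the tests count as
// passed only when the trx summary contains outcome="Passed".
function testsPassed(trxContent) {
  return !!trxContent &&
    trxContent.toUpperCase().indexOf('RESULTSUMMARY OUTCOME="PASSED"') !== -1;
}

var sample = '<ResultSummary outcome="Passed"><Counters total="3" /></ResultSummary>';
console.log(testsPassed(sample)); // → true
```

The build error is published exactly when this check is false, which is why the workflow condition negates both parts.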

Now you are done with the creation of the custom activity. If you like you can download a version of the complete activity here.

In the next blog post the creation of a customized build template and the integration into a TFS Build is described.

Note: All the described steps and screenshots are from TFS 2013 and Visual Studio Premium 2013. On other versions some differences may arise.

Automated UI testing of AngularJS applications using TFS – Part I

11 03 2015

This is the first part of 3 articles describing the automation of UI tests with Team Foundation Server for AngularJS applications. This first blog post explains how AngularJS applications can be tested manually. The automation of these manual steps will be discussed in Part II and Part III.

Why should we write automated tests?

The quality of software applications is more and more important. Customers and users expect a software application that works as expected and without errors. If you can’t provide the quality the users expect, you can be sure they will search for alternatives.
Enterprise environments require certain standards to develop professional software products. Automated testing is one of these standards and helps to achieve the required product quality especially with more and more complex software products. Writing tests is the safety net to make sure that the quality won’t decrease and that all the existing features work as expected, especially when adding additional features to the application.
An automated test harness can reduce the costs for manual testing and can be repeated every time without the need of human testers. Manual regression testing can be reduced to a minimum (only tests that can’t be automated need manual regression testing).
Another benefit of automated tests is that bugs and quality issues can be found earlier. Testing different parameters and edge cases can be very time consuming with manual tests. Automated tests can reduce the amount of time needed and they are repeatable once they are written.

Benefits of integrating automated tests into the build infrastructure

The benefit of integrating the test harness into the build infrastructure is transparency and traceability (especially with continuous integration). Every time a build is triggered, the automated tests provide you with results from the test harness. You can directly see which quality issues the current version of the application has and which change caused an issue. Once all necessary tests are automated to make sure that the software works as expected, you can release a new feature or version as soon as all tests pass. This reduces the need for complex manual tests and saves a lot of time until a new version can be released. The effort and time for manual tests grow much faster than the effort necessary for automated testing. This becomes clearer as the product grows and more and more features must be tested after changing the system / product.

Tools used for automated UI testing of AngularJS applications

In case you want to test an AngularJS application with automated UI tests, you will find the protractor project helpful. Protractor is an E2E (end-to-end) testing framework especially for AngularJS applications. It is built on top of WebDriverJS, which uses the Selenium browser automation framework. WebDriverJS helps you to run tests against a website and interact with the page like a normal user would. Protractor enhances the functionality of WebDriverJS with a couple of AngularJS specifics which improve writing test cases for AngularJS applications.

The workflow of automating E2E tests with protractor and TFS

The following picture provides an overview of how E2E tests can be automated with the Team Foundation Server.


How to write UI tests with protractor

The only thing you need for writing tests is an editor of your choice. The tests are written in JavaScript. The syntax to describe a test comes from the Jasmine framework. With this syntax and the functions from the protractor library you are able to create tests which interact with an AngularJS web page like a user would.

The following test is a sample from the protractor homepage and shows the interaction with the AngularJS homepage.

describe('angularjs homepage todo list', function() {
  it('should add a todo', function() {
    browser.get('http://www.angularjs.org');

    element(by.model('todoText')).sendKeys('write a protractor test');
    element(by.css('[value="add"]')).click();

    var todoList = element.all(by.repeater('todo in todos'));
    expect(todoList.get(2).getText()).toEqual('write a protractor test');
  });
});

The test navigates to the AngularJS homepage. Some input is made with the keyboard and a mouse click is executed. At the end the result of the interaction with the page is asserted to check if adding the element worked as expected.

How to run E2E tests with protractor

To run an E2E test with protractor you need to do the following things:

  • Install protractor & webdriver-manager
  • Start the selenium server (with the webdriver-manager command)
  • The web application under test must be up and running
  • Create a config file for protractor
  • Run the tests by calling protractor with the config file as parameter

Installation of protractor & webdriver-manager using npm (the nodejs package manager) can be done by calling:

npm install -g protractor
npm install -g webdriver-manager

Installing and starting the selenium server can be done by calling the following commands:

webdriver-manager update
webdriver-manager start

Create a protractor configuration file (conf.js) with the following content (change / enhance the spec parameter depending on where you stored your test spec files):

exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['Tests/*.js']
};

Run the tests:

protractor conf.js

Now we are able to run the tests and see the test results on the console from which protractor was called. The next step is to export the test results to a format which is viewable inside TFS and Visual Studio.

Trx-Export of protractor test results

The test results of the protractor run should be transformed into a format which can be stored and viewed by Team Foundation Server / Visual Studio.

Two of my colleagues (Michael Lehman and Stefan Barthel) created a reporter for protractor to report the test results into the *.trx file format. Trx files are viewable inside Visual Studio with the test result window or can be published with mstest as build results of the Team Foundation Server. This reporter can be installed with npm by running the following command:

npm install protractor-trx-reporter

After the installation of the node package, we need to adjust the protractor configuration file by adding the following code.

onPrepare: function () {
    require('protractor-trx-reporter');
    jasmine.getEnv().addReporter(new jasmine.TrxReporter('ProtractorTestResults.trx'));
}

Running the protractor tests again with the enhanced configuration will create the trx-file. This file can be published and integrated on the Team Foundation Server with the help of mstest. The command to do this can be found inside the next post where we start with the automation of this manual process.

If you need more information about the trx reporter you can visit the github page of the protractor-trx-reporter or the npm package site.

Windows developer article online

19 09 2012

As I mentioned in an earlier post, the article about serialization in .NET using MongoDB was published in the windows.developer. Today I realised that the article is available online.

Would be really happy to receive some feedback or discuss the concept with you.


Issue with lists / collections of string

28 08 2012

I did some development for Windows 8 in the last couple of weeks and came across an issue which I want to share with you.

I’m currently working on Windows 8 Release Preview with Visual Studio 2012 RC installed. Hopefully the issue is fixed inside the final version.

Let’s start with some code to demonstrate the issue. Create a simple list with string elements on the view model:

MyList = new List<string> { "Test1", "Test2", string.Empty };

The next step is to bind this property to a GridView. We start by creating a CollectionViewSource which we can bind to a GridView.

<Page.Resources>
    <CollectionViewSource x:Key="MyListViewSource"
                          Source="{Binding MyList}"/>
</Page.Resources>

<GridView ItemsSource="{Binding Source={StaticResource MyListViewSource}}">
    <GridView.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding}"></TextBlock>
        </DataTemplate>
    </GridView.ItemTemplate>
</GridView>

When I executed the code, I expected something like the following result (3 GridView items, 2 with text and 1 with an empty text):


But the rendered result was different:


Every GridView item which is bound to string.Empty is rendered as a black box.

A workaround for this issue is to create a simple class with a string property (don’t forget to update the binding).

public class MyClass { public string StringValue { get; set; } }

MyList = new List<MyClass> {
    new MyClass { StringValue = "Test1" },
    new MyClass { StringValue = "Test2" },
    new MyClass { StringValue = string.Empty }
};

I know this isn’t nice, but it is a workaround until the issue is fixed.

First article on a German .net magazine

28 06 2012

I have written an article together with a colleague from work about serialization in .NET using MongoDB. If you are interested in the article, have a look at the current windows.developer.

The article is in German but I hope many of you will read it and I would be very happy to get some feedback or discuss this article with you.


HACK: Creating triggers for MongoDB

14 05 2012

I visited the MongoDB conference in Berlin. In a talk about tips, tricks and hacks for MongoDB the speaker mentioned a little hack which you can use to create a trigger for MongoDB. I wanted to try it out, because he only described very briefly how to do this in theory.

When you have configured MongoDB to work as a replica set you may have noticed that a new collection called “oplog.rs” is created in the local database. Inside this collection MongoDB stores all insert, update and delete operations which are executed against this replica set (it’s comparable to the transaction log of a SQL Server). The oplog collection is used to distribute all the operations from the primary node to all secondaries. With the help of this collection and a little JavaScript file we are able to create something which behaves like a trigger.

Let’s start with the oplog collection. If you look at an entry from this collection you will see something similar to the following extract.

{
    "ts" : {
        "$timestamp" : NumberLong("5724119038133534721")
    },
    "h" : NumberLong("-7041921609633449468"),
    "op" : "i",
    "ns" : "TestApplication.BlogPost",
    "o" : {
        "_id" : ObjectId("4f7027f0df6e252390d2332a"),
        "Author" : "Test Author",
        "CreationDate" : new Date("Mon, 12 Mar 2012 00:00:00 GMT +01:00"),
        "Comment" : "My Comment"
    }
}

ts: the timestamp. We need the timestamp to avoid processing the same entry twice.

op: the operation. The interesting operations are “i” for insert, “u” for update and “d” for delete.

ns: the namespace (database and collection) where the operation was executed.

o: the object which was created or updated.
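To make the meaning of these fields concrete, here is a small plain-JavaScript sketch (field names taken from the oplog extract above) that classifies an entry by its operation:

```javascript
// Classify an oplog entry by its "op" field; the entry shape
// mirrors the oplog extract shown above.
function describeOplogEntry(entry) {
  var ops = { i: 'insert', u: 'update', d: 'delete' };
  return (ops[entry.op] || 'other') + ' on ' + entry.ns;
}

var entry = {
  ts: { $timestamp: '5724119038133534721' },
  op: 'i',
  ns: 'TestApplication.BlogPost',
  o: { Author: 'Test Author', Comment: 'My Comment' }
};

console.log(describeOplogEntry(entry)); // → insert on TestApplication.BlogPost
```

A trigger script only has to dispatch on exactly this “op” / “ns” information.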

If you need more information about the oplog, have a look at the page about the oplog on the MongoDB website.

Now we create the JavaScript file. This script has a while loop without any option to exit. We want to watch for all changes on the oplog and react to these changes. As long as the script is running we have a behavior similar to a trigger.

Two features of MongoDB are used to allow the execution of this script: tailable cursors and the “await data” cursor option (have a look at the MongoDB documentation if you want further information).

Now have a look at the script and modify and reuse it if you like.

var coll = db.getSiblingDB('local')['oplog.rs'];
var lastTimeStamp = coll.find().sort({ '$natural' : -1 })[0].ts;

while( true ){
    cursor = coll.find({ ts: { $gt: lastTimeStamp } });
    // tailable
    cursor.addOption( 2 );
    // await data
    cursor.addOption( 32 );

    while( cursor.hasNext() ){
        var doc = cursor.next();
        lastTimeStamp = doc.ts;
        printjson( doc );
    }
}

What the current script does is check for operations inside the oplog and print out each oplog entry. Just change the line with the printjson command to perform the operation you want as the result of the trigger. On the line where you initialize the cursor you can enhance the query if you only want to react to update operations, for example.

I have been developing on a project with MongoDB for nearly 1.5 years now and I never came across a problem where I really needed a trigger. I saw a couple of people asking for triggers on different pages and hope I can help some of them with this little hack. Note that this is not tested in high-traffic environments.