Automated UI testing of AngularJS applications using TFS – Part III

11 03 2015

In the first post we covered the theory of running E2E tests with protractor. In Part II we created the custom activity to start the automation of this process. This post finishes the automation by integrating the activity and creating a new build.

Configure the build controller

After building the custom activity library, the created *.dll needs to be stored inside the Team Foundation Server. The folder holding the *.dll with the activity must be added to the list of custom assemblies on the build controller; otherwise the build controller isn’t able to use the activity. To define the custom assemblies directory for a build controller you can use the “Team Explorer”: go to “Builds”, click “Actions” and choose the “Manage Build Controllers…” entry.


The following dialog will be displayed:


Select your build controller and click on the “Properties” button. The following properties window will be displayed:


Here you need to define the version control path to custom assemblies by providing the folder holding the created activity.

Another thing we need to do on the build controller is the configuration of the PowerShell execution policy. By default the execution of PowerShell scripts isn’t enabled. To enable the execution of all PowerShell scripts you can call “Set-ExecutionPolicy Unrestricted” on the PowerShell (administrator privileges are required). If you don’t want to enable all PowerShell scripts have a look at the help about execution policies.

Create the custom build template

Start with the creation of a new build. This can be done with the help of the “Team Explorer”. Inside “Builds” click on “New Build Definition”.


As a start, download one of the existing build templates. This template will be customized and enhanced with the created activity. To download the template, click on “Process” to display the process section. The window will show the build process templates available on the Team Foundation Server. Click on the “Show details” button to display the build process template details. The “Download” link should be visible now and you can download a template.


For the moment we can close the new build again (saving the new build isn’t necessary at the moment).

The customizing starts with the creation of an “Activity Library” project. Add the downloaded template to the newly created project. Open up the file properties of the template inside Visual Studio and make sure that the build action is set to “Content”.


The next step is to add a reference to the *.dll holding our created custom activity. Now the activity should be displayed inside the toolbox. In case the custom activity doesn’t show up, customize the toolbox: right-click on the toolbox and select “Choose Items…”. In this dialog you can add the *.dll by clicking the “Browse…” button.


Add the custom activity to the build template at an appropriate position with drag and drop. The right position for the activity depends on the template which is customized. After the activity is added you need to set its properties. The following picture shows a sample configuration. The path to the file depends on the folder structure of your application and needs to be adjusted to the existing folder structure. Provide the path relative to the source folder of the TFS build.


Create the new build for running protractor tests

Start with the creation of a new build. Inside the “Team Explorer” click on “Builds” and then on “New Build Definition”. Inside the opened view, in the general section, provide a name for the new build definition.

Change to the “Build Defaults” section. Choose your build controller and set the drop location for your environment.


Let’s go on with the “Process” section. Open up the details of the process template and click on the “New…” button to add the customized build process template. Follow the wizard to add the created *.xaml file of the customized template. The template should now appear inside the dropdown for selecting it as build process file. Depending on the customized build template different things need to be configured. Inside the used template the project to build and the configurations need to be defined. The configuration is important so that the MSTest call can publish to the correct build configuration.


All other options can be changed as you prefer. This shows only a minimal setup for getting everything up and running.

After you saved the newly created build you can trigger it, and all your protractor tests will be executed (provided everything is configured correctly). To display the results of a protractor test run, click on the test results of the build and the results will be displayed inside the “Test Results” window of Visual Studio.


Note: All the described steps and screenshots are from TFS 2013 and Visual Studio Premium 2013. On other versions some differences may arise.

Conclusion: A couple of things are necessary to automate E2E tests with protractor, but as soon as the custom activity is created it is easy to integrate it into different projects that need automated UI tests with AngularJS. In case you are interested in automating JavaScript tests with Jasmine and Karma: my colleagues (Michael Lehman and Stefan Barthel) created a trx reporter for Karma, too. The process to integrate this is nearly the same; only some minor adjustments are necessary. The karma-trx-reporter can be found at github and the npm package for the karma exporter can be found here.


Automated UI testing of AngularJS applications using TFS – Part II

11 03 2015

In the first article “Automated UI testing of AngularJS applications using TFS – Part I” we started with the theory on running E2E tests with protractor. This post and the third part will show the automation of this process. Inside this part the focus is the creation of the custom activity to achieve this.

Requirements for the automation of E2E-tests with protractor

Before we start with the creation of the activity, make sure to fulfil the necessary requirements, to get the E2E-tests up and running with protractor on an installed Team Foundation Server:

  • On the build controller
    • Visual Studio Premium or higher
    • nodejs
  • For development
    • Visual Studio

Create a custom activity

We start with the creation of a custom build activity to automate the E2E testing with protractor. Inside this activity we handle everything that is necessary to run the protractor tests and publish the test results. With the custom activity we are able to integrate the automated protractor tests very easily into different build templates. This greatly improves reusability.

Start with a new project and create an “Activity Library” in your preferred programming language.


Now we can start to create the custom activity. By default a new *.xaml file is created inside the new project. The first thing we drop from the toolbox onto the activity is a sequence. This is the container for all activities needed for running protractor tests. The following picture shows the necessary arguments and variables which should be created on the level of the sequence.



To setup the arguments and variables additional references are needed. Please add / make sure that the following references are added to the project:

  • Microsoft.TeamFoundation.Build.Activities
  • Microsoft.TeamFoundation.Build.Client
  • Microsoft.TeamFoundation.Client
  • Microsoft.TeamFoundation.Build.Workflows
  • Microsoft.TeamFoundation.Deployment.Workflow
  • Microsoft.TeamFoundation.VersionControl.Client
  • Microsoft.TeamFoundation.VersionControl.Common
  • System.Drawing
  • System.Activities.Presentation
  • PresentationCore
  • PresentationFramework
  • WindowsBase

Note: This list depends on the version of your installed Visual Studio. The references above are for Visual Studio Premium 2013. Make sure that the Visual Studio version installed on the build controller is the same as the one you use for creating the activity, to avoid version problems.

The next step is to retrieve the required values from the build and store them in the variables. With the help of the “GetEnvironmentVariable”-Activity reading “WellKnownEnvironmentVariables.SourcesDirectory”, the value can be retrieved. The retrieved result should then be stored in the “SourcesDirectory” variable.


The “GetTeamProjectCollection”-Activity should store the data to the “TeamProjectCollection” variable. With the “GetBuildDetail”-Activity we receive the information we need to store inside the “BuildDetail” variable.

In case you cannot find these activities inside the toolbox you need to customize the toolbox. Right-click into the toolbox pane and click on the “Choose Items…” menu item. Inside the dialog you can add the Microsoft.TeamFoundation.Build.Workflow.dll. Depending on the Visual Studio version you are using, this can be found at a location similar to “C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies”.

Go on with installing all dependencies via the Node.js package manager. For this step we create an “InvokeProcess”-Activity. This activity is used to trigger a PowerShell script which is responsible for installing the required dependencies. The “InvokeProcess”-Activity should be configured so that the standard output is written as a build message and all errors are written as build errors.


The properties of the “WriteBuildMessage”-Activity and the “WriteBuildError”-Activity should be set up as you can see in the following pictures.



In case you want to log the complete console output of the invoked process to the build log, change the “BuildMessageImportance” to high.

The next thing is the configuration of the step itself. A PowerShell script is executed which installs all dependencies (in case they aren’t already installed). Therefore the following properties should be set on the “InvokeProcess”-Activity.


string.Format(CultureInfo.InvariantCulture, @" ""& '{0}' '{1}' "" ", SourcesDirectory + InstallDependenciesScript, SourcesDirectory + ProtractorConfigFile);



The complete script which installs all dependencies can be found here.

The next process step is to start the selenium server (all protractor tests need an up and running selenium server). Another “InvokeProcess”-Activity is needed to achieve this. Configure this activity as described before; only change the argument property to the following:

string.Format(CultureInfo.InvariantCulture, @" ""& '{0}' "" ", SourcesDirectory + WebdriverManagerScript);

This “InvokeProcess”-Activity calls a PowerShell script which starts the selenium server in the background through the webdriver-manager command. The complete PowerShell script can be downloaded here.

The next activity we need to add is a “Delay”. We need to add this because the selenium server needs some time to be up and running. A delay of 10 sec should be enough.

At this point everything is installed and up and running. With another “InvokeProcess”-Activity the tests will be executed. Configure the activity as described before and change the arguments and filename properties.

"\"" + SourcesDirectory + ProtractorConfigFile + "\""



This activity calls protractor with the created protractor configuration file that should exist inside the source control of the TFS. Protractor is executed and runs all the tests provided through the protractor configuration file.

You may need to adjust the path to protractor.cmd depending on which user the build controller runs as and where npm installed the protractor package.

As described inside the first post the trx reporter is attached through the protractor configuration. Therefore we find a generated trx file with all results inside the working folder. With MSTest the results can be published and associated with the current build. This can be done by another “InvokeProcess”-Activity with the following properties.


string.Format(CultureInfo.InvariantCulture, @"/publish:""{0}"" /publishresultsfile:""{1}"" /publishbuild:""{2}"" /teamproject:""{3}"" /platform:""{4}"" /flavor:""{5}""", TeamProjectCollection.Uri, SourcesDirectory + @"\ProtractorTestResults.trx", BuildDetail.BuildNumber, BuildDetail.TeamProject, Plattform, Configuration)


"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\MSTest.exe"

Note: Depending on the version of Visual Studio you have installed, mstest.exe has different arguments available. The publish arguments of mstest.exe are only available if you have installed Visual Studio Premium / Ultimate or Visual Studio Test Professional.

The next step is to read the test output from the trx file and evaluate the results. In case of an error while executing the tests the build should fail. With an “Assign”-Activity the trx result output file is read and assigned to the “TestOutcome” variable. Set the following properties for the activity to achieve this.




System.IO.File.ReadAllText(SourcesDirectory + @"\ProtractorTestResults.trx")

To evaluate the “TestOutcome” variable an “If”-Activity is needed. If an error is written to the test results a build error will be published otherwise everything is fine and nothing needs to be done. The condition of the “If”-Activity should be the following:

!string.IsNullOrEmpty(TestOutcome) && !TestOutcome.ToUpper().Contains(@"RESULTSUMMARY OUTCOME=""PASSED""")
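For illustration, the same check can be sketched outside the build template. The following JavaScript mirrors the logic of the condition (the function name is hypothetical; the build template itself uses the expression shown above):

```javascript
// Sketch of the build-failure check (hypothetical function name).
// A build fails when a result file exists but does not report a
// passed result summary.
function buildShouldFail(testOutcome) {
  return testOutcome !== null && testOutcome !== "" &&
    testOutcome.toUpperCase().indexOf('RESULTSUMMARY OUTCOME="PASSED"') === -1;
}
```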

Now you are done with the creation of the custom activity. If you like you can download a version of the complete activity here.

In the next blog post the creation of a customized build template and the integration into a TFS Build is described.

Note: All the described steps and screenshots are from TFS 2013 and Visual Studio Premium 2013. On other versions some differences may arise.

Automated UI testing of AngularJS applications using TFS – Part I

11 03 2015

This is the first part of 3 articles describing the automation of UI tests with Team Foundation Server for AngularJS applications. This first blog post explains how AngularJS applications can be tested manually. The automation of these manual steps will be discussed in Part II and Part III.

Why should we write automated tests?

The quality of software applications is becoming more and more important. Customers / users expect a software application that works as expected and without errors. In case you can’t provide the quality the users expect, you can be sure that they will search for alternatives.
Enterprise environments require certain standards to develop professional software products. Automated testing is one of these standards and helps to achieve the required product quality especially with more and more complex software products. Writing tests is the safety net to make sure that the quality won’t decrease and that all the existing features work as expected, especially when adding additional features to the application.
An automated test harness can reduce the costs for manual testing and can be repeated every time without the need of human testers. Manual regression testing can be reduced to a minimum (only tests that can’t be automated need manual regression testing).
Another benefit of automated tests is that bugs and quality issues can be found earlier. Testing different parameters and edge cases can be very time consuming with manual tests. Automated tests can reduce the amount of time needed and they are repeatable once they are written.

Benefits of integrating automated tests into the build infrastructure

The benefit of the integration of the test harness into the build infrastructure is the transparency and the traceability (especially with continuous integration). Every time when a build is triggered the automated tests provide you with results from the test harness. You can directly see what quality issues the current version of the application has and which change caused an issue. Having all necessary tests automated to make sure that the software is working as expected, you can release a new feature / new version of the software as soon as all tests pass. This reduces the need for complex manual tests and will save a lot of time until a new version can be released. The effort and time for manual tests increase much faster compared to the effort that is necessary for automated testing. This will become clearer when the product is growing and more and more features must be tested after changing the system / product.

Tools used for automated UI testing of AngularJS applications

In case you want to test an AngularJS application with automated UI tests you will find the protractor project helpful. Protractor is an E2E (end-to-end) test framework especially for AngularJS applications. It is built on top of WebDriverJS, which uses the Selenium browser automation framework. WebDriverJS helps you to run tests against a website and interact with the page like a normal user would. Protractor enhances the functionality of WebDriverJS with a couple of AngularJS specifics which improve writing test cases for AngularJS applications.

The workflow of automating E2E tests with protractor and TFS

The following picture provides an overview of how E2E tests can be automated with the Team Foundation Server.


How to write UI tests with protractor

The only thing you need for writing tests is an editor of your choice. The tests are written in JavaScript. The syntax to describe a test comes from the Jasmine framework. With this syntax and the functions from the protractor library you are able to create tests which interact with an AngularJS web page like a user would.

The following test is a sample from the protractor homepage and shows the interaction with the AngularJS homepage.

describe('angularjs homepage todo list', function() {
  it('should add a todo', function() {
    browser.get('http://www.angularjs.org');

    element(by.model('todoText')).sendKeys('write a protractor test');
    element(by.css('[value="add"]')).click();

    var todoList = element.all(by.repeater('todo in todos'));
    expect(todoList.get(2).getText()).toEqual('write a protractor test');
  });
});

The test navigates to the AngularJS homepage. Some input is made with the keyboard and a mouse click is executed. At the end the result of the interaction with the page is asserted to check if adding the element worked as expected.

How to run E2E tests with protractor

To run an E2E test with protractor you need to do the following things:

  • Install protractor & webdriver-manager
  • Start the selenium server (with the webdriver-manager command)
  • The web application which should be tested should be up and running
  • Create a config file for protractor
  • Run the tests by calling protractor with the config file as parameter

Installation of protractor & webdriver-manager using npm (the nodejs package manager) can be done by calling:

npm install -g protractor
npm install -g webdriver-manager

Installing and starting the selenium server can be done by calling the following command:

webdriver-manager update
webdriver-manager start

Create a protractor configuration file (conf.js) with the following content (change / enhance the spec parameter depending on where you stored your test spec files):

exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['Tests/*.js']
};

Run the tests:

protractor conf.js

Now we are able to run the tests and see the test results on the console from which protractor was called. The next step is to export the test results to a format which is viewable inside TFS and Visual Studio.

Trx-Export of protractor test results

The test results of the protractor run should be transformed into a format which can be stored and viewed by Team Foundation Server / Visual Studio.

Two of my colleagues (Michael Lehman and Stefan Barthel) created a reporter for protractor to report the test results into the *.trx file format. Trx files are viewable inside Visual Studio with the test result window or can be published with mstest as build results of the Team Foundation Server. This reporter can be installed with npm by running the following command:

npm install protractor-trx-reporter

After the installation of the node package, we need to adjust the protractor configuration file. The following code needs to be added inside the protractor config file.

onPrepare: function () {
    jasmine.getEnv().addReporter(
        new jasmine.TrxReporter('ProtractorTestResults.trx'));
}

Running the protractor tests again with the enhanced configuration will create the trx-file. This file can be published and integrated on the Team Foundation Server with the help of mstest. The command to do this can be found inside the next post where we start with the automation of this manual process.

If you need more information about the trx reporter you can visit the github page of the protractor-trx-reporter or the npm package site.

Setup MongoDB as a service with PowerShell

27 02 2012

I have written a small PowerShell script to set up a MongoDB instance as a single node or in a replicaset configuration. I want to share the script here and hope this is useful for some of you (feedback is welcome). To run the script you need admin rights; otherwise the service can’t be created. The main purpose of the script is to get MongoDB up and running on a Windows PC for local development.

In this post I want to talk about how the script works. In a further post I will provide some information regarding the setup and configuration.

I have separated the script into 3 files. MongoDbSetup.ps1, WebClient.ps1 and Zip.ps1. The MongoDbSetup-file is the main script and responsible for the installation. WebClient and Zip are only small helpers. WebClient.ps1 is used to download a file and display a progress bar (while downloading). The file Zip.ps1 is used to unzip a zip-file to a specified destination folder (used to unpack MongoDB after download).

What the MongoDB setup script does

The following picture shows a simplified view of what the script does.


I want to provide you with a bit more detail about the script execution. The first thing we do is to set up the folder structure we expect inside the script. We have a download folder where the downloaded binaries from MongoDB are stored (zip-files). Every installed instance has its own folder, in this case “MongoDB ReplicaSet”. Inside the “MongoDB ReplicaSet” directory we have 3 folders: “bin” for storing the unzipped MongoDB binaries, “data” for the database files and “log” for all log messages.


The location and name where the folders get stored can be defined as a parameter when calling the script. Before we start to download the zip-file holding the MongoDB binaries we want to make sure there is no service running with the same name as the service we want to create. Therefore the script shuts down existing services to allow replacing the service through the script. This can be handy in case you want to update an existing instance (for example to a newer MongoDB version). Then the download of the zip-file with the binaries starts. The download will fetch the Windows 64-bit version of the binaries for the specified MongoDB version (tested with versions 2.0.2 and 1.8.5). If the format of the filename on the MongoDB server changes you need to update the script. If you install a second node the download won’t fetch the file from the server as long as you have the zip-file inside the download folder (which is created through the installation process).
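The script derives the file to download from the requested version. A minimal sketch of that naming scheme (the exact format is an assumption based on the MongoDB download layout at that time, and is what would need updating if the server-side naming changes):

```javascript
// Hypothetical sketch of the download file naming scheme used by the
// setup script (Windows 64-bit builds; the format is an assumption).
function getDownloadFileName(version) {
  return "mongodb-win32-x86_64-" + version + ".zip";
}
```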

The next step is to unzip and copy the executables to the bin folder of the new instance. The fact that we have a bin folder for every instance makes it possible to run different MongoDB versions on different instances.

Now all preparation is done. We can start with the installation of the single node or the replicaset. I will describe the process of the replicaset installation because it’s much more interesting. The single node installation is a sub-part of the replicaset configuration.

In case we want to make changes to an existing replicaset with the script, we need to remove the existing instances with the same name first. Afterwards we can set up the number of nodes. The number of nodes is provided through a parameter when calling the script. After the setup of the nodes another node is installed as arbiter, unless you changed the default configuration. Now we have all nodes installed and need to start the services.

The last step is to configure the replicaset. We create a file which holds the configuration for all nodes. After creating the file we can run mongo.exe and provide the file as parameter to initiate the replicaset configuration. The replicaset needs a bit of time until it is up and running. Connect to your newly created instance and check the replicaset status by calling rs.status(). Then you are done.
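Such a configuration file could roughly look like this (the set name, host names and ports are assumptions matching the script defaults; adjust them to your installation):

```javascript
// Hypothetical replica set initiation file, executed with mongo.exe.
// Set name, hosts and ports are assumptions based on the script defaults.
var config = {
  _id: "MongoDB ReplicaSet",
  members: [
    { _id: 0, host: "localhost:30000" },
    { _id: 1, host: "localhost:30001" },
    { _id: 2, host: "localhost:30002", arbiterOnly: true }
  ]
};
// When mongo.exe runs this file, the set is initiated:
if (typeof rs !== "undefined") {
  rs.initiate(config);
}
```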

As I mentioned above there are a couple of parameters you can set by calling the script to override the default values. In the following table you can find a list with these parameters.

Parameter        Default               Usage
version          2.0.2                 Version of the MongoDB binaries to use for the new instance.
Mode             ReplicaSet            Options are ReplicaSet and SingleMode, depending on the kind of instance you want to install.
portNumber       30000                 Start the port numbering at the given port. For a replicaset the port is increased for every node.
numberNodes      2                     Number of nodes (without the arbiter).
useArbiter       True                  Create and use an arbiter.
destinationPath  c:\mongodb\           Path where the installation stores the data.
serviceName      MongoDB ReplicaSet    Name of the service which is created. When creating a replicaset a number is appended to the name.

I have uploaded the scripts to GitHub; use them at your own risk 🙂
The repository is located at:



CSV export from MongoDB using PowerShell

7 02 2012

The tools to import and export data on a MongoDB instance are very powerful. I really like the tools because they are very easy to use. Some features would be nice to have, but you can reach a lot with the current set of tools and options.  Detailed information about the import and export tools can be found at the following address:

Mongoexport offers the ability to export data to csv which you can easily read with Excel. This allows “normal” users to display, sort & filter data in their familiar environment. Especially for flat documents the csv export is a great option.

To export data from a collection you can use a command which is similar to this one:

mongoexport -d <databaseName> -c <collectionName> -f "<field1,field2>" --csv -o <outputFile>

But wait, there is one thing which I don’t like about this: we must define the fields we want to export. When you use the “csv”-option for mongoexport, the field option becomes required. But what can I do to avoid a hard coded list of fields? Especially in an environment where many changes will happen, you need a solution which works without a manually edited field list.

What we can do is a map/reduce to get all field names from every document inside a collection. With this result you are able to call mongoexport with a field list which is generated on the fly. Details about map/reduce for MongoDB can be found at the following address:

The map/reduce can look like the following example:

function GetAllFields() {
    map = function () {
        for (var key in this) { emit(1, {"keys" : [ key ]}); }
    };

    reduce = function (key, values) {
        var removeDuplicates = function (elements) {
            var result = [],
                listOfElements = {};
            for (var i = 0, elemCount = elements.length; i < elemCount; i++) {
                for (var j = 0, keyCount = elements[i].keys.length; j < keyCount; j++) {
                    listOfElements[elements[i].keys[j]] = true;
                }
            }
            for (var element in listOfElements) {
                result.push(element);
            }
            return result;
        };

        return { "keys" : removeDuplicates(values) };
    };

    retVal = db.<collectionName>.mapReduce(map, reduce, {out : {inline : 1}});
    // Arrays print comma separated in the mongo shell.
    print(retVal.results[0].value.keys);
}
GetAllFields();

This function can be stored inside a js-file. Now we need to execute the function. This can be done, for example, with PowerShell and the help of mongo.exe. The result of the script execution is a comma separated field list with all field names on the first level of one collection. This is exactly the format we need for the csv export using mongoexport. Therefore we are ready to go and can call the export to a csv-file.
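To illustrate the bridge between the map/reduce output and the mongoexport call, this is roughly the shape of the inline result and how it becomes the field list (the document fields here are hypothetical):

```javascript
// Hypothetical inline map/reduce result for a collection whose
// documents contain the fields _id, LastName and PreName.
var retVal = {
  results: [{ _id: 1, value: { keys: ["_id", "LastName", "PreName"] } }]
};
// Joining the keys yields the comma separated list mongoexport expects.
var fieldNames = retVal.results[0].value.keys.join(",");
// fieldNames === "_id,LastName,PreName"
```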

The following code shows a PowerShell script which retrieves the field-list and runs the export afterwards.

$fieldNames = (mongo.exe <server>/<database> <scriptFile> --quiet)
(mongoexport.exe -d <databaseName> -c <collectionName> -f $fieldNames --csv -o <outputFile>)

Hope this will be useful for someone!

Query and update data on MongoDB using PowerShell

30 01 2012

I’m working at a client site at the moment where we use MongoDB as data storage. We reached the point where we need to read some data from MongoDB with PowerShell. I thought this might be interesting to some people, therefore I decided to share my experience.

What do we need to connect to MongoDB with PowerShell

The only thing we need is a MongoDB driver. I used the official C# driver which is supported by 10gen. 10gen is the company which develops MongoDB and offers support, training and consulting for MongoDB. The latest driver binaries can be downloaded at github from

How to connect to the MongoDB server

First you need to add references to the DLLs which are provided by the MongoDB driver. Then you can establish the connection to the database. The next step is to connect to a collection. On the collection you can perform different CRUD operations.

$mongoDbDriverPath = "C:\driver\mongoDB\"
$dbName = "MyDatabase"
$collectionName = "MyCollection"
Add-Type -Path "$($mongoDbDriverPath)\MongoDB.Bson.dll"
Add-Type -Path "$($mongoDbDriverPath)\MongoDB.Driver.dll"
$db = [MongoDB.Driver.MongoDatabase]::Create("mongodb://localhost/$($dbName)")
$collection = $db[$collectionName]

Create a new document

First you need to create a BsonDocument. On the document you can add new key-value pairs by calling the add-method. As soon as you have finalized the new document you can store it inside a collection with the save-method.

$document = new-object MongoDB.Bson.BsonDocument
$document.Add("LastName", "Weber")
$document.Add("PreName", "Thomas")
$collection.save($document)

When you use the add-method like in the sample above, the driver automatically creates a string value for you. If you want to create an entry with e.g. an ObjectId, you can use the following statement.

$document.Add("_id", [MongoDB.Bson.ObjectId]::GenerateNewId())
For other object types please have a look at the MongoDB driver documentation.

Read data from collection

If you want to fetch all data from a single collection and display all objects you can use the following command.

$results = $collection.findall()
foreach ($result in $results) {
    write-host $result
}

In case you are only interested in a single field you can access it with the following notation.

foreach ($result in $results) {
    write-host $result["LastName"]
}

If you want to select a specific document you need to start writing a query. If you want to select all documents which have a specific LastName use the following query.

$query = [MongoDB.Driver.Builders.Query]::EQ("LastName","Weber")

For more complex scenarios you can combine different conditions to a single query. If you want to select all documents having the specified LastName and PreName use this command.

$query = [MongoDB.Driver.Builders.Query]::AND(
    [MongoDB.Driver.Builders.Query]::EQ("LastName", "Weber"),
    [MongoDB.Driver.Builders.Query]::EQ("PreName", "Thomas"))

The driver offers many different options to combine conditions into a single query; AND is not the only one. After building the query, you need to find the results. This can be done by calling find with the query as parameter.

$results = $collection.find($query)

Update data inside a collection

To update an existing document you can start by querying for an existing document. The following command searches for a document with the provided ObjectId. The id of the document is unique, therefore we can use findOne to get the document from the collection (because we expect only a single result). To update the fetched document we use the set-method. When everything is changed don’t forget to call save. The fact that the document has an existing ObjectId forces MongoDB to update the document instead of creating a new one.

$query = [MongoDB.Driver.Builders.Query]::EQ("_id", [MongoDB.Bson.ObjectId]::Parse("<objectId>"))
$result = $collection.findOne($query)
$result.Set("LastName", "Mueller")
$collection.save($result)

Delete documents from a collection

Deleting existing documents from a collection is easy. If you want to delete every document you can do this by calling removeAll. This will remove every document from the specified collection. Most of the time you only want to delete a single document. To remove only specific data, call the remove-method on the collection with a query. Every document which is returned by the query will then be deleted.

$query = [MongoDB.Driver.Builders.Query]::EQ("PreName", "Thomas")
$collection.remove($query)


With the C# driver it is easy to perform CRUD-Operations with MongoDB. Just have a look at the documentation of the driver if you need to perform some advanced queries. I really like how easy it is to connect to MongoDB via PowerShell.