Set up MongoDB as a service with PowerShell

27 02 2012

I have written a small PowerShell script to set up a MongoDB instance as a single node or in a replica set configuration. I want to share the script here and hope it is useful for some of you (feedback is welcome). To run the script you need admin rights; otherwise the service can't be created. The main purpose of the script is to get MongoDB up and running on a Windows PC for local development.
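If you want to verify up front that your session is elevated, a quick check like the following helps (a minimal sketch, not part of the script itself):

$identity  = [System.Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object System.Security.Principal.WindowsPrincipal($identity)
if (-not $principal.IsInRole([System.Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Write-Warning "Run the setup from an elevated PowerShell session, or the service creation will fail."
}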

In this post I want to talk about how the script works. In a follow-up post I will provide some information regarding setup and configuration.

I have separated the script into three files: MongoDbSetup.ps1, WebClient.ps1 and Zip.ps1. MongoDbSetup.ps1 is the main script and responsible for the installation; WebClient.ps1 and Zip.ps1 are just small helpers. WebClient.ps1 downloads a file and displays a progress bar while doing so. Zip.ps1 unzips a zip file to a specified destination folder and is used to unpack MongoDB after the download.
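To give you a rough idea of the helpers, here is a hedged sketch of what the two files do (function names and flags are illustrative; the original WebClient.ps1 additionally shows a progress bar, which is omitted here):

function Get-RemoteFile([string]$url, [string]$destination) {
    # Download a file; WebClient.ps1 wraps this and reports progress.
    (New-Object System.Net.WebClient).DownloadFile($url, $destination)
}

function Expand-ZipFile([string]$zipFile, [string]$destination) {
    # Unzip via the Shell COM object, which works on PowerShell 2.0 era systems.
    # Both paths must be absolute; 0x14 = overwrite existing files, no progress UI.
    $shell = New-Object -ComObject Shell.Application
    $shell.NameSpace($destination).CopyHere($shell.NameSpace($zipFile).Items(), 0x14)
}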

What the MongoDB setup script does

The following picture shows a simplified view of what the script does.

[Image: simplified process of the setup script]

Let me give you a bit more detail about the script execution. The first thing the script does is create the folder structure it expects. There is a download folder where the downloaded MongoDB binaries (zip files) are stored. Every installed instance has its own folder, in this case "MongoDB ReplicaSet". Inside the "MongoDB ReplicaSet" directory there are three folders: "bin" for the unzipped MongoDB binaries, "data" for the database files and "log" for all log messages.

[Image: folder structure of an installed instance]
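Creating that layout boils down to a few New-Item calls. A minimal sketch, assuming the default values c:\mongodb\ and "MongoDB ReplicaSet" from the parameter table below:

$instancePath = Join-Path $destinationPath $serviceName
$folders = @((Join-Path $destinationPath "download"),
             (Join-Path $instancePath "bin"),
             (Join-Path $instancePath "data"),
             (Join-Path $instancePath "log"))
foreach ($folder in $folders) {
    # -Force also creates missing parent directories.
    New-Item -ItemType Directory -Path $folder -Force | Out-Null
}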

The location and name of these folders can be set via parameters when calling the script. Before we download the zip file containing the MongoDB binaries, we make sure there is no running service with the same name as the one we want to create. The script therefore shuts down existing services so they can be replaced, which comes in handy when you want to update an existing instance (for example to a newer MongoDB version). Then the download of the zip file with the binaries starts. It fetches the Windows 64-bit binaries for the specified MongoDB version (tested with versions 2.0.2 and 1.8.5). If the file name format on the MongoDB server changes, you will need to update the script. When you install a second node, the download is skipped as long as the zip file is still in the download folder (which is created during the installation).
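Condensed to its essence, this stage might look like the following sketch (the file name pattern and download URL reflect the 2.0.x era; Get-RemoteFile is the hypothetical helper sketched above):

if (Get-Service $serviceName -ErrorAction SilentlyContinue) {
    # Stop a previous service with the same name so it can be replaced.
    Stop-Service $serviceName -Force
}
$zipName = "mongodb-win32-x86_64-$version.zip"
$zipPath = Join-Path (Join-Path $destinationPath "download") $zipName
if (-not (Test-Path $zipPath)) {
    # Skipped when a second node is installed and the file is already there.
    Get-RemoteFile "http://fastdl.mongodb.org/win32/$zipName" $zipPath
}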

The next step is to unzip the archive and copy the executables to the bin folder of the new instance. Because every instance has its own bin folder, different instances can run different MongoDB versions.
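As a sketch (Expand-ZipFile again being the hypothetical helper from above):

# Unpack the archive next to the download and copy the binaries into the
# instance's own bin folder.
Expand-ZipFile $zipPath (Join-Path $destinationPath "download")
Copy-Item (Join-Path $destinationPath "download\mongodb-win32-x86_64-$version\bin\*") `
          (Join-Path $instancePath "bin") -Force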

Now all preparation is done and we can start with the installation of the single node or the replica set. I will describe the replica set installation because it is much more interesting; the single node installation is a subset of the replica set configuration.

Since the script may be used to change an existing replica set, we first remove existing instances with the same name. Afterwards we can set up the nodes; their number is provided through a parameter when calling the script. After the nodes are set up, one more node is installed as an arbiter, unless you changed the default configuration. Now all nodes are installed and the services need to be started.
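mongod.exe of the 2.0.x era can register itself as a Windows service via --install; here is a hedged sketch of installing and starting the nodes (the loop, the per-node paths and the replica set name "rs0" are my assumptions, not taken from the script):

for ($i = 1; $i -le $numberNodes; $i++) {
    $port = $portNumber + ($i - 1)
    $dbPath = Join-Path $instancePath "data\node$i"
    New-Item -ItemType Directory -Path $dbPath -Force | Out-Null
    # Each node gets its own service name, data directory, log file and port.
    & (Join-Path $instancePath "bin\mongod.exe") --install `
        --serviceName "$serviceName$i" --replSet "rs0" --port $port `
        --dbpath $dbPath --logpath (Join-Path $instancePath "log\node$i.log")
    Start-Service "$serviceName$i"
}
# When $useArbiter is set, one more node is installed the same way on the
# next free port and later added to the configuration as an arbiter.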

The last step is to configure the replica set. We create a file which holds the configuration for all nodes. After creating the file, we run mongo.exe with the file as a parameter to initiate the replica set configuration. The replica set needs a bit of time until it is up and running. Connect to your newly created instance and check the replica set status by calling rs.status(). Then you are done.
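A hedged sketch of this last step, reusing the illustrative ports and set name from above:

$initScript = @"
rs.initiate({
    _id : "rs0",
    members : [
        { _id : 0, host : "localhost:30000" },
        { _id : 1, host : "localhost:30001" },
        { _id : 2, host : "localhost:30002", arbiterOnly : true }
    ]
});
"@
Set-Content (Join-Path $instancePath "initReplicaSet.js") $initScript
# Run the configuration file against the first node.
& (Join-Path $instancePath "bin\mongo.exe") "localhost:30000" (Join-Path $instancePath "initReplicaSet.js")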

As mentioned above, there are a couple of parameters you can pass to the script to override the default values. The following table lists them.

Parameter       | Default            | Usage
----------------|--------------------|------------------------------------------------------------
version         | 2.0.2              | Version of the MongoDB binaries to use for the new instance.
Mode            | ReplicaSet         | ReplicaSet or SingleMode, depending on the kind of instance you want to install.
portNumber      | 30000              | First port number to use. For a replica set the port is increased for every node.
numberNodes     | 2                  | Number of nodes (without the arbiter).
useArbiter      | True               | Create and use an arbiter.
destinationPath | c:\mongodb\        | Path where the installation stores its data.
serviceName     | MongoDB ReplicaSet | Name of the created service. For a replica set a number is appended to the name.
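A hypothetical call overriding some of these defaults could look like this:

.\MongoDbSetup.ps1 -version 2.0.2 -Mode ReplicaSet -numberNodes 3 `
    -portNumber 40000 -destinationPath "d:\databases\" -serviceName "MongoDB Dev"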

I have uploaded the scripts to GitHub; use them at your own risk 🙂
The repository is located at: https://github.com/danielweberonline/MongoDB-Setup

Cheers,

Daniel





Creating Slugs with Action Filters

16 02 2012

Slugs are a way to create SEO-friendly URLs (speaking URLs) and a great way to optimize your page URLs for search engines.

http://www.<domain>.de/Post/145732 – URL with a meaningless id
http://www.<domain>.de/Post/Creating-slugs-with-Action-filters – speaking URL with a slug

Speaking URLs are also much better for humans than meaningless URLs with ids: you can already see what to expect when opening a link.

Basic slug implementation

We start by creating a new route. This can be done by adding the following route to the Global.asax:

routes.MapRoute(
    null,
    "Post/{slug}",
    new { controller = "Post", action = "Show", slug = string.Empty });

With this route we are able to create URLs like the ones shown above. As you can see from the route, we need a Post controller with an action called Show. The following code shows a sample implementation of this controller (I assume you have a working DI configuration and the IBlogPostRepository is injected):

private readonly IBlogPostRepository blogPostRepository;

public PostController(IBlogPostRepository blogPostRepository)
{
    this.blogPostRepository = blogPostRepository;
}

public ActionResult Show(string slug)
{
    // Without a slug in the URL there is nothing to look up.
    if (string.IsNullOrEmpty(slug))
    {
        return this.HttpNotFound();
    }

    var blogPost = this.blogPostRepository.FindPost(slug);
    if (blogPost == null)
    {
        return this.HttpNotFound();
    }

    return View(blogPost);
}

Inside the Show action we first check that a slug value was provided through the URL; if not, we return HttpNotFound (404). With a slug value at hand we call the repository, which is responsible for fetching data from the data store. If the URL is valid, a blog post is fetched from the repository and handed to the view to display the data. If the repository returns no result, we again return a 404.
The problem with this implementation is that the controller can grow very fast, especially when you have more parameters.

Slugs with Action Filters

Now I want to show you the same implementation using action filters. In my opinion this is quite handy and better than the basic implementation.
We start with the implementation of the new action filter.

public class BlogPostFilterAttribute : ActionFilterAttribute {

    [Dependency]
    public IBlogPostRepository BlogPostRepository { get; set; }

    public override void OnActionExecuting(ActionExecutingContext
        filterContext) {
        var slug = filterContext.RouteData.Values["slug"] as string;

        if (string.IsNullOrEmpty(slug)) {
            filterContext.Result = new HttpNotFoundResult();
            return;
        }

        var blogPost = this.BlogPostRepository.FindPost(slug);

        if (blogPost == null) {
            filterContext.Result = new HttpNotFoundResult();
            return;
        }

        // The key matches the name of the action's parameter.
        filterContext.ActionParameters["blogPost"] = blogPost;
    }
}

As you can see, the code looks similar to the basic implementation. What changed is that we now extract the slug value from the RouteData of the filterContext. In the cases where we want to return a 404, we set the filterContext result to a new instance of HttpNotFoundResult. On the happy path, when the repository finds a post, the blog post is stored as an action parameter on the filterContext. Now we need to update the Show action: first we put the new attribute on the action, then we change the method parameter from string to BlogPost. The blogPost parameter inside the action is now filled with the data that was fetched inside the action filter. Last but not least we can remove all the now unnecessary code from the action. The Show action should look similar to the following code:

[BlogPostFilter]
public ActionResult Show(BlogPost blogPost) {
    return View(blogPost);
}

Add support for Property Injection on Action Filters

The special thing about action filters is that we need some additional code to support injecting dependencies into them. In the code above you can already see that I put the Dependency attribute on the BlogPostRepository property; without further customization, property injection does not work for filters. What needs to be done I found in a blog post by Brad Wilson (http://bradwilson.typepad.com/blog/2010/07/service-location-pt4-filters.html): we write a filter attribute provider which is wired up with the dependency injection container. The following code shows a sample implementation for Unity.

public class UnityFilterAttributeFilterProvider :
    FilterAttributeFilterProvider {
    private readonly IUnityContainer container;

    public UnityFilterAttributeFilterProvider(IUnityContainer container) {
        this.container = container;
    }

    protected override IEnumerable<FilterAttribute> GetControllerAttributes(
        ControllerContext controllerContext,
        ActionDescriptor actionDescriptor) {

        var attributes = base.GetControllerAttributes(controllerContext,
            actionDescriptor);

        foreach (var attribute in attributes) {
            // BuildUp performs property injection on the existing instance.
            this.container.BuildUp(attribute.GetType(), attribute);
        }

        return attributes;
    }

    protected override IEnumerable<FilterAttribute> GetActionAttributes(
        ControllerContext controllerContext,
        ActionDescriptor actionDescriptor) {

        var attributes = base.GetActionAttributes(controllerContext,
            actionDescriptor);

        foreach (var attribute in attributes) {
            this.container.BuildUp(attribute.GetType(), attribute);
        }

        return attributes;
    }
}

Inside the Global.asax you need to add the following code to Application_Start. It removes the default filter attribute provider and adds the one wired up with the DI container to the FilterProviders collection.

var filterAttributeProvider = FilterProviders.Providers.Single(f => f is FilterAttributeFilterProvider);
FilterProviders.Providers.Remove(filterAttributeProvider);

var provider = new UnityFilterAttributeFilterProvider(container);
FilterProviders.Providers.Add(provider);

After this last step you should be able to use the injected repository.

Conclusion

With the help of action filters it is easy to reduce controller size and to make the code reusable through an attribute which can be applied to different controllers and actions.





Skiing in Germany

12 02 2012

Working at a client site in Munich has some advantages. You can spend some time in the nice city of Munich, but what I really like is how close you are to the mountains. It is only a short one-hour drive until you stand in front of huge mountains. At the moment they are completely covered with snow, which is a really nice view, believe me.

In the last weeks I went to two different areas for a short skiing day. One day, directly after work, I went to Oberaudorf. They open a couple of days per week for floodlight skiing. The conditions were really good and I really enjoyed the time there, except for the temperatures: it was freezing cold during the last lift rides.

Today I spent the day at Brauneck. This ski area is only 60 km away from Munich. It was a beautiful day; the weather was cold (about -18 degrees when I arrived) and sunny. The ski runs are nice, with some challenging sections.

Have a look at the photos I took at both places (only mobile phone pictures, sorry!).

Cheers,

Daniel





Flying Bentley

8 02 2012

Today in the office we had some great and free entertainment: we had the chance to watch some guys doing their job.

Okay, watching people do their work does not sound that great. But these guys work for a company which transports heavy loads with helicopters.

What they did was pretty cool: their job was to fly a Bentley to the top of the neighbouring building. I had a great view, was able to watch the whole action and took a small video, which you can watch here if you like.

[Video: Flying Bentley, posted with Vodpod (no longer available)]

Cheers,

Daniel





CSV export from MongoDB using PowerShell

7 02 2012

The tools to import and export data from a MongoDB instance are very powerful, and I really like them because they are very easy to use. Some features would be nice to have, but you can achieve a lot with the current set of tools and options. Detailed information about the import and export tools can be found at: http://www.mongodb.org/display/DOCS/Import+Export+Tools

Mongoexport offers the ability to export data to CSV, which you can easily read with Excel. This allows "normal" users to display, sort and filter data in their familiar environment. Especially for flat documents the CSV export is a great option.

To export data from a collection you can use a command similar to this one:

mongoexport -d <databaseName> -c <collectionName> -f "<field1,field2>" --csv -o <outputFile>

But wait, there is one thing I do not like about this: we must define the fields we want to export, because with the --csv option of mongoexport the fields option becomes required. So what can we do to avoid a hard-coded list of fields? Especially in an environment where many changes happen, you need a solution which works without a manually edited field list.

What we can do is run a map/reduce to collect all field names from every document inside a collection. With this result we are able to call mongoexport with a field list generated on the fly. Details about map/reduce in MongoDB can be found at: http://www.mongodb.org/display/DOCS/MapReduce

The map/reduce can look like the following example:

function GetAllFields() {
    // Emit every top-level key of every document under a single reduce key.
    map = function () {
        for (var key in this) { emit(1, { "keys" : [ key ] }); }
    };

    // Merge all emitted key arrays into one duplicate-free list.
    reduce = function (key, values) {
        var removeDuplicates = function (elements) {
            var result = [],
                seenKeys = {};
            for (var i = 0, elemCount = elements.length; i < elemCount; i++) {
                for (var j = 0, keyCount = elements[i].keys.length; j < keyCount; j++) {
                    seenKeys[elements[i].keys[j]] = true;
                }
            }
            for (var element in seenKeys) {
                result.push(element);
            }
            return result;
        };

        return { "keys" : removeDuplicates(values) };
    };

    retVal = db.<collectionName>.mapReduce(map, reduce, { out : { inline : 1 } });
    print(retVal.results[0].value.keys);
}
GetAllFields();

This function can be stored inside a js file which we then execute, for example with PowerShell and the help of mongo.exe. The result of the script execution is a comma-separated list of all first-level field names of one collection. This is exactly the format we need for the CSV export using mongoexport, so we are ready to go and can call the export to a csv file.

The following code shows a PowerShell script which retrieves the field-list and runs the export afterwards.

$fieldNames = (mongo.exe <server>/<database> <scriptFile> --quiet)
(mongoexport.exe -d <databaseName> -c <collectionName> -f $fieldNames --csv -o <outputFile>)

Hope this will be useful for someone!





Problem using $rename on indexed fields using MongoDB

1 02 2012

Today we found a problem which occurs in MongoDB when you rename an indexed field. The problem occurs in version 2.0.2; I did not test other versions.

Say you have simple documents with the following structure:

{
    PreName : "Daniel",
    Name : "Weber"
}

Maybe you want to rename all "Name" elements to "LastName". To do this you can use the $rename functionality of MongoDB:
http://www.mongodb.org/display/DOCS/Updating#Updating-%24rename

The command for a rename should be something like this:

db.MyCollection.update( { } , { $rename : { "Name" : "LastName" } } )

When you look at the documents after running the rename query, everything is fine; the field is renamed correctly.

If you run the rename command again, nothing happens (as expected), because a field named "Name" does not exist anymore.

The problem only occurs when you have an index on the field you want to rename. So we create the same simple document and put an index on the "Name" property with the following command:

db.MyCollection.ensureIndex( { "Name" : 1 } )

Information about index creation can be found on
http://www.mongodb.org/display/DOCS/Indexes#Indexes-Basics

Now we run the same update command. When we look into the database, the field name has changed as expected. Then we run the rename command again and a strange thing happens: the renamed field is deleted and the "LastName" value is lost.
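For completeness, here is a hedged repro sketch driving the shell from PowerShell, in the style of the snippets above (placeholders left as-is; it only uses the commands already shown in this post):

$repro = @'
db.MyCollection.drop();
db.MyCollection.insert({ PreName : "Daniel", Name : "Weber" });
db.MyCollection.ensureIndex({ "Name" : 1 });
db.MyCollection.update({}, { $rename : { "Name" : "LastName" } });  // works as expected
db.MyCollection.update({}, { $rename : { "Name" : "LastName" } });  // second run
printjson(db.MyCollection.findOne());  // on the affected version, LastName is gone
'@
mongo.exe <server>/<database> --eval $repro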

So be careful with renames of indexed elements when a script might run the rename more than once.