
Behavior Driven Development (BDD) – an in-depth look

Posted by: Andrei Dragotoniu , on 7/3/2017, in Category Patterns & Practices
Abstract: The article explains how Behavior Driven Development (BDD) works and provides a real-world example of how to use it to provide meaningful tests which can act as living documentation.

This article expects readers to be familiar with the testing mindset in general. It will, however, touch on how things can be built to take advantage of SOLID principles and other methods of writing testable code.

Behavior Driven Development (BDD) – a quick description and example

BDD stands for Behavior Driven Development. The behavior itself is described using a syntax called Gherkin.

The idea is to describe what should happen in language that is as natural as possible.

If you are familiar with unit testing and are comfortable writing unit tests, then you are familiar with the way they read. Depending on how much a test needs to cover, it can be quite difficult to work out what it does, because it is, after all, just code.

Only a developer can really understand what happens there.

BDD tackles the issue in a different way.

Let’s hide the code and start a conversation instead, so that anyone can read a scenario and understand what it tests.

Let’s take an example:

Given a first number 4

And a second number 3

When the two numbers are added

Then the result should be 7

There is no code here. This scenario reads like a story. We can give it to a Business Analyst to make sure we’re tackling the right thing, or give it to a tester, or revisit it later to refresh our memory on how things need to work and why we built them a certain way.


We are describing a bit of behavior here, in this case, it could be a Math operations sub-system where we have clearly defined one of the behaviors of this system. Of course, more tests are to be written to cover the complete behavior and take care of edge cases.
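For example, a sketch of an edge-case scenario (hypothetical, not part of the sample project) might read:

```gherkin
Scenario: Adding a negative number
  Given a first number -4
  And a second number 3
  When the two numbers are added
  Then the result should be -1
```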

If this all starts to sound like writing unit tests, then that’s a good thing.

BDD and unit testing are similar in some respects, and nothing prevents developers from using both, where that is appropriate.

Using the Gherkin syntax makes it very easy to explain what is being tested in a natural language, which even non-developers can read and understand.

A QA person or a Business Analyst, for example, could copy and paste such a test, change the numbers and come up with their own test cases, without having to write any code at all, or without even seeing the code.

Here is a very good writeup on Gherkin in case you are interested in details: https://github.com/cucumber/cucumber/wiki/Gherkin

Now we have the test, how does it all work from here onwards?

Each line in the test is called a step, each step becomes a separate method, and each method gets called in the order they are written.

In our example, the first two lines (the Given and the And) will set up the initial data, the When will take care of calling the method we want to test, and the Then is where the assert will happen.

Since each step is a separate method, hopefully by now it is obvious that we need to be able to share some state between steps.

Don’t worry, this isn’t state as you usually think of it, and it doesn’t break any of the testing principles, especially the one which says that a test should never alter state or depend on state created by another test.

It simply means that each test needs to be able to have its own state and that state needs to be available for every step in that test.

SpecFlow gives us a ScenarioContext, which is just a dictionary used to store the data needed for executing the test. This context is cleared at the end of the test and will be empty again when the next test runs.

Since each step is a separate method, one last point worth considering is that a step can be reused across multiple tests. Look at the first two steps in our test example. If we pass the number as an input parameter to the step method, we can reuse it in any test that needs it.

The test looks more like this:

Given a first number {parameter1}

And a second number {parameter2}

When the two numbers are added

Then the result should be {expected result}

Now that is much more generic and hopefully clearly shows the reusability of each step. We don’t have to use the same steps in every test and they don’t even need to be in the same order! We’ll take a look at this a bit later.
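In SpecFlow, this kind of generic test can also be expressed directly as a Scenario Outline, where an Examples table supplies the parameter values. A sketch of how our scenario might look in that form:

```gherkin
Scenario Outline: Adding two numbers
  Given a first number <first>
  And a second number <second>
  When the two numbers are added
  Then the result should be <result>

Examples:
  | first | second | result |
  | 4     | 3      | 7      |
  | 10    | -2     | 8      |
  | 0     | 0      | 0      |
```

Each row in the Examples table runs as a separate test, so adding a case is just adding a row.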

As we keep adding tests, the actual code we write becomes smaller because for each system behavior we are testing, we will get to the point where we simply reuse the existing steps we have already coded.

So even if we spend a bit of time initially writing the step code, as we advance, the amount of time spent on writing additional steps eventually drops to virtually zero.

Software used for BDD

We need to see what tools can help us harness the full power of BDD. This article is written from a back-end point of view; there are alternatives for pure front-end work as well, but they won’t be discussed in this article.

I will be using:

  • Visual Studio 2017 (bit.ly/dnc-vs-download)
  • Specflow – Visual Studio extension – this will help with the Gherkin syntax and the link between the tests and the steps code.
  • NUnit – used for Asserts. You can use something else here, FluentAssertions works just as well.

There is one NuGet package, SpecFlow.NUnit, which installs both SpecFlow and NUnit; I’d use that one as it makes things easier.
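For example, from the NuGet Package Manager Console in Visual Studio:

```
PM> Install-Package SpecFlow.NUnit
```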

So, first install the Visual Studio Specflow extension. This will give us the file templates and syntax coloring.

vs-specflow-extension

Specflow

The Specflow Visual Studio extension will allow you to create feature files. These files are the placeholder for your test scenarios.

The extension also adds syntax coloring to the feature files which is a nice visual indicator on what you have done and what you still need to do. It creates a connection between the steps of each test scenario and the test method behind them, which is quite handy especially when you have lots of feature files and lots of tests.

specflow-feature-file

Once a feature file is created, it will look like this:

image

The feature text describes the problem.

The scenario is basically one test and we can have multiple scenarios in one feature file.

The tag is used in the Test Explorer window and it allows us to group tests in a logical manner. Our initial test could look like this:

image

Please note how the references to UI elements have been removed. This goes back to what was said initially: focus on the functionality and the core bits that do something, not on how and where things are displayed.

In the Visual Studio solution, we still need to install Specflow and NUnit using the NuGet package SpecFlow.NUnit:

specflow-nunit

I created a MathLib class library and added this NuGet package to it.

Once we have all these packages installed, open the Test Explorer window, build the solution and you should see the following:

test-explorer

I filtered by Traits, which then shows the tags we created. I used two kinds: MathLib, to show all the tests in the library (Add, Divide etc.), and one tag per math operation, so I can also see the tests grouped under the Add tag, for example. This is just personal preference. The tags can be quite a powerful way of grouping your tests in a way which makes sense to you.
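Tags are declared above the scenario in the feature file. A hypothetical sketch using the tag names described above might look like this:

```gherkin
@MathLib @Add
Scenario: Add two numbers
  Given a first number 4
  And a second number 3
  When the two numbers are added
  Then the result should be 7
```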

So now we have a feature file, as well as a test, but we haven’t written any test code yet.

What we need next is a steps code file, where all the steps for our tests can go. We will start with one file, but we can separate the steps into multiple step files, to avoid having too much code in one file.

If you have another look at our test, you’ll see that the steps are colored in purple. This is a visual indicator that there is no code yet.

Let’s create a steps code file, which is just a standard C# file.

The code looks like this:

using TechTalk.SpecFlow;

namespace MathLibTests
{
    [Binding]
    public sealed class Steps
    {
    }
}

The only thing we added is the Binding attribute at the top of the class. This is a Specflow attribute and it makes all the steps in this file available to any feature file in this project, wherever they may be located.

Now, go back to the feature file, right click on any of the steps and you will see a Generate Step Definitions option in the context menu:

generate-step-definitions

Click the Generate Step Definitions option and then Copy methods to clipboard:

step-definitions

Notice how the four steps appear in the window. Code will be generated for each one of them.

Now simply paste the code in the steps file created earlier:

using TechTalk.SpecFlow;

namespace MathLibTests
{
    [Binding]
    public sealed class Steps
    {
        [Given(@"a first number (.*)")]
        public void GivenAFirstNumber(int p0)
        {
            ScenarioContext.Current.Pending();
        }

        [Given(@"a second number (.*)")]
        public void GivenASecondNumber(int p0)
        {
            ScenarioContext.Current.Pending();
        }

        [When(@"the two numbers are added")]
        public void WhenTheTwoNumbersAreAdded()
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"the result should be (.*)")]
        public void ThenTheResultShouldBe(int p0)
        {
            ScenarioContext.Current.Pending();
        }

    }
}

Save the file and then look at the feature file again. Our initial Scenario, which had all the steps in purple, now looks like this:

image

Notice how the color has changed to black and the numbers are in italic which means they are treated as parameters. To make the code a bit clearer, let’s change it a little bit:

using TechTalk.SpecFlow;

namespace MathLibTests
{
    [Binding]
    public sealed class Steps
    {
        [Given(@"a first number (.*)")]
        public void GivenAFirstNumber(int firstNumber)
        {
            ScenarioContext.Current.Pending();
        }

        [Given(@"a second number (.*)")]
        public void GivenASecondNumber(int secondNumber)
        {
            ScenarioContext.Current.Pending();
        }

        [When(@"the two numbers are added")]
        public void WhenTheTwoNumbersAreAdded()
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"the result should be (.*)")]
        public void ThenTheResultShouldBe(int expectedResult)
        {
            ScenarioContext.Current.Pending();
        }

    }
}

At this point, we have the steps, we have the starting point and we can add some meaningful code.

Let’s add the actual math library, the one we are going to test.

Create a class library, add a MathLibOps class to it with the Add() method:

using System;

namespace MathLib
{
    public sealed class MathLibOps
    {
        public int Add(int firstNumber, int secondNumber)
        {
            throw new NotImplementedException();
        }
    }
}

Now let’s write enough test code to have a failing test.

Let’s look at the Steps file again. Notice all those ScenarioContext.Current.Pending() lines in every step? This is the context we were talking about before. That’s where we will put all the data we need. Think of it as a dictionary with key/value pairs. The key will be used to retrieve the right data, so we will give it meaningful values to make our life easier.

using MathLib;
using NUnit.Framework;
using TechTalk.SpecFlow;

namespace MathLibTests
{
    [Binding]
    public sealed class Steps
    {
        [Given(@"a first number (.*)")]
        public void GivenAFirstNumber(int firstNumber)
        {
            ScenarioContext.Current.Add("FirstNumber", firstNumber);
        }

        [Given(@"a second number (.*)")]
        public void GivenASecondNumber(int secondNumber)
        {
            ScenarioContext.Current.Add("SecondNumber", secondNumber);
        }

        [When(@"the two numbers are added")]
        public void WhenTheTwoNumbersAreAdded()
        {
            var firstNumber = (int)ScenarioContext.Current["FirstNumber"];
            var secondNumber = (int)ScenarioContext.Current["SecondNumber"];

            var mathLibOps = new MathLibOps();
            var addResult = mathLibOps.Add(firstNumber, secondNumber);

            ScenarioContext.Current.Add("AddResult", addResult);
        }

        [Then(@"the result should be (.*)")]
        public void ThenTheResultShouldBe(int expectedResult)
        {
            var addResult = (int)ScenarioContext.Current["AddResult"];
            Assert.AreEqual(expectedResult, addResult);
        }

    }
}

Look at the first two Given methods: notice how we take the parameters passed into the methods and add them to the context with a clear key, so we know what they represent.

The When step uses the two values from the context, instantiates the Math class and calls the Add() method with the two numbers, then it stores the result back in the context.

Finally, the Then step takes the expected result from the feature file and it compares it to the result stored in the context. Of course, when we run the test, we will get a failure as we don’t have the right code yet.

To run the test, right click it in the Test Explorer window and use the Run Selected Tests option:

run-selected-test

The result will look like this:

failing-test

The result is as expected, so now let’s fix the lib code and make it pass:

namespace MathLib
{
    public sealed class MathLibOps
    {
        public int Add(int firstNumber, int secondNumber)
        {
            return firstNumber + secondNumber;
        }
    }
}

Now, let’s run the test again and we should see something a bit more cheerful:

first-test-pass

Cool, so at this point, we should be fairly familiar with how it all hangs together.

The biggest question we need to ask now is this:

OK, this is all great, but how is this different from unit testing and what value does it actually provide? What am I getting?

I have a feature file, that’s nice I suppose, but I could have easily written a unit test and been done with it. A Business Analyst is not going to care about my basic “add two numbers” example.

So, let’s look at how we would implement something a bit more complex.

SpecFlow has a lot more features than the few we have touched on so far. A very nice one is the ability to work with tables of data. This is important when the data is not as simple as a number. For example, imagine an object with five properties; passing it through individual step parameters would mean five parameters instead of one.

So, let’s take on a more serious project: an Access Framework for a website, which will tell us whether a user can perform various actions on our website.

 

A Complex Problem description

We have a website where people can visit and then search and apply for jobs. Restrictions will apply based on their membership type.

Membership types: Platinum, Gold, Silver, Free.

  • Platinum can search 50 times/day and apply to 50 jobs/day.
  • Gold can search 15 times/day and apply to 15 jobs/day.
  • Silver can search 10 times/day and apply to 10 jobs/day.
  • Free can search 5 times/day and apply to 1 job/day.

What we need

1. We need to define the Users

2. We need to define the membership types

3. We need to define the restrictions for every membership type

4. We need a way to retrieve how many searches and applications a user has done every day.

The first three are configuration, the last one is user data. We could use this to define the ways in which we interact with the system.

Actual code

Let’s create a class to represent the membership types. It could look like this:

namespace Models
{
    public sealed class MembershipTypeModel
    {
        public string MembershipTypeName { get; set; }

        public RestrictionModel Restriction { get; set; }
    }
}

The RestrictionModel class contains the max searches per day and the max applications per day:

namespace Models
{
    public sealed class RestrictionModel
    {
        public int MaxSearchesPerDay { get; set; }

        public int MaxApplicationsPerDay { get; set; }
    }
}

Next, we want a UserModel, which will hold the data we need for a user:

namespace Models
{
    public sealed class UserModel
    {
        public int ID { get; set; }
        public string Username { get; set; }

        public string FirstName { get; set; }

        public string LastName { get; set; }

        public string MembershipTypeName { get; set; }

        public UserUsageModel CurrentUsage { get; set; }
    }
}

The UserUsageModel will tell us how many searches and applications a user has already done that day:

namespace Models
{
    public sealed class UserUsageModel
    {
        public int CurrentSearchesCount { get; set; }

        public int CurrentApplicationsCount { get; set; }
    }
}

Finally, we want a class which will hold the results of the AccessFramework call:

namespace Models
{
    public sealed class AccessResultModel
    {
        public bool CanSearch { get; set; }

        public bool CanApply { get; set; }
    }
}

As you can see, I kept this very simple; we don’t want to get lost in implementation details.

We do want to see how BDD can help us with something which is not just a Hello World application.

Now that we have our models, let’s create a couple of interfaces; these will be responsible for the data retrieval part.

First, the one dealing with generic configuration data:

using Models;
using System.Collections.Generic;

namespace Core
{
    public interface IConfigurationRetrieval
    {
        List<MembershipTypeModel> RetrieveMembershipTypes();
    }
}

The second one deals with user specific data:

using Models;

namespace Core
{
    public interface IUserDataRetrieval
    {
        UserModel RetrieveUserDetails(string username);
    }
}

These two interfaces will become parameters to the AccessFrameworkAnalyser class and they will allow us to mock the data required for the tests:

using Core;
using Models;
using System;
using System.Linq;

namespace AccessFramework
{
    public sealed class AccessFrameworkAnalyser
    {
        IConfigurationRetrieval _configurationRetrieval;
        IUserDataRetrieval _userDataRetrieval;       

        public AccessFrameworkAnalyser(IConfigurationRetrieval configurationRetrieval, IUserDataRetrieval userDataRetrieval)
        {
            if ( configurationRetrieval == null || userDataRetrieval == null)
            {
                throw new ArgumentNullException();
            }

            this._configurationRetrieval = configurationRetrieval;
            this._userDataRetrieval = userDataRetrieval;
        }

        public AccessResultModel DetermineAccessResults(string username)
        {            
            if ( string.IsNullOrWhiteSpace(username))
            {
                throw new ArgumentNullException();
            }

            var userData = this._userDataRetrieval.RetrieveUserDetails(username);
            var membershipTypes = this._configurationRetrieval.RetrieveMembershipTypes();

            var userMembership = membershipTypes.FirstOrDefault(p => p.MembershipTypeName.Equals(userData.MembershipTypeName, StringComparison.OrdinalIgnoreCase));            
            var result = new AccessResultModel();

            if (userMembership != null)
            {
                result.CanApply = userData.CurrentUsage.CurrentApplicationsCount < userMembership.Restriction.MaxApplicationsPerDay;
                result.CanSearch = userData.CurrentUsage.CurrentSearchesCount < userMembership.Restriction.MaxSearchesPerDay;
            }

            return result;
        }
    }
}

We don’t do a lot here. We simply use dependency injection for our two interfaces, then populate the result by comparing the user’s current number of searches and applications against the limits allowed for their membership type.

Please note that we don’t really care how this data is actually loaded; typically there would be an implementation of each interface which goes to a database, but for this example that part doesn’t matter.

All we need to know is that we will have a way of getting that data somehow, and more than likely the real implementations will be hooked up using an IoC container of some kind in the actual UI project which needs real data. Since we don’t need that part here, we won’t implement it; we will simply show some of the tests required.

Our feature file could look like this:

bdd-feature-file

The pipes denote the Specflow way of dealing with tabular data.

The first row contains the headers; the rows after that contain the data. The important thing to note is how much data we set up and how readable it all is. Things are made simpler by the fact that there is no code here; nothing hides the actual data. At this point we can simply copy and paste a test, change the data, and have another one ready just like that.

The point is that a non-developer can do that just as well.

Let’s look at the first scenario.

As you can see, first we set up the membership types that we want to work with. Remember, we don’t care about real data; we care about the functionality and the business rules here, and that’s what we are testing. This makes it very easy to set up data any way we like.

The second step sets up the user and their existing counts of searches and applications.

And finally, we expect a certain result when the AccessFrameworkAnalyser class is used.
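Since the feature file is only shown as an image, here is a sketch of what such a scenario might look like in text form. The step wording for the user setup is an assumption, but the other step names match the step definitions shown later, and the column headers match the model properties:

```gherkin
Scenario: Platinum user under their limits can search and apply
  Given the membership types
    | MembershipTypeName | MaxSearchesPerDay | MaxApplicationsPerDay |
    | Platinum           | 50                | 50                    |
    | Free               | 5                 | 1                     |
  And a user
    | Username | MembershipTypeName | CurrentSearchesCount | CurrentApplicationsCount |
    | jsmith   | Platinum           | 10                   | 10                       |
  When access result is required
  Then access result should be
    | CanSearch | CanApply |
    | true      | true     |
```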

There are a few important things to mention here.

How do we load the tabular data in the steps code?

Here is an example which loads the data for the membership types:

private List<MembershipTypeModel> GetMembershipTypeModelsFromTable(Table table)
{
    var results = new List<MembershipTypeModel>();

    foreach ( var row in table.Rows)
    {
        var model = new MembershipTypeModel();
        model.Restriction = new RestrictionModel();

        model.MembershipTypeName = row.ContainsKey("MembershipTypeName") ? row["MembershipTypeName"] : string.Empty;

        if (row.ContainsKey("MaxSearchesPerDay"))
        {
            int maxSearchesPerDay = 0;

            if (int.TryParse(row["MaxSearchesPerDay"], out maxSearchesPerDay))
            {
                model.Restriction.MaxSearchesPerDay = maxSearchesPerDay;
            }
        }

        if (row.ContainsKey("MaxApplicationsPerDay"))
        {
            int maxApplicationsPerDay = 0;

            if (int.TryParse(row["MaxApplicationsPerDay"], out maxApplicationsPerDay))
            {
                model.Restriction.MaxApplicationsPerDay = maxApplicationsPerDay;
            }
        }

        results.Add(model);
    }

    return results;
}

It is a good idea to always check that a header exists before trying to load anything. This is very useful because, depending on what you’re building, you don’t always need all the properties and objects at the same time. You might only need a couple of properties for a few specific tests, in which case you don’t need tables full of data. You just use the columns you need, ignore the rest, and everything still works.
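As an aside, for flat models SpecFlow also ships table helpers in the TechTalk.SpecFlow.Assist namespace which can do this column-to-property mapping automatically. They don’t handle nested objects such as MembershipTypeModel with its Restriction property, which is why the manual mapping above is needed, but for something flat like RestrictionModel a sketch could be as short as:

```csharp
using System.Collections.Generic;
using System.Linq;
using Models;
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

// CreateSet matches table headers to property names by convention, so a
// | MaxSearchesPerDay | MaxApplicationsPerDay | table maps directly.
private List<RestrictionModel> GetRestrictionsFromTable(Table table)
{
    return table.CreateSet<RestrictionModel>().ToList();
}
```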

The actual step for loading the membership types now becomes trivial:

[Given(@"the membership types")]
public void GivenTheMembershipTypes(Table table)
{
    var membershipTypes = this.GetMembershipTypeModelsFromTable(table);
    ScenarioContext.Current.Add("MembershipTypes", membershipTypes);
}

This is exactly like before - load the data > store in context > job done.

Another interesting bit here is how we mock what we need.

I used NSubstitute for this and the code is quite simple:

[When(@"access result is required")]
public void WhenAccessResultIsRequired()
{
    //data from context
    var membershipTypes = (List<MembershipTypeModel>)ScenarioContext.Current["MembershipTypes"];
    var user = (UserModel)ScenarioContext.Current["User"];

    //setup the mocks
    var configurationRetrieval = Substitute.For<IConfigurationRetrieval>();
    configurationRetrieval.RetrieveMembershipTypes().Returns(membershipTypes);

    var userDataRetrieval = Substitute.For<IUserDataRetrieval>();
    userDataRetrieval.RetrieveUserDetails(Arg.Any<string>()).Returns(user);

    //call to AccessFrameworkAnalyser
    var accessResult = new AccessFrameworkAnalyser(configurationRetrieval, userDataRetrieval).DetermineAccessResults(user.Username);
    ScenarioContext.Current.Add("AccessResult", accessResult);
}

The initial data comes from steps which ran before this one; then we set up the mocks, call the AccessFrameworkAnalyser and store the result back in the context.

The final step, the actual assert, looks like this:

[Then(@"access result should be")]
public void ThenAccessResultShouldBe(Table table)
{
    var expectedAccessResult = this.GetAccessResultFromTable(table);

    var accessResult = (AccessResultModel)ScenarioContext.Current["AccessResult"];
    accessResult.ShouldBeEquivalentTo(expectedAccessResult);
}

Here I used another NuGet package, FluentAssertions. It allows me to compare whole objects without writing a separate assert for every single property; I can still have just one assert.

The full code is attached; please have a look, as it’s a lot easier to follow things in Visual Studio. Note the structure of the solution: everything is in a separate project, and everything references exactly what it needs and nothing more.

solution-layout

Hopefully by now you are starting to see the advantages of using BDD. The main point for me is that once the actual requirement is clear, we don’t need to look at code to work out what it does. All we need to do is look at the feature files.

It is a good idea to tag the scenarios with ticket numbers so you know which requirement each test is covering. This provides visibility to the business in terms of how much we have covered and what is left to do.

When a bug is encountered, it is a very good idea to write a test which replicates the bug and then fix it. This way you can be sure that once a bug is fixed, it stays fixed.

If you need to debug a BDD test scenario, simply set a breakpoint on a step, right-click the test in the Test Explorer window, choose “Debug Selected Tests” and off you go.

debug-test

BDD Downsides

So, you showed us the cake, what are the downsides of this approach?

Only one that I have found so far, and it is not a BDD issue specifically, but a tool issue.

Once you have several feature files and a healthy number of tests, you could potentially have quite a few steps. There is no easy way to tell when a step method is not used by any feature file. CodeLens is not going to help here, and there are no counts anywhere, so you can’t tell whether a particular step is called by ten scenarios or by none. This means you could end up with orphan step methods.

Of course, you can always delete a step method and then check whether any feature file is affected, but that could take a while, depending on how many feature files you have.

As I said, this is not really a BDD issue; it is a SpecFlow issue, and chances are it will only get better as time passes.

For me, the benefits of using BDD greatly outweigh the issues with Specflow.

Download the entire source code of this article (Github).

This article was technically reviewed by Yacoub Massad.

Author
Andrei Dragotoniu is a software developer from Southampton, UK. He currently works for DST Bluedoor as a Lead Backend Developer, working on a financial platform, getting involved in code standards, code reviews, helping junior developers. He is interested in architectural designs, building efficient APIs and writing testable code. In his spare time, he blogs about technical subjects at eidand.com





