Beauty of BDD with C#

In this post, we will discuss BDD and its implementation with C#. This article is divided into multiple parts; this part covers an introduction to BDD and its implementation in C# with a basic example.

Why do we need BDD?

Let's take an example: the business comes with a requirement for customer search functionality, and say the Business Analysts have written the user story somewhat like this.

As a user, I should be able to search customer details by First Name, Last Name, Flat Number and Pin Code.

When the user enters one of the combinations (First Name + Last Name), (First Name + Flat Number + Pin Code) or (Last Name + Flat Number + Pin Code), the exact customer details should be extracted. With only a First Name, Last Name, Flat Number or Pin Code, a list of customers should be extracted.

Now consider one scenario written in BDD style:

When the user enters the first name

And the user enters the last name

Then the user should be able to see a single customer record with the customer details

Did you observe the difference in readability? It has become better, hasn't it? This might be a simple scenario, but in the real world we have to face complex scenarios, and then the difference becomes even clearer.

When a user with no understanding of the application looks at the above scenario, it is very easy for him or her to understand.

What is BDD?

BDD stands for Behavior Driven Development. It is a methodology that is followed while developing a system. Most of us relate BDD only to automated testing and think that implementing BDD means only writing GIVEN, WHEN and THEN, but it is more than that: as a whole, it is a methodology. As the name Behavior Driven Development suggests, it makes us focus on the behavior of the module being developed. A user story is broken down from the user's perspective, looking at how the user actually uses the feature, and then development kick-starts. In simple words, it focuses on WHY we are writing the code, WHAT the code achieves in terms of functionality, and how the end users benefit. Let's see how we can achieve this.

In the upcoming posts I plan to show a complex requirement and how it can be implemented using BDD, and also to cover advanced concepts of SpecFlow, depending on availability.

Readers will understand better if they have a basic understanding of OOP and some hands-on experience with Visual Studio.

How can we implement BDD?

I have personally seen many projects where the thought of implementing BDD kick-starts and then gets blocked in implementation. The reasons might be many; let's not dig into them, but focus instead on what it takes to implement BDD. It starts with the initial meeting, where the Product Owner or the business comes up with certain requirements and a discussion happens between the BAs and POs. When the user stories are clear to the BAs and they come to the team to discuss them, here comes the concept of BDD: the whole team sits together to break the user story down into user behaviors, and converts these behaviors into scenarios that can be easily understood by a normal user who has no functional understanding. These scenarios should exactly replicate user behavior.

Pre-requisites for implementing BDD with C#

Note – This article is mainly focused from a C# development perspective.

Visual Studio provides an extension called SpecFlow, which helps us write the scenarios in the Gherkin language, which is plain English text. Add SpecFlow to the project.


BDD workflow:

It starts with the feature file, where we write down the scenarios in Given, When and Then.

When we add this extension to the project, we get feature, step definition and other templates for implementing BDD. As we can see in the screenshot below, we have the feature file and step definition templates and the file extension for a feature.

After adding the feature, let's see how the template looks:

It starts with the Feature tag; this is the place where we define the exact functionality this feature achieves.

Then comes the Scenario: this is where we write a group of statements that produces some expected output. As we can see, the scenario is written in Given, When and Then. These three keywords are recognized by SpecFlow. Don't get confused by the AND keyword here, as it replicates the GIVEN before it: if there are multiple Givens, Whens or Thens, we simply put AND after the first Given, When or Then. Each statement, each Given, When or Then that we write, is called a step, and it is associated with code via a step definition, which we will discuss next.
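For reference, the default feature template that SpecFlow generates (addition of two numbers) looks roughly like this in Gherkin:

```gherkin
Feature: Addition
    In order to avoid silly mistakes
    As a math idiot
    I want to be told the sum of two numbers

Scenario: Add two numbers
    Given I have entered 50 into the calculator
    And I have entered 70 into the calculator
    When I press add
    Then the result should be 120 on the screen
```

Notice how the AND step simply continues the Given before it, exactly as described above.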

In this way we can write multiple scenarios, and each scenario should replicate user behavior. Keep an eye on the color of the steps. Right now they are pink; when we associate each with its step definition, the color changes. This is a great advantage of Visual Studio: the feature file does not give a compile-time error when a step is not associated with any step definition, so the only way to find an unbound step is through its color.

When we save the feature file, a feature.cs file is auto-generated. Best practice is not to add to or change the feature.cs file manually, as it is auto-generated and manual edits may lead to compile-time errors. Always save the feature file before committing, or the changes will not be reflected in the .cs file. The .cs file has information about the scenarios, the SpecFlow version used, test runner information, etc.

Here comes the beauty of this extension, and also the concept of how every step we write is associated with code. When you go to the definition of a step in the feature, you will see a pop-up. Let's do it for the first step, as shown below.

Copy this to the clipboard, and add a step definition file to the project. The naming convention followed for this file is FeatureName + Steps.cs; we have an Addition feature, so this would be AdditionSteps.cs. This helps readability, and if we have a big set of step definition files, we can easily find what we are looking for. Again, don't forget that we are implementing BDD, so we have to be particular about naming: every file should symbolize the functionality it achieves.

When we add it, we can see the default template that is associated with the default feature template for the addition of two numbers. I am adding the screenshot of the feature too, so that it is easy to relate. As discussed above, the color of the steps must change once they are associated with step definitions. Let's see it.

Before adding step definition

Add step definition file

After adding the step definition file, we can still see one step in pink; the reason is that we added it manually, in addition to the default template provided. So just go into the definition, copy the code and paste it into the step definition.


Now we can see each and every step is bound to a step definition.

Let's start with the Binding attribute on the class. This attribute links the feature to the step definition: when the code runs, it searches all class files that have the Binding attribute and then looks for the particular step. As we can see, each method is attributed with the exact step text that we have in the feature. This is important to note; you might miss it while writing features, so make sure you either copy the auto-generated code and paste it here or write it manually with extra attention. You will not get a compile-time error when a step has no matching step definition; it will fail at run time, and you might spend hours figuring out what went wrong.

Now let us come to the implementation shown in each method. The standard line ScenarioContext.Current.Pending() states that the current scenario context is pending implementation. Pay attention to the ScenarioContext class, as it becomes most important as we progress. Let us write some simple code and make this test pass.

As you can see, the scenario context is a kind of storage that lives while the scenario execution is in scope. You might wonder how the step we have written gets an input parameter in the step definition: every step we write is by default read as a string, and if the step contains anything other than a string, in this case an int, we get it as a parameter to our method. You can add a reference to your library and invoke the method under test from anywhere in the step definition, as per your need. Finally, we assert the expected and actual results from our logic.
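Putting this together for the addition feature, a step definition class might look like the sketch below. The Calculator class is a hypothetical stand-in for your library under test; the step text matches the default SpecFlow addition template.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

// Hypothetical class under test -- in a real project this would live
// in the production assembly, not in the test project.
public class Calculator
{
    public int Add(int first, int second) => first + second;
}

[Binding]
public class AdditionSteps
{
    [Given(@"I have entered (.*) into the calculator")]
    public void GivenIHaveEnteredIntoTheCalculator(int number)
    {
        // ScenarioContext acts as per-scenario storage, so each step
        // can share state with the steps that follow it.
        if (!ScenarioContext.Current.ContainsKey("First"))
            ScenarioContext.Current["First"] = number;
        else
            ScenarioContext.Current["Second"] = number;
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        var result = new Calculator().Add(
            (int)ScenarioContext.Current["First"],
            (int)ScenarioContext.Current["Second"]);
        ScenarioContext.Current["Result"] = result;
    }

    [Then(@"the result should be (.*) on the screen")]
    public void ThenTheResultShouldBeOnTheScreen(int expected)
    {
        // Finally, assert the expected against the actual result.
        Assert.AreEqual(expected, (int)ScenarioContext.Current["Result"]);
    }
}
```

Note how the `(.*)` groups in the attributes are what turn the raw step text into the typed int parameters discussed above.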

Use MSTest as the default test runner and check that the app.config says the same. You can write any kind of logic in the step definition file, but make sure it is not overloaded with lots of service calls or business-logic code. Practice suggests making step definitions static so they can be used across scenarios; they should only contain the logic to handle input and assert output, in addition to calls to other classes. Achieve this by adding common helper classes which perform, or call the classes that contain, the required business logic.

This was an introduction to BDD, and I hope you got some insight into it, as it is a beautiful way to implement functionality. In the upcoming post I would like to take a complex example and see how we can implement it.









Deploying Backbase 5.6.4+ application on Weblogic 12c

Recently we deployed a Backbase 5.6.4 application on Weblogic 12c. As part of our project we had to migrate to Backbase version 5.6.4 and also upgrade Weblogic from 11g to 12c. Backbase supports the Weblogic 11g application server, but as per the documentation for CXP 5.6.4 it does not support deployment to Weblogic 12c, and deployment steps for Weblogic 12c were not available for that version. We had to move to Weblogic 12c as it provides zero-downtime deployment, and the bank really wanted this upgrade! When we deployed Backbase 5.6.4 to 12c we ran into many jar conflicts and class loading issues.

We noticed that the Backbase wars like portalserver.war, created in the usual way, work in Tomcat but not in Weblogic 12c. This blog explains how we solved these conflicts and the steps needed to create Weblogic 12c specific wars that work.

Following are some stack traces of the class loading issues that came up.

Stacktrace 1
<Dec 8, 2017 2:38:12 PM IST> <Error> <Deployer> <BEA-149231> <Unable to set the activation state to true for the application ‘portalserver’.
at weblogic.servlet.internal.WebAppModule.startContexts(
at weblogic.servlet.internal.WebAppModule.start(
at weblogic.application.internal.flow.ModuleStateDriver$
at weblogic.application.utils.StateMachineDriver.nextState(
at weblogic.application.internal.flow.ModuleStateDriver.start(
Truncated. see log file for complete stacktrace
Caused By: javax.xml.bind.JAXBException: ClassCastException: attempting to cast zip:/domain/consumer_portal/servers/NewPortal_mod/tmp/_WL_user/portalserver/d16z7t/war/WEB-INF/lib/jaxb-api-2.2.6.jar!/javax/xml/bind/JAXBContext.class to jar:file:/middleware/jdk1.7.0_79/jre/lib/rt.jar!/javax/xml/bind/JAXBContext.class.  Please make sure that you are specifying the proper ClassLoader.
at javax.xml.bind.ContextFinder.handleClassCastException(
at javax.xml.bind.ContextFinder.newInstance(
at javax.xml.bind.ContextFinder.newInstance(
at javax.xml.bind.ContextFinder.find(
at javax.xml.bind.JAXBContext.newInstance(
Truncated. see log file for complete stacktrace
Stacktrace 2
Caused By: java.lang.ClassCastException: Cannot cast weblogic.wsee.jaxws.framework.policy.WSDLGeneratorExtension to
at java.lang.Class.cast(
at weblogic.servlet.internal.EventsManager$
at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(
at weblogic.servlet.internal.Event


Stacktrace 3
Caused By: java.util.ServiceConfigurationError: Provider com.ctc.wstx.stax.WstxInputFactory not a subtype
at java.util.ServiceLoader.access$300(
at java.util.ServiceLoader$LazyIterator.nextService(
at java.util.ServiceLoader$
at java.util.ServiceLoader$
at Method)
at org.hibernate.validator.internal.xml.XmlParserHelper.<init>(
at org.hibernate.validator.internal.xml.ValidationXmlParser.<init>(
at org.hibernate.validator.internal.engine.ConfigurationImpl.getBootstrapConfiguration(

Following are the steps we took to resolve the class loading issues and jar conflicts.

Step 1

We need to add the following weblogic.xml configuration at portalserver/…./WEB-INF/

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <container-descriptor>
        <prefer-web-inf-classes>true</prefer-web-inf-classes>
    </container-descriptor>
</weblogic-web-app>
With this configuration we tell Weblogic to prefer the WEB-INF classes from the portalserver war over the corresponding classes in the Weblogic lib directory.

Step 2

Create a Maven profile named weblogic in Backbaseproject/webapps/portalserver/pom.xml to exclude some jars from portalserver.war.



Weblogic 12c already ships with these jars, so we have to exclude them from the war; otherwise there will be jar conflicts and class loading issues while the server starts up. This bundling makes sure that we still have jaxws-rt-2.1.1.jar in portalserver.war.
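The profile can be sketched along these lines, using the maven-war-plugin's packagingExcludes. The jars listed are assumptions derived from the stack traces above (jaxb-api and the Woodstox StAX implementation); adjust the list to your own dependency tree:

```xml
<profile>
    <id>weblogic</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <configuration>
                    <!-- Assumed exclusion list, based on the class loading
                         errors seen at startup; verify against your war. -->
                    <packagingExcludes>
                        WEB-INF/lib/jaxb-api-*.jar,
                        WEB-INF/lib/woodstox-core-asl-*.jar
                    </packagingExcludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
```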


Using this weblogic profile, running mvn clean install -Pweblogic creates a war that excludes the jars mentioned in the snippet above. This war can be deployed to Weblogic 12c.


Step 3 
We also faced a jar conflict for a particular SOAP service, which started failing after go-live. The stack trace looked like this:
 Dec 07, 2017 1:24:02 AM fixQuotesAroundSoapAction
INFO: Received WS-I BP non-conformant Unquoted SoapAction HTTP header: processMessage
Dec 07, 2017 1:24:02 AM$HttpToolkit handle
After carefully observing the logs, we noticed that the Weblogic 12c webservice-related jars were causing the conflict whenever the SOAP service was invoked. We figured out the path of the jar inside Weblogic:
We removed this jar by renaming it with a .bak extension; the conflict was resolved and the SOAP service was working fine, as was the rest of the application. We had to resolve this at runtime, after we went live with the application.


Following the above steps, we could deploy a Backbase 5.6.4+ application on Weblogic 12c successfully. Weblogic 12c has many useful features, and I recommend upgrading to it for Backbase applications.

Why Code The SOLID Way

Coding is fun when we do it for fun, and software development resembles a game when we are fond of challenges. Nevertheless, there comes a time when it all relates to a business. These days, it's not only about innovation; it's equally about sustenance. When we talk about sustaining an idea at the low level, we are dealing with maintaining applications in a pattern of continuous evolution. A well-known fact is that 80% of the time and effort, which also means money, is spent on maintenance of a product. Let us try to understand this fact.

To start with, I visualized the following use case: say I purchase a bike for 1 lakh and spend 4 lakhs on its maintenance over the next 5 years. These numbers pushed me to dig into the cause of the high maintenance cost of software that has already been developed.

After a lot of research, I feel one of the major challenges for developers these days is Bad Code. Now the question is: what is Bad Code? Okay! Before going ahead with the concept of Bad Code, let's see the programming world from a top-level view.

By one approximate estimate, the number of programmers doubles every five years. That means we will perpetually have half the programmers in the world with little experience. There are many complex implications of this, the major one being callowness in code. And I quote here: "If your code just worked as per the business requirement, congratulations! The worst code is ready." A piece of code that completely satisfies the business logic may guarantee no quality in terms of freedom from rigidity and fragility, the two most common characteristics of bad code. Something is missing in such code. We do not write an email the way we chat, because we adhere to mailing etiquette; coding etiquette is what I am talking about here.

As I dug deeper, it led me to some of the basics of programming. Every object-oriented design talks about encapsulation, inheritance and polymorphism. But did these ideas really originate with OO design?

Absolutely not. They were in practice even before the OO concept surfaced. So what did OO bring? For me, object orientation is all about managing the desired dependencies selectively, so that we can avoid the bad coding characteristics of rigidity and fragility. Let's define these two terms: rigidity means the code is hard to change because a change in one place forces changes in many other places, and fragile code is code where a small change breaks things in other, often unrelated, places. These two, completely or partially, mark bad code.

If we understand this correctly, it is easy to digest why the estimate for a small change in functionality goes high, resulting in loss of time and money.

So how can we reduce the maintenance cost effectively? This is where I encountered the SOLID principles, proposed by Robert C. Martin, well known for his book 'Clean Code'. He came up with coding principles that suggest we write code in a way that helps us organize dependent modules and avoid bad-code characteristics like rigidity, fragility and immobility. As a result, we get more maintainable code for a product.

The five principles:

Single Responsibility principle

Open/Closed Principle

Liskov Substitution Principle

Interface Segregation Principle

Dependency Inversion Principle

These five principles tell us very clearly how to organize our OO design. Let's talk about how each of them helps us write better code.

Understanding SOLID principles

Single Responsibility Principle:

A class should have only one reason to change.

Each class or module should be very specific to its role. In figure 1.1, the EmployeeService class of a payroll application is doing multiple operations: getting employee information from the database, calculating reporting hours and generating the pay slip.

Fig 1.1 Code Illustration breaking Single Responsibility

So, we can implement single responsibility by putting the three operations in separate classes, as depicted in figure 1.2.

Different concerns from a business point of view should be decoupled, so that a change in one module does not impact the other modules.

Fig 1.2 Code Illustration following Single Responsibility
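The split in figures 1.1 and 1.2 can be sketched roughly like this; the class and method names are assumed, since the figures define the originals. Each class now has exactly one reason to change:

```csharp
using System;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Only data-access concerns live here.
public class EmployeeRepository
{
    public Employee GetEmployee(int id) =>
        new Employee { Id = id, Name = "Sample" }; // stand-in for a DB call
}

// Only reporting-hours calculation lives here.
public class ReportingService
{
    public int CalculateReportingHours(Employee employee) => 160;
}

// Only payslip generation lives here.
public class PayslipService
{
    public string GeneratePayslip(Employee employee) =>
        $"Payslip for {employee.Name}";
}
```

A change to how employees are loaded now touches only EmployeeRepository, leaving the other two classes untouched.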

Open/Closed Principle:

A class, once completed, should be open for extension but closed for modification.

It means we should be able to add to or modify the functionality of a module without changing the base module. Let's consider the same Employee scenario.

Fig 1.3 Code Illustration breaking Open/Closed Principle

Suppose the Employee class is completed and now a new type of employee joins, say a temporary employee or an intern. In this case we should not modify the Employee class (Fig 1.3); instead we should inherit from the Employee class and provide custom behavior in the new class (Fig 1.4).

Fig 1.4 Code Illustration following Open/Closed Principle
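A minimal sketch of the idea, with assumed names and an assumed bonus rule purely for illustration:

```csharp
using System;

// The base class is closed for modification...
public class Employee
{
    public string Name { get; set; }
    public virtual decimal CalculateBonus(decimal salary) => salary * 0.10m;
}

// ...but open for extension: a new employee type extends it
// instead of editing the completed base class.
public class TemporaryEmployee : Employee
{
    public override decimal CalculateBonus(decimal salary) => salary * 0.05m;
}
```

When an intern type arrives later, we add another subclass; Employee itself never changes.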

Liskov Substitution principle:

Derived classes must be usable through the base class interface without the need for the user to know the difference.

For modules to adhere to LSP, the input parameters of any method in a derived class should be contravariant. Contravariance allows us to use a more generic type where a more derived type is specified.

Say in the above example, we refactor the code further and made derived classes like below:

Fig 1.5 Code Illustration for LSP

We have a method in the SalaryService class to calculate the salary of an employee, with the input parameter type specified as PermanentEmployee.


Fig 1.6 Code Illustration breaking Liskov Substitution principle

If at any later point the company announces that temporary employees will also get a salary, we will need another method to calculate salary for temporary employees. This is not a very generic solution.

If we apply LSP here, our SalaryService method should stay the same, and the input parameter type of the GeneratePayslip method should be the base class, so that GeneratePayslip can be called with any child class of the Employee base class.

Fig 1.7 Code Illustration following Liskov Substitution principle
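The LSP-friendly version from figure 1.7 might look like this sketch (names assumed): the method accepts the base type, so any subtype can be substituted without the caller knowing the difference.

```csharp
using System;

public class Employee
{
    public string Name { get; set; }
}

public class PermanentEmployee : Employee { }
public class TemporaryEmployee : Employee { }

public class SalaryService
{
    // Takes the base type, so permanent and temporary employees
    // (and any future subtype) can all be passed in.
    public string GeneratePayslip(Employee employee) =>
        $"Payslip generated for {employee.Name}";
}
```

The same method now serves both employee kinds, so adding salary for temporary employees needs no new overload.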

Interface Segregation Principle:

Clients should not be forced to depend upon interfaces that they don’t use.

This means that a class should not inherit from unrelated interfaces that it may not need. We can have explicit interfaces for different classes, so that each class inherits only the required interfaces and implements them.

In the above example, suppose I define an interface IEmployeeService with some methods, as depicted below:

Fig 1.8 Code Illustration breaking Interface Segregation principle

Now each class inheriting the IEmployeeService interface must implement all three methods.

A class EmployeeService, which deals with CRUD operations on Employee, may not be interested in an operation like GeneratePayslip, and a Salary class, which implements GeneratePayslip, may not be interested in dealing with adding and editing operations.

As per ISP, interfaces should be segregated on the basis of the classes that will implement them; unrelated operations should be placed in separate interfaces.

Fig 1.9 Code Illustration following Interface Segregation principle
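The segregation in figure 1.9 can be sketched roughly like this, with assumed member signatures; each client now depends only on the interface it actually uses:

```csharp
using System;

// CRUD operations in one interface...
public interface IEmployeeService
{
    void AddEmployee(string name);
    void EditEmployee(int id, string name);
}

// ...payslip generation in another.
public interface IPayslipService
{
    string GeneratePayslip(int employeeId);
}

public class EmployeeService : IEmployeeService
{
    public void AddEmployee(string name) { /* CRUD only */ }
    public void EditEmployee(int id, string name) { /* CRUD only */ }
}

public class SalaryService : IPayslipService
{
    public string GeneratePayslip(int employeeId) =>
        $"Payslip #{employeeId}";
}
```

Neither class is forced to carry empty implementations of methods it never needed.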

Dependency Inversion Principle:

Two points that it covers are:

  • High-level modules should not depend on low-level modules. Both should depend on abstractions.
  • Abstractions should not depend on details. Details should depend on abstractions.

In a general flow, we should not keep classes coupled in such a way that a change in low-level modules (like business-logic implementations) leads to recompilation of the high-level modules (like controller classes).

In the code snippet below, the SalaryService depends directly on the EmployeeService class, which contains the employee information.

Fig 1.10 Code Illustration breaking Dependency inversion principle

Instead, we should have a mechanism in place where the dependent modules are injected. We can achieve dependency injection through constructor injection or parameter injection, or implement any other inversion-of-control pattern to adhere to DIP. The code below is an example of constructor injection (Fig 1.11).

Fig 1.11 Code Illustration following Dependency inversion principle
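Constructor injection along the lines of figure 1.11 can be sketched like this (names assumed): both layers depend on the abstraction, and the concrete dependency is supplied from outside.

```csharp
using System;

// The abstraction both the high-level and low-level modules depend on.
public interface IEmployeeService
{
    string GetEmployeeName(int id);
}

// Low-level detail, depending on the abstraction.
public class EmployeeService : IEmployeeService
{
    public string GetEmployeeName(int id) => "Sample";
}

// High-level module: it knows only the interface, not the concrete class.
public class SalaryService
{
    private readonly IEmployeeService _employeeService;

    // Constructor injection: the dependency is handed in from outside,
    // typically by an IoC container.
    public SalaryService(IEmployeeService employeeService)
    {
        _employeeService = employeeService;
    }

    public string GeneratePayslip(int id) =>
        $"Payslip for {_employeeService.GetEmployeeName(id)}";
}
```

Swapping EmployeeService for a test double or a new implementation no longer forces any change to SalaryService.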

SOLID principles help us manage dependencies and structure modules in such a way that we develop code keeping in mind the readability, usability, maintainability and portability of the product. Just as a morning walk, a little extra effort in daily life, helps keep us away from doctors, practicing these principles when we code helps the product survive in the long run.


Agile in Marketing- Making it inclusive

Agile is a methodology which helps a team work together in sync and achieve a common goal. It enables the team to achieve quality output and faster time to market. At Xebia, we encourage its adoption not only in product development teams but across the entire organization, including IT, Training, HR and Marketing.

Over the last few years, I have worked with several project teams and seen the implementation of the Scrum framework from close quarters. Last year I got a chance to work with the Marketing team, and there too I have seen a few Scrum ceremonies implemented successfully. In this post, I would like to share my experience and throw light on how support teams can successfully implement this methodology in their system.

What motivated us to move to Scrum

The Marketing team's setup is entirely different from that of the Scrum product development teams: like two ends of a spectrum which are part of the same system but have very different wavelengths. The Marketing team's work is very dynamic; they have to flip-flop between tasks frequently, sometimes in a very ad-hoc manner. Most team members work independently, in silos, with each member owning his or her own tasks. In order to improve team collaboration and help each other better, we decided to gravitate towards Scrum. But the first question that came up was:

How can we start using it, and will it help Marketing?

We have a big team of Agile coaches at Xebia, so we roped them in for expert advice on implementing the Daily Scrum ceremony.

First, all of us listed out the goals we wanted to achieve:

  1. Improve productivity
  2. Measure the output
  3. Faster time to market
  4. Improve team communication and collaboration

We prioritized these goals so that we could allocate our time and effort towards them in a more focused manner. Keeping these goals firmly entrenched in our minds, we moved ahead and prepared our Agile board.

Our Agile Board

Implementing Scrum in Marketing was not as easy as in Scrum product teams. In product teams you have a defined backlog from which you pick user stories for each sprint on a priority basis, whereas in Marketing you cannot have a defined backlog, as tasks are often allocated on a need basis. The challenges confronting us were different. While preparing our board, we kept the following points in mind:

  1. The team could not have a defined backlog, as tasks keep coming in on an ad-hoc basis.
  2. Team members depend on stakeholders and SMEs for certain tasks.
  3. Many times, the team receives high-priority unplanned tasks.
  4. What should be the Definition of “Done”?

After several rounds of discussion, in true agile flavour, we came up with the first version of our board, which we sought to refine further in the coming months.

  • We kept our sprint cycle short, just a week, because of the ad-hoc nature of the tasks. The short sprint cycle resulted in shorter feedback loops, so at the end of the week we could see the results and take corrective measures if things were not moving in the right direction.
  • We created two boards: (a) a weekly Agile board, which had all the tasks the team would be performing in the week, and (b) an Epic board, which showed the bigger picture of our work, broken down into smaller tasks that moved onto our weekly board. Instead of User Stories, we call these smaller pieces of work Tasks.
  • Every week we plan and come up with a backlog based on task priority, so all tasks, whether a social media campaign or post, articles, the newsletter, or website work, are now driven through the backlog. In the coming months, we are planning to create horizontal swim lanes for different classes of work, for example a lane for social media campaigns, another for the newsletter and so on. This will help us measure the cycle time at the end of each sprint.
  • If any unplanned task comes up during the week, its priority is discussed and decided by our Product Owner. We have a column for unplanned tasks in order to measure the number of planned tasks that were skipped during the week.
  • The Definition of “Done” is defined differently for different tasks. In order to accommodate different classes of work, we have introduced columns such as Under Review, Waiting For, Blocker and Impediment. These let us move tasks along when the work is assigned to, or pending from, other teams. For example, if we are developing a case study, it goes through several phases: requirement gathering, content development, image creation, review, feedback incorporation and final publication. To track progress better, we move it through the columns sequentially, Under Review, Waiting For, and finally Done. This may not be the same for all tasks; some undergo this cycle while others can be marked Done directly. So the Definition of Done differs from task to task.

Our Daily Stand-up

Our whole team got familiar with the concept of the daily scrum: every morning we start our day with it. The entire team gathers in a room, and each member shares the status of yesterday's tasks, what he or she is going to take up today, and any impediments. We move the sticky notes from one column to another depending on the current status of each task. As a practice, we put each team member's name on the sticky notes so that everyone is aware of each other's tasks. Initially it was difficult for each of us to share our work, list our tasks and be transparent, but soon all of us got accustomed to it.

What we achieved

  • Agile board facilitated visual management and made the communication flow fast.
  • It helped the team in organizing its work in a more coordinated manner which resulted in better collaboration and greater transparency.

Overall, the team is happy, and we all noticed some remarkable improvements in the way we work and in our output. It has injected a new sense of enthusiasm into our system and motivated us to improve further, thereby improving our throughput and productivity.


After a few months we recognized that instead of Scrum we are actually doing Proto-Kanban. We have not implemented an end-to-end pull system, but we are in a process where this partial implementation will mature and evolve towards a true Kanban system.

Next steps

In the coming months, we are planning to hold Retrospective meetings and also think of ways to predict our output.

Stay tuned… in our next post we will share more insights on measurement of our work output and our Kanban journey.

Spring Security and OAUTH2 with Azure Active Directory

Azure Active Directory (Azure AD) uses OAuth 2.0 to enable you to authorize access to web applications and web APIs in your Azure AD tenant. In Azure AD, a tenant represents an organization: it is a dedicated instance of the Azure AD service that an organization receives and owns when it signs up for a Microsoft cloud service such as Azure.

Step-1: To implement OAuth2 with Azure AD, you must first get a tenant on Azure AD.

How to get an Azure Active Directory tenant

Step-2: Register the new app with Azure AD

To set up the app to authenticate users, first register it in your tenant by doing the following:

  1. Sign in to the Azure portal.
  2. On the top bar, click your account name. Under the Directory list, select the Active Directory tenant where you want to register the app.
  3. Click More Services in the left pane, and then select Azure Active Directory.
  4. Click App registrations, and then select Add.
  5. Follow the prompts to create a Web Application and/or Web API.
  6. After you’ve completed the registration, Azure AD assigns the app a unique application ID. Copy the value from the app page to use in the next sections.
  7. From the Settings -> Properties page for your application, update the App ID URI. The App ID URI is a unique identifier for the app. The naming convention is https://<tenant-domain>/<app-name> (for example, http://localhost:8080/ResourceApp/).
  8. To get the Tenant ID, click App registrations, and then select Endpoints.

When you are in the portal for the app, create and copy a key for the app on the Settings page. You’ll need the key shortly.

Step-3: Take a new or existing application in which you want to secure your REST endpoints with an OAuth2 token.

Here I am using a Spring Boot app for demonstration purposes.

Add the following dependencies in your pom.xml.
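Assuming a Spring Boot 1.x application with the classic spring-security-oauth2 stack (the stack the configuration classes below belong to), a minimal dependency set would be:

```xml
<!-- Assumed dependencies; versions are managed by the Spring Boot parent -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.security.oauth</groupId>
    <artifactId>spring-security-oauth2</artifactId>
</dependency>
```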


Add the following properties in application.properties.

security.oauth2.client.userAuthorizationUri= ceddc98e-d9e8-4242-bd71-8a08a48/oauth2/authorize

Client Id is obtained as Application ID Object under App Registration

To Get Client Secret Click Azure Active Directory–>App Registration–>App–>Settings–>Keys

To Get Tenant ID Click Azure Active Directory–>Properties

Directory ID is Tenant ID so update it

Now Click Azure Active Directory–>App Registrations–>End Points

Update below for property security.oauth2.client.accessTokenUri =

Update below for property security.oauth2.client.userAuthorizationUri =


And leave the other properties as they are.
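Putting the pieces above together, the properties end up looking roughly like this (the endpoint formats follow the Azure AD v1 conventions; all IDs are placeholders, and the exact property set in the original post may have differed):

```properties
security.oauth2.client.clientId=<application-id from App registrations>
security.oauth2.client.clientSecret=<key created under Settings -> Keys>
security.oauth2.client.accessTokenUri=https://login.microsoftonline.com/<tenant-id>/oauth2/token
security.oauth2.client.userAuthorizationUri=https://login.microsoftonline.com/<tenant-id>/oauth2/authorize
security.oauth2.resource.userInfoUri=https://login.microsoftonline.com/<tenant-id>/openid/userinfo
```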

Now add the following configuration classes to your security configuration package.

By using @EnableGlobalMethodSecurity we can easily secure our methods with Java configuration.

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, proxyTargetClass = true)
public class MethodSecurityConfig extends GlobalMethodSecurityConfiguration {

    @Override
    protected MethodSecurityExpressionHandler createExpressionHandler() {
        return new OAuth2MethodSecurityExpressionHandler();
    }
}

By using @EnableResourceServer we configure the application as a resource server. ResourceServerConfigurerAdapter provides the hooks for the Spring Security configuration concerns.

@Configuration
@EnableResourceServer
public class OAuth2Config extends ResourceServerConfigurerAdapter {

    private TokenExtractor tokenExtractor = new BearerTokenExtractor();

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.addFilterAfter(new OncePerRequestFilter() {
            @Override
            protected void doFilterInternal(HttpServletRequest request,
                                            HttpServletResponse response, FilterChain filterChain)
                    throws ServletException, IOException {
                // We don't want to allow access to a resource with no token so clear
                // the security context in case it is actually an OAuth2Authentication
                if (tokenExtractor.extract(request) == null) {
                    SecurityContextHolder.clearContext();
                }
                filterChain.doFilter(request, response);
            }
        }, AbstractPreAuthenticatedProcessingFilter.class);
        http.authorizeRequests().antMatchers(
                "/access_token", "/refresh/access_token", "/auth_server/config",
                "/index", "/", "/index**", "/resources/**", "/css/**", "/welcome**",
                "/fonts/**", "/icons/**", "/js/**", "/libs/**", "/img/**").permitAll()
                .anyRequest().authenticated();
    }

    @Bean
    public UserInfoTokenServices remoteTokenServices(final @Value("${security.oauth2.resource.userInfoUri}") String checkTokenUrl,
                                                     final @Value("${security.oauth2.client.clientId}") String clientId) {
        return new UserInfoTokenServices(checkTokenUrl, clientId);
    }
}

A resource server must provide a bean for ResourceServerTokenServices. UserInfoTokenServices serves that purpose here, so that requests reaching the resource server are validated against the authorization server.

Now, in our application there will be two types of resources:

    • Unsecured Resources
    • Secured Resources


Unsecured resources are those which can be accessed without an access token. They must be listed in OAuth2Config’s configure method, otherwise they are treated as secured resources. The AuthServer resources should be unsecured, since they are what the client uses to retrieve an access token.

@RestController
public class AuthServer {

    // Endpoint paths taken from the unsecured-resources list configured above
    private final TokenService tokenService;

    public AuthServer(TokenService tokenService) {
        this.tokenService = tokenService;
    }

    @GetMapping("/auth_server/config")
    public AuthServerResponse configurations() {
        return tokenService.getServerDetails();
    }

    @PostMapping("/access_token")
    public AuthenticationResult authorizeToken(@RequestBody @Valid AuthorizationRequest authorizationCode) throws Exception {
        return tokenService.getAccessTokenFromAuthorizationCode(authorizationCode.getCode(), authorizationCode.getRedirectUri());
    }

    @PostMapping("/refresh/access_token")
    public AuthenticationResult refreshToken(@RequestBody @Valid AuthorizationRequest authorizationCode) throws Exception {
        return tokenService.getAccessTokenFromRefreshToken(authorizationCode.getRefreshToken(), authorizationCode.getRedirectUri());
    }
}


AuthServerResponse is the response of the "/auth_server/config" API, providing the client-id, tenant, and URL that the client will use to get an authorization code.

public class AuthServerResponse {

    private String clientId;
    private String tenantId;
    private String authority;

    public AuthServerResponse(String clientId, String tenantId, String authority) {
        this.clientId = clientId;
        this.tenantId = tenantId;
        this.authority = authority;
    }

    public String getClientId() {
        return clientId;
    }

    public void setClientId(String clientId) {
        this.clientId = clientId;
    }

    public String getTenantId() {
        return tenantId;
    }

    public void setTenantId(String tenantId) {
        this.tenantId = tenantId;
    }

    public String getAuthority() {
        return authority;
    }

    public void setAuthority(String authority) {
        this.authority = authority;
    }
}

Token Service

public interface TokenService {

    AuthenticationResult getAccessTokenFromAuthorizationCode(String authorizationCode, String redirectUri) throws Exception;

    AuthenticationResult getAccessTokenFromRefreshToken(String refreshToken, String redirectUri);

    AuthServerResponse getServerDetails();
}


Default Token Service

@Service
public class DefaultTokenService implements TokenService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DefaultTokenService.class);

    private TokenGenerator tokenGenerator;

    DefaultTokenService(TokenGenerator tokenGenerator) {
        this.tokenGenerator = tokenGenerator;
    }

    @Override
    public AuthenticationResult getAccessTokenFromAuthorizationCode(String authorizationCode, String redirectUri) throws Exception {
        AuthorizationCode request = new AuthorizationCode(authorizationCode);
        try {
            return tokenGenerator.getAccessToken(request, redirectUri);
        } catch (Throwable throwable) {
            return throwException(throwable);
        }
    }

    @Override
    public AuthenticationResult getAccessTokenFromRefreshToken(String refreshToken, String redirectUri) {
        try {
            return tokenGenerator.getAccessTokenFromRefreshToken(refreshToken, redirectUri);
        } catch (Throwable throwable) {
            return throwException(throwable);
        }
    }

    @Override
    public AuthServerResponse getServerDetails() {
        return tokenGenerator.getServerDetails();
    }

    private AuthenticationResult throwException(Throwable throwable) {
        LOGGER.error("Failed to retrieve access token", throwable);
        throw new TokenGenerationException("User's access could not be retrieved from the authentication server");
    }
}
@Component
public class TokenGenerator {

    // These fields are populated from the security.oauth2.* configuration
    // (e.g. via @Value); the exact property bindings were lost in this copy of the post.
    private String clientId;

    private String clientSecret;

    private String tenant;

    private String authority;

    private String resource;

    public AuthenticationResult getAccessToken(
            AuthorizationCode authorizationCode, String currentUri)
            throws Throwable {
        String authCode = authorizationCode.getValue();
        ClientCredential credential = new ClientCredential(clientId, clientSecret);
        AuthenticationContext context = null;
        AuthenticationResult result = null;
        ExecutorService service = null;
        try {
            service = Executors.newFixedThreadPool(1);
            context = new AuthenticationContext(authority + tenant + "/", true, service);
            Future<AuthenticationResult> future = context
                    .acquireTokenByAuthorizationCode(authCode, new URI(
                            currentUri), credential, resource, null);
            result = future.get();
        } catch (ExecutionException e) {
            throw e.getCause();
        } finally {
            if (service != null) {
                service.shutdown();
            }
        }
        if (result == null) {
            throw new ServiceUnavailableException(
                    "authentication result was null");
        }
        return result;
    }

    private AuthenticationResult getAccessTokenFromClientCredentials()
            throws Throwable {
        AuthenticationContext context = null;
        AuthenticationResult result = null;
        ExecutorService service = null;
        try {
            service = Executors.newFixedThreadPool(1);
            context = new AuthenticationContext(authority + tenant + "/", true, service);
            // resource value elided in the original post
            Future<AuthenticationResult> future = context.acquireToken(
                    "", new ClientCredential(clientId,
                            clientSecret), null);
            result = future.get();
        } catch (ExecutionException e) {
            throw e.getCause();
        } finally {
            if (service != null) {
                service.shutdown();
            }
        }
        if (result == null) {
            throw new ServiceUnavailableException(
                    "authentication result was null");
        }
        return result;
    }

    public AuthenticationResult getAccessTokenFromRefreshToken(
            String refreshToken, String currentUri) throws Throwable {
        AuthenticationContext context = null;
        AuthenticationResult result = null;
        ExecutorService service = null;
        try {
            service = Executors.newFixedThreadPool(1);
            context = new AuthenticationContext(authority + tenant + "/", true, service);
            Future<AuthenticationResult> future = context
                    .acquireTokenByRefreshToken(refreshToken,
                            new ClientCredential(clientId, clientSecret), null,
                            null);
            result = future.get();
        } catch (ExecutionException e) {
            throw e.getCause();
        } finally {
            if (service != null) {
                service.shutdown();
            }
        }
        if (result == null) {
            throw new ServiceUnavailableException(
                    "authentication result was null");
        }
        return result;
    }

    public AuthServerResponse getServerDetails() {
        return new AuthServerResponse(clientId, tenant, authority);
    }
}



public class AuthorizationRequest {

    String code;
    String redirectUri;
    String refreshToken;

    public String getCode() {
        return code;
    }

    public void setCode(String code) {
        this.code = code;
    }

    public String getRedirectUri() {
        return redirectUri;
    }

    public void setRedirectUri(String redirectUri) {
        this.redirectUri = redirectUri;
    }

    public String getRefreshToken() {
        return refreshToken;
    }

    public void setRefreshToken(String refreshToken) {
        this.refreshToken = refreshToken;
    }
}
public class TokenGenerationException extends RuntimeException {

    public TokenGenerationException(String message) {
        super(message);
    }
}

public class UserAccessFailedException extends RuntimeException {

    public UserAccessFailedException(String message) {
        super(message);
    }
}

Secured resources are those which require a valid access token authorized by the authorization server.

We have user resources which will be secured and protected from unauthorized access in the application; for example, the user's email is retrieved only if the user is authenticated.

@RestController
public class UserResource {

    // Path as referenced later in the post (the "user/email" API)
    @GetMapping("/user/email")
    public String email() {
        OAuth2Authentication authentication = (OAuth2Authentication) SecurityContextHolder.getContext().getAuthentication();
        UsernamePasswordAuthenticationToken usernamePasswordAuthenticationToken =
                (UsernamePasswordAuthenticationToken) authentication.getUserAuthentication();
        return (String) ((Map) usernamePasswordAuthenticationToken.getDetails())
                .getOrDefault("userPrincipalName", "Result Not Found");
    }
}


And finally, a main class to bootstrap our application as a Spring Boot application.

@SpringBootApplication
public class AzureOauth2Application {

    public static void main(String[] args) {
        SpringApplication.run(AzureOauth2Application.class, args);
    }
}

Now, running the above code, if the user resource is accessed without providing an access token, the server responds with an Unauthorized error, status 401.

To Obtain Access Token we shall use Authorization Code Grant Flow:

First of all, invoke the GET URL http://localhost:8080/auth_server/config

which will generate the following response:

{
      "clientId": "06a8ddb3-79b1-4826-a61c",
      "tenantId": "ceddc98e-d9e8-4242-bd71",
      "authority": ""
}

From the above response, construct a URL which will look as follows:
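As an illustration, the authorize URL can be assembled from the config-response fields like this (a minimal sketch: the class and method names are mine, and in real code the redirect_uri should be URL-encoded):

```java
// Sketch: assembling the Azure AD (v1) authorize URL from the
// /auth_server/config response fields (clientId, tenantId, authority).
public class AuthorizeUrlBuilder {

    static String build(String authority, String tenantId, String clientId, String redirectUri) {
        // authority is e.g. "https://login.microsoftonline.com/"
        return authority + tenantId + "/oauth2/authorize"
                + "?client_id=" + clientId
                + "&response_type=code"
                + "&redirect_uri=" + redirectUri;
    }

    public static void main(String[] args) {
        // Placeholder values; substitute the real ones from the config response
        System.out.println(build("https://login.microsoftonline.com/",
                "<tenant-id>", "<client-id>", "http://localhost:8080/"));
    }
}
```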

Note: Make sure the redirect_uri is registered as a Reply URL in Azure Active Directory.

After you hit the above URL, once the user has successfully authenticated on the Azure authorization server, the request is redirected to the redirect_uri with a code query parameter.

This code is a very short-lived token and is required in the subsequent request to the Azure server.

Get access token using authorization code:

Invoke the API http://localhost:8080/access_token to generate an access token, as follows:
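The request payload is not shown in this copy of the post; given the AuthorizationRequest class above, the POST body would be along these lines (values are placeholders):

```json
{
  "code": "<authorization code received on the redirect>",
  "redirectUri": "<the same redirect_uri used in the authorize request>"
}
```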

The response to the above API call looks like:

{
      "idToken": "IsInVuaXF1ZV9uYW1lIjoiY2t1bWFyLnhlYmlhQEZMWVNQSUNFLkNPTSIsInVwbiI6ImNrdW1hci54ZWJpYUBGTFlTUElDRS5DT00iLCJ2ZXIiOiIxLjAifQ.",
      "accessToken": "aI6u72PhZzyA5cFya7XT43CNLfkdFCsliDQu_Q4pRS59LHYZafZRSRrIpfRanex-OYLVl-sEu7rQhFRAUk56BKlKbjzkLx7olmQ6yL2hxq4jSA",
      "refreshToken": "PHbJnNvhSNQjdLpJnso3lNy_JcaU4m1UACPmAAhlzdpXjYXtQV66vH1vPu2KZNLYxrkysVseENMTAaI4tHmDUPBnZVJ829HqPJQ91SGG5XYB_IAA"
}

Now, using the access_token obtained in the previous request, we can hit our secured resources with an Authorization bearer header as follows, and the request succeeds where earlier it returned Unauthorized Access.
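The request itself is not shown here; as a sketch, calling the secured user/email resource (the path used later in this post) would look like:

```http
GET /user/email HTTP/1.1
Host: localhost:8080
Authorization: Bearer <access_token>
```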

Get access token using refresh token:

Once the user is already authenticated and the access token has expired, a new access token can be retrieved with the refresh token via the following API.
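Again, the payload is not shown in this copy; based on the AuthorizationRequest fields, the body for the refresh call would be roughly:

```json
{
  "refreshToken": "<refresh token from the earlier token response>",
  "redirectUri": "<registered redirect URI>"
}
```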

The response of this API is the same as that of the access_token API. The new access_token obtained can then be used to request secured resources, the same way the user/email API is accessed.

This is how your application can use Azure Active Directory users to sign in to your app without keeping a local user database and credentials.

Source code is available here


Getting started with Hyperledger Fabric and Allied Tools

Since Bitcoin went past the $10K mark sometime last week, there’s a huge buzz in the media about the growth story of Bitcoin. At Xebia, we’ve been keen followers of Bitcoin for a while. As technologists, beyond the buzz what interests us immensely is the underlying Blockchain technology that makes Bitcoin tick. We see tremendous potential in Blockchain as a solution for varied domains such as public services, healthcare, financial services, and trade, to name a few.

What is Blockchain?

HBR defines Blockchain as “an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way”. For an even clearer explanation of the Blockchain, you could refer to the plain English explanation of blockchain.


Hyperledger is the Linux Foundation’s umbrella project under which several Blockchain related projects such as Hyperledger Fabric, Hyperledger Sawtooth, Hyperledger Cello, Hyperledger Explorer, etc. are being incubated. We recently started exploring a few of these projects & have a Hyperledger Fabric based blockchain set-up locally for experimentation.

Building Hyperledger Fabric

Our dev environments are standard (old) Intel i5s, with 4 cores and 16GB RAM, running Ubuntu 14.04. There are several pre-requisites for building hyperledger, such as specific versions of docker, docker-compose, go, etc., as mentioned here in the dev environment set-up doc. Please ensure that each of these is done correctly, without any errors, before proceeding further.

Go Docker

The recommended way to get started with Hyperledger is via Docker containers. Pre-built containers are readily available for download from dockerhub. The install binaries & docker images script located here downloads the necessary platform specific binaries & docker images to the local system.

Note: An alternative to the Docker-based set-up is to check out the fabric code locally & build it. We ran into a few flaky tests that would fail the build intermittently, though we were able to build by skipping tests. Getting to a clean build with tests remains a future goal for us.

Once Fabric was set up, we started evaluating other developer tooling projects being incubated under the Hyperledger umbrella. We went on to install Hyperledger Composer, Hyperledger Explorer & Hyperledger Cello. Hyperledger Indy is also on our radar for secure identity management in the future.

Hyperledger Composer

Hyperledger Composer is a framework for developing Blockchain applications on top of the Hyperledger Fabric. Composer basically makes it very simple to get started with building Blockchain based applications. They even let you try composer online on their site.

As a first cut on the installation we went with the composer playground-local set-up. The set-up was breezy. We got started with the playground tutorial on our local, & had the trader network set-up very soon.

Persisting Networks Across Restarts of Hyperledger Fabric & Composer

One of the issues that we ran into was that our blockchain applications did not seem to survive playground or machine restarts. A look at the code made it clear that this was not a playground issue, rather the expected behaviour with the local set-up.

The playground-local script delegates to a fabric start-up script with an explicit docker-compose down instruction. This ensures that each start of composer playground local does a fresh, clean start of the blockchain fabric docker containers.

An alternative to playground local is to do the complete composer development environment set-up. The key difference is that the complete dev set-up has a separate, standalone hyperledger fabric installation running from the fabric-tools folder. However, the same docker-compose down command is still there in the fabric script, & we had to make a few hacks to the script (there may be better ways):

*  docker-compose down changed to stop:

ARCH=$ARCH docker-compose -f "${DIR}"/composer/docker-compose.yml stop

* Commented out the instructions to create & join composer channels (do this only after the composer application has been started once, since the channels don’t exist initially):

# Create the channel

#docker exec peer channel create -o -c composerchannel -f /etc/hyperledger/configtx/composer-channel.tx

# Join to the channel.

#docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/" peer channel join -b composerchannel.block

We gave the playground tutorial another run & found everything to be working perfectly. Next we wanted to view the blocks that we had just created, & thus reached the next tool in our set, the Hyperledger Explorer.

Hyperledger Explorer

Hyperledger Explorer provides web-based access on top of the Hyperledger Fabric blockchain. This was just what we needed to see the blocks we had just created via Hyperledger Composer.

The installation instructions for Blockchain explorer include a mysql db installation & import, followed by an npm install of the codebase, as given here. As instructed, we updated the config.json with our mysql db credentials, changed the port to 9080 (default port 8080 was already in use by composer) & started explorer.

Even though the explorer ui was accessible on http://localhost:9080, we were unable to see details of any of the blocks created via composer. We needed to make two additional changes to the config.json file to finally get this working:

* Channel name

“channelsList”: [“composerchannel”],

* Disable tls


Note 1: Even though disabling tls made this work for now, this needs to be revisited for a more real-world, production-grade set-up in the future.

Note 2: In trying to figure out the channel names being used by composer, we found that composer currently lacks multi-channel support.

At this point, we were able to perform a transaction on composer that made it into the fabric blockchain, which would then show up on the hyperledger explorer in real-time. This gave us a good end-to-end view of our own hyperledger based blockchain applications in action.

So far we had been running our docker containers locally, & as a next step wanted to experiment with a distributed set-up. This brought us to Hyperledger Cello.

Hyperledger Cello

Hyperledger Cello makes it easy to provision & manage Hyperledger Fabric (& potentially other) blockchain networks. Cello has a master-slave (master nodes & worker nodes) architecture, where the master manages the blockchains running on the worker nodes. The web dashboard & REST APIs are also provided on the master node.

There are two parts to the Cello installation. Installation on the master node requires code to be downloaded & installed via:

$ make setup-master

& to start services on the master:

$ make start

Cello Master Port Conflict Issue:

We ran into several port conflicts, since the cello services by default make use of ports 80 & 8080, which were already taken up by composer, etc. To fix this we changed the docker-compose.yml file being used by cello:

* Cello REST Service exposed port (changed to port 50):

  - "50"

* Nginx port mapping (changed to 50:50 & 5080:5080):

  - "50:50"
  - "5080:5080"

* User Dashboard port mapping (changed to 5081:5080):

  - "5081:5080"

With those changes all our services were now up.

Next we did the specified installation for the cello worker nodes. These included adding a few params (allow-cors & ulimit) on the docker daemon, & making some firewall settings on the nodes.

With that we were able to hit the cello dashboard on the master node on the (configured) port 5080, add nodes, view details, etc.

We have several other scenarios to try out on cello, fabric, & the other hyperledger components, cutting across OS & container types, as well as evaluating their management & ops capabilities. We hope to keep sharing details as we go along. Till then, happy blockchaining!

A new way of writing components: Compound Components

You might already have used many patterns/techniques while developing applications with React; today we are going to scratch the surface of one such technique, called Compound Components.

Why Compound Component?

Compound Components allow the developer to take control over the rendering behavior, i.e. the ability to decide in what order the components should render. They also spare the developer from passing tons of configuration as props.

Giving more rendering control to the developer:
the main purpose of creating this Compound Component is not to tie rendering to the Tabular Component, but instead to put the developer in charge of it.

Compound Components are a great pattern that has proven to be very valuable for several React libraries. In this post, we will discuss how to use Compound Components to keep your React application tidy, well-structured and easy to maintain.

This post assumes basic knowledge of

  1. ES6
  2. React Context

What is Context?

Context is a great feature introduced in React and is very useful when you want to expose APIs, so applications using your component can make use of those APIs. Context is also used to pass data deep into the component tree without intermediate components needing to know about it. Context provides an abstraction, handling the dirty work in one place, so the component using it does not need to know how it’s done.

Context provides great flexibility to Compound Components. With the help of context, the developer gets more control over how he wants to render the components. We will see the benefits of Context throughout this post.

You can learn more about context here–

Use Case
1. As a user, I want to display the information in a tabular form.
2. As a user, I want to have search functionality built in.
3. As a user, I want to have control over which fields the search should work on.
4. Whatever keywords the user enters, filter the records based on them, and highlight those keywords.

Let’s dive in!

So our first task is to create a UI which will render data in tabular form. Let’s create a file named table.js and set up the foundation.



So what is happening in this file? We have created a parent component which will be responsible for passing props to child components. How? We will see that in a minute.

Now let’s define the data that we are going to pass as Props to our Tabular Component

const columns = [
  {
    displayName: "Id",
    sortable: true,
    searchable: true
  },
  {
    displayName: "FirstName",
    sortable: true,
    searchable: true
  },
  {
    displayName: "LastName",
    sortable: true,
    searchable: true
  }
];

This is the metadata for displaying the column names; it also tells the Tabular Component which fields should be searchable (remember use-case 3). And this is the data which will be rendered into rows:

const data = [
  {
    Id: "1",
    FirstName: "Luke",
    LastName: "Skywalker"
  },
  {
    Id: "2",
    FirstName: "Darth",
    LastName: "Vader"
  },
  {
    Id: "3",
    FirstName: "Leia",
    LastName: "Organa"
  }
];

Now, let’s define how our application is going to consume our Tabular Component.



As you can see, I have monkey patched the Table Component. For this to work, we need to have a reference to the Table Component in the Tabular Component, like this:

Monkey Patching Table Component

Next up, we will define a Table Component, which will be responsible for drawing a table.

Table Component

Table component has access to the context defined in Tabular Component via

static contextTypes = {
  [TABULAR_CONTEXT]: PropTypes.object.isRequired
};

We are storing the instance of the Table component in Context so we can access its state from other components. Just to keep the code clean and readable, I have created a separate stateless Row Component for rendering data.

Stateless Row Component

So now it’s time to run the application… As you can see, it renders

Under construction!!

because we are not rendering the children of the Tabular component. Let’s quickly do that.

Tabular Component

Here we are looping through all the children using React’s Children API and passing props to each child. After updating the Tabular Component’s render method, you should be able to see the desired result, like this:

Now that we have completed our first use-case, let’s quickly jump to the second use-case, which is —

Adding search functionality

For implementing search functionality, we need an input box, so we will be creating a new component called SearchBox. Before that, let’s update the app.js file.



As you can see, how easy it is to maintain the code using Compound Components. We have added the SearchBox in the same manner as we added the Table Component.

SearchBox Component

SearchBox Component

This is a simple component without much responsibility apart from rendering the input box; in this component too, we are saving the instance of the component in the Tabular Context.


One thing to note here is that we are not keeping any state for the input field in the SearchBox component, because what we want is that whenever the user searches for a query, the Table component re-renders with the filtered records. If we kept the state of the input field in the SearchBox component, only the SearchBox component would re-render, and not the Table component.

We keep the state of the input field in the Table component so it re-renders whenever the input changes. So what we are doing here is accessing the state of the Table component via context


and updating it. So let’s make the necessary changes and add the filter logic in the Table component.


In the render method, before rendering Row, we pass the data through the search filter, so every time the user searches for anything, we only render the filtered data. Let’s see what we are doing in the searchFilter method.

The searchFilter method takes a row as input and fetches the value from the row using the column displayName. Then, given the search value (query), we compare the searchValue with the value we got from the row, along with checking whether that field is searchable or not. After stitching this logic into the Table Component, let’s see it in action.

Search filter in Action

With this, we have completed use-case 2 and use-case 3. Now, let’s add the functionality to highlight searched keywords. We want our component to be as customizable as possible, hence we allow the user to add the style for highlighting the keywords. Let’s update the app.js file.


Use-case4 Highlighting the searched keywords in Table

To highlight the string we need to update the Row component, as that is the one which is responsible for rendering rows.

Stateless Row Component

The only new thing here is that we are making use of query and highlightStyle, which are passed as props. Also, we are calling the highlightWord() method, which returns the decorated string.

Highlighting the searched keyword

The highlightWord method accepts the actual column value, the query string, and the highlight style. We extract the part of the actual value that matches the query string, wrap it in a <span/> tag with the provided style, and return it to the Row component.

Now, let’s run the code…

Highlighting searched query

Now, if we look at the code being used by the application, it is very small, tidy, and customizable.

Making rendering more powerful with React Context

Now, what if the Developer comes in and decides to change the layout of the Tabular component like this:

Just by adding the wrapping <div>, our UI breaks. Because if we look at the render method of the Tabular Component, we see it’s mapping the props over its direct children, and now one of those children is a <div>. So we are cloning a div and passing it some stuff which is completely irrelevant to it.

Now, here Context comes to the rescue, decoupling the UI hierarchy from the relation between the Tabular and Table Components. The only thing we need to change in our app is that instead of taking data from props, we are going to make use of Context.

Tabular Component

Storing data in Context

Here we have updated the childContextTypes object and the getChildContext() method to pass row data and column metadata via context. With the help of context, we have removed the dirty cloning implementation used to pass data to child components. Now the Tabular component’s render method just returns the children.

Now let’s update the Table Component

Table Component

Pulling data from Context instead of Props

Here we have updated the contextTypes object to fetch row data and column metadata via context. With these changes, our app runs as usual.

There is one more feature a Table must have, which is Sorting. I haven’t added it in this post, but feel free to implement it. If you find any difficulties in implementing it, let me know in the comment box.
You can find the source code here —

Ryan Florence – Compound Components —

Executing mobile Automation test cases on Sauce Labs cloud

Hi Friends,

In my last post, we saw how we can set up different mobile platforms to execute mobile test cases. Here we will see how we can integrate our execution environment with Sauce Labs, i.e. executing the test cases on the Sauce Labs cloud.

A brief introduction about Sauce Labs:

Sauce Labs is a cloud platform which allows users to run tests in the cloud on more than 700 different browser, operating system, and device combinations, providing a comprehensive test infrastructure for automated and manual testing of desktop and mobile applications using Selenium, Appium, and JavaScript unit testing frameworks.

In simple words, you do not have to set up your own infrastructure with various devices, OSes, and browsers to run test cases. Buy a Sauce Labs subscription and you are all set to go.

Now let us see how we can execute our mobile test cases on the Sauce Labs cloud.

The very first step is to create a Sauce Labs account. Sauce Labs provides a 14-day free trial to explore its various features. Let’s create a free account first.

Steps to create a Free Trial account on Sauce Labs

1. Go to

2. Click on the Free trial button at the top right corner.

3. Fill in all the details on following screen.

4. Click on the Create account button.

5. An account verification mail will be sent to your email id. Click on the link provided in the email to confirm the sign up.

6. Now click on Sign in button from top right corner.

7. Sign in with the newly created account details.

8. After successful login click on the arrow next to your name at the top right corner and click on My account link.

9. Scroll down a bit and click on Show button corresponding to the Access Key.

10. Enter your password in the prompt and click on the Authorize button.

11. Your Access Key will be displayed. Copy and keep it aside for further usage.

Now we are done setting up the free account on Sauce Labs and have our Access Key (authentication token). The next step is to upload our apk (for Android) or ipa/app (for iOS) to the Sauce Labs cloud.

Command to upload Android test application (apk file) on Sauce Labs

curl -u &lt;saucelabUserName&gt;:&lt;saucelabAccessKey&gt; -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/&lt;saucelabUserName&gt;/&lt;name of app file&gt;?overwrite=true --data-binary @&lt;absolute local path of app file&gt;

The following parameters need to be set before executing the above command.

saucelabUserName — username/email used to access your Sauce Labs account.

saucelabAccessKey — the Access Key associated with your Sauce Labs account.

name of app file — name of the apk file (for Android).

absolute local path of app file — absolute path of your apk file (the actual location of the file on the system's hard drive), e.g. /Users/Documents/testApps/testapp.apk
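To make the shape of this command concrete, here is a small, stdlib-only Java sketch that assembles the upload command from the four parameters above. The `SauceUpload` and `buildUploadCommand` names are just for illustration, and the storage endpoint shown is the classic Sauce Labs REST storage API.

```java
public class SauceUpload {

    // Builds the curl upload command string from the four parameters
    // described above (username, access key, app file name, local path).
    static String buildUploadCommand(String userName, String accessKey,
                                     String appFileName, String localPath) {
        return "curl -u " + userName + ":" + accessKey
                + " -X POST -H \"Content-Type: application/octet-stream\""
                + " https://saucelabs.com/rest/v1/storage/" + userName
                + "/" + appFileName + "?overwrite=true"
                + " --data-binary @" + localPath;
    }

    public static void main(String[] args) {
        System.out.println(buildUploadCommand(
                "testuser", "testaccesskey",
                "ApiDemos-debug.apk", "/Users/ngoyal/Downloads/ApiDemos-debug.apk"));
    }
}
```

Swapping in your own username, key, file name and path reproduces the exact command used later in this post.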

I have downloaded a sample apk file to demonstrate how we can execute test cases for the Android native platform on Sauce Labs. The app is ApiDemos-debug.apk and it is placed in my system's Downloads folder. You can download ApiDemos-debug.apk from here.

Now, assuming your saucelabUserName is testuser and your saucelabAccessKey is testaccesskey, the required parameters look like the following.

saucelabUserName — testuser

saucelabAccessKey — testaccesskey

name of app file — ApiDemos-debug.apk

absolute local path of app file with file name — /Users/ngoyal/Downloads/ApiDemos-debug.apk

Your command should now look like this.

curl -u testuser:testaccesskey -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/testuser/ApiDemos-debug.apk?overwrite=true --data-binary @/Users/ngoyal/Downloads/ApiDemos-debug.apk

Now open Terminal/Command Prompt and execute the above command. It may take a few seconds to upload your file to the Sauce Labs cloud. After a successful upload you should see something like this in the Terminal.

Note – If you see the "size" as 0, your file was not uploaded to the Sauce Labs cloud. Check the command and run it again, ensuring all parameters are configured correctly.

After uploading the test app (ApiDemos-debug.apk) to Sauce Labs successfully, we will manually start a virtual device with a specific configuration on the Sauce Labs cloud using the Appium server, to verify that all our capabilities are correct.

Steps to start a virtual device manually on the Sauce Labs

1. Start the Appium server.

2. Once the server is started, click on the Search icon (the first icon at the top right corner).

3. Now click on the Sauce Labs tab on the next screen and enter your Sauce Username and Access Key.

4. Now click on Desired Capabilities and add the following capabilities one by one by clicking on the + icon.

Note – the value of ‘app’ capability should be sauce-storage:<your test app name>

5. Now click on Start Session button.

6. A rotating loader should be displayed on the screen as following.

7. Now go to the Sauce Labs site and sign in with your account. Go to Dashboard and click on Automated Tests.

8. You will see a job running with name “Unnamed job with c66ob………..”

9. Click on the job; you will see the message "Loading Live video". It may take some time to launch the live video of your running test.

10. After a few seconds you will be able to see the live video of your test running on the device (Android 6.0 in our case) that you asked for.

This means all our configurations for the Android device are correct and our test is launched on the virtual device. Now let's see how we can do it for the iOS platform.

Command to upload iOS test application (ipa/app file) on Sauce Labs

curl -u &lt;saucelabUserName&gt;:&lt;saucelabAccessKey&gt; -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/&lt;saucelabUserName&gt;/&lt;name of app file&gt;?overwrite=true --data-binary @&lt;absolute local path of app file&gt;

The command is the same as the one we used to upload the Android apk, but one thing should be kept in mind while uploading the iOS app: "the app should be in zip format". Yes, you read that correctly. You have to compress your app into a zip archive and configure the app name and absolute path accordingly.
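Since a .app bundle is really a directory, the compression step can also be scripted. Below is a stdlib-only Java sketch (the `AppZipper` name and the paths are illustrative, not from the original post) that walks a bundle and writes it into the zip that gets uploaded.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class AppZipper {

    // Walks the .app bundle (a directory) and writes every file into the zip.
    static void zipDirectory(Path sourceDir, Path zipFile) throws IOException {
        List<Path> files;
        try (Stream<Path> walk = Files.walk(sourceDir)) {
            files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            for (Path p : files) {
                // Store entries relative to the bundle's parent,
                // e.g. "TestApp.app/Info.plist", so the zip keeps the bundle root.
                zos.putNextEntry(new ZipEntry(sourceDir.getParent().relativize(p).toString()));
                Files.copy(p, zos);
                zos.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Illustrative path; adjust to where your .app bundle actually lives.
        Path bundle = Paths.get("/Users/ngoyal/Downloads/TestApp.app");
        if (Files.isDirectory(bundle)) {
            zipDirectory(bundle, Paths.get("/Users/ngoyal/Downloads/TestApp.zip"));
        }
    }
}
```

Running `zip -r TestApp.zip TestApp.app` from the Terminal achieves the same thing; the sketch just shows what that step does.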

The following parameters need to be set before executing the above command.

saucelabUserName — username/email used to access your Sauce Labs account.

saucelabAccessKey — the Access Key associated with your Sauce Labs account.

name of app file — name of the ipa/app file in zip format (for iOS).

absolute local path of app file — absolute path of your ipa/app file in zip format (the actual location of the file on the system's hard drive), e.g. /Users/Documents/testApps/

I have downloaded a sample app file to demonstrate how we can execute test cases for the iOS platform on Sauce Labs; it is placed in my system's Downloads folder and compressed into a zip archive for upload.

Now, assuming your saucelabUserName is testuser and your saucelabAccessKey is testaccesskey, we have the required parameters as follows.

saucelabUserName — testuser

saucelabAccessKey — testaccesskey

name of app file — the zip file created above

absolute local path of app file with file name — /Users/ngoyal/Downloads/

Your command should now look like this.

curl -u testuser:testaccesskey -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/testuser/&lt;your zip file name&gt;?overwrite=true --data-binary @/Users/ngoyal/Downloads/&lt;your zip file name&gt;

Now open Terminal/Command Prompt and execute the above command. It may take a few seconds to upload your file to the Sauce Labs cloud. After a successful upload you should see something like this in the Terminal.

Note – If you see the "size" as 0, your file was not uploaded to the Sauce Labs cloud. Check the command and run it again, ensuring all parameters are configured correctly.

After uploading the test app (the zip file) to Sauce Labs successfully, we will manually start a virtual device with a specific configuration on the Sauce Labs cloud using the Appium server.

Follow the same steps as explained for Android, but use the following capabilities to start the iOS virtual device.

After clicking on the Start Session button, you can go to Sauce Labs and see the live execution happening on the virtual device (iOS Simulator).

With this, we are done setting up and executing our test on the Android Emulator and the iOS Simulator manually.

Now we will see how we can do that programmatically. We just have to add two steps to what we did in the manual setup and we are all set 🙂

Set the following as environment variables in your system.



If you are a Mac user, you can set these environment variables as follows:

export SAUCE_USERNAME=<your sauce lab user name>
export SAUCE_ACCESS_KEY=<your sauce lab access key>
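As a rough illustration of what those two variables are used for, here is a stdlib-only Java sketch that reads them and composes the remote hub URL. The `SauceConfig` name is illustrative; ondemand.saucelabs.com is the standard Sauce Labs hub endpoint.

```java
public class SauceConfig {

    // Compose the remote WebDriver hub URL from a username and access key.
    static String hubUrl(String userName, String accessKey) {
        return "https://" + userName + ":" + accessKey
                + "@ondemand.saucelabs.com:443/wd/hub";
    }

    public static void main(String[] args) {
        // Fall back to placeholders so the sketch also runs without the vars set.
        String user = System.getenv().getOrDefault("SAUCE_USERNAME", "<your sauce lab user name>");
        String key = System.getenv().getOrDefault("SAUCE_ACCESS_KEY", "<your sauce lab access key>");
        System.out.println(hubUrl(user, key));
    }
}
```

The URL this produces is the one passed to the driver constructors in the capability blocks below.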

Launching and running the Android native test cases on Sauce Labs programmatically

The following capabilities can be used to run Android native test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");

capabilities.setCapability(MobileCapabilityType.APP, "sauce-storage:<your apk file name>");

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Emulator");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);

These capabilities will launch the apk file on a virtual device on Sauce Labs. You can then write the test cases to be executed.

Note – The platform version and the device name may vary based on the device to be configured.


Launching and running the Android web test cases on Sauce Labs programmatically

The following capabilities can be used to run Android web test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Emulator");

capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Chrome");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);

These capabilities will launch the virtual device on Sauce Labs with the Chrome browser opened. You can then write the test cases to be executed.

Note – The platform version and the device name may vary based on the device to be configured.


Launching and running the iOS native test cases on Sauce Labs programmatically

The following capabilities can be used to run iOS native test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");

capabilities.setCapability(MobileCapabilityType.APP, "sauce-storage:<your zip file name>");

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6 Simulator");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);

Note – The platform version may change based on the macOS/Xcode version being used, and the device name may vary based on the device to be configured.


Launching and running the iOS web test cases on Sauce Labs programmatically

The following capabilities can be used to run iOS web test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6 Simulator");

capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Safari");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);

Note – The platform version may change based on the macOS/Xcode version being used, and the device name may vary based on the device to be configured.

Please refer to the Appium documentation to check the various capabilities required for different platforms and versions.

I hope this post helps you set up an execution environment on Sauce Labs for mobile test cases. Please do comment in case you face any difficulty while setting this up.

Thank You!


Blockchain: Reshaping Financial Services Industry

When we talk about innovations in the financial services industry, blockchain technology appears at the top of the list. Blockchain is considered the second generation of the Internet and promises to bring transparency, trust, privacy and security to the global economy. According to the 2016 report by the World Economic Forum (WEF) on the future infrastructure of banking, 80% of banks were expected to initiate blockchain projects by the end of 2017. Furthermore, it was estimated that blockchain investments would surpass the $3 billion mark by the end of that year. We can safely conclude that the time is right for financial institutions to embrace and unleash blockchain's potential.

What is BlockChain?

Blockchain is a distributed ledger that stores information or transactions performed by millions of computers every day. Data in the distributed ledger is stored with the consensus of participating nodes and is replicated across the network. Such distributed ledgers are useful for real-time and secure data sharing.

What makes blockchain unique is that transactions can't be modified after they are committed, which makes the records immutable and secure. Most people use a trusted middleman such as a bank to make a transaction. However, blockchain facilitates peer-to-peer secure exchange of any type of value, be it money, goods or property, across the globe without the need for a third party.
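To see why committed transactions are so hard to modify, consider a toy hash chain. This is a minimal conceptual sketch (not real blockchain code; the `TinyChain` name is invented) in which each block's hash depends on the previous one, so tampering with any old entry invalidates everything after it.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class TinyChain {
    final List<String> data = new ArrayList<>();    // transaction payloads
    final List<String> hashes = new ArrayList<>();  // one hash per block

    // SHA-256 of a string, hex-encoded.
    static String sha256(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    // Each new block's hash covers the previous block's hash plus its own data.
    void add(String entry) {
        String prev = hashes.isEmpty() ? "genesis" : hashes.get(hashes.size() - 1);
        data.add(entry);
        hashes.add(sha256(prev + entry));
    }

    // Recompute the whole chain; any edited entry breaks the link.
    boolean isValid() {
        String prev = "genesis";
        for (int i = 0; i < data.size(); i++) {
            if (!hashes.get(i).equals(sha256(prev + data.get(i)))) return false;
            prev = hashes.get(i);
        }
        return true;
    }

    public static void main(String[] args) {
        TinyChain chain = new TinyChain();
        chain.add("alice pays bob 10");
        chain.add("bob pays carol 5");
        System.out.println("valid: " + chain.isValid());   // valid: true
        chain.data.set(0, "alice pays bob 1000");          // tamper with history
        System.out.println("valid: " + chain.isValid());   // valid: false
    }
}
```

Real blockchains add consensus and replication on top of this chaining, which is what makes rewriting history impractical rather than merely detectable.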

There are two types of ledgers: public and private. Public ledgers allow anyone to add or read the data without the approval of any authority; Bitcoin and Ethereum are examples of public blockchains. Private ledgers, on the other hand, are restricted to a limited number of participants, who need permission to join the group. Considering data security and the regulatory environment, banks and financial institutions are exploring this type of blockchain.

Impact of BlockChain on the industry?

Blockchain is revamping the financial services industry for speed and inclusion. Since data is stored in encrypted form on the shared ledger and serves as a single source of truth, all authorized stakeholders in the value chain can fetch information directly from the blockchain without depending on each other. This helps achieve faster processing and a significant reduction in cost.

Using blockchain, billions of people who are excluded from the economy will be connected and will contribute to the global economy. Global remittances that take days will be completed in a few seconds and at a much lower cost. Bureaucracy and corruption will be reduced in financial systems, as blockchain holds all transacting parties accountable for their actions.

BlockChain Use Cases

Blockchain will soon transform the way banks and financial institutions operate. Some of the use cases being actively worked on are:

Digital Identification

When a customer opts for multiple bank accounts, he is required to provide his identification details to every bank. The present financial system doesn't allow banks to share that client information with each other, as the information is stored in each bank's central repository.

Blockchain can be used by banks for know-your-customer (KYC) requirements. Once the customer registers his identity, he is not required to register with every bank, provided the banks are connected to the blockchain. Using a single source of client identification, blockchain can help banks not only reduce costs but also optimize resources while maintaining data security.

Cross-Border Payments

Traditionally, cross-border payments are facilitated by multiple trusted intermediaries such as banks and remittance centers. Such third parties usually take 3-5 business days for transfers and charge heavy fees on remittances.

With blockchain technology, payment transactions could be simplified by eliminating the need for middlemen while substantially reducing the processing time and the costs of remittance. Moreover, blockchain maintains an audit trail of every transaction, which means that the source and destination of any illegal transaction could easily be traced. This is a significant development for financial institutions and regulators worldwide.

One such example is Ripple, a blockchain based payment system for banks that can be used to make secure real time payments globally at a reasonable cost.

Clearing and Settlement

The post-trade clearing and settlement stages are important parts of the equity trading process. After trade confirmations are received, settlement usually takes 3 days, during which investors can't take any action on the securities. Various intermediaries such as custodians, depositories, clearing houses, exchanges and brokers are involved, and transaction records are stored in centralized databases.

Blockchain or distributed ledger technology can automate the post trade process using smart contracts and improve efficiency of the clearing system, thereby reducing the trading cost. Furthermore, trade settlement could happen real-time and with better governance and collaboration among all the market participants.

NASDAQ has been at the forefront of the blockchain revolution and has built Linq blockchain ledger to complete and record the securities transactions at its exchange.

Smart Contracts

Currently, we rely on third parties such as the judicial system, lawyers or notaries for the enforcement of paper contracts such as property agreements, employer-employee contracts, partnership contracts, vendor agreements etc. Smart contracts on the blockchain are transforming the way we look at standard paper contracts.

Smart contracts are computer programs that facilitate, verify and execute paperless contractual instruments between parties. Since smart contracts are self-executing, they can eliminate the need for middlemen and can be programmed to execute under certain conditions and rules. The involved parties can access contracts anywhere and approve them faster, resulting in improved speed and efficiency of the whole contracting process.
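As a rough illustration of the idea, here is a minimal Java sketch of a contract as a self-executing state machine. The `EscrowContract` name and its rules are invented for illustration; real smart contracts run on a blockchain platform such as Ethereum, not as a standalone class.

```java
public class EscrowContract {
    enum State { AWAITING_DELIVERY, COMPLETE, REFUNDED }

    final int amount;                       // value locked in the contract
    State state = State.AWAITING_DELIVERY;

    EscrowContract(int amount) { this.amount = amount; }

    // Runs when the buyer confirms delivery: funds release to the seller.
    int confirmDelivery() {
        if (state != State.AWAITING_DELIVERY)
            throw new IllegalStateException("contract already settled");
        state = State.COMPLETE;
        return amount;
    }

    // Runs if the deadline passes without delivery: funds return to the buyer.
    int refundAfterDeadline(boolean deadlinePassed) {
        if (state != State.AWAITING_DELIVERY || !deadlinePassed)
            throw new IllegalStateException("refund conditions not met");
        state = State.REFUNDED;
        return amount;
    }

    public static void main(String[] args) {
        EscrowContract contract = new EscrowContract(100);
        System.out.println("released: " + contract.confirmDelivery());  // released: 100
    }
}
```

The point is that the conditions and outcomes are encoded up front: once deployed, the contract enforces its own rules without a lawyer or notary in the loop.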

How does the future look?

The financial services industry has moved past the awareness stage, and banks and financial institutions globally are investing in Proofs of Concept (POCs) to explore blockchain capabilities. Improved efficiency, transparency, faster payments, security and immutability are the key benefits that organizations will reap by adopting the technology.

Considering its rising popularity, the adoption rate of blockchain will continue to increase, and we expect it to become a mainstream technology in the next few years. Blockchain is our gateway to the future of finance and will become part of critical financial infrastructure for providing better, cheaper, more secure and faster financial services to customers.

About Xebia

Xebia, a niche agile software development and digital consulting firm, is an active member of the Blockchain Special Interest Group (SIG) set up by NASSCOM for developing and collaborating on blockchain implementations. Our mission is not only to help our global clients with blockchain implementations but also to assist them in navigating the complex blockchain landscape, while at the same time creating awareness in academia and the industry.



Appium – Setting up various mobile platforms for automation

Hey Folks,

This post will take you through setting up various mobile platforms (Android native, Android web, iOS native and iOS web) using DesiredCapabilities, virtual devices and the Appium server.

Though plenty of information on this is already available over the internet, this post tries to consolidate it in one place. We will see how we can run native and web apps on virtual devices (Android Emulator and iOS Simulator).

Let's get started without wasting any time.

You will need the following software (versions may vary) to set up the platforms. I am using the following versions; you may use other versions as well, ensuring the best compatibility.

Appium 1.65

Genymotion 2.10.0 (for Android Emulator)

– You can also use the default emulators that come along with Android Studio.

Xcode 8.2.1 (for iOS Simulator)

Setting up Desired Capabilities for Android Native platform:

The following capabilities will be used to launch any Android native app on the Android virtual device (Emulator).

DesiredCapabilities capabilities = new DesiredCapabilities();

File app = new File("<path to the android app apk file>");

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");

capabilities.setCapability(MobileCapabilityType.APP, app.getAbsolutePath());

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Emulator");

URL url = new URL("http://127.0.0.1:4723/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);


First we create the DesiredCapabilities object; it carries all the desired capabilities.

Next we create a File object and assign it the path to the apk file, including the apk name.

Then we set the various capabilities on the capabilities object.

  • PLATFORM_VERSION – should be your virtual device version.
  • PLATFORM_NAME – should be Android for Android platforms (irrespective of native and web).
  • APP – should be the absolute path to the apk file so that Appium can find and install it on the device.
  • DEVICE_NAME – can be Emulator or the device name (e.g. Nexus 5) that was assigned while creating the virtual device.

Now we create the URL object and provide the address where the Appium server is running. By default it runs on 0.0.0.0:4723, as you can see below.

You can run appium on different ports using the command appium -p <port number>

Finally we launch the AndroidDriver using the URL and DesiredCapabilities that we created. If everything is set up correctly, the Appium server is up and the virtual device is running, then it should launch your apk file on the virtual device.

Setting up Desired Capabilities for Android Web platform:

It is pretty much similar to setting up the environment for Android native, except for a few things that we will cover here.

DesiredCapabilities capabilities = new DesiredCapabilities();

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Emulator");

capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Chrome");

URL url = new URL("http://127.0.0.1:4723/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);


Since it is the web platform, we do not need any apk file and hence no capability for the app path. Instead, we mention the browser name that we want to launch, Chrome in our case.

Start the Appium server and the virtual device (Emulator), then run the code where these capabilities are written. It should launch the Chrome browser on the virtual device.

Now you can write test cases to open any url and test various scenarios.

Note – Chrome should be installed on the virtual device where we want to run our test. If you do not have Chrome installed, follow any guide from the internet to install it on the virtual device, or refer to this post:

Install Google Play Store and Chrome on Genymotion Virtual Device

Setting up Desired Capabilities for IOS Native platform:

DesiredCapabilities capabilities = new DesiredCapabilities();

File app = new File("<path to the ios .app or .ipa file>");

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");

capabilities.setCapability(MobileCapabilityType.APP, app.getAbsolutePath());

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6");

URL url = new URL("http://127.0.0.1:4723/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);

Note – The platform version and the device name should be as per the devices configured in the Xcode. I have taken the platform version and device name from the device configured in the Xcode, as shown below.

These capabilities should launch the iOS Simulator and install your iOS native app. You can then automate your test cases for the native app.

Setting up Desired Capabilities for IOS Web platform:

Now we will see how to launch Safari in the iOS Simulator to automate web apps.

DesiredCapabilities capabilities = new DesiredCapabilities();

capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");

capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");

capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6");

capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Safari");

URL url = new URL("http://127.0.0.1:4723/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);


For the web platform we do not need any app, so we remove the related capability and add the capability for the browser name.

Start the Appium server and execute the code with all the capabilities mentioned above. The iOS Simulator should launch with Safari.

You can further write your test cases to automate any web app.

Note – iOS Simulators come with Safari installed by default; there is no need to install it explicitly.

Thank you! In the coming posts we will see how to execute mobile test cases on Sauce Labs.

Any feedback is most welcome 🙂