Getting started with Hyperledger Fabric and Allied Tools

Since Bitcoin went past the $10K mark sometime last week, there’s a huge buzz in the media about Bitcoin’s growth story. At Xebia, we’ve been keen followers of Bitcoin for a while. As technologists, what interests us immensely beyond the buzz is the underlying Blockchain technology that makes Bitcoin tick. We see tremendous potential in the Blockchain as a solution relevant to varied domains such as public services, healthcare, financial services and trade, to name a few.

What is Blockchain?

HBR defines Blockchain as "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way". For an even clearer explanation, you could refer to the plain English explanation of blockchain.

Hyperledger

Hyperledger is the Linux Foundation’s umbrella project under which several Blockchain-related projects such as Hyperledger Fabric, Hyperledger Sawtooth, Hyperledger Cello, Hyperledger Explorer, etc. are being incubated. We recently started exploring a few of these projects & have a Hyperledger Fabric based blockchain set up locally for experimentation.

Building Hyperledger Fabric

Our dev environments are standard (old) Intel i5s with 4 cores and 16GB RAM, running Ubuntu 14.04. There are several pre-requisites for building Hyperledger, such as specific versions of docker, docker-compose, go, etc., as mentioned in the dev environment set-up doc. Please ensure that each of these is done correctly, without errors, before proceeding further.

Go Docker

The recommended way to get started with Hyperledger is via Docker containers. Pre-built containers are readily available for download from Docker Hub. The install binaries & docker images script located here downloads the necessary platform-specific binaries & docker images to the local system.
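As a rough sketch, the flow on our machines looked like the following; the script name here is illustrative, so use the exact command from the doc linked above:

# Run the downloaded helper script; it fetches the platform-specific
# binaries & does a docker pull for each required Fabric image.
bash bootstrap.sh

# Verify that the Fabric images have landed locally.
docker images | grep hyperledger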

Note: An alternative to the Docker-based set-up is to check out the fabric code locally & build it. We ran into a few flaky tests that would fail the build intermittently, though we were able to build by skipping tests. Getting to a clean build with tests remains a future goal for us.

Once Fabric was set up, we started evaluating other developer tooling projects being incubated under the Hyperledger umbrella. We went on to install Hyperledger Composer, Hyperledger Explorer & Hyperledger Cello. Hyperledger Indy is also on our radar for secure identity management in the future.

Hyperledger Composer

Hyperledger Composer is a framework for developing Blockchain applications on top of Hyperledger Fabric. Composer makes it very simple to get started with building Blockchain-based applications. You can even try Composer online on their site.

As a first cut at the installation we went with the Composer playground-local set-up. The set-up was breezy. We got started with the playground tutorial on our local machine, & had the trader network set up very soon.

Persisting Networks Across Restarts of Hyperledger Fabric & Composer

One of the issues we ran into was that our blockchain applications did not seem to survive playground or machine restarts. A look at the code made it clear that this was not a playground issue, but rather the expected behaviour with the local set-up.

The playground-local script composer.sh delegates to a startup.sh script, which contains an explicit docker-compose down instruction. This ensures that each start of the Composer playground-local does a fresh, clean start of the blockchain fabric docker containers.

An alternative to playground-local is the complete Composer development environment set-up. The key difference is that the complete dev set-up has a separate standalone Hyperledger Fabric installation running from the fabric-tools folder. However, the same docker-compose down command is still there in the startup.sh script of fabric, & we had to make a few hacks to the script (there may be better ways):

* docker-compose down changed to stop:

ARCH=$ARCH docker-compose -f "${DIR}"/composer/docker-compose.yml stop

* Commented out the instructions to create & join Composer channels (do this only after the Composer application has been started once, since the channels don’t exist initially):

# Create the channel

#docker exec peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c composerchannel -f /etc/hyperledger/configtx/composer-channel.tx

# Join peer0.org1.example.com to the channel.

#docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel join -b composerchannel.block

We gave the playground tutorial another run & found everything to be working perfectly. Next we wanted to view the blocks we had just created, & thus reached the next tool in our set: Hyperledger Explorer.

Hyperledger Explorer

Hyperledger Explorer provides web-based access on top of the Hyperledger Fabric blockchain. This was just what we needed to see the blocks we had just created via Hyperledger Composer.

The installation instructions for Blockchain Explorer include a MySQL DB installation & import, followed by an npm install of the codebase, as given here. As instructed, we updated config.json with our MySQL DB credentials, changed the port to 9080 (the default port 8080 was already in use by Composer) & started Explorer.

Even though the Explorer UI was accessible on http://localhost:9080, we were unable to see details of any of the blocks created via Composer. We needed to make two additional changes to the config.json file to finally get this working:

* Channel name

"channelsList": ["composerchannel"],

* Disable TLS

"enableTls": false

Note 1: Even though disabling TLS made this work for now, this needs to be revisited for a more real-world, production-grade set-up in the future.

Note 2: While trying to figure out the channel names being used by Composer, we found that Composer currently lacks multi-channel support.

At this point, we were able to perform a transaction on Composer that made it into the fabric blockchain, and it would then show up on Hyperledger Explorer in real time. This gave us a good end-to-end view of our own Hyperledger-based blockchain applications in action.

So far we had been running our docker containers locally, & as a next step we wanted to experiment with a distributed set-up. This brought us to Hyperledger Cello.

Hyperledger Cello

Hyperledger Cello makes it easy to provision & manage Hyperledger Fabric (& potentially other) blockchain networks. Cello has a master-slave architecture (master nodes & worker nodes), where the master manages the blockchains running on the worker nodes. The web dashboard & REST APIs are also provided on the master node.

There are two parts to the Cello installation. Installation on the master node requires code to be downloaded & installed via:

$ make setup-master

& to start services on the master:

$ make start

Cello Master Port Conflict Issue:

We ran into several port conflicts, since the Cello services by default make use of ports 80 & 8080, which were already taken by Composer, etc. To fix this we changed the docker-compose.yml file used by Cello:

* Cello REST Service Exposed Port (changed to port 50):

restserver:
  ...
  expose:
    - "50"

* Nginx Port Mapping (changed to 50:50 & 5080:5080):

nginx:
  ...
  ports:
    - "50:50"
    - "5080:5080"

* User Dashboard Port Mapping (changed to 5081:5080):

user-dashboard:
  ...
  ports:
    - "5081:5080"

With those changes all our services were now up.

Next we did the specified installation for the Cello worker nodes. This included adding a few params (allow-cors & ulimit) to the docker daemon, & making some firewall settings on the nodes.

With that we were able to hit the Cello dashboard on the master node on the (configured) port 5080, add nodes, view details, etc.

We have several other scenarios to try out on Cello, Fabric, & the other Hyperledger components, cutting across OS & container types, as well as evaluating their management & ops capabilities. We hope to keep sharing details as we go along. Till then, happy block chaining!

A new way of writing components: Compound Components

You might already have used many patterns and techniques while developing applications with React; today we are going to scratch the surface of one of them: Compound Components.

Why Compound Component?

Compound Components allow the developer to take control over rendering behavior: the ability to decide in what order the components should render. Compound Components also relieve the developer of passing tons of configuration as props. For instance:

Giving more rendering control to the developer
The main purpose of creating this Compound Component was not to tie rendering to the Tabular component, but instead to put the developer in charge of it.

Compound Components are a great pattern that has proven valuable for several React libraries. In this post, we will discuss how to use Compound Components to keep your React application tidy, well-structured and easy to maintain.

This post assumes basic knowledge of

  1. ES6
  2. React Context

What is Context?

Context is a great feature introduced in React and is very useful when you want to expose APIs, so that applications using your component can make use of them. Context is also used to pass data deep into the component tree without intermediate components needing to know about it. Context provides an abstraction, handling the dirty work in one place, so the component using it does not need to know how it’s done.

Context provides great flexibility to Compound Components. With the help of Context, the developer gets more control over how to render the components. We will see the benefits of Context throughout this post.

You can learn more about Context here: https://reactjs.org/docs/context.html

Use Case
1. As a user, I want to display information in a tabular form.
2. As a user, I want search functionality built in.
3. As a user, I want control over which fields the search works on.
4. Whatever keywords the user enters, filter the records based on them, and highlight those keywords.

Let’s dive in!

So our first task is to create a UI which will render data in tabular form. Let’s create a file named table.js and set up the foundation.

table.js

So what is happening in this file? We have created a parent component which will be responsible for passing props to child components. How? We will see that in a minute.
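The original post embedded this code as a screenshot. As a stand-in, here is a minimal sketch of what the table.js foundation plausibly looks like, based on the snippets shown later in the post; the sharedContext bag and the placeholder render are illustrative:

import React, { Component } from "react";
import PropTypes from "prop-types";

export const TABULAR_CONTEXT = "TABULAR_CONTEXT";

class Tabular extends Component {
  // Expose a shared object on the (legacy) context so that child
  // components can register themselves and reach each other.
  static childContextTypes = {
    [TABULAR_CONTEXT]: PropTypes.object.isRequired
  };

  sharedContext = {};

  getChildContext() {
    return { [TABULAR_CONTEXT]: this.sharedContext };
  }

  render() {
    // Children are not rendered yet; we will fix this shortly.
    return <div>Under construction!!</div>;
  }
}

export default Tabular;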

Now let’s define the data that we are going to pass as Props to our Tabular Component

const columns = [
  {
    displayName: "Id",
    sortable: true,
    searchable: true
  },
  {
    displayName: "FirstName",
    sortable: true,
    searchable: true
  },
  {
    displayName: "LastName",
    sortable: true,
    searchable: true
  }
];

This is the metadata for displaying the column names; it also tells the Tabular component which fields should be searchable (remember use-case 3). And this is the data which will be rendered into rows:

const data = [
  {
    Id: "1",
    FirstName: "Luke",
    LastName: "Skywalker"
  },
  {
    Id: "2",
    FirstName: "Darth",
    LastName: "Vader"
  },
  {
    Id: "3",
    FirstName: "Leia",
    LastName: "Organa"
  }];

Now, let’s define how our application is going to consume our Tabular Component.

app.js
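The screenshot here showed the application composing the table declaratively out of sub-components attached to Tabular; a sketch, under the assumption that data and columns are the arrays defined earlier:

import React from "react";
import { render } from "react-dom";
import Tabular from "./table";

const App = () => (
  <Tabular data={data} columns={columns}>
    <Tabular.Table />
  </Tabular>
);

render(<App />, document.getElementById("root"));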

As you can see, I have monkey patched the Table component. For this to work, we need to have a reference to the Table component in the Tabular component, like this:

Monkey Patching Table Component
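In code, the monkey patching is presumably just a static assignment on the Tabular class (sketch):

// table.js: expose Table as a static property of Tabular so that the
// application can write <Tabular.Table /> inside <Tabular>.
Tabular.Table = Table;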

Next up, we will define a Table Component, which will be responsible for drawing a table.

Table Component

Table component has access to the context defined in Tabular Component via

static contextTypes = {
  [TABULAR_CONTEXT]: PropTypes.object.isRequired
};
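The full component was a screenshot in the original; a plausible reconstruction is sketched below, assuming the instance registration (described next) happens in componentWillMount:

class Table extends Component {
  static contextTypes = {
    [TABULAR_CONTEXT]: PropTypes.object.isRequired
  };

  state = { query: "" }; // updated later by the SearchBox

  componentWillMount() {
    // Register this instance so that sibling components can reach it.
    this.context[TABULAR_CONTEXT].table = this;
  }

  render() {
    const { data, columns } = this.props;
    return (
      <table>
        <thead>
          <tr>
            {columns.map(col => (
              <th key={col.displayName}>{col.displayName}</th>
            ))}
          </tr>
        </thead>
        <tbody>
          {data.map(row => (
            <Row key={row.Id} row={row} columns={columns} />
          ))}
        </tbody>
      </table>
    );
  }
}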

We are storing the instance of the Table component in Context so that other components can access its state. Just to keep the code clean and readable, I have created a separate stateless Row component for rendering data.

Stateless Row Component
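A minimal sketch of the stateless Row component; the prop names are assumptions consistent with the Table sketch above:

// Renders a single table row; purely presentational.
const Row = ({ row, columns }) => (
  <tr>
    {columns.map(col => (
      <td key={col.displayName}>{row[col.displayName]}</td>
    ))}
  </tr>
);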

So now it’s time to run the application… As you can see, it renders

Under construction!!

because we are not rendering the children of the Tabular component. Let’s quickly fix that.

Tabular Component

Here we are looping through all the children using React’s Children API and passing props to each child; a sketch of the updated render method follows. After updating the Tabular component’s render method, you should be able to see the desired result.
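Reconstructing the screenshot, the updated render method plausibly looks like this (note the wrapping div, which becomes significant later):

render() {
  return (
    <div>
      {React.Children.map(this.props.children, child =>
        // Clone each direct child and hand it the table's props.
        React.cloneElement(child, {
          data: this.props.data,
          columns: this.props.columns
        })
      )}
    </div>
  );
}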

Now that we have completed our first use-case, let’s quickly jump to the second use-case:

Adding search functionality

For implementing search functionality we need an input box, so we will create a new component called SearchBox. Before that, let’s update the app.js file.

app.js

As you can see, it is easy to maintain the code using Compound Components. We have added SearchBox in the same manner as we added the Table component.

SearchBox Component

This is a simple component without much responsibility apart from rendering an input box; here too, we save the instance of the component in the Tabular context. A sketch follows.
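A sketch of the SearchBox component; the searchBox registration key and the handler wiring are assumptions consistent with the description below:

class SearchBox extends Component {
  static contextTypes = {
    [TABULAR_CONTEXT]: PropTypes.object.isRequired
  };

  componentWillMount() {
    // Save this instance on the shared context, just like the Table does.
    this.context[TABULAR_CONTEXT].searchBox = this;
  }

  handleChange = event => {
    // Update the Table's state (not our own) so that the table re-renders.
    this.context[TABULAR_CONTEXT].table.setState({ query: event.target.value });
  };

  render() {
    return <input type="text" placeholder="Search…" onChange={this.handleChange} />;
  }
}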

 

One thing to note here: we are not keeping any state for the input field in the SearchBox component, because whenever the user searches for a query, we want to re-render the Table component with the filtered records. If we persisted the input field’s state in the SearchBox component, only the SearchBox would re-render, not the Table.

Instead, we keep the input field’s state in the Table component, so that it re-renders whenever the input changes. So what we are doing here is accessing the Table component’s state via context

this.context[TABULAR_CONTEXT].table

and updating it. So let’s make the necessary changes and add the filter logic in the Table component.

Table Component

In the render method, before rendering Row, we pass the data through the search filter, so that every time the user searches for something, we only render the filtered data. Let’s see what we are doing in the searchFilter method.

The searchFilter method takes a row as input and fetches the value from the row using the column’s displayName. Then, given the search value (query), it compares the query with the value from the row, also checking whether that field is searchable. A sketch of this logic follows. After stitching it into the Table component, let’s see it in action.
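A sketch of that filtering logic; the exact matching rules are assumptions:

// Returns true when the row should stay visible for the current query.
searchFilter = row => {
  const query = this.state.query.toLowerCase();
  if (!query) return true; // an empty query shows everything
  return this.props.columns.some(col => {
    if (!col.searchable) return false; // use-case 3: opt-in per column
    const value = String(row[col.displayName]).toLowerCase();
    return value.includes(query);
  });
};

// ...and in render: this.props.data.filter(this.searchFilter).map(...)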

Search filter in Action

With this, we have completed use-cases 2 and 3. Now, let’s add the functionality to highlight the searched keywords. We want our component to be as customizable as possible, hence we allow the user to provide the style for highlighting the keywords. Let’s update the app.js file.

app.js

Use-case 4: Highlighting the searched keywords in the Table

To highlight the string, we need to update the Row component, as that is the one responsible for rendering rows.

Stateless Row Component

The only new thing here is that we make use of query and highlightStyle, which are passed as props. Also, we call the highlightWord() method, which returns the decorated string.

Highlighting the searched keyword

The highlightWord method accepts the actual column value, the query string, and the highlight style. We extract the part of the actual value that matches the query string, wrap it in a <span/> tag with the provided style, and return it to the Row component. A sketch follows.
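A sketch of a highlightWord implementation along those lines; case handling and the single-match behavior are assumptions:

// Wraps the first match of `query` inside `value` in a styled <span>.
const highlightWord = (value, query, highlightStyle) => {
  const text = String(value);
  const index = text.toLowerCase().indexOf(query.toLowerCase());
  if (!query || index === -1) return text;
  return (
    <span>
      {text.slice(0, index)}
      <span style={highlightStyle}>
        {text.slice(index, index + query.length)}
      </span>
      {text.slice(index + query.length)}
    </span>
  );
};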

Now, let’s run the code…

Highlighting searched query

Now if we look at the code being used by the application, it is very small, tidy and customizable.

Making rendering more powerful with React Context

Now, what if the developer comes in and decides to change the layout of the Tabular component’s children, like this:

Just by adding a wrapping <div>, our UI breaks. If we look at the render method of the Tabular component, it maps the props over its direct children, and now one of those children is a <div>. So we are cloning a div and passing it props that are completely irrelevant to it.

Now, here comes Context to the rescue, to decouple the UI hierarchy from the relationship between the Tabular and Table components. The only thing we need to change in our app: instead of taking data from props, we are going to make use of Context.

Tabular Component

Storing data in Context

Here we have updated the childContextTypes object and the getChildContext() method to pass the row data and column metadata via context. With the help of context, we have removed the dirty cloning implementation used to pass data to child components; the Tabular component’s render method now just returns the children. A sketch follows.
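A sketch of the reworked Tabular component; field names match the earlier sketches and the details remain illustrative:

class Tabular extends Component {
  static childContextTypes = {
    [TABULAR_CONTEXT]: PropTypes.object.isRequired,
    data: PropTypes.array.isRequired,
    columns: PropTypes.array.isRequired
  };

  sharedContext = {};

  getChildContext() {
    return {
      [TABULAR_CONTEXT]: this.sharedContext,
      data: this.props.data,
      columns: this.props.columns
    };
  }

  render() {
    // No more cloning: just render whatever the developer put inside.
    return this.props.children;
  }
}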

Now let’s update the Table Component

Table Component

Pulling data from Context instead of Props

Here we have updated the contextTypes object to fetch the row data and column metadata via context. With these changes, our app runs as usual.

There is one more feature a table must have: sorting. I haven’t added it here, but feel free to implement it. If you face any difficulty implementing it, let me know in the comment box.
You can find the source code here — https://codesandbox.io/embed/6j0kn85nmr

Motivation
Ryan Florence – Compound Components — https://www.youtube.com/watch?v=hEGg-3pIHlE

Executing mobile Automation test cases on Sauce Labs cloud

Hi Friends,

In my last post, we saw how we can set up different mobile platforms to execute mobile test cases. Here we will see how we can integrate our execution environment with Sauce Labs, i.e. execute the test cases on the Sauce Labs cloud.

A brief introduction to Sauce Labs:

Sauce Labs is a cloud platform that allows users to run tests on more than 700 different browser, operating system and device combinations, providing a comprehensive test infrastructure for automated and manual testing of desktop and mobile applications using Selenium, Appium and JavaScript unit testing frameworks.

In simple words, you do not have to set up your own infrastructure with various devices, OSes and browsers to run test cases. Buy a Sauce Labs subscription and you are all set.

Now let us see how we can execute our mobile test cases on the Sauce Labs cloud.

The very first step is to create a Sauce Labs account. Sauce Labs provides a 14-day free trial to explore its various features. Let’s create a free account first.

Steps to create a Free Trial account on Sauce Labs

1. Go to http://saucelabs.com

2. Click on the Free trial button at the top right corner.

3. Fill in all the details on the following screen.

4. Click on the Create account button.

5. An account verification mail will be sent to your email id. Click on the link provided in the email to confirm the sign up.

6. Now click on the Sign in button at the top right corner.

7. Sign in with the newly created account details.

8. After successful login click on the arrow next to your name at the top right corner and click on My account link.

9. Scroll down a bit and click on Show button corresponding to the Access Key.

10. Enter your password in the prompt and click on the Authorize button.

11. Your Access Key will be displayed. Copy and keep it aside for further usage.

Now that we are done setting up the free account on Sauce Labs and have our Access Key (authentication token), the next step is to upload our apk (for Android) or ipa/app (for iOS) to the Sauce Labs cloud.

Command to upload Android test application (apk file) on Sauce Labs

curl -u <saucelabUserName>:<saucelabAccessKey> -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/<saucelabUserName>/<name of app file>?overwrite=true --data-binary @<absolute local path of app file>

The following parameters need to be changed before executing the above command.

saucelabUserName — username/email used to access the Sauce Labs account.

saucelabAccessKey — Access Key associated with your Sauce Labs account.

name of app file — name of the apk file (for Android).

absolute local path of app file — absolute path of your apk file (the actual location of the file on the system’s hard drive), e.g. /Users/Documents/testApps/testapp.apk

I have downloaded a sample apk file to demonstrate how we can execute test cases for the Android native platform on Sauce Labs. The app is ApiDemos-debug.apk and it is placed in my system’s Downloads folder. You can download it from here: ApiDemos-debug.apk.

Now, assuming your saucelabUserName is testuser and your saucelabAccessKey is testaccesskey, all the required parameters should look as follows.

saucelabUserName — testuser

saucelabAccessKey — testaccesskey

name of app file — ApiDemos-debug.apk

absolute local path of app file with file name — /Users/ngoyal/Downloads/ApiDemos-debug.apk

Your command should now look like this.

curl -u testuser:testaccesskey -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/testuser/ApiDemos-debug.apk?overwrite=true --data-binary @/Users/ngoyal/Downloads/ApiDemos-debug.apk

Now open a Terminal/Command Prompt and execute the above command. It may take a few seconds to upload your file to the Sauce Labs cloud. After a successful upload you should see something like this in the Terminal.

Note – If, by any chance, you see the "size" as 0, it means your file was not uploaded to the Sauce Labs cloud. Check the command and run it again, ensuring all parameters are configured correctly.

After uploading the test app (ApiDemos-debug.apk) to Sauce Labs successfully, we will manually start a virtual device with a specific configuration on the Sauce Labs cloud using the Appium server, to ensure that all our capabilities are correct.

Steps to start a virtual device manually on Sauce Labs

1. Start the Appium server.

2. Once the server is started, click on the Search icon (the first icon at the top right corner).

3. Now click on the Sauce Labs tab on the next screen and enter your Sauce Username and Access Key.

4. Now click on Desired Capabilities and add the following capabilities one by one by clicking on the + icon.

Note – the value of the ‘app’ capability should be sauce-storage:<your test app name>

5. Now click on the Start Session button.

6. A rotating loader should be displayed on the screen, as follows.

7. Now go to http://saucelabs.com and sign in with your account. Go to Dashboard and click on Automated Tests.

8. You will see a running job named "Unnamed job with c66ob………..".

9. Click on the job; you will see a message "Loading Live video". It may take some time to launch the live video of your running test.

10. After a few seconds you will be able to see the live video of your test running on the device (6.0 in our case) that you asked for.

This means all our configuration for the Android device is correct and our test is launched on the virtual device. Now let’s see how we can do it for the iOS platform.

Command to upload an iOS test application (ipa/app file) on Sauce Labs

curl -u <saucelabUserName>:<saucelabAccessKey> -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/<saucelabUserName>/<name of app file>?overwrite=true --data-binary @<absolute local path of app file>

The command is the same as the one used to upload the Android apk, but one thing should be kept in mind while uploading the iOS app: "the app should be in zip format". Yes, you read that right. You have to compress your app to zip format and configure the app name and absolute path accordingly.

The following parameters need to be changed before executing the above command.

saucelabUserName — username/email used to access the Sauce Labs account.

saucelabAccessKey — Access Key associated with your Sauce Labs account.

name of app file — name of the ipa/app file in zip format (for iOS).

absolute local path of app file — absolute path of your ipa/app file in zip format (the actual location of the file on the system’s hard drive), e.g. /Users/Documents/testApps/testiosapp.zip

I have downloaded a sample app file to demonstrate how we can execute test cases for the iOS platform on Sauce Labs. The app is UICatalog.app and it is placed in my system’s Downloads folder.

After converting it to zip, it becomes UICatalog.zip.

Now, assuming your saucelabUserName is testuser and your saucelabAccessKey is testaccesskey, the required parameters are as follows.

saucelabUserName — testuser

saucelabAccessKey — testaccesskey

name of app file — UICatalog.zip

absolute local path of app file with file name — /Users/ngoyal/Downloads/UICatalog.zip

Your command should now look like this.

curl -u testuser:testaccesskey -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/testuser/UICatalog.zip?overwrite=true --data-binary @/Users/ngoyal/Downloads/UICatalog.zip

Now open a Terminal/Command Prompt and execute the above command. It may take a few seconds to upload your file to the Sauce Labs cloud. After a successful upload you should see something like this in the Terminal.

If, by any chance, you see the "size" as 0, it means your file was not uploaded to the Sauce Labs cloud. Check the command and run it again, ensuring all parameters are configured correctly.

After uploading the test app (UICatalog.zip) to Sauce Labs successfully, we will manually start a virtual device with a specific configuration on the Sauce Labs cloud using the Appium server.

Follow the same steps as explained for Android, but use the following capabilities to start the iOS virtual device.

After clicking on the Start Session button, you can go to Sauce Labs and see the live execution happening on the virtual device (iOS Simulator).

With this, we are done with manually setting up and executing our tests on the Android Emulator and iOS Simulator.

Now we will see how we can do that programmatically. We just have to add two steps beyond what we did in the manual set-up and we are all set 🙂

Set the following as environment variables in your system.

SAUCE_USERNAME

SAUCE_ACCESS_KEY

If you are a Mac user, you can set these environment variables as follows:

export SAUCE_USERNAME=<your sauce lab user name>
export SAUCE_ACCESS_KEY=<your sauce lab access key>
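With the environment variables in place, you can assemble the remote URL in code instead of hard-coding credentials. A minimal sketch; the variable names match the exports above, the rest follows the URL format used later in this post:

// Read the Sauce Labs credentials from the environment so that they
// never appear in the source code, then build the remote hub URL.
String user = System.getenv("SAUCE_USERNAME");
String key = System.getenv("SAUCE_ACCESS_KEY");
URL url = new URL("https://" + user + ":" + key + "@ondemand.saucelabs.com:443/wd/hub");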

Launching and Running the Android Native test cases on Sauce Labs programmatically

The following capabilities can be used to run Android native test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");
capabilities.setCapability(MobileCapabilityType.APP, "sauce-storage:<your apk file name>");
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Emulator");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);

These capabilities will launch the apk file on a virtual device on Sauce Labs. Further, you can write your test cases to be executed.

Note – Platform version and the device name may vary based on the device to be configured

 

Launching and Running the Android Web test cases on Sauce Labs programmatically

The following capabilities can be used to run Android web test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Android Emulator");
capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Chrome");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);

These capabilities will launch the virtual device on Sauce Labs with the Chrome browser opened. Further, you can write your test cases to be executed.

Note – Platform version and the device name may vary based on the device to be configured.

 

Launching and Running the iOS Native test cases on Sauce Labs programmatically

The following capabilities can be used to run iOS native test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");
capabilities.setCapability(MobileCapabilityType.APP, "sauce-storage:<your zip file name>");
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6 Simulator");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);

Note – Platform version may change based on the Mac version being used, and the device name may vary based on the device to be configured.

 

Launching and Running the iOS Web test cases on Sauce Labs programmatically

The following capabilities can be used to run iOS web test cases on Sauce Labs.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6 Simulator");
capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Safari");

URL url = new URL("https://<saucelab username>:<saucelab access key>@ondemand.saucelabs.com:443/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);

Note – Platform version may change based on the Mac version being used, and the device name may vary based on the device to be configured.

Please refer to the following link to check the various capabilities required for different platforms and versions.

https://wiki.saucelabs.com/display/DOCS/Platform+Configurator#

I hope this post helps you set up an execution environment on Sauce Labs for mobile test cases. Please do comment in case you face any difficulty while setting this up.

Thank You!


 

Blockchain: Reshaping Financial Services Industry

When we talk about innovations in the financial services industry, Blockchain technology appears at the top of the list. Blockchain is considered the second generation of the Internet, one that promises to bring transparency, trust, privacy and security to the global economy. According to the 2016 report by the World Economic Forum (WEF) on the future infrastructure of banking, 80% of banks will initiate blockchain projects by the end of 2017. Furthermore, it is estimated that blockchain investments will surpass the $3 billion mark by the end of this year. We can safely conclude that the time is right for financial institutions to embrace and unleash blockchain’s potential.

What is Blockchain?

Blockchain is a distributed ledger that stores information or transactions performed by millions of computers every day. The data in a distributed ledger is stored with the consensus of participating nodes and is replicated across the network. Such distributed ledgers are useful for real-time and secure data sharing.

What makes blockchain unique is that transactions can’t be modified after they are committed, which makes the records immutable and secure. Most people use a trusted middleman such as a bank to make a transaction. However, blockchain facilitates the peer-to-peer secure exchange of any type of value – money, goods or property – across the globe without the need for a third party.

There are two types of ledgers – public and private. Public ledgers allow anyone to add or read the data without the approval of any authority. Bitcoin and Ethereum are examples of public blockchains. Private ledgers, on the other hand, are restricted to a limited number of participants, who need permission to join the group. Considering data security and the regulatory environment, banks and financial institutions are exploring this type of blockchain.

Impact of Blockchain on the industry

Blockchain is revamping the financial services industry for speed and inclusion. Since data is stored in encrypted form on the shared ledger and is the single source of truth, all the authorized stakeholders in the value chain can fetch information directly from the blockchain without depending on each other. This can help achieve faster processing speeds and a significant reduction in cost.

Using blockchain, billions of people who are excluded from the economy will be connected and will contribute to the global economy. Global remittances that take days will now be completed in a few seconds and at much lower cost. Bureaucracy and corruption will be eliminated from the financial systems as blockchain holds all the transacting parties accountable for their actions.

Blockchain Use Cases

Blockchain will soon transform the way banks and financial institutions operate. Some of the use-cases being actively worked upon are:

Digital Identification

When a customer opts for multiple bank accounts, he is required to provide his identification details to every bank. The present financial system doesn’t allow banks to share that client information with each other, as the information is stored in each bank’s central repository.

Blockchain can be used by banks for know-your-customer (KYC) requirements. Once the customer registers his identity, he is not required to register with every bank, provided those banks are connected to the blockchain. Using a single source of client identification, blockchain can help banks not only reduce costs but also optimize resources while maintaining data security.

Cross-Border Payments

Traditionally, cross-border payments are facilitated by multiple trusted intermediaries such as banks and remittance centers. Such third parties usually take 3-5 business days for transfers and charge heavy fees on remittances.

With blockchain technology, payment transactions could be simplified by eliminating the need for middlemen while substantially reducing the processing time and the costs of remittance. Moreover, blockchain maintains an audit trail of every transaction, which means that the source and destination of any illegal transaction could easily be traced. This is a significant development for financial institutions and regulators worldwide.

One such example is Ripple, a blockchain-based payment system for banks that can be used to make secure, real-time payments globally at a reasonable cost.

Clearing and Settlement

The post-trade clearing and settlement stages are important parts of the equity trading process. After trade confirmations are received, trade settlement usually takes 3 days, during which investors can’t take any action on the securities. Various intermediaries such as custodians, depositories, clearing houses, exchanges and brokers are involved, and transaction records are stored in centralized databases.

Blockchain, or distributed ledger technology, can automate the post-trade process using smart contracts and improve the efficiency of the clearing system, thereby reducing trading costs. Furthermore, trade settlement could happen in real time, with better governance and collaboration among all the market participants.

NASDAQ has been at the forefront of the blockchain revolution and has built the Linq blockchain ledger to complete and record securities transactions on its exchange.

Smart Contracts

Currently, we rely on third parties such as our judicial system, lawyers or notaries for the enforcement of paper contracts such as property agreements, employer-employee contracts, partnership contracts, vendor agreements, etc. Smart contracts on blockchain are transforming the way we look at standard paper contracts.

Smart contracts are computer programs that facilitate, verify and execute paperless contractual instruments between parties. Since smart contracts are self-executing, they can eliminate the need for middlemen and can be programmed to execute under certain conditions and rules. The involved parties can access contracts anywhere and approve them faster, resulting in improved speed and efficiency of the whole contracting process.

How does the future look?

The financial services industry has moved past the awareness stage, and banks and financial institutions globally are investing in Proofs of Concept (POCs) to explore blockchain’s capabilities. Improved efficiency, transparency, faster payments, security and immutability are the key benefits that organizations will reap by adopting the technology.

Considering its rising popularity, the adoption rate of blockchain will continue to increase, and we expect it to become a mainstream technology in the next few years. Blockchain is our gateway to the future of finance and will become part of critical financial infrastructure for providing better, cheaper, more secure and faster financial services to customers.

About Xebia

Xebia, a niche agile software development and digital consulting firm, is an active member of the Blockchain Special Interest Group (SIG) set up by NASSCOM for developing and collaborating on blockchain implementations. Our mission is to not only help our global clients with blockchain implementations but also assist them in navigating the complex blockchain landscape, while at the same time creating awareness in academia and the industry.

https://xebia.com

Notes

[1] Tapscott, Don, and Alex Tapscott. "Blockchain Revolution: How the Technology behind Bitcoin Is Changing Money, Business and the World." Penguin Random House LLC, 2016.

[2] Rubini, Agustin. "Fintech in a Flash: Financial Technology Made Easy." Simtac Ltd, 2017.

Appium – Setting up various mobile platforms for automation

Hey Folks,

This post will take you through how to set up various mobile platforms (Android Native, Android Web, iOS Native and iOS Web) using DesiredCapabilities, virtual devices and the Appium server.

Though plenty of information is already available on this over the internet, this post tries to consolidate it in one place, and we will see how we can run native and web apps on virtual devices (Android Emulator and iOS Simulator).

Let’s get started without wasting any time.

You will need the following software (versions may vary) to set up the platforms. I am using the versions below; you may use other versions as well, ensuring the best compatibility.

Appium 1.6.5

Genymotion 2.10.0 (for Android Emulator)

– You can also use the default emulators that come with Android Studio.

Xcode 8.2.1 (for iOS Simulator)

Setting up Desired Capabilities for Android Native platform:

The following capabilities will be used to launch any Android native app on the Android virtual device (Emulator).

DesiredCapabilities capabilities = new DesiredCapabilities();
File app = new File("<path to the android app apk file>");
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");
capabilities.setCapability(MobileCapabilityType.APP, app.getAbsolutePath());
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Emulator");

URL url = new URL("http://0.0.0.0:4723/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);

Explanation:

First we create an object of DesiredCapabilities; this object will be used to carry all the desired capabilities.

Next we create a File object and assign to it the path to the apk file, including the apk name.

Then we set the various capabilities on the capabilities object.

  • PLATFORM_VERSION – should be your virtual device’s version.
  • PLATFORM_NAME – should be Android for Android platforms (irrespective of native or web).
  • APP – should be the absolute path to the apk file, so that Appium can find and install it on the device.
  • DEVICE_NAME – can be Emulator, or the device name (e.g. Nexus 5) assigned while creating the virtual device.

Now we create the URL object, providing the address where the Appium server is running. By default it runs on 0.0.0.0:4723, as you can see below.

You can run appium on different ports using the command appium -p <port number>

Finally, we launch the AndroidDriver using the URL and DesiredCapabilities we created. If everything is set up correctly, the Appium server is up, and the virtual device is running, it should launch your apk file on the virtual device.
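Once the driver is up, a quick sanity check might look like the sketch below; the accessibility id "Views" is purely illustrative, and MobileBy comes from io.appium.java_client:

// Tap a control located by its accessibility id, then end the session.
driver.findElement(MobileBy.AccessibilityId("Views")).click();

driver.quit();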

Setting up Desired Capabilities for Android Web platform:

It’s pretty much similar to the Android Native set-up, except for a few things that we will cover here.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "6.0");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "Emulator");
capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Chrome");

URL url = new URL("http://0.0.0.0:4723/wd/hub");

AppiumDriver driver = new AndroidDriver(url, capabilities);

Explanation:

Since this is the Web platform we do not need any apk file, and hence no capability for the app path. Instead, we mention the name of the browser we want to launch; Chrome in our case.

Start the Appium server and the virtual device (Emulator), then run the code where these capabilities are written. It should launch the Chrome browser on the virtual device.

Now you can write test cases to open any url and test various scenarios.

Note – Chrome should be installed on the virtual device where we want to run our test. If you do not have Chrome installed, follow any post from the internet to install it on the virtual device, or refer to this post:

Install Google Play Store and Chrome on Genymotion Virtual Device

Setting up Desired Capabilities for iOS Native platform:

DesiredCapabilities capabilities = new DesiredCapabilities();
File app = new File("<path to the ios .app or .ipa file>");
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");
capabilities.setCapability(MobileCapabilityType.APP, app.getAbsolutePath());
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6");

URL url = new URL("http://0.0.0.0:4723/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);

Note – The platform version and the device name should be as per the devices configured in Xcode. I have taken the platform version and device name from the device configured in Xcode, as shown below.

These capabilities should launch the iOS Simulator and install your iOS native app. You can then automate your test cases for the native app.

Setting up Desired Capabilities for iOS Web platform:

Now we will see how to launch Safari in the iOS Simulator to automate web apps.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(MobileCapabilityType.PLATFORM_VERSION, "10.2");
capabilities.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");
capabilities.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 6");
capabilities.setCapability(MobileCapabilityType.BROWSER_NAME, "Safari");

URL url = new URL("http://0.0.0.0:4723/wd/hub");

AppiumDriver driver = new IOSDriver(url, capabilities);

Explanation:

For the web platform we do not need any app, so we remove the related capability and add the capability for the browser name.

Start the Appium server and execute the code with all the capabilities mentioned above. The iOS Simulator should launch with Safari.

You can further write your test cases to automate any web app.

Note – The iOS Simulator comes with Safari installed by default; no need to install it explicitly.

Thank you! In the coming posts we will see how to execute mobile test cases on Sauce Labs.

Any feedback is most welcome 🙂

Banking Industry: Innovate or Perish

With the advent of digitization, the banking industry is undergoing a massive change: a disruption in the way traditional banks do business. With technological shifts and changes in the regulatory environment, the emerging FinTech players are giving incumbents a good run for their money. New players, business ideas, platforms, and models are rising out of the constantly evolving technological landscape.

There are new, innovative technology entrants reaching out to customers effectively, while most banks are still struggling to stay relevant. As customers lean toward a seamless digital banking experience, traditional banking practices are gradually fading into oblivion.

Companies like LendingClub, Square, Uber, Mint, Alibaba, etc., are disrupting the traditional model of banking. LendingClub, for example, provides peer-to-peer loans at lower interest rates, Uber provides auto financing to drivers, and Alibaba provides the largest online payment platform.

Digital technologies are reshaping the industry in several ways.

First, financial transactions are moving from traditional channels to digital channels (mobile and web), changing the way customers interact with banks and FinTech companies.

Second, banks are getting unbundled. Lending platforms and digital payments are growing phenomenally, removing reliance on banks for making payments or borrowing money.

Third, the speed of financial transactions has increased astronomically. Transactions such as remittances that used to take days can now be completed within a few seconds using blockchain technology.

And last, Robo Advisors that use complex computer algorithms are going to replace human financial advisors, providing unbiased financial advice at a much lower cost.

Banks must reinvent, why?

Retail banks are facing difficult times. They may soon disappear as technology players catch up fast and challenge their very existence. Tech startups are building advanced business models that are difficult for banks to imitate and follow at such a rapid pace.

It is becoming imperative for incumbent banks to reinvent themselves digitally; otherwise, they may have a hard time building relationships with the next generation of consumers, and risk losing their market share and becoming obsolete.

This tipping point, on the other hand, provides banks a huge opportunity for growth, provided they relentlessly innovate to offer better online experiences with product offerings tailored to customers’ needs.

Digital transformation initiatives should be taken up by banks to move quickly, like startups do. First, banks need to adopt an agile mindset to build digital solutions faster for their customers. By promoting a lean learning culture, they can test and refine the customer experience and iterate on products faster, resulting in a better product-market fit.

Second, they need to extend their existing technology capabilities and go all-digital. They may be using technology for workflows and internal operations, but most of them still send and receive documentation by paper or fax for account opening, loan and credit card applications.

An alternative approach: customers use smart devices for everything from opening an account and paying bills to investing in their preferred financial instruments, without ever stepping inside a physical branch.

FinTechs and Banks Need to Collaborate

The primary goal of FinTechs was to solve banking-sector problems by focusing on a seamless digital customer experience, streamlining operational processes, and offering innovative products and services in lending, payments, wealth management, and remittance. Collaboration between banks and FinTechs can bring immense value to both parties.

FinTechs are technology-focused. They are not constrained by bulky legacy banking systems. They can add agility to ideas and technology implementations at banks and can help increase their product and service offerings.

Banks, on the other hand, are risk-averse and usually move slowly due to the highly regulated environment. However, they can provide FinTechs their industry expertise (operations, legal, risk) and access to their banking systems infrastructure.

Though banks have been reluctant, some of them now offer APIs for FinTechs to provide a wide range of products and services to their customers. Moreover, regulations such as the Payment Services Directive (PSD2) are compelling banks to open up to third parties and bring more transparency into the system.

Overall, banks are bringing the barriers down, allowing entry for FinTechs and providing them access to a huge customer base which would otherwise take them a long time to build.

The Future of the Industry

The banking industry will continue to remain highly competitive, with an increasing number of FinTech entrants bringing creative banking solutions to customers and earning profits. It is quite possible that some traditional banks won’t be able to adapt to this changing industry. Unfortunately, this might either lead to the collapse of a few banks or force them to radically change their business models.

Collaboration between banks and FinTechs will be key to shaping the future of the financial services ecosystem. Industry players who relentlessly innovate to provide a frictionless digital customer experience, while delivering personalized products and services faster, will sustain longer in this constantly evolving landscape.

About Xebia

Xebia, a niche agile software development and digital consulting firm, has teamed up with highly qualified partners to implement digital banking solutions for banks and financial organizations globally. We have a Center of Excellence (COE) dedicated to providing state-of-the-art Omni-Channel Digital Banking Solutions. Our mission is to help financial clients succeed in their digital transformation so that they can not only survive, but also thrive in today’s highly competitive landscape.

https://xebia.com

References:

[1] Broeders, Henk, and Somesh Khanna. "Strategic choices for banks in the digital age." McKinsey & Company, 2015. http://bit.ly/2A1otw2
[2] Chishti, Susanne, and Janos Barberis. The FINTECH Book. John Wiley & Sons, 2016.
[3] Rubini, Agustin. Fintech in a Flash: Financial Technology Made Easy. Simtac Ltd, 2017.

Reusing Rails Scopes

In this post, I would like to discuss one of the least explored methods of ActiveRecord,
i.e. merge. I came across this method after working with Rails for about two years.
It mainly focuses on merging scopes in associations. Confused? Let’s head straight to an example.

Let’s say I’ve orders, line items and products as follows:

class Order < ApplicationRecord

  has_many :line_items
  has_many :products, through: :line_items

end
class LineItem < ApplicationRecord

  belongs_to :order
  belongs_to :product

end
class Product < ApplicationRecord
  scope :popular, -> { where("products.published = ? and products.bought_count > ?", true, 200) }
end

Now, here I would like to find all the orders with popular products. Normally we would approach the problem as follows:

  Order.joins(:products).where("products.published = ? and products.bought_count > ?", true, 200)

Or

class Order < ApplicationRecord

  has_many :line_items
  has_many :products, through: :line_items

  scope :with_popular_products, -> { joins(:products).where("products.published = ? and products.bought_count > ?", true, 200) }
end
Order.with_popular_products

The resulting query is

  SELECT "orders".*
  FROM "orders"
  INNER JOIN "line_items" ON "line_items"."order_id" = "orders"."id"
  INNER JOIN "products" ON "products"."id" = "line_items"."product_id"
  WHERE (products.published = 't' and products.bought_count > 200)

Here, the issues are

  1. Our code does not abide by the DRY principle.
  2. If the business logic for a popular product changes, we would have to change it everywhere it is used.
  3. The popular-product logic should have nothing to do with our Order model. It should stay encapsulated within the product.

How does merge come to the rescue?

Let’s see for ourselves how merge can help us get rid of the above issues. We can rewrite the above scope as

scope :with_popular_products, -> { joins(:products).merge(Product.popular) }

results in

  SELECT "orders".*
  FROM "orders"
  INNER JOIN "line_items" ON "line_items"."order_id" = "orders"."id"
  INNER JOIN "products" ON "products"."id" = "line_items"."product_id"
  WHERE (products.published = 't' and products.bought_count > 200)

It fires the same query as above, but the code is much cleaner and avoids duplication.

Here we can see that the product logic remains within the product, and we can reuse it whenever and wherever required; a quick illustration follows.
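For example, any other model that joins products can reuse the exact same scope. A small sketch; the Category model below is hypothetical:

# Hypothetical model reusing Product.popular without duplicating its logic.
class Category < ApplicationRecord
  has_many :products

  scope :with_popular_products, -> { joins(:products).merge(Product.popular) }
end

Category.with_popular_products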

There are some more ways of using merge

  1. Performing a join with multiple where conditions across tables. Let’s suppose we need to find the delivered orders whose products are published.
    There are two ways to do it

    Order.joins(:products).where(status: :delivered, products: { published: true})

    OR

    Order.where(status: :delivered).joins(:products).merge(Product.where(published: true))

    Both will result in the same query as follows

    SELECT "orders".*
    FROM "orders"
    INNER JOIN "line_items" ON "line_items"."order_id" = "orders"."id"
    INNER JOIN "products" ON "products"."id" = "line_items"."product_id"
    WHERE ("orders"."status" = 'delivered' and products.published = 't')

    But what if my Product model has a different table name, say deals?

    class Product < ApplicationRecord
      scope :popular, -> { where("products.published = ? and products.bought_count > ?", true, 200) }
    
      def self.table_name
        'deals'
      end
    end

    I’ll have to change my where query as per the table name, and will always have to keep such table-name mappings in mind

    Order.joins(:products).where(status: :delivered, deals: { published: true})

    But do you know what will happen to the query using merge?
    Voila! No change needed!

  2. Merging two results. Imagine you have a website of various technical course videos, and these videos can be accessed as per the user’s access level.
    For example, a guest user can view, say, just the first video of each course; a user who has an account can view some free courses; a user with a subscription can view the paid courses too.

    Suppose we’re using CanCan to manage the abilities of the user. Now to find the videos accessible by a user we can use

    Video.accessible_by(current_ability)

    where current_ability tells me the access rights of the user (guest/free/subscription).

    Now if I want to find the videos accessible by the current user which are also published

    accessible_videos = Video.accessible_by(current_ability)
    Video.where(published: true).merge(accessible_videos)

    It returns the intersection of all published videos with the ones accessible by the current user.

Please drop in your suggestions/feedback in the comments below to help me improve.

Performance tuning the Camel parameters in a Backbase CXP application

Backbase is an Omni-Channel Digital Banking platform empowering financial institutions to accelerate their digital transformation and effectively compete in a digital-first world. It unifies functionality from traditional core systems and new FinTech capabilities into a seamless digital customer experience, thereby drastically improving every customer channel.
In any banking application, we interact with core banking for everything via the Middleware ESB. In a Backbase CXP application, we make all calls to the Middleware via Camel. A typical Backbase CXP application’s architecture and system interaction is shown below.

backbase cxp interaction

In a recent Backbase CXP project that went live, we began experiencing slowness in the application when the number of concurrent users increased to 200+, and it became difficult to use the iOS and Android apps that were consuming the Backbase CXP backend. We generated thread dumps while the system was hanging and analyzed them using the tools Samurai and jvisualvm. Lots of these threads were in the WAITING state. We analyzed the thread dumps a bit more and found that many of these threads were waiting with a stack trace like the one below.


"http-nio-8080-exec-499" #679 daemon prio=5 os_prio=0 tid=0x00007fb1f4204000 nid=0xb685 in Object.wait() [0x00007fb148308000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.doGetConnection(MultiThreadedHttpConnectionManager.java:518)
- locked <0x0000000e853c1c30> (a org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.getConnectionWithTimeout(MultiThreadedHttpConnectionManager.java:416)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:153)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)

As we can see in the thread dump snippet above, the thread is waiting while trying to get a connection from the MultiThreadedHttpConnectionManager. We identified this as the problem that was causing so many threads to wait, resulting in the slowness. In our codebase, we had created a common Camel route to connect to the Middleware for all calls, so this is the common HTTP endpoint that Camel invokes every time. We looked further into our codebase and the Camel source code and discovered that camel-core uses Apache commons-httpclient to make HTTP connections to the Middleware through the class MultiThreadedHttpConnectionManager. We also found that Backbase ships its own default multithreaded connection manager, defined in backbase-ptc.xml (part of ptc-core.jar); the property ptc.http.maxConnectionsPerHost controls the number of connections per host.

<bean id="ptc_httpConnectionManager" class="org.apache.commons.httpclient.MultiThreadedHttpConnectionManager">
<property name="maxConnectionsPerHost" value="$ptc{ptc.http.maxConnectionsPerHost}"/>
<property name="maxTotalConnections" value="$ptc{ptc.http.maxTotalConnections}"/>
</bean>

The default values of the maximum connections in this connection manager are far too low to handle 200+ concurrent users:

## Maximum number of concurrent requests for one remote resource.
ptc.http.maxConnectionsPerHost=50

## Maximum total number of concurrent requests.
ptc.http.maxTotalConnections=100

Solution

The approach we took to solve this problem was to make the Backbase CXP CamelContext use a different MultiThreadedHttpConnectionManager configured with higher values. To change the default Camel context, we copied the default backbase-integration.xml to portalserver/src/main/resources/META-INF/spring/backbase-integration.xml and then edited the file to attach the new MultiThreadedHttpConnectionManager to the Camel context using the following configuration.

<bean id="http" class="org.apache.camel.component.http.HttpComponent">
<property name="camelContext" ref="bb-integration-context"/>
<property name="httpConnectionManager" ref="myHttpConnectionManager"/>
</bean>

<bean id=”myHttpConnectionManager” class=”org.apache.commons.httpclient.MultiThreadedHttpConnectionManager”>
<property name=”params” ref=”myHttpConnectionManagerParams”/>
</bean>

<bean id=”myHttpConnectionManagerParams” class=”org.apache.commons.httpclient.params.HttpConnectionManagerParams”>
<property name=”defaultMaxConnectionsPerHost” value=”1000“/>
<property name=”maxTotalConnections” value=”1000“/>
</bean>

With the above configuration, we defined myHttpConnectionManager to handle a higher load and attached it to the default Camel context bb-integration-context. With this change, the Camel context picked up the new connection manager, which could handle a much higher volume of requests. This removed the bottleneck on the HTTP connections made to the ESB, and the application performed well again.

Another, simpler approach would have been to just change the default values in the backbase.properties file:

ptc.http.maxConnectionsPerHost=some_number
ptc.http.maxTotalConnections=some_other_number

We could have done this, but the ptc module was slated for removal in upcoming Backbase versions, so we stuck with our initial approach.

Conclusion

In any Backbase CXP application, the Camel connection bottleneck is a sure problem with the default values (especially if CXP runs on a single node), since all HTTP requests are sent to the Middleware; once concurrent users reach 200+, the application slows down.
This blog has shown how to solve the performance issue caused by threads waiting in the MultiThreadedHttpConnectionManager.

Selenium with C# and xUnit

In this blog I will explain how you can start automating your functional tests using C# and xUnit.

A little introduction about the tools and technologies we are going to use:
  • C# is an elegant and type-safe object-oriented language that enables developers to build a variety of secure and robust applications that run on the .NET Framework. Source
  • Selenium is a portable software testing framework for web applications. Selenium provides a record/playback tool for authoring tests without learning a test scripting language (Selenium IDE). It supports C#, Java, Groovy, Perl, PHP, Python and Ruby, and the tests can then be run against most modern web browsers. Selenium deploys on Windows, Linux, and Macintosh platforms. Source
  • xUnit is a free, open source, community-focused unit testing tool for the .NET Framework. Written by the original inventor of NUnit v2, xUnit is the latest technology for unit testing C#, F#, VB and other .NET languages. Source

I am not going into the details, as there is plenty of information available on the Internet about these; we will focus on creating an actual xUnit test framework using C# and Selenium.

Prerequisites

  • Visual Studio (for this blog we are using VS 2015)

Let’s Start

  • Open Visual Studio

  • Click on New Project; in the window that opens, select a Class Library project, enter a name for the solution (say “Functional”) and for the project (say “Demo”), then click OK.

  • Now you will see a screen like the following:

  • Right-click on the project “Demo” and select “Manage NuGet Packages…”.

  • It will take you to the following screen:

  • Search for following packages and install those:
    • Selenium.WebDriver
    • Selenium.Chrome.WebDriver (chrome driver exe)
    • xUnit.runner.visualstudio (to discover xUnit tests)

Just a note: xUnit has its own way of working. Unlike NUnit or MSTest, there is no [SetUp] or [TestInitialize]; you achieve setup with a parameterless constructor (and teardown with IDisposable.Dispose).

You can find all the other details in the following article: https://xunit.github.io/docs/comparisons.html

  • Now you are ready to automate your first test: change the class name to Tests (or whatever you want) and start writing your first test, as in the sketch below.
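
A minimal sketch of such a test (the class name, URL, and assertion here are illustrative assumptions, not the original linked sample; it assumes the Selenium.WebDriver, Selenium.Chrome.WebDriver, and xUnit packages installed above):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using Xunit;

namespace Demo
{
    public class Tests : IDisposable
    {
        private readonly IWebDriver _driver;

        // xUnit has no [SetUp]; the parameterless constructor runs before each test
        public Tests()
        {
            _driver = new ChromeDriver();
        }

        [Fact]
        public void HomePage_Title_Contains_Google()
        {
            _driver.Navigate().GoToUrl("https://www.google.com");
            Assert.Contains("Google", _driver.Title);
        }

        // Dispose runs after each test, standing in for [TearDown]/[TestCleanup]
        public void Dispose()
        {
            _driver.Quit();
        }
    }
}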

  • Now just build your code by right-clicking the project Demo or by pressing Ctrl + Shift + B, and you will be able to see your test in the Test Explorer.

  • To run your test, right-click on the test itself or click Run All in the Test Explorer.


  • It will execute the test, and if the assertion passes, it will show your test case as passed.


This blog is just meant to give you an idea of how to get started with Selenium, C#, and xUnit; from here you can build a complete framework of your choice. If you need more in-depth information or have any feedback, please mention it in the comments.

Why do we react?

Functional programming and reactive programming have mostly been theoretical concepts for frontend developers in the past, because they seemed like overkill, especially for something as simple as web pages in the era when the frontend was a dumb, static representation of the server state.

But now things have changed. Redux creator Dan Abramov rightly compares asynchronicity and mutation with Coke and Mentos: common enough in our day-to-day lives on their own, but brought together, the mixture can become explosive enough to get out of control very soon.

Today we create multi-platform, high-performance, rapidly evolving interfaces to cater to fairly complex applications.

User interfaces, being event-driven in nature, have had a problem of not scaling well. Changes or events in one part of a page can have an effect on some other page or part of the page. It feels like a fire under control to start with, but when these effects become causes for other changes in the application, the end of these cascading CAUSE and EFFECT cycles quickly becomes unpredictable. Also, when applications are architected with little decoupling and abstraction between the different layers of responsibility, sophisticated design patterns find no place in frontend development. Reasons like these combine to quickly give rise to `edge cases` and then `dirty hacks` on code that cannot be accompanied by automated tests.

ReactJS (and the ecosystem around it) is built on the principles of functional reactive programming, where the application is always in a predictable state and the entire view is a consistent mapping of that state object. That allows us to make controlled changes to the data to cause a visual change in the application, rather than changing the actual visible page itself (which gets efficiently re-rendered to reflect the state data change). It’s not just easy, but also intuitive, to add automated tests for these applications, without which a lot of programmers today would call even fresh code legacy. Emerging design patterns like flux (put together by Facebook) make it possible to control the data flow and the side effects of its change.
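
As a small illustration of the view being a mapping of state (a hypothetical Counter component, sketched against React’s classic class-component API):

// The rendered view is purely a function of this.state
class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  render() {
    // No manual DOM mutation: we change state, and React re-renders the view
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        Clicked {this.state.count} times
      </button>
    );
  }
}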

That said, ReactJS is neither a complete frontend solution, nor the only one available today. It fits in as the View (V) with flux and with the MV* paradigms popular in the recent past. Its popularity and success have opened doors to emerging alternatives that can be even more performant under certain circumstances (e.g., inferno). While React and React-like libraries make the most preferred fits for the V (view) part of modern frontend applications, the rest of the pieces, like the M (model), C (controller), presenter, pipeline, state store, etc., are carefully chosen depending upon the requirements of the project.

With the kind of decoupling built into ReactJS, it can be used not only in web applications, but also in mobile and desktop applications, with performance as good as native alternatives, if not better. This brings the great advantage of being able to efficiently code views for all devices and platforms in a common language (JavaScript), by the same engineers. Facebook and Netflix are good examples of apps that run on React across all platforms of consumption.

reactJS: https://facebook.github.io/react/
reduxJS: http://redux.js.org/
infernoJS: https://infernojs.org/
flux: https://facebook.github.io/flux/