Why do we react?

Functional and reactive programming used to be largely theoretical concepts for frontend developers: they seemed like overkill for something as simple as web pages, in an era when the frontend was a dumb, static representation of the server state.

But things have changed. Redux creator Dan Abramov aptly compares asynchronicity and mutation to Coke and Mentos: each is common and harmless on its own, but bring them together and the mixture can quickly become explosive enough to get out of control.

Today we create multi-platform, high-performance, rapidly evolving interfaces to cater to fairly complex applications.

User interfaces, being event-driven in nature, have had a problem of not scaling well. A change or event in one part of a page can affect another page or another part of the same page. This feels like a controlled fire to start with, but when these effects become causes of further changes, the end of these cascading cause-and-effect cycles quickly becomes unpredictable. Moreover, when an application is architected with little decoupling and abstraction between the different layers of responsibility, sophisticated design patterns find no place in the frontend codebase. These factors combine to quickly breed `edge cases`, and then `dirty-hacks`, in code that cannot be covered by automated tests.

ReactJS (and the ecosystem around it) is built on the principles of functional reactive programming: the application is always in a predictable state, and the entire view is a consistent mapping of that state object. This lets us make controlled changes to the data to cause a visual change in the application, rather than mutating the visible page itself (which gets efficiently re-rendered to reflect the state change). It's not just easy but also intuitive to add automated tests for these applications, and without tests a lot of programmers today would call even fresh code legacy. Emerging design patterns like Flux (put together by Facebook) make it possible to control the data flow and the side effects of its change.

That said, ReactJS is neither a complete frontend solution, nor the only one available today. It fits in as the view (V) in Flux and in the MV* paradigms popular in the recent past. Its popularity and success have opened doors for emerging alternatives that can be even more performant under certain circumstances (e.g. Inferno). While React and React-like libraries are the preferred fit for the V (view) part of modern frontend applications, the rest of the pieces, such as the M (model), C (controller), presenter, pipeline, and state store, are carefully chosen depending on the requirements of the project.

With the kind of decoupling built into ReactJS, it can be used not only in web applications but also in mobile and desktop applications, with performance as good as native alternatives if not better. This brings the great advantage of being able to efficiently code views for all devices and platforms in a common language (JavaScript) by the same engineers. Facebook and Netflix are good examples of apps that run on React across all platforms of consumption.

reactJS: https://facebook.github.io/react/
reduxJS: http://redux.js.org/
infernoJS: https://infernojs.org/
flux: https://facebook.github.io/flux/

Why Akka?

When we have to write a concurrent program, our focus shifts from the real problem and most of our time is spent on ‘How to make it concurrent’. The challenge is to make the problem domain less coupled with the code we are writing to make it concurrent. One solution to this is introducing Akka into your system.

Akka is a toolkit and runtime for building highly concurrent, distributed, and fault-tolerant applications on the JVM, according to the official Akka website.

I will talk about a few points worth noting while considering Akka as a toolkit for your concurrent applications.

Creating a thread is not just creating an object:

When we create a normal Java object, we are just allocating heap memory for that particular instance. Creating a thread, however, comes with its own responsibilities. Every thread instance needs separate memory for its thread stack. Moreover, a Java thread has a one-to-one correspondence with an OS-level thread: when a new thread is created, the OS comes into the picture and handles the life cycle of the newly created thread, which needs a program counter, a register set and a stack, and must be registered with the thread scheduler. In essence, relative to normal Java objects, thread creation is slow, and due to these added liabilities there is a limit on the maximum number of threads per process (a few thousand). When we deal with an actor system, we are only creating Java objects (actors), not threads. Actors use the threads available in the thread pools as and when needed. Since actors are normal objects, millions of actor instances can be created in memory at once.
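To make this concrete, here is a minimal sketch using Akka's classic Java API that creates a million do-nothing actors, something that would be impossible with one OS thread per unit of work (the actor class, system name and count are invented purely for illustration):

import akka.actor.AbstractActor;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class ManyActors {

    // A do-nothing actor: just a small heap object plus some bookkeeping.
    static class Noop extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder().build(); // matches (and therefore handles) nothing
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("many");
        // A million actors are a million ordinary objects; they share the
        // system's small dispatcher thread pool instead of owning threads.
        for (int i = 0; i < 1_000_000; i++) {
            system.actorOf(Props.create(Noop.class), "noop-" + i);
        }
        System.out.println("created 1,000,000 actors");
        system.terminate();
    }
}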

Communication by passing messages is one of the greatest forms of abstraction:

Writing a concurrent program is not just about threads or locks; it's about managing access to state, in particular shared mutable state, and about controlling concurrent data access. In Akka, state is managed by the actor itself and can only be changed by passing messages. One actor can't manipulate state owned by another actor. If an actor wishes to read the current state of another actor, or wants to update it, it has to send a message; the other actor then takes an action based on that message and responds with the new state, in the form of a message itself. State modification is thus abstracted from the outside world. Each actor controls its own state and processes messages one at a time, so there is no need to add complexity for explicit synchronization. Messages are passed asynchronously, so there is no blocking at all unless specifically requested via the ask (?) pattern. This approach makes it easier to reason about your program: the low-level complexity of concurrency is handled by the library itself, and what you write is what you really needed to do.
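As an illustration, here is a minimal sketch of this style using Akka's classic Java API; the Counter actor, its message classes and the system name are invented for the example:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class CounterExample {

    // Immutable message types: the only way to interact with the actor.
    static final class Increment {}
    static final class GetCount {}

    static class Counter extends AbstractActor {
        private int count = 0; // private state, never shared directly

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(Increment.class, msg -> count++)
                    // State is only ever exposed as a reply message.
                    .match(GetCount.class, msg -> getSender().tell(count, getSelf()))
                    .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef counter = system.actorOf(Props.create(Counter.class), "counter");
        counter.tell(new Increment(), ActorRef.noSender()); // fire-and-forget, non-blocking
        counter.tell(new Increment(), ActorRef.noSender());
        system.terminate();
    }
}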

Location Transparency:

When we need our system to scale out, we need additional resources remotely, and normally the system has to be modified to handle remote calls, which has its own complexities. The Akka philosophy says 'Don't share mutable state at all'. Actors are responsible for making changes to their own state, which keeps the components loosely coupled, and loosely coupled components are easy to handle and process. Not sharing state implies that computations don't even require shared resources, which makes it easy to scale the system across multiple cores doing independent units of work. A distributed system is also much easier to configure in Akka: a remote actor feels as if it exists in the local system. The only difference is network latency, which is going to be present anyway, as the actor is on the other side of the network; the rest is the same. You only have to pass messages in the usual way, and depending upon where the actor is located, the messages are sent to its path. A path can be local (akka://ActorSystem/user/…) or remote (akka.tcp://ActorSystem@host:port/user/…). Creating a replica set is easy as well: when one of the remote systems goes down, the state of an actor persists, and a new actor system can be created, initializing the actors in their persisted state.
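For instance, looking up and messaging an actor works the same way whether its path is local or remote. A sketch, assuming classic Akka remoting is configured and using placeholder host, port and actor names:

import akka.actor.ActorRef;
import akka.actor.ActorSelection;
import akka.actor.ActorSystem;

public class LookupExample {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("ActorSystem");

        // Local path: resolved within this JVM.
        ActorSelection local = system.actorSelection("akka://ActorSystem/user/worker");

        // Remote path: same API, only the address differs (placeholders here).
        ActorSelection remote =
                system.actorSelection("akka.tcp://ActorSystem@host:2552/user/worker");

        // Messages are sent the same way in both cases.
        local.tell("job-1", ActorRef.noSender());
        remote.tell("job-2", ActorRef.noSender());
    }
}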

Delegation of tasks:

When we have a big problem to solve, it is easier to split it into sub-problems, solve them individually and then compose the results. Akka works on the same philosophy: an actor does not do all of its work itself but spawns multiple children, segregating responsibilities. Failures are part of any system and sometimes cannot be predicted; the only thing that can be done is to recover from them. Say you have a server consisting of multiple components, i.e. database, mailing, logging etc., all interacting with each other. In a highly coupled system, it is really difficult to make decisions on behalf of a single component: if one component crashes, you cannot handle that failure in isolation, as other components depend on it, and you might have to stop or restart the entire server. But what if components were capable of self-healing, or smart enough to decide what to do on certain failures? Creating such a system is difficult, but it becomes easier with Akka's fault-tolerance mechanism. The parent-child hierarchy of actors makes it easy to take decisions for the children: a supervisor can choose among various strategies, such as restarting all of its children if one fails, or restarting only the child that failed. Akka gives developers enough flexibility to choose the option that fits their needs.
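Here is a sketch of what such a supervision decision can look like with Akka's classic Java API; the exception types and the child-creation protocol are illustrative only:

import java.time.Duration;

import akka.actor.AbstractActor;
import akka.actor.OneForOneStrategy;
import akka.actor.Props;
import akka.actor.SupervisorStrategy;
import akka.japi.pf.DeciderBuilder;

public class Supervisor extends AbstractActor {

    // "One for one": act only on the failing child, at most 10 times per minute.
    private final SupervisorStrategy strategy =
            new OneForOneStrategy(10, Duration.ofMinutes(1),
                    DeciderBuilder
                            .match(ArithmeticException.class, e -> SupervisorStrategy.resume())
                            .match(IllegalArgumentException.class, e -> SupervisorStrategy.stop())
                            .matchAny(e -> SupervisorStrategy.restart())
                            .build());

    @Override
    public SupervisorStrategy supervisorStrategy() {
        return strategy;
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                // Illustrative protocol: send Props, receive a supervised child's reference.
                .match(Props.class, props ->
                        getSender().tell(getContext().actorOf(props), getSelf()))
                .build();
    }
}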

To summarize, Akka lets developers think in terms of the problem at hand and focus only on what needs to be built. Handling concurrency is taken off the developer's hands and taken care of by Akka itself. Developers also don't need to worry about remote systems, as Akka unifies them while writing the system. With its flexible fault management, it helps create complex systems in a simpler way.

Crystal – Let's Call it Ruby Plus Plus


In a world of thousands of languages, one more has been added to the list: Crystal, a general-purpose object-oriented programming language. It is a compiled language, compiling to highly optimized native code using the Low Level Virtual Machine (LLVM) as its backend.

But what's so special about it, you ask?

Well, if you have heard of Ruby, you know how everyone who has used it loves the language. Crystal looks almost like Ruby, but it goes one step further and fixes some of Ruby's shortcomings:

  •  Concurrency
  •  Speed



Enterprise Integration Pattern with Spring

Recently, in one of my projects, I got a requirement to poll a directory and its subdirectories at a constant rate and process the files residing in them to derive some business information. To implement this we used the enterprise integration pattern implementation of Spring, for two reasons: firstly, we were already using Spring as our backend framework, and secondly, it enforces separation of concerns between business logic and integration logic in an intuitive way, with well-defined boundaries that promote reusability and portability.

What is Spring Integration?

Spring Integration is an enterprise integration pattern implementation for Spring which supports integration with external systems via declarative adapters; these adapters provide a higher level of abstraction over Spring's support for remoting, messaging, and scheduling. It does not need a container or separate process space and can be invoked from an existing program, as it is just a JAR which can be dropped into a WAR or a standalone system.

As I mentioned, it works using adapters. We created an InboundChannelAdapter as a Spring bean which starts at application boot-up and constantly polls the specified directory, honouring the Scanner and Filter configured, as follows:

@Bean
@InboundChannelAdapter(value = "fileIn", autoStartup = "true", poller = @Poller(fixedDelay = "500"))
public MessageSource<File> fileMessageSource() throws Exception {
    FileReadingMessageSource fileReadingMessageSource = new FileReadingMessageSource();
    fileReadingMessageSource.setScanner(dirScanner());
    fileReadingMessageSource.setDirectory(new File(pollingDir));
    return fileReadingMessageSource;
}

The Scanner specified above is the strategy for scanning directories. We used the WatchServiceDirectoryScanner implementation along with a composite filter, as our requirement is to scan the directory and its subdirectories for files ending with a predefined pattern.

@Bean
public DirectoryScanner dirScanner() throws Exception {
    WatchServiceDirectoryScanner watchServiceDirectoryScanner = new WatchServiceDirectoryScanner(
            "/Users/ArpitAggarwal/directory/");
    watchServiceDirectoryScanner.setFilter(compositeFilter());
    watchServiceDirectoryScanner.setAutoStartup(true);
    return watchServiceDirectoryScanner;
}

In this post we will be polling for files ending with .csv in the /Users/ArpitAggarwal/directory/ directory, so we used the SimplePatternFileListFilter provided by the framework, as below:

@Bean
public CompositeFileListFilter<File> compositeFilter() throws Exception {
    return new CompositeFileListFilter<>(getFileListFilterList("*.csv"));
}

private List<FileListFilter<File>> getFileListFilterList(final String pattern) {
    List<FileListFilter<File>> fileListFilterList = new ArrayList<>();
    fileListFilterList.add(new SimplePatternFileListFilter(pattern));
    return fileListFilterList;
}

By default, the framework keeps an in-memory record of the files read from the directory, which doesn't suffice for our needs: we want to process each file only once, even across server restarts. This led us to the framework's FileSystemPersistentAcceptOnceFileListFilter implementation, which requires a directory location where information about already-processed files is saved on disk, in a properties file named metadata-store.properties:

@Bean
public FileSystemPersistentAcceptOnceFileListFilter persistentFilter() throws Exception {
    FileSystemPersistentAcceptOnceFileListFilter fileSystemPersistentAcceptOnceFileListFilter =
            new FileSystemPersistentAcceptOnceFileListFilter(metadataStore(), "");
    fileSystemPersistentAcceptOnceFileListFilter.setFlushOnUpdate(true);
    return fileSystemPersistentAcceptOnceFileListFilter;
}

public PropertiesPersistingMetadataStore metadataStore() throws Exception {
    PropertiesPersistingMetadataStore propertiesPersistingMetadataStore = new PropertiesPersistingMetadataStore();
    propertiesPersistingMetadataStore.setBaseDirectory("/Users/ArpitAggarwal/directory/");
    propertiesPersistingMetadataStore.afterPropertiesSet();
    return propertiesPersistingMetadataStore;
}

Integrating FileSystemPersistentAcceptOnceFileListFilter into the composite filter changes the getFileListFilterList method definition as follows:

private List<FileListFilter<File>> getFileListFilterList(final String pattern) {
    List<FileListFilter<File>> fileListFilterList = new ArrayList<>();
    fileListFilterList.add(new SimplePatternFileListFilter(pattern));
    fileListFilterList.add(persistentFilter());
    return fileListFilterList;
}

That's all for the basic configuration to watch a directory and its subdirectories for files ending with .csv.

Next we need an action to be taken once a file is read. For that, the framework provides @ServiceActivator, specified with an inputChannel over a method that takes the file as an argument, as follows:

@Service
public class FileInServiceActivator {

    @ServiceActivator(inputChannel = "fileIn")
    public void run(File file) {
        String fileName = file.getAbsolutePath();
        System.out.println("File to be processed " + fileName);
    }
}
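One piece not shown above is the "fileIn" channel itself, which both the adapter and the service activator reference by name. A minimal sketch of how it can be declared with Spring Integration's Java configuration, assuming a simple point-to-point DirectChannel is sufficient:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.messaging.MessageChannel;

@Configuration
public class ChannelConfig {

    // The bean name "fileIn" matches the @InboundChannelAdapter value
    // and the @ServiceActivator inputChannel used above.
    @Bean
    public MessageChannel fileIn() {
        return new DirectChannel();
    }
}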

The complete source code is hosted on GitHub.


Big Data Testing in Hadoop Ecosystem

This blog is for people who want to understand what to test in the Big Data ecosystem and what scenarios to cover in Big Data Testing. We will cover the following topics:

What is Big Data?

Big Data is the new buzzword in the industry, primarily due to the large amount of data generated daily. The term describes data which is large in size and grows exponentially with time. Big Data is characterized by the four V's: Volume (amount of data), Velocity (speed of data in and out), Variety (range of data types and sources) and Veracity (uncertainty of data). As data increases it becomes difficult to process, handle and manage. While traditional computing infrastructure cannot handle Big Data efficiently, new computing technologies have been created to handle and manage huge amounts of data and to process it faster than traditional systems and technologies.

As enterprises move towards Big Data, it becomes important to understand the systems and technologies used in order to get the best out of them. Enterprises face a new learning curve: learning the technologies is just the starting block, whereas designing, testing and implementing are the big challenges to consider while moving to a whole new technology.

Why do we require Big Data Testing?

With the introduction of Big Data, it becomes very important to test the Big Data system with appropriate data, correctly. If not tested properly, the business would be affected significantly; automation thus becomes a key part of Big Data Testing, for both the application and its functionality. If testing is done incorrectly, it becomes very difficult to understand an error, how it occurred, and the probable solution; mitigation can take a long time, resulting in incorrect or missing data, and correcting that data without affecting the data currently flowing through the system is again a huge challenge. As data is so important, it is recommended to have proper mechanisms so that data is not lost or corrupted, and to handle failovers.

We will primarily discuss Big Data Testing in Hadoop. Big Data systems primarily use Hadoop for processing and handling large amounts of data. Hadoop is a framework which provides a cluster of computing resources for processing huge amounts of data. In Hadoop, extending the cluster is easy: nodes can be added as required, which should be carefully planned during the design/requirement stage.

Big Data Architecture

Let us have a look at the high level Big Data architecture.

Big Data Architecture

In the above diagram, the Data Storage block stores the processed output of the Data Ingestion and Data Processing layers. Ingested data is stored in HDFS, which acts as the input for data processing. The diagram also shows the data pipeline from data ingestion to data visualization.

Explanation of Big Data Components

The Data Ingestion layer is responsible for ingesting data into Hadoop. It is a preprocessing stage and the entry point through which data comes. It is used for batch, file or event ingestion. This layer is very critical, because if data is corrupted or missing, it cannot be processed, leading to complete loss of that data; handling failures and failover is very important here. In this layer, storage formats play a crucial role: compression of data leads to a reduction in I/O.

The Data Processing layer is responsible for processing the ingested data and aggregating it as per business requirements. This layer applies business rules for processing and aggregating the data; Hadoop processes the data using Map-Reduce operations. It is very important to create proper alert mechanisms in order to catch failures and help resolve them as soon as possible.

The Data Storage layer is responsible for storing the data processed by Hadoop. As the data generated is huge, it is important to design this layer so it can store all of it, and to design it very carefully to prevent disk corruption or other failures leading to loss of data. This layer is also referred to as the data warehouse, storing infrequently accessed, archived or old data.

The Data Visualization layer is responsible for visualizing the data that has been ingested, processed as per business rules, and stored. It is used for understanding the data and gathering insights from it. The stored data used for visualization can be in any format (Excel, JSON, text, Access, etc.); it is not necessary that only data stored in HDFS be used for data visualization.

Big Data Testing Scenarios

Let us examine the scenarios which Big Data Testing should cover for each of the Big Data components:

Data Ingestion :-

This step is considered the pre-Hadoop stage, where data is generated from multiple sources and flows into HDFS. In this step the tester verifies that data is extracted properly and loaded into HDFS.

  • Ensure data from the multiple data sources is ingested properly, i.e. all required data is ingested as per its defined schema, and data not matching the schema is not ingested. Data which does not match the schema should be stored for stats-reporting purposes. Also ensure there is no data corruption.
  • Compare the source data with the ingested data to validate that the correct data is pushed.
  • Verify that the correct data files are generated and loaded into HDFS at the desired location.

Data Processing :-

This step validates the Map-Reduce jobs. Map-Reduce is a model for condensing large amounts of data into aggregated data. The ingested data is processed by executing Map-Reduce jobs, which produce the desired results. In this step the tester verifies that the ingested data is processed correctly and that the business logic is implemented correctly.

  • Ensure Map-Reduce jobs run properly without any exceptions.
  • Ensure key-value pairs are correctly generated after the MR jobs.
  • Validate that business rules are implemented on the data.
  • Validate that data aggregation is implemented and the data is consolidated after the reduce operations.
  • Validate that the data is processed correctly after the Map-Reduce jobs by comparing output files with input files.

Note: for validation at the data ingestion or data processing layers, we should use a small set of sample data (in KBs or MBs). With a small sample we can easily verify that the correct data is ingested by comparing the source data with the output data at the ingestion layer. It also becomes easier to verify that the MR jobs run without errors, that business rules are correctly implemented on the ingested data, and that data aggregation is done correctly, by comparing the output file with the input file.

If, instead, we used large data sets (in GBs) for initial testing at the data ingestion or data processing layers, it would become very difficult to validate each input record against the output records, or to verify whether the business rules are implemented correctly.
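To make the small-sample approach concrete, here is a minimal sketch of validating a mapper against a small, known input using Apache MRUnit; the TokenizerMapper and its word-count logic are hypothetical, purely for illustration:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Test;

public class WordCountMapperTest {

    // Hypothetical mapper under test: emits (word, 1) for every token in a line.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    @Test
    public void mapperEmitsOneCountPerWord() throws IOException {
        MapDriver<LongWritable, Text, Text, IntWritable> driver =
                MapDriver.newMapDriver(new TokenizerMapper());
        // Feed one small, known input record and assert the exact expected output.
        driver.withInput(new LongWritable(0), new Text("big data big"))
              .withOutput(new Text("big"), new IntWritable(1))
              .withOutput(new Text("data"), new IntWritable(1))
              .withOutput(new Text("big"), new IntWritable(1))
              .runTest();
    }
}

Reducer and whole-job validation can be sketched the same way with MRUnit's ReduceDriver and MapReduceDriver.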

Data Storage :-

This step stores the output data in HDFS or any other storage system (such as a data warehouse). In this step the tester verifies that the output data is correctly generated and loaded into the storage system.

  • Validate that data is aggregated after the Map-Reduce jobs.
  • Verify that the correct data is loaded into the storage system, and that any intermediate data present is discarded.
  • Verify that there is no data corruption by comparing the output data with the HDFS (or other storage system) data.

The other types of testing scenarios a Big Data tester can cover are:

  • Check whether proper alert mechanisms are implemented, such as mail on alert, sending metrics to CloudWatch, etc.
  • Check that exceptions or errors are displayed with appropriate exception messages, so that solving an error becomes easy.
  • Performance testing: process a random chunk of large data and monitor parameters such as the time taken to complete the Map-Reduce jobs, memory utilization, disk utilization and other metrics as required.
  • Integration testing: test the complete workflow, from data ingestion to data storage/visualization.
  • Architecture testing: test that Hadoop is highly available at all times and that failover services are properly implemented, ensuring data is processed even if nodes fail.

Note: for testing, it is very important to generate test data covering various scenarios (positive and negative). Positive test scenarios cover scenarios directly related to the functionality; negative test scenarios cover scenarios which do not have a direct relation to the desired functionality.

A list of a few tools used in Big Data

Data Ingestion – Kafka, Zookeeper, Sqoop, Flume, Storm, Amazon Kinesis.

Data Processing – Hadoop (Map-Reduce), Cascading, Oozie, Hive, Pig.

Data Storage – HDFS (Hadoop Distributed File System), Amazon S3, HBase.

By the end of this blog you should understand the various scenarios which can be tested in the Big Data domain: what Big Data is, why we require Big Data Testing, and the Big Data architecture with a brief explanation of its components. Lastly, a few tools used within a Big Data system were mentioned.

In the next blog we will look at a use-case as a practical example of Big Data Testing. It will cover the problem statement, followed by testing via the traditional method, and the importance of creating and using automation scripts in the Big Data domain.


Big Data Testing: Apache Kafka Performance Benchmarking

In this blog we will start from the basic tools/scripts of Apache Kafka and discuss how performance testing and benchmarking can be done by performing some load tests with the default configuration.

Overview:

Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system.

Let's go through its messaging terminology first:

  • Kafka maintains feeds of messages in categories called topics.
  • We'll call processes that publish messages to a Kafka topic producers.
  • We'll call processes that subscribe to topics and process the feed of published messages consumers.
  • Kafka is run as a cluster comprised of one or more servers, each of which is called a broker.

So, at a high level, producers send messages over the network to the Kafka cluster, which in turn serves them up to consumers.

For further information about Apache Kafka, please refer to the link below:

Kafka Documentation

So, while doing performance testing for Kafka, there are two aspects we need to take into consideration:
1. Performance at the Producer end
2. Performance at the Consumer end

We need to perform this test for both the Producer and the Consumer, so that we can determine how many messages the Producer can produce and the Consumer can consume in a given time. With a large number of messages we can also check for data loss.

The main intent of this test is to find the following stats:
1. Throughput (messages/sec) by size of data
2. Throughput (messages/sec) by number of messages
3. Total data
4. Total messages

Let's go ahead and download and set up Kafka, then start Zookeeper, the cluster, a producer and a consumer.

  • To download Kafka, refer to this link: http://kafka.apache.org/downloads.html
  • Once it is downloaded, untar it, then switch to the directory:
    tar -xzf kafka_2.9.1-0.8.2.2.tgz
    cd kafka_2.9.1-0.8.2.2
  • As Kafka uses Zookeeper, you first need to start it:
    bin/zookeeper-server-start.sh config/zookeeper.properties
  • Now start the Kafka server:
    bin/kafka-server-start.sh config/server.properties
  • Once the server has started, create a topic, say "test":
    bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
  • To check that the topic was created successfully, use the list command:
    bin/kafka-topics.sh --list --zookeeper localhost:2181
  • Now start the Producer and Consumer as mentioned below:
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
    bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
  • Send some messages now by typing on the Producer console; once you press enter, the same message should be consumed on the Consumer console.

Once the messages generated by the Producer are consumed by the Consumer, that shows you have set up Kafka correctly.

Now let's take the performance stats. To do this, follow the steps mentioned below:
1. Launch a new terminal window.
2. Change to the Kafka bin directory.
3. Here you will find multiple shell scripts; we will use the following two to take performance stats:
kafka-producer-perf-test.sh
kafka-consumer-perf-test.sh

If you want to see the help for either shell script (perf tool), just type
./kafka-producer-perf-test.sh --help
or
./kafka-consumer-perf-test.sh --help

for Producer and Consumer respectively.

Performance at Producer End

Type the following command on the console and hit the enter key:
./kafka-producer-perf-test.sh --broker-list localhost:9092 --topic test --messages 100

Let's understand these command-line options one by one:
– The first parameter is "broker-list": the list of broker host and port pairs for bootstrap. This is a required parameter.
– The second parameter is "topic", also required; it names the message category, as we discussed earlier.
– The third, "messages", is how many messages you want to produce and send to take the stats; we set it to 100 for our first scenario.

Once the test completes, some stats will be printed on the console, something like:

| start.time | end.time | compression | message.size | batch.size | total.data.sent.in.MB | MB.sec | total.data.sent.in.nMsg | nMsg.sec |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2016-02-03 21:38:28:094 | 2016-02-03 21:38:28:449 | 0 | 100 | 200 | 0.01 | 0.0269 | 100 | 281.6901 |

  1. start.time and end.time show when the test started and completed.
  2. If compression is '0', as above, message compression was off (the default).
  3. message.size shows the size of each message.
  4. batch.size indicates how many messages are sent in one batch; by default it is set to 200.
  5. total.data.sent.in.MB shows the total data sent to the cluster, in MB.
  6. MB.sec indicates how much data was transferred per second, in MB (throughput on size).
  7. total.data.sent.in.nMsg shows the count of messages sent during this test.
  8. And last, nMsg.sec shows how many messages were sent per second (throughput on message count).

There are some more parameters which you can use while doing this performance test, such as:

--csv-reporter-enabled : if set, the CSV metrics reporter will be enabled.

--initial-message-id : used for generating test data. If set, messages will be tagged with an ID and sent by the producer, starting from this ID sequentially. Message content will be a String of the form 'Message:000…1:xxx…'; using this parameter you will be able to see the messages being consumed on the consumer.

--message-size : the size of each message; this can be useful when you want to load test Kafka with large messages.

--vary-message-size : if set, message size will vary up to the given maximum.

There are other options as well which can be used as needed during the Producer performance test.

For this blog, I took some performance numbers based on the number of messages; the performance was shown in the inline graph.

Performance at Consumer End

Now let's look at how we can take performance stats at the consumer end. Type the following command and hit the enter key:
./kafka-consumer-perf-test.sh --topic test --zookeeper localhost:2181

Let's understand its command-line options:

The first parameter is "topic", a required parameter naming the message category.
The second parameter is "zookeeper", also required; it is the connection string for the Zookeeper connection, in the form host:port.

Once the test completes, some stats will be printed on the console, something like:

| start.time | end.time | fetch.size | data.consumed.in.MB | MB.sec | data.consumed.in.nMsg | nMsg.sec |
| --- | --- | --- | --- | --- | --- | --- |
| 2016-02-04 11:29:41:806 | 2016-02-04 11:29:46:854 | 1048576 | 0.0954 | 1.9869 | 1001 | 20854.1667 |

  1. start.time and end.time show when the test started and completed.
  2. fetch.size shows the amount of data fetched in a single request.
  3. data.consumed.in.MB shows the size of all messages consumed.
  4. MB.sec indicates how much data was transferred per second, in MB (throughput on size).
  5. data.consumed.in.nMsg shows the count of messages consumed during this test.
  6. And last, nMsg.sec shows how many messages were consumed per second (throughput on message count).

The performance test for the Consumer is also based on the number of messages; the result was shown in the inline graph.

Using these stats we can decide on the batch size, message size and the maximum number of messages which can be produced/consumed for a given configuration; in other words, we can benchmark Kafka.

All of the above analysis was done using Kafka's default settings. There are multiple scenarios in which we can test and take performance stats for the Kafka Producer and Consumer, some of which are:

  1. Change number of topics
  2. Change async batch size
  3. Change message size
  4. Change number of partitions
  5. Network Latency
  6. Change number of Brokers
  7. Change number of Producer/Consumer etc.

The above-mentioned changes can be made in the properties files available in the folder:
/Kafka/kafka_2.9.1-0.8.2.2/config

To understand the config files you can also refer to the link provided at the beginning of the blog.

This blog was just to give an initial idea about Apache Kafka performance testing and benchmarking. In further blogs we will discuss some more complex Kafka performance aspects.


Design Patterns in Selenium WebDriver – Part I – Page Object Pattern

Automating manual tests is not only in vogue nowadays but is a requirement for all projects. And one of the automation tools currently in demand is Selenium WebDriver. The various language bindings for Selenium WebDriver – Java, Python, Ruby, JavaScript etc. – allow a developer to easily create an automation solution. Having said that, the main requirements for any automation solution should be:

  • maintainability,
  • reusability,
  • reliability,
  • modularity and,
  • saving of development time.

And this is where design patterns help. The Gang of Four define design patterns as follows:

"design patterns describe simple and elegant solutions to specific problems in object-oriented software design. Design patterns capture solutions that have developed and evolved overtime …".
Thus, design patterns allows us to write reusable, reliable and modular code.

One of the often-used patterns in the Selenium world is the Page Object Model, which allows us to improve the maintainability of automated tests. This is the first amongst the many patterns we'll be concentrating on in our journey to learn how to utilize the various design patterns in our test framework.

In the Page Object Model we:

  • create classes that model/represent the various pages of the AUT (Application Under Test),
  • create methods that model the various interactions/behaviors we can perform on the page,
  • create tests to validate the AUT's behaviors, states & data.

As a pre-requisite for this tutorial you will need the following installed:
+ Gradle (see the Beginner's Gradle tutorial for Java projects),
+ Java SDK,
+ Groovy,
+ Eclipse/IntelliJ, along with
+ the TestNG plugin for the poison of your choice.

(You'll have to add the gradle and Java bin folders to the system path. Also, for gradle you do not need to install Groovy, as it comes with its own Groovy installation.)

Step 1

As we'll be using Gradle for building our project & running the tests, we need to create a build file. Create one by navigating to the folder where the project will be created and running the following command:

gradle init --type groovy-library

This will not only create a build.gradle file for you, but also set up a project structure. The project structure will look something like this:

Project Structure

(I've added the gradle & TestNG dependencies to the eclipse project. Also, add the src/main/groovy & src/test/groovy folders as source folders in eclipse. You can do this by going to the context menu of the respective folder, then Build & Add as source folder.)

Step 2

Open the generated build.gradle file & copy-paste the following code (minus the multi-line comments at the top of the build file):

src/main/resources/build.gradle

// Apply the plugins to add support for Groovy, Eclipse and IntelliJ.
apply plugin: 'groovy'
apply plugin: 'eclipse'
apply plugin: 'idea'

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.3.10'
    compile 'org.seleniumhq.selenium:selenium-java:2.45.0'
    compile 'com.google.inject:guice:3.0'
    // alternatively, declare this as a testCompile dependency, i.e.
    // testCompile 'org.testng:testng:6.8.1', and add 'test.useTestNG()' to your build script.
    compile 'org.testng:testng:6.8.1'
}

// task Test defined here to run with TestNG
tasks.withType(Test) {
  useTestNG(){
    // use this if you want to create TestNG report along with the build report generated
    // by gradle
    // else omit the commands between the {}
    useDefaultListeners = true
  }

  // to pass specific commands from the command line
  // using the -D switch for JVM system properties
  systemProperties = System.getProperties()

  // run max 2 tests in parallel
  maxParallelForks = 2
}

OK, now let's get down and dirty with the code. We'll start with creating a base page object class to derive all the rest of our pages from.

Step 3

Delete src/main/groovy/Library.groovy & src/test/groovy/LibraryTest.groovy.

Step 4

Create a new abstract class BasePageObject in src/main/groovy & paste the code below in it.

src/main/groovy/com/demo/POM/BasePageObject.groovy

package com.demo.POM
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.WebElement
import org.openqa.selenium.support.ui.ExpectedConditions
import org.openqa.selenium.support.ui.WebDriverWait
import org.testng.Assert

abstract class BasePageObject {
    protected WebDriver driver;
    protected WebDriverWait wait;

    BasePageObject(WebDriver driver) {
        this.driver = driver
        wait = new WebDriverWait(this.driver,30,10)

        isLoaded()
    }

    /**
     * Each page object must implement this method to return the identifier of a unique WebElement on that page.
     * The presence of this unique element will be used to assert that the expected page has finished loading
     * @return the By locator of unique element on the page
     */
    protected abstract By getUniqueElement();

    protected def isLoaded() throws Error{
        //Define a list of WebElements that match the unique element locator for the page
        List<WebElement> uniqueElement = driver.findElements(getUniqueElement())

        // Assert that the unique element is present in the DOM
        Assert.assertTrue((uniqueElement.size() > 0),
                "Unique Element \'${getUniqueElement().toString()}\' not found for ${this.class.simpleName}")

        // Wait until the unique element is visible in the browser and ready to use. This helps make sure the page is
        // loaded before the next step of the tests continue.
        wait.until(ExpectedConditions.visibilityOfElementLocated(getUniqueElement()))
    }

}

The class has been made abstract as we want it to be extended from, not instantiated. It contains a constructor that takes a WebDriver argument and verifies that the page is fully loaded by calling the isLoaded() method. The isLoaded() method internally verifies that the user is on the correct page by calling the getUniqueElement() method, which returns the locator of the element we use for that verification.

Step 5

For the purpose of this tutorial we'll be conducting tests on Stackoverflow.com – particularly the Home page and the Questions page. So, create two Groovy classes in the src/main/groovy folder:

src/main/groovy/com/demo/POM/pages/HomePage.groovy

package com.demo.POM.pages
import com.demo.POM.BasePageObject
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.WebElement
import org.openqa.selenium.support.FindBy
import org.openqa.selenium.support.PageFactory

class HomePage extends BasePageObject {
    @FindBy(css="div#menus")
    List<WebElement> menuBar;

    @FindBy(id="nav-questions")
    WebElement questionLink;

    @FindBy(id="nav-questions")
    List<WebElement> questionsTab;

    By menuBarLocator = By.cssSelector("div#hmenus");

    HomePage(WebDriver driver) {
        super(driver)
    }

    @Override
    protected By getUniqueElement() {
        return By.cssSelector("div#hmenus")
    }

    def QuestionsPage clickQuestionsTab() {
        questionLink.click()
        return PageFactory.initElements(this.driver, QuestionsPage.class)
    }

    def isQuestionsTabDisplayed() {
        return questionsTab.size() > 0
    }

}

src/main/groovy/com/demo/POM/pages/QuestionsPage.groovy

package com.demo.POM.pages
import com.demo.POM.BasePageObject
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.WebElement
import org.openqa.selenium.support.FindBy

public class QuestionsPage extends BasePageObject {
    @FindBy(css=".youarehere #nav-questions")
    WebElement youAreHere;

    @FindBy(id="nav-users")
    List<WebElement> usersTab;

    public QuestionsPage(WebDriver driver) {
        super(driver);
    }

    @Override
    protected By getUniqueElement() {
        return By.cssSelector(".youarehere #nav-questions");
    }

    public Boolean isUsersTabDisplayed() {
        return usersTab.size() > 0;
    }

}

Both classes extend BasePageObject and override the getUniqueElement method. The various page elements are annotated with @FindBy, which allows the Selenium PageFactory class to initialize the WebElements on the page. The PageFactory initialization happens in the method responsible for creating the page object, with the following line:

PageFactory.initElements(this.driver, PageObject.class)

For more info on PageFactory, click here.

The initElements method takes a WebDriver instance & the class/pageobject to initialize as arguments.

The HomePage class contains a method clickQuestionsTab that navigates the user to the QuestionsPage by creating a new instance of QuestionsPage (initializing the Questions page WebElements using PageFactory).

Step 6

Now that we have the page objects ready, let's get cracking on creating tests for the pages. Just as we created a base page object, we will start by creating a base test class. This allows us to define, in one place, actions that need to happen for every test: initiating the WebDriver and killing the driver at the end of the test. Create the base test class under the src/test/groovy folder:

src/test/groovy/com/demo/POM/BaseTest.groovy

package com.demo.POM
import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.firefox.FirefoxDriver
import org.openqa.selenium.ie.InternetExplorerDriver
import org.openqa.selenium.remote.DesiredCapabilities
import org.openqa.selenium.remote.LocalFileDetector
import org.openqa.selenium.remote.RemoteWebDriver
import org.testng.annotations.AfterClass
import org.testng.annotations.AfterMethod
import org.testng.annotations.BeforeClass
import org.testng.annotations.BeforeMethod

import java.util.concurrent.TimeUnit

class BaseTest {

    protected static final String WEB_SERVER = System.getProperty("WEB_SERVER", "http://stackoverflow.com/")
    protected static final String BROWSER = System.getProperty("BROWSER", "firefox")
    protected static final boolean REMOTE_DRIVER = Boolean.valueOf(System.getProperty("REMOTE_DRIVER", "false"))
    protected static final String SELENIUM_HOST = System.getProperty("SELENIUM_HOST", "localhost")
    protected static final int SELENIUM_PORT = Integer.valueOf(System.getProperty("SELENIUM_PORT", "4444"))

    public static RemoteWebDriver driver


    @BeforeMethod (alwaysRun = true)
    public void beforeMethod() {
        driver.manage().window().maximize()
        driver.get(WEB_SERVER)
    }

    @AfterMethod (alwaysRun = true)
    public void afterMethod() {
        driver.manage().deleteAllCookies()
    }

    @BeforeClass (alwaysRun = true)
    public void beforeClass() {
        if (REMOTE_DRIVER) {
            setupRemoteDriver()
            driver.setFileDetector(new LocalFileDetector())
        } else {
            setupLocalDriver()
        }
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS)
    }

    @AfterClass (alwaysRun = true)
    public void afterClass() {
        driver.quit()
    }

    private void setupLocalDriver() {
        String path = ""
        if (BROWSER.equals("firefox")) {
            driver = new FirefoxDriver()
        } else if (BROWSER.equals("chrome")) {
            path = "lib/chromedriver"
            if (System.getProperty("os.name").contains("Windows")) {
                path = "lib/chromedriver.exe"
            }
            System.setProperty("webdriver.chrome.driver", path)
            driver = new ChromeDriver()
        } else if (BROWSER.equals("internetExplorer")) {
            path = "lib/IEDriverServer.exe"
            System.setProperty("webdriver.ie.driver", path)
            DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer()
            capabilities.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true)
            driver = new InternetExplorerDriver(capabilities)
        } else {
            throw new RuntimeException("Browser type unsupported")
        }
    }

    private void setupRemoteDriver() {
        DesiredCapabilities capabilities
        if (BROWSER.equals("firefox")) {
            capabilities = DesiredCapabilities.firefox()
        } else if (BROWSER.equals("internetExplorer")) {
            capabilities = DesiredCapabilities.internetExplorer()
            capabilities.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true)
        } else if (BROWSER.equals("chrome")) {
            capabilities = DesiredCapabilities.chrome()
        } else {
            throw new RuntimeException("Browser type unsupported")
        }
        driver = new RemoteWebDriver(
                new URL("http://" + SELENIUM_HOST + ":" + SELENIUM_PORT + "/wd/hub"),
                capabilities)
    }
}

The class defines @BeforeClass & @AfterClass methods to initialize and destroy the driver instance, and @BeforeMethod & @AfterMethod methods to be executed before and after every test method. It defines the various test properties, such as the test URL, the browser to test on, and whether to run the test on the local machine or remotely. It also defines methods to set up a local driver (Firefox/Chrome/IE) or a remote driver instance. These allow the user to vary the URL, browser & other parameters from the command line.

Step 7

And, now to the actual test we are going to run. Create a TestNG test class under src/test/groovy.

src/test/groovy/com/demo/POM/test/ExampleTest.groovy

package com.demo.POM.test

import org.openqa.selenium.support.PageFactory
import org.testng.Assert
import org.testng.annotations.Test

import com.demo.POM.BaseTest
import com.demo.POM.pages.HomePage
import com.demo.POM.pages.QuestionsPage

class ExampleTest extends BaseTest{

    ExampleTest() {
        super()
    }

    @Test
    public void clickQuestionsTest() {
        HomePage landingPage = PageFactory.initElements(this.driver, HomePage.class)
        QuestionsPage questionsPage = landingPage.clickQuestionsTab()
        Assert.assertTrue(questionsPage.isUsersTabDisplayed())
    }

    @Test
    public void isLogoDisplayedTest() {
        HomePage landingPage = PageFactory.initElements(this.driver, HomePage.class)
        Assert.assertTrue(landingPage.isQuestionsTabDisplayed())
    }
}

And that is it! We are now ready to run the tests!

Step 8

To run the tests from the command prompt, open the command prompt and navigate to the project folder. Run the following command to launch the tests:

gradle clean test

'clean' cleans out the build directory of any previous builds, while 'test' runs the unit/integration/functional tests in the project.

The above command will launch the browser, and you should be able to see the test steps being performed. On successful completion of the tests, the command prompt should display the following output:

:clean UP-TO-DATE
:compileJava
:processResources UP-TO-DATE
:classes
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test

BUILD SUCCESSFUL

Total time: 1 mins 17.821 secs

The complete project and code can be obtained from Page Object Pattern.

Takeaways

As you would have observed from the code above, creating a class corresponding to each page in the AUT allows us to reuse the same class across various tests. In OOP terminology, each Page Object class encapsulates the elements and behaviors of the particular page in question. That doesn't mean a Page Object class must encapsulate all the behaviors exhibited on the page; we can also model a page object with only a partial set of behaviors. This also makes the automation code modular in nature.

Not only is the code reusable, it is maintainable as well. If a test case related to the Landing Page fails, I know I need to update my code in the Landing Page class only and nowhere else. If I need to add a new method for functionality related to the Landing Page, I will do so in the class encapsulating the Landing Page.

Design Patterns In Selenium Automation part 1 pom

Design Patterns in Selenium WebDriver – Part I – Page Object Pattern

Automating manual tests is not only in vogue now-a-days but is a requirement for all projects. And, one of the automation tools currently in demand is Selenium WebDriver. The various language bindings for Selenium WebDriver – Java, Python, Ruby, Javascript etc – allow a developer to easily create an automation solution. Having said that, the main requirements for any automation solution should be:
* maintainability,
* reusability,
* reliability,
* modularity and,
* save development time.

And, this is where design patterns help. Gang of Four define design patterns as follows- "design patterns describe simple and elegant solutions to specific problems in object-oriented software design. Design patterns capture solutions that have developed and evolved overtime …". Thus, design patterns allows us to write reusable, reliable and modular code.

One of the often used patterns/model in Selenium world is Page Object Model allowing us to improve maintainability of automated tests. This will be the first amongst the many pattern we'll be concentrating on in our journey to learn how to utilize the various design patterns in our test framework.

In Page Object Model:
* Create classes that model/represent the various pages of the AUT (Application Under Test),
* Create methods that model the various interactions/behaviors we can perform on the page,
* Create tests to validate the AUT behaviors, states & data.

As a pre-requisite for this tutorial you would need the following installed:
* GradleBeginner's Gradle tutorial for Java projects ,
* Java SDK &,
* Eclipse/Intellij already installed along with TestNG plugins for the poison of your choice.

(You'll have to add gradle and Java bin folders to the system path. Also, for gradle you do not need to install groovy as it comes with it's own groovy installation.)

  1. As we'll be using Gradle for building our project & running the tests, we need to create a build file. Create one by navigating to the folder where the project will be created and running the following command
    cmd
    gradle init --type java-library

    This will not only create a build.gradle file for you, but also set up a project structure. The project structure will look something like this:

Project Structure

(I've added gradle & TestNG dependencies to the eclipse project. Also, add src/main/java & src/main/test folders as source folders in eclipse. You can do this by going to the context menu of the respective folder, going to Build & Add as source folder)

  1. Open the generated build.gradle file & copy-paste the following code (minus the multi-line comments at the top of the build file):

src/main/resources/build.gradle
“`groovy
// Apply the java plugin to add support for Java and eclipse
apply plugin: 'java'
apply plugin: 'eclipse'

// where to get the various dependency jars from
repositories {
mavenCentral()
}

// dependencies for the current project
dependencies {
compile 'org.seleniumhq.selenium:selenium-java:2.45.0'
//compile 'io.appium:java-client:2.2.0'
compile 'com.google.inject:guice:3.0'
// testCompile dependency to testCompile 'org.testng:testng:6.8.1' and add
// 'test.useTestNG()' to your build script.
compile 'org.testng:testng:6.8.1'
}

// task Test defined here to run with TestNG
tasks.withType(Test) {
useTestNG{
// use this if you want to create TestNG report along with the build report generated
// by gradle
// else replace the useTestNG{} with useTestNG()
useDefaultListeners = true
}

// to pass specific commands from the command line
// using the -D switch for JVM system properties
systemProperties = System.getProperties()

// run max 2 tests in parallel
maxParallelForks = 2
}
“`

OK now lets get down and dirty with the code. We'll start with creating a base page object class to derive all the rest of our pages from.

  1. There will be a class each created under src/main/java and src/test/java directories. Delete the two classes.

  2. Create a new abstract Java class BasePageObject in src/main/java & paste the code below in it.

src/main/java/com/demo/POM/BasePageObject.java
“`java
package com.demo.POM;

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.Assert;

public abstract class BasePageObject {
protected WebDriver driver;
protected WebDriverWait wait;

public BasePageObject(WebDriver driver) {
this.driver = driver;
this.wait = new WebDriverWait(this.driver,30,10);

isLoaded();

}

/**
* Each page object must implement this method to return the identifier of a unique WebElement on that page.
* The presence of this unique element will be used to assert that the expected page has finished loading
* @return the By locator of unique element on the page
*/
protected abstract By getUniqueElement();

protected void isLoaded() throws Error{
//Define a list of WebElements that match the unique element locator for the page
List uniqueElement = driver.findElements(getUniqueElement());

    // Assert that the unique element is present in the DOM
    Assert.assertTrue((uniqueElement.size() > 0),
            "Unique Element \'" + getUniqueElement().toString() + "\' not found for " + this.getClass().getSimpleName());

    // Wait until the unique element is visible in the browser and ready to use. This helps make sure the page is
    // loaded before the next step of the tests continue.
    wait.until(ExpectedConditions.visibilityOfElementLocated(getUniqueElement()));

}

}
“`
The class has been made abstract as I want the class to only be extended and not instantiated. It contains a constructor that takes a WebDriver argument. It also verifies if the page is fully loaded by using the isLoaded() method. The isLoaded() method internally verifies if the user is on the correct page by calling the getUniqueElement() method which returns the locator of the element we use for such verification.

  1. For the purpose of this tutorial we'll be conducting tests on Stackoverflow.com – particularly the Home page and the Questions page. So, create two Java classes in the src/main/java folder:

src/main/java/com/demo/POM/pages/LandingPage.java
```java
package com.demo.POM.pages;

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

import com.demo.POM.BasePageObject;

public class LandingPage extends BasePageObject {
    @FindBy(css = "div#menus")
    List<WebElement> menuBar;

    @FindBy(id = "nav-questions")
    WebElement questionLink;

    @FindBy(id = "nav-questions")
    List<WebElement> questionsTab;

    By uniqueElement = By.cssSelector("div#hmenus");

    public LandingPage(WebDriver driver) {
        super(driver);

        PageFactory.initElements(this.driver, this);
    }

    @Override
    protected By getUniqueElement() {
        return uniqueElement;
    }

    public QuestionsPage clickQuestionsTab() {
        questionLink.click();
        return new QuestionsPage(driver);
    }

    public Boolean isQuestionsTabDisplayed() {
        return questionsTab.size() > 0;
    }
}
```

src/main/java/com/demo/POM/pages/QuestionsPage.java
```java
package com.demo.POM.pages;

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

import com.demo.POM.BasePageObject;

public class QuestionsPage extends BasePageObject {
    @FindBy(css = ".youarehere #nav-questions")
    WebElement youAreHere;

    @FindBy(id = "nav-users")
    List<WebElement> usersTab;

    public QuestionsPage(WebDriver driver) {
        super(driver);

        PageFactory.initElements(this.driver, this);
    }

    @Override
    protected By getUniqueElement() {
        return By.cssSelector(".youarehere #nav-questions");
    }

    public Boolean isUsersTabDisplayed() {
        return usersTab.size() > 0;
    }
}
```

Both classes extend BasePageObject and override the getUniqueElement method. The various page elements are annotated with @FindBy, which allows the Selenium PageFactory class to initialize the WebElements on the page. The PageFactory initialization happens in the constructor with the following line:

```java
PageFactory.initElements(this.driver, this);
```
For more information on PageFactory, see the Selenium PageFactory documentation.

The initElements method takes a WebDriver instance and the class/page object to initialize as its arguments.

The LandingPage class also contains a clickQuestionsTab method that navigates the user to the QuestionsPage by creating a new instance of QuestionsPage (calling its constructor with the WebDriver instance, which in turn initializes the Questions page WebElements).

  4. Now that we have the page objects ready, let's get cracking on creating tests for the pages. Just as we created a base page object, we will start by creating a base test class. This lets us define, in one place, the actions needed for every test – initializing the WebDriver and killing the driver at the end of the test. Create the base test class under src/test/java:

src/test/java/com/demo/POM/BaseTest.java
```java
package com.demo.POM;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.LocalFileDetector;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeMethod;

public class BaseTest {

    // Test configuration, overridable from the command line via -D JVM system properties
    protected static final String WEB_SERVER = System.getProperty("WEBSERVER", "http://stackoverflow.com/");
    protected static final String BROWSER = System.getProperty("BROWSER", "firefox");
    protected static final boolean REMOTE_DRIVER = Boolean.valueOf(System.getProperty("REMOTEDRIVER", "false"));
    protected static final String SELENIUM_HOST = System.getProperty("SELENIUMHOST", "localhost");
    protected static final int SELENIUM_PORT = Integer.valueOf(System.getProperty("SELENIUMPORT", "4444"));

    public static RemoteWebDriver driver;

    @BeforeMethod(alwaysRun = true)
    public void beforeMethod() {
        driver.manage().window().maximize();
        driver.get(WEB_SERVER);
    }

    @AfterMethod(alwaysRun = true)
    public void afterMethod() {
        driver.manage().deleteAllCookies();
    }

    @BeforeClass(alwaysRun = true)
    public void beforeClass() throws MalformedURLException {
        if (REMOTE_DRIVER) {
            setupRemoteDriver();
            // Allows file uploads from the test machine to a remote node
            driver.setFileDetector(new LocalFileDetector());
        } else {
            setupLocalDriver();
        }
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    }

    @AfterClass(alwaysRun = true)
    public void afterClass() {
        driver.quit();
    }

    private void setupLocalDriver() {
        String path = "";
        if (BROWSER.equals("firefox")) {
            driver = new FirefoxDriver();
        } else if (BROWSER.equals("chrome")) {
            path = "lib/chromedriver";
            if (System.getProperty("os.name").contains("Windows")) {
                path = "lib/chromedriver.exe";
            }
            System.setProperty("webdriver.chrome.driver", path);
            driver = new ChromeDriver();
        } else if (BROWSER.equals("internetExplorer")) {
            path = "lib/IEDriverServer.exe";
            System.setProperty("webdriver.ie.driver", path);
            DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
            capabilities.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
            driver = new InternetExplorerDriver(capabilities);
        } else {
            throw new RuntimeException("Browser type unsupported");
        }
    }

    private void setupRemoteDriver() throws MalformedURLException {
        DesiredCapabilities capabilities;
        if (BROWSER.equals("firefox")) {
            capabilities = DesiredCapabilities.firefox();
        } else if (BROWSER.equals("internetExplorer")) {
            capabilities = DesiredCapabilities.internetExplorer();
            capabilities.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
        } else if (BROWSER.equals("chrome")) {
            capabilities = DesiredCapabilities.chrome();
        } else {
            throw new RuntimeException("Browser type unsupported");
        }
        driver = new RemoteWebDriver(
                new URL("http://" + SELENIUM_HOST + ":" + SELENIUM_PORT + "/wd/hub"),
                capabilities);
    }
}
```
The class defines @BeforeClass & @AfterClass methods to initialize and destroy the driver instance, and @BeforeMethod & @AfterMethod methods to be executed around every test method. It also defines the various test properties – the test URL, the browser to test on, whether to run the test on the local machine or remotely, and so on – along with methods to set up a local driver (Firefox/Chrome/IE) or a remote driver instance. These properties allow the user to vary the URL, browser & other parameters from the command line.
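
Because the Gradle test task forwards JVM system properties (the systemProperties = System.getProperties() line in the build file earlier), these settings can be overridden per run. For example, a run against a remote Chrome node might look like this (the host and port below are illustrative):

```cmd
gradle clean test -DBROWSER=chrome -DREMOTEDRIVER=true -DSELENIUMHOST=192.168.1.10 -DSELENIUMPORT=4444
```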

  5. And now to the actual test we are going to run. Create a TestNG test class under src/test/java.

src/test/java/com/demo/POM/test/ExampleTest.java
```java
package com.demo.POM.test;

import org.testng.Assert;
import org.testng.annotations.Test;

import com.demo.POM.BaseTest;
import com.demo.POM.pages.LandingPage;
import com.demo.POM.pages.QuestionsPage;

public class ExampleTest extends BaseTest {

    @Test
    public void clickQuestionsTest() {
        LandingPage landingPage = new LandingPage(driver);
        QuestionsPage questionsPage = landingPage.clickQuestionsTab();
        Assert.assertTrue(questionsPage.isUsersTabDisplayed());
    }

    @Test
    public void isLogoDisplayedTest() {
        LandingPage landingPage = new LandingPage(driver);
        Assert.assertTrue(landingPage.isQuestionsTabDisplayed());
    }
}
```

And that is it! We are now ready to run the tests.

  6. To run the tests from the command prompt, open the command prompt and navigate to the project folder. Run the following command to launch the tests:

```cmd
gradle clean test
```

    'clean' cleans out the build directory of any previous builds, while 'test' runs the unit/integration/functional tests in the project.

The above command will launch the browser, and you should be able to see the test steps being performed. On successful completion of the tests, the command prompt should display the following output:
```cmd
:clean UP-TO-DATE
:compileJava
:processResources UP-TO-DATE
:classes
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test

BUILD SUCCESSFUL

Total time: 1 mins 17.821 secs
```

The complete project and code can be obtained from Page Object Pattern.

Takeaways

As you will have observed from the code above, creating a class for each page in the AUT (application under test) allows us to reuse the same class across various tests. In OOP terminology, each Page Object class encapsulates the elements and behaviors of the particular page in question. That does not mean a Page Object class must encapsulate all the behaviors exhibited on the page – we can also model a page object with only a partial set of behaviors, as the sketch below shows. This keeps the automation code modular.
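
For instance, a page object need not map a whole page at all. The sketch below – a hypothetical SearchHeader class, not part of the tutorial project – models only the search box that appears in the site header:

```java
package com.demo.POM.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

import com.demo.POM.BasePageObject;

// Hypothetical partial page object: encapsulates only the header search box,
// not every behavior the page exhibits.
public class SearchHeader extends BasePageObject {
    @FindBy(name = "q")
    WebElement searchBox;

    public SearchHeader(WebDriver driver) {
        super(driver);
        PageFactory.initElements(this.driver, this);
    }

    @Override
    protected By getUniqueElement() {
        return By.name("q");
    }

    public void searchFor(String query) {
        searchBox.sendKeys(query);
        searchBox.submit();
    }
}
```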

Not only is the code reusable, it is maintainable as well. If a test related to the Landing Page fails, I know I need to update my code in the Landing Page class and nowhere else. Likewise, if I need to add a method for new Landing Page functionality, I will do so in the class encapsulating the landing page.

Swift Sequences

I’ve had a lot of fun exploring the Swift APIs over the past few weeks, and I have found quite a few real gems hidden among the more unremarkable API and language features.

In this blog post I want to take a quick look at the Swift Sequence protocol, which forms the basis of the for-in loop, and see how it allows us to write code that performs sequence operations evaluated on demand.

Swift has a couple of very beautiful language features, the for-in loop and ranges, that when used in combination provide a very short syntax for simple loops:

```swift
for i in 1...5 {
  print(i)
}
```

The above code iterates over the integers in the range 1 through to 5.

Also,

```swift
for i in 1..<5 {
  print(i)
}
```

The above code iterates over the integers in the range 1 through to 4.

The funky range operators (...) and (..<) are simple shorthand for creating Range instances. As a result, the following code is equivalent to the half-open 1..<5 loop above:

```swift
var range = Range(start: 1, end: 5)
for i in range {
  print(i)
}
```

However, the most important question is: what is it about Range that allows the for-in loop to iterate over the numbers it generates?

The for-in loop operates on objects that adopt the Sequence protocol, which represents an ordered source of data.

The Sequence protocol exposes a single method, generate(), which returns a Generator; the Generator in turn exposes a single method, next(), which allows the for-in loop to ‘pull’ a stream of items from the sequence.
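
In rough outline, the two protocols look like this (using this post's protocol names; the exact associated-type spellings have varied between early Swift versions, so treat this as orientation only):

```swift
// Rough shape of the protocols, for orientation only.
protocol Generator {
    typealias Element
    mutating func next() -> Element?
}

protocol Sequence {
    typealias GeneratorType: Generator
    func generate() -> GeneratorType
}
```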

This will probably all make a lot more sense by continuing to expand the current example to make explicit use of the sequence’s generator:

```swift
var sequence = Range(start: 1, end: 5)
var generator = sequence.generate()
while let i = generator.next() {
  println(i)
}
```

One interesting point is that the Swift Range is a finite sequence, in that it has a start and an end. The Sequence protocol does not mandate this, and can also be used to represent infinite sequences, as sketched below (although we should probably avoid using infinite sequences with for-in loops!).
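
As a sketch of what an infinite sequence might look like (assuming the GeneratorOf helper from the Swift 1.x standard library and this post's protocol names), here is an endless stream of Fibonacci numbers that we pull from explicitly rather than exhausting with for-in:

```swift
// An infinite sequence of Fibonacci numbers.
struct FibonacciSequence: Sequence {
    func generate() -> GeneratorOf<Int> {
        var (a, b) = (0, 1)
        return GeneratorOf {
            let next = a
            (a, b) = (b, a + b)
            return next
        }
    }
}

// Pull just the first ten values on demand.
var fibs = FibonacciSequence().generate()
for _ in 1...10 {
    println(fibs.next()!)
}
```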

CONCLUSIONS

Swift sequences are really cool, with a lot of potential. The examples in this blog post have been a little contrived; however, I could certainly see myself operating on sequences generated from streams.

Domain-Driven Design in AngularJS

To develop any software we need a clear understanding of the design philosophy that will be followed to solve the given problem. We have myriad options, and based on the requirements we choose the best fit. One such design philosophy is Domain-Driven Design (DDD).

DDD

DDD calls for an object model of the domain that incorporates both behavior and data. The domain layer is responsible for representing concepts of the business, information about the business situation, and business rules. State that reflects the business situation is controlled and used here, even though the technical details of storing it are delegated to the infrastructure. This layer is the heart of business software.

Example problem

Let us take a simple example problem.
We have a user that can have one of the roles USER, ADMIN or SUPER_USER, and we have a list of jobs.
Now we need to create a dashboard on which to display the jobs.
We have the following business rules:

  • ADMIN and SUPER_USER can see all the jobs.
  • A normal USER can see only those jobs that match his education.
  • A USER can apply for a job if he has not already applied for it, the job is active, and its status is OPEN.
  • If a USER has already applied for a job, he can un-apply.
  • ADMIN and SUPER_USER cannot apply for jobs.

First we will solve this problem with the usual approach.
Create the angular module:

```javascript
angular.module('app', ['app.service']).controller('userCtrl', function ($scope, UserService, JobService) {
    $scope.user = UserService.getUser();
    $scope.jobs = JobService.getJobs();
    $scope.applyJob = function (job) {
        $scope.user.appliedJob.push(job);
    };
    $scope.unApplyJob = function (job) {
        $scope.user.appliedJob.splice($scope.user.appliedJob.indexOf(job), 1);
    };
});
```

In the above module we define a userCtrl and inject UserService and JobService. UserService provides the user and JobService provides the job listing.
The controller has two methods, applyJob and unApplyJob, with which the user can apply and un-apply for a job.
We will now define our services:


```javascript
angular.module('app.service', []).service('UserService', function () {
    this.getUser = function () {
        var user = {
            name: "Brij", role: "ADMIN", profile: {
                email: "bpant@xebia.com", mobile: 1111111111,
                education: {
                    degree: "MCA"
                }
            },
            appliedJob: []
        };
        return user;
    };
}).service('JobService', function () {
    this.getJobs = function () {
        var jobs = [
            { id: 1, active: true, profile: "MANAGER", qualification: ["MBA", "BBA"], status: "OPEN" },
            { id: 2, active: true, profile: "RECRUITER", qualification: ["MBA", "BBA"], status: "OPEN" },
            { id: 3, active: true, profile: "IT_HEAD", qualification: ["MCA", "MTECH"], status: "OPEN" },
            { id: 4, active: true, profile: "SOFTWARE DEVELOPER", qualification: ["MCA", "BCA"], status: "OPEN" },
            { id: 5, active: true, profile: "SOFTWARE TESTER", qualification: ["MCA", "BCA"], status: "CLOSED" }
        ];
        return jobs;
    };
});
```

To create the dashboard we will create an index.html page.

The index.html dashboard shows the user's name, email, education and applied jobs, followed by a jobs table with Profile, Qualification and Apply columns. All of the display decisions and the apply/un-apply logic sit inline in the markup as Angular expressions.
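
A minimal sketch of such a template (illustrative markup, not the original) shows why this hurts; note the near-duplicate tables and the business rules inlined as Angular expressions:

```html
<!-- Illustrative reconstruction: every business rule lives inline in the view. -->
<div ng-controller="userCtrl">
  <h1>User Dashboard</h1>
  <p>Name: {{user.name}} Email: {{user.profile.email}} Education: {{user.profile.education.degree}}</p>

  <!-- One table for ADMIN / SUPER_USER: all jobs... -->
  <table ng-if="user.role === 'ADMIN' || user.role === 'SUPER_USER'">
    <tr ng-repeat="job in jobs"><td>{{job.profile}}</td></tr>
  </table>

  <!-- ...and a near-duplicate for normal users, filtered by education. -->
  <table ng-if="user.role === 'USER'">
    <tr ng-repeat="job in jobs" ng-if="job.qualification.indexOf(user.profile.education.degree) !== -1">
      <td>{{job.profile}}</td>
      <td>
        <button ng-if="job.active && job.status === 'OPEN' && user.appliedJob.indexOf(job) === -1"
                ng-click="applyJob(job)">Apply</button>
        <button ng-if="user.appliedJob.indexOf(job) !== -1"
                ng-click="unApplyJob(job)">UnApply</button>
      </td>
    </tr>
  </table>
</div>
```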

Problem with this approach

You can see that all of our behavior lives in the presentation layer. To display the jobs we have put all of the logic in our HTML file; moreover, we have duplicated the code that displays jobs for a normal user and for an ADMIN/SUPER_USER.
Similarly, all of the logic for showing the Apply and UnApply buttons sits in the HTML itself. We definitely should not have this logic in the presentation layer. We could move it into our controller, but what if we want to bind this page to multiple controllers? We would have to write the logic in all of those controllers. The situation gets even more horrifying when the business demands a change in the logic: go find all the places and make the change in each one.

Time for refactoring

Let's think about this in a different way.
In the current problem space we have two entities, user and job. A user's behavior changes based on its role: an ADMIN has access to all jobs, while a normal USER can see only the jobs that match his education.
Whether a job can be applied for depends on its active flag and its status.
So we will now create domain objects for user and job that bind this behavior to them:

```javascript
factory('User', function () {
    return function (user) {
        this.id = user.id;
        this.name = user.name;
        this.role = user.role;
        this.profile = user.profile;
        this.appliedJob = user.appliedJob;

        // Based on the role, decide which jobs the user can view
        this.userViewableJobs = function (jobs) {
            var userViewableJobs = [];
            var self = this;
            if (this.role === 'ADMIN' || this.role === 'SUPER_USER') {
                return jobs;
            }
            angular.forEach(jobs, function (job) {
                if (job.qualification.indexOf(self.profile.education.degree) !== -1) {
                    userViewableJobs.push(job);
                }
            });
            return userViewableJobs;
        };

        this.canApplyForJob = function (job) {
            return job.canBeApplied() && !this.hasAlreadyAppliedForJob(job);
        };

        this.hasAlreadyAppliedForJob = function (job) {
            return this.appliedJob.indexOf(job) !== -1;
        };

        this.applyForJob = function (job) {
            this.appliedJob.push(job);
        };

        this.unApplyJob = function (job) {
            this.appliedJob.splice(this.appliedJob.indexOf(job), 1);
        };
    };
})
```

We have bound the following behavior to the User:

  • userViewableJobs: based on the role, decides which jobs the user can view
  • canApplyForJob: checks whether a job can be applied for
  • hasAlreadyAppliedForJob: checks whether the user has already applied for a job
  • applyForJob: adds a job to the applied-jobs list
  • unApplyJob: removes a job from the user's applied-jobs list

Create the Job domain object:
```javascript
factory('Job', function () {
    return function (job) {
        this.id = job.id;
        this.active = job.active;
        this.profile = job.profile;
        this.qualification = job.qualification;
        this.status = job.status;
        // A job can be applied for only while it is active and OPEN
        this.canBeApplied = function () {
            return this.active && this.status === 'OPEN';
        };
    };
})
```
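
A quick sanity check of these behaviors (a hypothetical snippet; it assumes the User and Job factories above have been injected wherever it runs, e.g. in a unit test):

```javascript
// Hypothetical usage of the domain objects, independent of any view.
var user = new User({ id: 1, name: "Brij", role: "USER",
    profile: { education: { degree: "MCA" } }, appliedJob: [] });
var job = new Job({ id: 3, active: true, profile: "IT_HEAD",
    qualification: ["MCA", "MTECH"], status: "OPEN" });

console.log(user.canApplyForJob(job));          // true: job active and OPEN, not yet applied
user.applyForJob(job);
console.log(user.canApplyForJob(job));          // false: already applied
console.log(user.hasAlreadyAppliedForJob(job)); // true
```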

Modify the services to return User and Job domain objects instead of plain objects (the User and Job factories must be injected into the services):

```javascript
this.getUser = function () {
    var user = {
        id: 1,
        name: "Brij", role: "ADMIN", profile: {
            email: "bpant@xebia.com", mobile: 1111111111,
            education: {
                degree: "MCA"
            }
        },
        appliedJob: []
    };
    return new User(user);
};
```


```javascript
this.getJobs = function () {
    var jobs = [];
    angular.forEach([
        { id: 1, active: true, profile: "MANAGER", qualification: ["MBA", "BBA"], status: "OPEN" },
        { id: 2, active: true, profile: "RECRUITER", qualification: ["MBA", "BBA"], status: "OPEN" },
        { id: 3, active: true, profile: "IT_HEAD", qualification: ["MCA", "MTECH"], status: "OPEN" },
        { id: 4, active: true, profile: "SOFTWARE DEVELOPER", qualification: ["MCA", "BCA"], status: "OPEN" },
        { id: 5, active: true, profile: "SOFTWARE TESTER", qualification: ["MCA", "BCA"], status: "CLOSED" }
    ], function (job) {
        jobs.push(new Job(job));
    });
    return jobs;
};
```
Remove the applyJob and unApplyJob functions from userCtrl, as they are now part of the User domain object:

```javascript
controller('userCtrl', function ($scope, UserService, JobService) {
    $scope.user = UserService.getUser();
    $scope.jobs = JobService.getJobs();
})
```

Finally, we will modify our index page. The dashboard header still shows the user's name, email, education and applied jobs, but the jobs table now delegates every decision to the domain objects:

```html
<table>
    <thead>
        <tr><th>Profile</th><th>Qualification</th><th>Apply</th></tr>
    </thead>
    <tbody>
    <tr ng-repeat="job in user.userViewableJobs(jobs)">
        <td>{{job.profile}}</td>
        <td><span ng-repeat="qualification in job.qualification">{{qualification}} </span></td>
        <td>
            <span ng-if="user.canApplyForJob(job)">
                <button class="btn glyphicon glyphicon-ok" ng-click="user.applyForJob(job)" value="Apply">Apply</button>
            </span>
            <span ng-if="user.hasAlreadyAppliedForJob(job)">
                <button class="btn glyphicon glyphicon-remove" ng-click="user.unApplyJob(job)" value="UnApply">UnApply</button>
            </span>
        </td>
    </tr>
    </tbody>
</table>
```
You can see that our code has become much cleaner. We no longer need to worry about business logic in our presentation layer, and our controller is free of any responsibility for modifying the model objects. All of the behavior related to the domain is abstracted from the outside world: you can use these domain objects anywhere in your application and change them without any hiccups.

You can find the sample code here.