Blog

Coding Conventions in Java

1. Avoid using printStackTrace() to log exceptions

This is because with printStackTrace(), the trace is written to System.err, which is hard to route elsewhere, and even filtering is difficult. The end user has little control over the way the messages are shown.

Solution: Use a Logger to log the exception, for the following reasons:

a) With a Logger, the exception can be written to different locations (console, file) based on user preference.

b) Log statements can be filtered by severity (error, warning, info, debug, etc.) and origin (i.e. class or package based).

Example: Prefer this:

catch (Exception e) {
    String message = String.format("Exception occurred because of divide by zero: %s", reader);
    LOGGER.warn(message, e); // passing the exception preserves the stack trace in the log
}

over this:

catch (Exception e) {
    e.printStackTrace();
}

2. Avoid using a catch clause with Throwable or Error

Throwable is the superclass of all errors and exceptions in Java, while Error is the superclass of all errors, and ideally these should not be caught by applications. Catching either Throwable or Error will also catch OutOfMemoryError and InternalError, from which an application should not attempt to recover.

Throwable really catches everything, even ThreadDeath, which is thrown by default to stop a thread via the now deprecated Thread.stop() method. So by catching Throwable you can be sure that nothing escapes the try block, but you must then also be prepared to handle OutOfMemoryError, InternalError, or StackOverflowError.

Hence, the best practice would be:

Example: Prefer this:

catch(Exception e)

over this:

catch (Throwable e), unless the code specifically demands it.
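For instance, here is a minimal sketch (reusing the LOGGER from the first convention; divisor is a hypothetical variable) that catches a specific exception and lets errors such as OutOfMemoryError propagate to the JVM:

try {
    int result = 10 / divisor;       // divisor is an illustrative variable
    LOGGER.info("Result: " + result);
} catch (ArithmeticException e) {
    // only the recoverable exception is handled; Errors are not caught here
    LOGGER.warn("Division failed", e);
}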

3. Define Constants instead of repeating String literals 

The compiler will report an error if a constant name is misspelled, whereas a typo in a repeatedly used String literal can go unnoticed.

Another advantage is that the coding style is more consistent when constants are used.

Example: Prefer this:

final String RAW_BYTES = "RawBytes";

map.put(RAW_BYTES, 123);
map.put(RAW_BYTES, 234);
map.put(RAW_BYTES, 546);
map.put(RAW, 345);       // The compiler will immediately report an error, as RAW is not defined. Using constants prevents us from using wrong literals.

over this:

map.put("RawBytes", 123);
map.put("RawBytes", 234);
map.put("RawBytes", 546);
map.put("Raw", 345);   // This won't be detected as an error, even though "RawBytes" was probably intended; the compiler has no way to catch this.

 


Spring QBE Feature

Introduction

Query by Example (QBE) is a Spring Data feature that allows queries to be built dynamically from a populated entity instance, without writing the queries by hand.

Prerequisites

In order to use the Spring QBE feature, the repository interface needs to extend the QueryByExampleExecutor interface in addition to extending CrudRepository. An example can be seen here: Extending Query By Example Executor
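A minimal sketch of such a repository, assuming an Employee entity whose id is an Integer (names here are illustrative and not taken from the linked example):

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.QueryByExampleExecutor;

public interface EmployeeRepository
        extends CrudRepository<Employee, Integer>, QueryByExampleExecutor<Employee> {
    // no extra methods needed: findAll(Example), findOne(Example), count(Example), etc.
    // are inherited from QueryByExampleExecutor
}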

Query by Example API

The API mainly gives the following:

a) Example: an Example takes a data object (usually the entity object or a subtype of it) and a specification of how to match its properties.

b) ExampleMatcher: the ExampleMatcher carries details on how to match particular fields. It can be reused across multiple Examples.

Example

Imagine we have an Employee class with id, name, and position fields. How do we search for all the employees whose position contains the word "Dev" using Spring Data JPA?

Step 1: We create our filter condition (i.e. we want employees whose position contains the word "Dev"):

Employee emp1 = new Employee();
emp1.setPosition("Dev");

Step 2: We create our matching condition:

ExampleMatcher matcher = ExampleMatcher.matching()
      .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING)
      .withIgnorePaths("employeeName", "employeeId");

// IgnorePaths implies ignore these fields while searching for 
// employee with position containing "Dev"

Example<Employee> example = Example.of(emp1, matcher);
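Step 3 (a sketch, assuming the EmployeeRepository shown in the Prerequisites section): pass the Example to the repository. The findAll(Example) method is provided by QueryByExampleExecutor.

// returns every employee whose position contains "Dev",
// ignoring the employeeName and employeeId fields while matching
Iterable<Employee> developers = employeeRepository.findAll(example);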

The complete example can be found on my GitHub repo: Spring QBE example

Introduction to AJAX: Part 2

What is XMLHttpRequest?

Just like AJAX, the XHR (XMLHttpRequest) object can be used to exchange data between a web browser and a web server and update a webpage without reloading it. jQuery's $.ajax() is essentially a cross-browser-compatible wrapper around XMLHttpRequest.

XMLHttpRequest Object Methods

  • new XMLHttpRequest(): creates a new XMLHttpRequest object
  • abort(): cancels the current request
  • getAllResponseHeaders(): returns all header information
  • getResponseHeader(): returns specific header information
  • open(method, url, async, user, psw): specifies the type of request
      method: the request type, GET or POST
      url: the file location
      async: true (asynchronous) or false (synchronous)
      user: optional user name
      psw: optional password
  • send(): sends the request to the server (used for GET requests)
  • send(string): sends the request to the server (used for POST requests)

XMLHttpRequest Object Properties

  • onreadystatechange: defines a function to be called when the readyState property changes
  • readyState: holds the status of the XMLHttpRequest
      0: request not initialized
      1: server connection established
      2: request received
      3: processing request
      4: request finished and response is ready
  • responseText: returns the response data as a string
  • responseXML: returns the response data as XML data
  • status: returns the status number of the request (e.g. 200: "OK", 403: "Forbidden", 404: "Not Found")
  • statusText: returns the status text (e.g. "OK" or "Not Found")

Example of XMLHttpRequest

// object used to exchange data with server
var xhttp = new XMLHttpRequest(); 

// defines a function to be called when ready state changes
xhttp.onreadystatechange = function() {

    if (this.readyState == 4 && this.status == 200) {
         if (this.responseText == "OK") {
         // operation: change to success image
         } else if (this.responseText == "ERROR") {
         // operation: change to error image
         } else {
         // operation: change to unknown image
        }
    } else {
        // request still in progress, or the server returned an error status
    }
};

xhttp.open("GET", "http://localhost:8080/", true);
xhttp.send();

Note:

Synchronous XMLHttpRequest (async = false) is not recommended because the JavaScript will stop executing until the server response is ready. If the server is busy or slow, the application will hang or stop.

 

Introduction to AJAX: Part 1

What is AJAX?

It is a web development technique used to create interactive web applications. It stands for Asynchronous JavaScript and XML. What AJAX basically does is load data from the server without a page refresh. In other words, it allows the frontend to communicate with the backend server, capture the backend response, and perform actions on the frontend based on that response.

How does AJAX work?

The jQuery $.ajax() function is used to perform asynchronous HTTP requests. Its syntax is as follows:

a) $.ajax(url [, options])

  • The url parameter indicates the URL one wants to reach through the AJAX call.
  • The options parameter can include different settings (explained below) describing the configuration of the AJAX request.

b) $.ajax([options])

  • The url option does not exist explicitly but is specified within the options parameter, or can be omitted, indicating a request to the current page itself.

What does "options" contain in the AJAX call?

  • contentType: indicates the content type of the data sent to the server.
  • crossDomain: set to true if the request is sent to another domain.
  • data: indicates the data to be sent to the server.
  • dataType: the type of data expected back from the server.
  • url: a string URL to which the AJAX request should be sent.
  • timeout: a number that specifies a timeout (in milliseconds) for the request.
  • type: indicates the type of request to be made (GET or POST).
  • headers: an object of additional headers to send to the server.

The above is a partial list of the option parameters that can be specified in an AJAX call.

Example of $.ajax()

The following is a link to a jQuery project where $.ajax() is used: jQuery Project Link

Pros and Cons of AJAX

Pros:

  • Better and quicker interactivity between users and websites, as pages are not reloaded for content to be displayed.
  • Compact: several multi-purpose applications and features can be handled using a single web page.

Cons:

  • Built on JavaScript: some users turn JavaScript off in their browser, rendering the AJAX application useless, even though JavaScript is secure and heavily used.

 

 

Multi Tenant Architecture

What is Multi Tenancy?

Multi tenancy is a software architecture in which a single instance of a piece of software runs on a server and serves multiple tenants. A tenant here refers to a group of users who share common access, with specific privileges, to the software instance.

Multi Tenancy != Multiple Instance Architecture

Why is Multi Instance Architecture not the same as Multi Tenant Architecture?

In a multi instance architecture (also called a single tenant architecture), as the name suggests, there are multiple instances of the software running, and each instance serves exactly one tenant.

Advantages of Multi Tenancy?

a) Multi tenancy provides cost savings: multiple instances need not run on different servers, so the cost of deploying on different machines and the cost of maintenance are reduced.

b) Upgrades are easy, because a single upgrade gives all clients access to the latest version, as there is just one copy of the schema shared by all clients.

Multi Tenant Models

a) Separate Database Model: In this scenario, each tenant has its own dedicated database. This means data from one tenant is always directed to the same database.

[Diagram: Separate Database model]

b) Separate Schema Model: In this scenario, there is only one database but one schema per tenant. In other words, each tenant has a dedicated schema, and data from one tenant is always directed to that specific schema within the database shared with the other tenants.

[Diagram: Separate Schema model]

c) Single Database, Single Schema: In this scenario, there is only one database and one schema; all the tables within the schema need to include an extra column holding a tenant ID. This column differentiates the data of the different tenants, as sketched below.

[Diagram: Single Database, Single Schema model]
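A minimal sketch of model (c) as a JPA entity; the table and column names are illustrative and not taken from the linked project:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "employee")
public class Employee {

    @Id
    private Long id;

    private String name;

    // discriminator column that separates rows belonging to different tenants
    @Column(name = "tenant_id", nullable = false)
    private String tenantId;

    // getters and setters omitted
}

Every query against such a table then has to filter on tenant_id, which is typically applied automatically by the persistence layer rather than written by hand in each query.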

Comparisons of different models

Characteristic          | Separate Database | Separate Schema | Single Database, Single Schema
Scalability             | Not favorable     | Favorable       | So-so
Security                | Favorable         | So-so           | Not favorable
Data Customization      | Favorable         | Favorable       | Not favorable
New Customer Addition   | Not favorable     | So-so           | Favorable

Example:

You can check my GitHub project on Multi Tenancy, based on the schema-per-tenant approach, here: Spring-MultiTenancy

Docker for Developers

What is Docker?

Docker is a tool for creating, deploying, and running applications using containers. Containers allow a developer to package an application (with its libraries, dependencies, etc.) and deliver it as one package. This package can then run on any Linux machine regardless of how that machine is configured.

What is a Docker Container?

Docker containers are based on Docker images. A Docker image is a binary that includes all the information needed for running a single container. Each image has a unique id, and Docker images can be stored in and retrieved from a Docker registry. A Docker registry contains Docker image repositories, and each repository can hold one or more Docker images. The official registry provided by Docker is Docker Hub. The difference between a Docker container, image, and registry is shown in the figure below (image taken from docs.openshift.com):

[Figure: relationship between Docker registry, image repository, image, and container]

My Spring Boot REST API project on GitHub can be run using Docker, and the explanation of images and containers below refers to this project. Information on how to build the Dockerfile can be found in the README file of my GitHub project.

Docker images

Building a Docker Image:

A Docker image is built from a Dockerfile. Once the Dockerfile is written (docker file example), the docker build command is executed in the directory containing the Dockerfile in order to build the image.

      docker build -t <name of the docker image> .

For example, we run the command "docker build -t spring1.2 ." and, upon success, the screenshot below shows the message that should appear.

[Screenshot: successful docker build output]

Listing Docker Images created:

docker images

The image id associated with each image can also be determined using the docker images command, and an image can be removed using its id. The figure below shows the different images that have been created, with their image ids.

[Screenshot: docker images output listing images and their ids]

Removing a Docker image:

docker rmi <image id>

A Docker image can easily be removed with the command above if it is not yet associated with any container. If it is associated with a container, the container first needs to be removed using docker rm <container id>, and then the image can be removed using docker rmi <image id>.

Docker containers

Running a docker container:

Once a Docker image has been built successfully, a container can be run from it. For example, for a Spring Boot application that needs to run on port 8080 and a Docker image named spring1.2, the container can be run using:

docker run -p 8080:8080 -t spring1.2

[Screenshot: docker run output for the spring1.2 image]

Show currently running containers:

docker ps

[Screenshot: docker ps output]

Removing a docker container: 

docker rm <docker container id>

 

Apache Flink: Stream Processing

Apache Flink is a framework for distributed stream processing. At a very high level, a Flink program is made up of the following stages:

[Diagram: Data Source → Transformation → Data Sink pipeline]

Data Source: the input data that Flink processes.

Transformation: the processing stage, where different algorithms may be applied.

Data Sink: the stage where Flink sends the processed data; this could be a Kafka queue, Cassandra, etc.

Flink's capability to compute accurate results on unbounded data sets is based on the following features:

  1. Exactly-once semantics for stateful computations: stateful means the application can maintain a summary of the data that has been processed, and Flink's checkpointing mechanism ensures exactly-once semantics in the case of a failure. In other words, checkpointing allows Flink to recover state and positions in the stream after a failure.
  2. Flink supports stream processing with event time semantics, where event time refers to the time at which each individual event occurred on the device that produced it. Event time semantics make it easy to compute accurate results over streams even when events arrive out of order or with delay. Since the time at which the event occurred is carried in every event, it is easy to group and process events by assigning them to their corresponding hour window: an hourly event time window contains all records whose event timestamp falls into that hour, regardless of when, and in what order, the records arrive.

[Diagram: event time windows]

  3. Flink supports flexible windowing, where windows can be based on time, session, or counts. Apache Flink supports different types of windows, such as tumbling windows, sliding windows, global windows, and session windows. A time-based window is created as soon as the first event belonging to that window arrives, and the window is removed when the time (event time or processing time) passes its end timestamp plus the user-specified allowed lateness.

[Diagram: Flink window types]

  4. Flink's savepoints are a mechanism for updating an application or reprocessing historic data with minimal downtime. Savepoints are externally stored checkpoints that can be used to update a Flink program. They use Flink's checkpointing mechanism to create a snapshot of the state of the streaming program and write the checkpoint metadata to an external file system.

[Diagram: Flink savepoints]

Example:

A simple word count algorithm using the Apache Flink DataSet API can be found in the GitHub project: Apache Flink Git Hub Project
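For reference, a minimal word count sketch with the DataSet API is shown below; the input strings are illustrative and the linked project may structure the code differently:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCount {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Data Source: a small in-memory data set
        DataSet<String> text = env.fromElements("to be or not to be");

        // Transformation: split lines into (word, 1) pairs, then sum the counts per word
        DataSet<Tuple2<String, Integer>> counts = text
                .flatMap(new Tokenizer())
                .groupBy(0)   // group by the word (field 0 of the tuple)
                .sum(1);      // sum the counts (field 1 of the tuple)

        // Data Sink: print the result to stdout
        counts.print();
    }

    public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    out.collect(new Tuple2<>(word, 1));
                }
            }
        }
    }
}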

Cassandra-Unit

Cassandra-Unit is a Java test utility that helps us write isolated JUnit tests without having to mock Cassandra or connect to a real Cassandra cluster. There are several ways to use Cassandra-Unit in a Java project. Before we start, note that the entire project is available at my GitHub account: Cassandra-Unit Git Hub

This blog describes how Cassandra-Unit can be used via the JUnit 4 rule annotation (@Rule).

  1. Create a CQL file describing the table that needs to be created and the data that needs to be inserted.
CREATE TABLE IF NOT EXISTS EMPLOYEE(
 id int,
 name text,

PRIMARY KEY (id)
);

INSERT INTO employee (id, name) VALUES(1, 'Lakshay');
INSERT INTO employee (id, name) VALUES(2, 'George');
INSERT INTO employee (id, name) VALUES(3, 'Andy');
INSERT INTO employee (id, name) VALUES(4, 'Nicole');

   2. Create a JUnit test class using the CassandraCQLUnit rule, giving it the name of the CQL file created in step 1 and the Cassandra keyspace name that you prefer. This in itself sets up and starts an embedded Cassandra. The active Cassandra-Unit instance should be passed to the DAO/repository class so that its Cassandra session can be used to query the started database.

@Rule
public CassandraCQLUnit cassandraCQLUnit = new CassandraCQLUnit(new ClassPathCQLDataSet("cql/employee.cql", "emp_keyspace"));

private EmpRepository empRepository;

@Before
 public void setUp() throws Exception {
     // pass the cassandra session to the class.
     empRepository = new EmpRepository(cassandraCQLUnit.session);
 }

@Test
 public void testFindEmployeeById() throws Exception {
     EmpDetails empExpected = new EmpDetails(1, "Lakshay");
     EmpDetails empGenerated = empRepository.findEmployeeById(1);
     assertEquals(empExpected, empGenerated);
 }

3. The Java class that needs to be tested is the following:

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class EmpRepository {

   private Session session;

   public EmpRepository(Session session) {
      this.session = session;
   }


   public EmpDetails findEmployeeById(int id) {
      EmpDetails empDetails = null;
      ResultSet resultSet = session.execute("select * from employee where id = " + id);

      for (Row row : resultSet) {
         empDetails = new EmpDetails(row.getInt("id"), row.getString("name"));
      }
      return empDetails;
   }
}

 

 

Rest Assured Testing

Introduction:

Unit testing is done in order to test the individual Java classes in an application, but there is also a need to carry out functional tests that exercise the application end to end, in addition to integration tests that focus on testing modules together. Rest Assured testing enables us to test a RESTful application by connecting to the HTTP endpoints exposed by the application and imitating the role of another client/browser. In other words, REST-assured is a way to automate the testing of a REST API.

Structure of a Rest Assured Test

Rest Assured tests are written in a given(), when(), then() format.

1. A sample test for a GET endpoint that retrieves users:


@Test
public void getUser() {
given()
      .contentType(ContentType.JSON) 
      .port(port). // HTTP port that got allocated at runtime. 
when()
      .get("/users/"). // GET endpoint in order to get the users
then()
      .statusCode(200) // RESPONSE expected
      .body("id", equalTo(128));
}

2. A sample test for a POST endpoint that adds a user:


@Test
public void testAddUser() {
given()
      .contentType(ContentType.JSON)
      .port(port)
      .queryParam("empName", "Andy Murray") // parameters 1
      .queryParam("salary", 50000) // parameters 2 
      .queryParam("Id", 128). // parameters 3 
when()
      .post("/users/add"). //POST endpoint for adding user
then()
      .statusCode(200);
}

3. A sample test for a DELETE endpoint that removes a user with a specific id:


@Test
public void testRemoveUserById() {

given()
.contentType(ContentType.JSON)
.port(port).
when()
.delete("/users/remove/{id}", 128). // DELETE end point
then()
.statusCode(200);
}

OR, equivalently, using a path parameter:

given()
.contentType(ContentType.JSON)
.port(port)
.pathParam("id", 128). // path parameter specified here
when()
.delete("/users/remove/{id}"). // DELETE end point
then()
.statusCode(200);

Why Rest Assured?

  1. Easy HTTP request building and execution: REST-assured allows us to easily define headers, query parameters, path parameters, the request body, etc.
  2. Easy response checking: REST-assured allows us to parse responses easily by providing constructs for making assertions on the response body, response headers, etc.
  3. Ability to write clean code: REST-assured, with its given() – when() – then() style, makes tests easy to write and understand. Preconditions go under given(), the call under test is specified under when(), and the verification is done under then().

Complete Code Base:

Complete Example can be seen at my git repository: Rest Assured Test Complete Example

SOLID: 5 Object Oriented Design Principles for Software Development

There are 5 principles that any software developer should take into account while programming. One of the biggest advantages of following these principles, in my opinion, is that there will be fewer code smells. Here are the principles:

Single Responsibility Principle (S)

This principle tells us that a class should have a single, particular objective; it should not be created to serve more than one objective.

Bad Example:

public class Employee {
     private String empId;
   
     public void generateAnnualReport() {
       // .. generates the report
     }
     
     public void addNewEmployees(Employee e) {  
         // .. adds a new employee
     }
 }

The class above does not have a single responsibility; rather, it has more than one purpose. Hence, according to the Single Responsibility Principle, it should be split up:

Good Example:

 public class Employee {
   
   private String empId;
   // getter and setters
 }

public class EmployeeReport {
   // class for generating the employee annual report
}

public class AddNewEmployee {
   // class for adding new employee
}

Open Closed Principle (O)

This principle means that software code (classes, methods, modules etc) should be open for extension but closed for modification.

√ This means new behaviour can be added in order to meet new requirements.

× This does not mean that new behaviour should be added by modifying the existing code.

Continuing with the previous example, imagine there are several types of employee expense reports, as shown in the example below.

Bad Example:

 public class TravelExpenseReport {}
 public class MealExpenseReport {}

 public class ExpenseReport {

    public void getExpenseReport(Object typeOfReport) {
       if (typeOfReport instanceof TravelExpenseReport) {
          // call the expense method of TravelExpenseReport
       } else if (typeOfReport instanceof MealExpenseReport) {
          // call the expense method of MealExpenseReport
       } else {
          // throw a "not supported" exception
       }
    }
 }

 

The above example is bad because, for every new report type that is added, the if/else structure needs to be modified; thus the Open Closed Principle is violated.

Good Example:

public abstract class ExpenseBase {
     abstract void expense();
}

public class TravelExpenseReport extends ExpenseBase {

     @Override
     public void expense() {
        // define the method
     }
}

public class MealExpenseReport extends ExpenseBase {

     @Override
     public void expense() {
        // define the method
     }
}

public class ExpenseReport {
     public void getExpenseReport(ExpenseBase expenseObj) {
        expenseObj.expense();
     }
}

The above example is good because adding another expense report does not modify any of these classes; it is purely an extension (another subclass is added).

Liskov Substitution Principle (L)

It states that objects in a program should be replaceable with instances of their subtypes without altering the correctness of the program. In other words, derived classes should extend their base classes without changing their behaviour. It is an extension of the Open Closed Principle.
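A small sketch (not from the original post) in the spirit of the expense classes above, assuming they live in the same package: any code written against ExpenseBase must keep working when it is handed a TravelExpenseReport or a MealExpenseReport.

public class ExpensePrinter {

    // This method relies only on the ExpenseBase contract, so any subtype
    // can be substituted here without changing the correctness of the program.
    public void printReport(ExpenseBase expense) {
        expense.expense();
    }

    public static void main(String[] args) {
        ExpensePrinter printer = new ExpensePrinter();
        printer.printReport(new TravelExpenseReport()); // works
        printer.printReport(new MealExpenseReport());   // works equally well
    }
}

A subclass whose expense() suddenly threw UnsupportedOperationException would violate the principle, because it could no longer stand in for its base class.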

Interface Segregation Principle (I)

It states that a class should not implement an interface that is not intended for its use. Otherwise, the class may end up being forced to implement methods that it does not need.

Bad Design:

public interface ExpenseBase {
     void expense();
     void expenseMonthly();
     void expenseMealQuarterly();
}

public class TravelExpenseReport implements ExpenseBase {

     @Override
     public void expense() {
       // define method
     }

     @Override
     public void expenseMonthly() {
      // define method
     }

     @Override
     public void expenseMealQuarterly() {
      // Do nothing as this method is not intended for class.
      // This class has been forced to implement this method
      }

}

public class MealExpenseReport implements ExpenseBase {

     @Override
     public void expense() {
        // define method
     }

     @Override
     public void expenseMonthly() {
        // define method
     }

     @Override
     public void expenseMealQuarterly() {
        // define method
     }
}

The above example is bad because the TravelExpenseReport class is forced to implement the method expenseMealQuarterly() even though it is only intended for the MealExpenseReport class.

Good Design:

public interface ExpenseBase {
     void expense();
     void expenseMonthly();
}

public interface ExpenseMealBase {
     void expenseMealQuarterly();
}

public class TravelExpenseReport implements ExpenseBase {

     @Override
     public void expense() {
       // define method
     }

     @Override
     public void expenseMonthly() {
      // define method
     }
}

public class MealExpenseReport implements ExpenseBase, ExpenseMealBase {

     @Override
     public void expense() {
        // define method
     }

     @Override
     public void expenseMonthly() {
        // define method
     }

     @Override
     public void expenseMealQuarterly() {
        // define method
     }
}

The above example is good because the interfaces are now segregated: a class that does not need a specific interface does not implement it, and is therefore never forced to implement methods it does not need.

Dependency Inversion Principle (D)

This principle states that high level modules should not depend on low level modules; both should depend on abstractions. This means a class should not depend directly on another concrete class but rather on an abstraction (an interface) of that class.

BAD DESIGN:

public class Employee {
   private EmployeeTeam empTeam;

   public String getEmpData(EmployeeTeam empTeam) {
      // ... builds the result directly from the concrete EmployeeTeam
      return empTeam.toString();
   }
   // .....
}

This is a bad design because Employee and EmployeeTeam are two individual modules, and the Employee class (high level) depends directly on the EmployeeTeam class (low level) for its operations; hence it violates the Dependency Inversion Principle.

GOOD DESIGN:

public interface EmpData {
    String getEmpData();
    // ...
}

public class EmployeeTeam implements EmpData {

   @Override
   public String getEmpData() {
      // ...
      return "team data";
   }
}

public class Employee {
   private EmpData empData;

   public String getData(EmpData empData) {
      return empData.getEmpData(); // .. return the data via the abstraction
   }
   // .....
}

This design is considered good because the classes are no longer coupled: the EmployeeTeam class implements an interface that exposes getEmpData(), and the Employee class uses this interface to call the EmployeeTeam method. Thus both the high level module (Employee) and the low level module (EmployeeTeam) now depend on an abstraction.


Aspect Oriented Programming with Spring

What is Aspect Oriented Programming (AOP)?

Aspect oriented programming (AOP) is a horizontal programming paradigm in which the same type of behaviour is applied across several classes. In AOP, programmers implement these cross cutting concerns by writing an aspect and then applying it conditionally based on join points; this is referred to as applying advice. Typical examples of aspects are logging, auditing, transactions, security, caching, etc.

When should AOP be used?

Imagine a class containing several methods, and a need to perform a logging action (a common functionality) before each method call, after each method call, and when each method returns.

Normally one would log before the method call, after the method call, and when the method returns, for every method in the class, as shown here: Non Aspect Oriented. This technique involves a lot of CODE DUPLICATION, and this is where aspect oriented programming comes into play.

Basic terms related with AOP

Aspect: the functionality that you want to implement, for instance logging before or after every method call. Here, logging is an aspect.

Pointcut: the expression that determines which methods the functionality should be applied to. For instance, "every method inside the Addition class should have the functionality applied" would be written as:

@Pointcut("execution(* com.project.MathOperation.Addition.*(..))")

Advice: the code that is executed when a particular pointcut expression is matched.

JoinPoint: the point in the program's execution, such as the execution of a matched method, at which the advice is applied.

Weaving: the entire process of linking the aspect to the target code so that the advice is executed when the condition is met is called weaving.
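Putting these terms together, here is a minimal sketch of a logging aspect with Spring AOP; the Addition class and package come from the pointcut above, and the log messages are illustrative:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    // Pointcut: every method inside the Addition class
    @Pointcut("execution(* com.project.MathOperation.Addition.*(..))")
    public void additionMethods() {}

    // Advice executed before each matched method (the join point)
    @Before("additionMethods()")
    public void logBefore(JoinPoint joinPoint) {
        System.out.println("Entering " + joinPoint.getSignature().getName());
    }

    // Advice executed after each matched method completes
    @After("additionMethods()")
    public void logAfter(JoinPoint joinPoint) {
        System.out.println("Leaving " + joinPoint.getSignature().getName());
    }
}

With @EnableAspectJAutoProxy (or Spring Boot's auto-configuration) in place, Spring weaves this advice around the matched methods, so the logging no longer has to be duplicated in every method of Addition.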

Practical Example of AOP

The following link to my github account shows how aspect oriented programming can be implemented with Spring and how it differs from non aspect oriented programming – Aspect Oriented Programming