Coding Conventions in Java

1. Avoid using printStackTrace() to log exceptions

With printStackTrace(), the trace is written to System.err; it is hard to route it elsewhere and even filtering is difficult. The end user has little control over the way the messages are shown.

Solution: Use a Logger to log the exception, for the following reasons:

a) With a Logger, exceptions can be written to different locations (console, file) based on user preference.

b) Log statements can be filtered by severity (error, warning, info, debug, etc.) and by origin (class or package).

Example: Prefer this:

catch (Exception e) {
    String message = String.format("Exception occurred because of divide by zero %s", reader);
    LOGGER.error(message, e);
}

over this:

catch (Exception e) {
    e.printStackTrace();
}
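For completeness, here is a minimal sketch of a surrounding class, assuming SLF4J as the logging facade (the logging library, the class name and the divide method are illustrative assumptions; only the reader parameter comes from the snippet above):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SafeDivider {

    // Assumed SLF4J logger; any logging framework with configurable appenders works similarly.
    private static final Logger LOGGER = LoggerFactory.getLogger(SafeDivider.class);

    public int divide(int numerator, int denominator, String reader) {
        try {
            return numerator / denominator;
        } catch (ArithmeticException e) {
            // Routed through the logger: destination and severity are configurable.
            String message = String.format("Exception occurred because of divide by zero %s", reader);
            LOGGER.error(message, e);
            return 0;
        }
    }
}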



2. Avoid using catch clause with Throwable or Error

Throwable is the superclass of all errors and exceptions in Java, while Error is the superclass of all errors, and ideally these should not be caught by applications. Catching either Throwable or Error will also catch OutOfMemoryError and InternalError, from which an application should not attempt to recover.

Throwable really catches everything, even ThreadDeath, which is thrown by the now-deprecated Thread.stop() method to stop a thread. So by catching Throwable you can be sure that you will never leave the try block early, but you must then also be prepared to handle OutOfMemoryError, InternalError or StackOverflowError.

Hence, the best practice would be:

Example: Prefer this:

catch(Exception e)

over this:

catch(Throwable e) unless the code specifically demands it.

3. Define Constants instead of repeating String literals 

The compiler will immediately report an error if a constant name is misspelled, whereas a typo in a repeatedly used String literal can easily go unnoticed.

Another advantage is that the coding style is more consistent when constants are used.

Example: Prefer this

static final String RAW_BYTES = "RawBytes";

map.put(RAW_BYTES, 123);

map.put(RAW_BYTES, 234);

map.put(RAW_BYTES, 546);

map.put(RAW, 345);       // Compiler will immediately report an error as RAW is not defined. So using constants prevents us from using wrong literals.

over this:

map.put("RawBytes", 123);

map.put("RawBytes", 234);

map.put("RawBytes", 546);

map.put("Raw", 345);   // This won't be detected as an error even though we probably meant "RawBytes"; there is no way the compiler can catch this.
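For reference, a compilable version of the preferred approach might look like this (the HashMap declaration and the class name are assumptions; the original does not show how the map is created):

import java.util.HashMap;
import java.util.Map;

public class ConstantsExample {

    private static final String RAW_BYTES = "RawBytes";

    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put(RAW_BYTES, 123);
        map.put(RAW_BYTES, 234);
        map.put(RAW_BYTES, 546);
        // map.put(RAW, 345);   // would not compile: RAW is not defined
    }
}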



Node.js Tutorial – Part 1 (Basic)

What is Node.js?

Node.js is an open-source JavaScript runtime environment. Prior to the release of Node.js, server-side programming was done mainly using Java, Python, etc.

With the release of Node.js, JavaScript can be used both on the client (frontend) and on the server side (backend).

Why Node.js?

Node.js has the following features:

  1. Single Threaded & Highly Scalable: Node.js uses a single-threaded model with event looping, which allows the server to respond in a non-blocking way.
  2. Asynchronous and Event Driven: A Node.js based application never waits for an API to return data. All APIs of the Node.js library are non-blocking (asynchronous), which allows the application to handle several requests concurrently.

  3. Callback Functionality: Node.js uses callback functions extensively; they are called at the completion of a given task. This prevents blocking and allows other code to run in the meantime.

Simple Node.js Program

Node.js uses the require directive to load the http module:

var http = require("http");

In this program, the http.createServer() method is used to create a server instance, which is then bound to port 8081 using the listen method of the server instance. createServer() is passed a function with the parameters request and response.

http.createServer(function (request, response) {
   // Send the HTTP header
   // HTTP Status: 200 : OK
   // Content Type: text/plain
   response.writeHead(200, {'Content-Type': 'text/plain'});

   // Send the response body as "My first program in Node.js"
   response.end('My first program in Node.js\n');
}).listen(8081);

// Console will print the message
console.log('Server running at http://localhost:8081/');

Save the file as firstprog.js and execute it to start the application as shown below:

node firstprog.js

You can verify the output by visiting http://localhost:8081/ in a browser.

Callback Usage

Blocking Code Example

Create a file called input.txt and write the following in it: Hi, My first node program!

Open an editor (say Visual Studio) and create a Node.js file called display.js:

var fs = require("fs");

var text = fs.readFileSync('input.txt');
console.log(text.toString());

console.log("Program Finished");

The output of this program will look like:

Hi, My first node program!
Program Finished

This example shows that the program blocks until it finishes reading the file, and only then does it proceed to end the program.

Non Blocking Code Example

Replace the code in the display.js file with the following:

var fs = require("fs");

fs.readFile('input.txt', function (err, data) {
   if (err) {
      return console.error(err);
   }
   console.log(data.toString());
});

console.log("Program Finished");

The output of this program will look like:

 Program Finished
 Hi, My first node program!


The second example shows that the program does not wait for the file read and proceeds to print "Program Finished"; when the response is obtained after the file read completes, the corresponding output is shown.

Spring Transaction Management

What is a Transaction?

A transaction is a set of one or more statements that is executed as a unit. Either all the statements will be executed successfully or none of them. It follows the ACID principle where

Atomicity meaning either all changes should happen or nothing.

Consistency meaning changes should leave the data in a consistent state.

Isolation meaning changes should not interfere with other changes.

Durability meaning changes, once committed, should remain committed.

Flashback: JDBC Transaction Management

In the JDBC API, the following steps are required to carry out transaction management:

Step 1: Disable auto-commit mode by passing false to the setAutoCommit() method.

Step 2: Call the commit() method to commit the transaction if all the statements are executed as expected.

Step 3: Call the rollback() method to cancel the transaction if any of the statements is not executed as expected.

Hence, a transaction management code using JDBC API will look like:

Connection conn = null;
try {
   // get the connection object.
   conn = DBConnection.getConnection();

   // Step 1: set auto commit to false
   conn.setAutoCommit(false);

   // business logic goes here
   // ....
   // ....

   // Step 2: commit the transaction
   conn.commit();
} catch (SQLException e) {
   try {
      // Step 3: roll back the transaction
      if (conn != null) {
         conn.rollback();
      }
   } catch (SQLException e1) {
      // ....
   }
}

Pros of JDBC Transaction Management:

Scope of the transaction is very clear in the code.

Cons of JDBC Transaction Management:

A lot of repeated code: for every transaction the same commit and rollback lines need to be written again, which makes this approach error prone.

Spring Declarative Transaction Management

The easiest way to carry out transaction management using Spring Framework is through Spring’s @Transactional annotation over a method or a class.

Imagine a service class containing methods that are supposed to insert and read records in a database. The methods would look like:

@Transactional
public void insertData(Employee e) {
   // business logic to insert data.
}

public void readData(int empId) {
   // business logic to read data.
}

Things to notice in the above code:

  1. Only the business logic needs to be written; Spring takes care of the complete transaction. There is no need to write the transaction commit and rollback explicitly, as this is done internally by Spring.
  2. Only the methods that change the state of the database need the @Transactional annotation; the rest do not. Here, the second method only reads from the database and does not change its state, so it does not need the annotation (see the sketch after this list).
  3. When a class is annotated with @Transactional, all of its methods become transactional.
  4. Transaction management should not be done in the data access layer (DAO) but in the service layer, so that the DAOs perform the actions related to the database while the business logic is kept separate in the service layer.
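A minimal sketch of such a service class, assuming a Spring Data repository named EmployeeRepository that extends CrudRepository<Employee, Integer> (the repository, entity and method bodies are illustrative, not taken from the original):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EmployeeService {

    private final EmployeeRepository employeeRepository;

    public EmployeeService(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    // State-changing method: Spring opens a transaction, commits on success
    // and rolls back on a runtime exception.
    @Transactional
    public void insertData(Employee e) {
        employeeRepository.save(e);
    }

    // Read-only method: no @Transactional needed here.
    public Employee readData(int empId) {
        return employeeRepository.findById(empId).orElse(null);
    }
}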

Spring QBE Feature


Query by Example (QBE) is a querying technique in Spring Data that allows dynamic queries to be created without writing the queries themselves.


In order to use the Spring QBE feature, the repository interface needs to extend the QueryByExampleExecutor interface in addition to CrudRepository. An example can be seen here: Extending Query By Example Executor

Query by Example API

The API mainly gives the following:

a) Example: An Example takes a data object (usually the entity object or a subtype of it) and a specification of how to match its properties.

b) ExampleMatcher: The ExampleMatcher carries details on how to match particular fields. It can be reused across multiple Examples.


Imagine we have an Employee class with id, name and position fields. In order to search for all employees whose position contains the word "Dev", how do we do it using Spring Data JPA?

Step 1: We create our probe, i.e. the filter condition (we need employees whose position contains the word "Dev"):

Employee emp1 = new Employee();
emp1.setPosition("Dev");   // only the position field of the probe is populated (assuming the usual setter)

Step 2: We create our matching condition:

ExampleMatcher matcher = ExampleMatcher.matching()
      .withIgnorePaths("employeeName", "employeeId")
      .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING);

// withIgnorePaths: ignore these fields while searching for
// employees whose position contains "Dev"
// withStringMatcher(CONTAINING): match string fields that contain the probe value

Example<Employee> example = Example.of(emp1, matcher);
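Step 3: The Example is passed to the repository, which must extend QueryByExampleExecutor. A minimal sketch, assuming the repository is called EmployeeRepository (the complete, authoritative version is in the linked GitHub repository):

import org.springframework.data.domain.Example;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.QueryByExampleExecutor;

public interface EmployeeRepository
        extends CrudRepository<Employee, Integer>, QueryByExampleExecutor<Employee> {
}

// Somewhere in a service or test class:
Iterable<Employee> devs = employeeRepository.findAll(example);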

The complete example can be found on my GitHub repo: Spring QBE example

Introduction to AJAX: Part 2

What is XML HTTP Request ?

Just like AJAX, the XMLHttpRequest (XHR) object can be used to exchange data between a web browser and a web server and then update a webpage without reloading it, while jQuery's $.ajax() is just a cross-browser-compatible wrapper around XMLHttpRequest.

XMLHttpRequest Object Methods

new XMLHttpRequest(): Creates a new XMLHttpRequest object

abort(): Cancels the current request

getAllResponseHeaders(): Returns header information

getResponseHeader(): Returns specific header information

open(method, url, async, user, psw): Specifies the type of request
  method: the request type, GET or POST
  url: the file location
  async: true (asynchronous) or false (synchronous)
  user: optional user name
  psw: optional password

send(): Sends the request to the server (used for GET requests)

send(string): Sends the request to the server (used for POST requests)

XMLHttpRequest Object Properties

onreadystatechange: Defines a function to be called when the readyState property changes

readyState: Holds the status of the XMLHttpRequest:
  0: request not initialized
  1: server connection established
  2: request received
  3: processing request
  4: request finished and response is ready

responseText: Returns the response data as a string

responseXML: Returns the response data as XML data

status: Returns the status number of the request, e.g.
  200: "OK"
  403: "Forbidden"
  404: "Not Found"

statusText: Returns the status text (e.g. "OK" or "Not Found")

Example of XML HTTP Request

// object used to exchange data with the server
var xhttp = new XMLHttpRequest();

// defines a function to be called when the ready state changes
xhttp.onreadystatechange = function() {

    if (this.readyState == 4 && this.status == 200) {
        if (this.responseText == "OK") {
            // operation: change to success image
        } else if (this.responseText == "ERROR") {
            // operation: change to error image
        } else {
            // operation: change to unknown image
        }
    } else {
        // operation: for example, the server hasn't sent a response yet
    }
};

xhttp.open("GET", "http://localhost:8080/", true);
xhttp.send();


Synchronous XMLHttpRequest (async = false) is not recommended because the JavaScript will stop executing until the server response is ready. If the server is busy or slow, the application will hang or stop.


Introduction to AJAX: Part 1

What is AJAX?

It is a web development technique used to create interactive web applications. It stands for Asynchronous JavaScript and XML. What AJAX basically does is load data from the server without a page refresh. In other words, it allows the frontend to communicate with the backend server, capture the backend response and, based on that response, perform actions on the frontend.

How does AJAX work?

The jQuery $.ajax() function is used to perform asynchronous HTTP requests. Its syntax is as follows:

a) $.ajax(url [, options])

  • The url parameter indicates the URL one wants to reach through the AJAX call.
  • The options parameter can include different settings (explained below) describing the configuration of the AJAX request.

b) $.ajax([options])

  • The url is not passed as a separate argument; it is either specified within the options parameter or skipped entirely, indicating a request to the current page itself.

What does "options" contain in the AJAX call?

  • contentType: indicates the content type of the data sent to the server.
  • crossDomain: set to true if the request is sent to another domain.
  • data: indicates the data to be sent to the server.
  • dataType: the type of data expected from the server.
  • url: a string URL to which the AJAX request should be sent.
  • timeout: a number that specifies a timeout (in milliseconds) for the request.
  • type: indicates the type of request to be made (GET or POST).
  • headers: an object of additional headers sent to the server.

The above is a partial list of the option parameters that can be specified in an AJAX call.

Example of $.ajax()

The following is the link to a jQuery project where $.ajax() is used: jQuery Project Link

Pros and Cons of AJAX


Pros:

  • Better and quicker interactivity between users and websites, as pages are not reloaded for content to be displayed.
  • Compact: several multi-purpose applications and features can be handled using a single web page.

Cons:

  • Built on JavaScript: many website visitors prefer to turn JavaScript off in their browser, rendering an AJAX application useless, even though JavaScript is secure and heavily used.



Multi Tenant Architecture

What is Multi Tenancy?

Multi tenancy is a software architecture in which a single instance of a piece of software runs on a server and serves multiple tenants. A tenant here refers to a group of users who share a common access, with specific privileges, to the software instance.

Multi Tenancy != Multiple Instance Architecture

Why is a Multi Instance Architecture not the same as a Multi Tenant Architecture?

In a multi-instance architecture (or single-tenant architecture), as the name suggests, there are multiple instances of the software running, and each instance serves exactly one tenant.

Advantages of Multi Tenancy?

a) Multi tenancy provides cost savings: since multiple instances need not be run on different servers, the cost of deploying on different machines and the cost of maintenance are reduced.

b) Upgrades are easy, because a single upgrade gives all clients access to the latest version, as there is just one copy of the schema available to all clients.

Multi Tenant Models

a) Separate Database Model: In this scenario, each tenant has its own dedicated database. This means data from one tenant will always be directed to the same database.


b) Separate Schema Model: In this scenario, there exists only one database, but one schema per tenant. In other words, each tenant has a dedicated schema, so data from one tenant is always directed to a specific schema within the same database, which is shared with other tenants.


c) Single Database, Single Schema: In this scenario, there exists only one database and one schema; all the tables within the schema need to include an extra column holding the tenant ID. This column is used to differentiate the data of different tenants, as sketched below.

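As an illustration of this shared-schema approach, a JPA entity might carry the tenant discriminator explicitly. This is a hypothetical sketch; the entity and column names are not taken from the linked project:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Invoice {

    @Id
    private Long id;

    // Discriminator column: every row records which tenant it belongs to,
    // and every query must filter on it.
    @Column(name = "tenant_id", nullable = false)
    private String tenantId;

    private double amount;
}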

Comparisons of different models

Characteristic           Separate Database   Separate Schema   Single Database, Single Schema
Scalability              Not favorable       Favorable         So-so
Security                 Favorable           So-so             Not favorable
Data Customization       Favorable           Favorable         Not favorable
New Customer Addition    Not favorable       So-so             Favorable


You can check my GitHub project on multi tenancy, based on the different-schema-per-tenant approach, here: Spring-MultiTenancy

Docker for Developers

What is Docker?

Docker is a tool designed to create, deploy and run applications using containers. Containers allow a developer to package an application together with its libraries and dependencies and deliver it as one package. This package can then run on any Linux machine, regardless of the configuration differences between machines.

What is a Docker Container?

Docker containers are based on Docker images. A Docker image is a binary that includes all the information needed to run a single container. Each image has a unique id, and images can be stored in and retrieved from a Docker registry. A Docker registry contains image repositories, and each repository can hold one or more images. The official registry provided by Docker is Docker Hub. The difference between a Docker container, an image and a registry is shown in the figure below.

[Figure: relationship between a Docker registry, image repositories, images and containers]

My Spring Boot REST API project on GitHub can be run using Docker, and the explanation of images and containers below refers to this project. Information on how to build the Dockerfile can be found in the README file of my GitHub project.

Docker images

Building a Docker Image:

A Docker image is built from a Dockerfile. Once the Dockerfile is written (docker file example), the docker build command is executed in the directory containing the Dockerfile in order to build the image:

      docker build -t <name of the docker image> .

For example, we run the command "docker build -t spring1.2 ." and, upon success, the screenshot below shows the message that should appear.

[Screenshot: output of a successful docker build]

Listing Docker Images created:

docker images

The image id associated with each image can also be determined using the docker images command; an image can be removed using its id. The figure below shows the different images that have been created, along with their image ids.

[Screenshot: docker images output listing the created images and their ids]

Removing a Docker image:

docker rmi <image id>

A Docker image can be removed easily with the command above if it is not yet associated with any container. However, if it is associated with a container, the container first needs to be removed using docker rm <container id>, and then the image can be removed using docker rmi <image id>.

Docker containers

Running a docker container:

Once a Docker image has been built successfully, a container can be run from it.

For example, for a Spring Boot application that needs to run on port 8080, with a Docker image named spring1.2, the container can be run using:

docker run -p 8080:8080 -t spring1.2


Show currently running containers:

docker ps


Removing a docker container: 

docker rm <docker container id>


Apache Flink: Stream Processing

Apache Flink is a framework for distributed stream processing. At a very high level, a Flink application is made up of the following:

[Diagram: Data Source -> Transformation -> Data Sink]

where Data Source: is the input data given to Flink for processing.

Transformation: is the processing stage where different algorithms may be applied.

Data Sink: is the stage where Flink sends the processed data. This could be a Kafka queue, Cassandra, etc.

Flink's capability to compute accurate results on unbounded data sets is based on the following features:

  1. Exactly-once semantics for stateful computations: stateful means the application can maintain a summary of the data that has been processed so far, and Flink's checkpointing mechanism ensures exactly-once semantics in the case of a failure. In other words, checkpointing allows Flink to recover the state and the positions in the stream after a failure.
  2. Flink supports stream processing with event-time semantics, where event time refers to the time at which each individual event occurred on the device that produced it. Event-time semantics make it easy to compute accurate results over streams even when events arrive out of order or with delay: because the time at which the event occurred is carried in every event, events can easily be grouped and processed by assigning them to their corresponding hour window. An hourly event-time window will contain all records that carry an event timestamp falling into that hour, regardless of when, and in what order, the records arrive.


  3. Flink supports flexible windowing, where windows can be based on time, session or counts. Apache Flink supports different types of windows, such as tumbling windows, sliding windows, global windows and session windows. A time-based window is created as soon as the first event belonging to that window arrives, and the window is removed when the time (event time or processing time) passes its end timestamp plus the user-specified allowed lateness.


  4. Flink's savepoints are a mechanism to update the application or reprocess historic data with minimum downtime. Savepoints are externally stored checkpoints that can be used to update a Flink program. They use Flink's checkpointing mechanism to create a snapshot of the state of the streaming program and write the checkpoint metadata to an external file system.



A simple word count algorithm using the Apache Flink DataSet API can be found in the GitHub project: Apache Flink Git Hub Project
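For orientation, here is a minimal word-count sketch in the spirit of that project, using the DataSet API (the exact code in the linked repository may differ):

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCount {

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<String> text = env.fromElements(
                "to be or not to be",
                "that is the question");

        DataSet<Tuple2<String, Integer>> counts = text
                .flatMap(new Tokenizer())   // split each line into (word, 1) pairs
                .groupBy(0)                 // group by the word (tuple field 0)
                .sum(1);                    // sum the counts (tuple field 1)

        counts.print();
    }

    public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    out.collect(new Tuple2<>(word, 1));
                }
            }
        }
    }
}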


Cassandra Unit Testing

Cassandra-Unit is a Java test utility that helps us write isolated JUnit tests without having to mock Cassandra or connect to a real Cassandra instance. There are several ways to use Cassandra-Unit in a Java project. Before we start, the entire project is available at my GitHub account: Cassandra-Unit Git Hub

This blog describes how Cassandra-Unit can be used with the JUnit 4 rule annotation (@Rule).

  1. Create a CQL file describing the table that needs to be created and the data that needs to be inserted:

CREATE TABLE employee (
  id int,
  name text,
  PRIMARY KEY (id)
);

INSERT INTO employee (id, name) VALUES (1, 'Lakshay');
INSERT INTO employee (id, name) VALUES (2, 'George');
INSERT INTO employee (id, name) VALUES (3, 'Andy');
INSERT INTO employee (id, name) VALUES (4, 'Nicole');

  2. Create a JUnit test class using the CassandraCQLUnit @Rule, specifying the name of the CQL file created in Step 1 and the Cassandra keyspace name you prefer. This in itself sets up and starts an embedded Cassandra. The active Cassandra-Unit instance's session should be passed to the DAO/repository class so that it can be used to query the started database.

public class EmpRepositoryTest {

    @Rule
    public CassandraCQLUnit cassandraCQLUnit =
            new CassandraCQLUnit(new ClassPathCQLDataSet("cql/employee.cql", "emp_keyspace"));

    private EmpRepository empRepository;

    @Before
    public void setUp() throws Exception {
        // pass the cassandra session to the repository under test.
        empRepository = new EmpRepository(cassandraCQLUnit.session);
    }

    @Test
    public void testFindEmployeeById() throws Exception {
        EmpDetails empExpected = new EmpDetails(1, "Lakshay");
        EmpDetails empGenerated = empRepository.findEmployeeById(1);
        assertEquals(empExpected, empGenerated);
    }
}

3. The Java class that needs to be tested is the following:

public class EmpRepository {

   private Session session;

   public EmpRepository(Session session) {
      this.session = session;
   }

   public EmpDetails findEmployeeById(int id) {
      EmpDetails empDetails = null;
      ResultSet resultSet = session.execute("select * from employee where id = " + id);
      for (Row row : resultSet) {
         empDetails = new EmpDetails(row.getInt("id"), row.getString("name"));
      }
      return empDetails;
   }
}
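For completeness, the test and repository classes above assume roughly the following imports, taken from JUnit 4, Cassandra-Unit and the DataStax Java driver (exact packages may vary with the versions used):

import static org.junit.Assert.assertEquals;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

import org.cassandraunit.CassandraCQLUnit;
import org.cassandraunit.dataset.cql.ClassPathCQLDataSet;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;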



Rest Assured Testing


Unit tests cover individual Java classes in an application, but there is also a need for functional tests that exercise the application end to end, in addition to integration tests that focus on testing modules together. REST Assured enables us to test a RESTful application by connecting to the HTTP endpoints the application exposes, imitating the role of another client or browser. In other words, REST Assured is a way to automate the testing of a REST API.

Structure of a Rest Assured Test

REST Assured tests are written in a given(), when(), then() format.

1. A sample test code for testing a GET end point in order to get users:

public void getUser() {
    given().
        port(port).               // HTTP port that got allocated at runtime
    when().
        get("/users/").           // GET endpoint in order to get the users
    then().
        statusCode(200).          // expected response status
        body("id", equalTo(128));
}

2. A sample test code for testing a POST end point in order to add a user:

public void testAddUser() {
    given().
        queryParam("empName", "Andy Murray").   // parameter 1
        queryParam("salary", 50000).            // parameter 2
        queryParam("Id", 128).                  // parameter 3
    when().
        post("/users/add").                     // POST endpoint for adding a user
    then().
        statusCode(200);
}

3. A sample test code for testing a DELETE end point in order to remove a user with a specific id (two equivalent variants):

public void testRemoveUserById() {
    // Variant 1: path parameter passed directly to delete()
    when().
        delete("/users/remove/{id}", 128).   // DELETE end point
    then().
        statusCode(200);
    // Variant 2: path parameter specified explicitly via pathParam()
    given().
        pathParam("id", 128).                // path parameter specified here
    when().
        delete("/users/remove/{id}").        // DELETE end point
    then().
        statusCode(200);
}
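For reference, these snippets assume REST Assured's static imports along with the Hamcrest matchers, roughly as follows (older REST Assured versions use the com.jayway.restassured package instead):

import static io.restassured.RestAssured.given;
import static io.restassured.RestAssured.when;
import static org.hamcrest.Matchers.equalTo;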

Why Rest Assured?

  1. Easy HTTP request building and execution: REST Assured allows us to easily define things such as headers, query parameters, path parameters, the request body, etc.
  2. Easy response checking: REST Assured allows us to easily parse responses by providing constructs for making assertions on the response body, response headers, etc.
  3. Ability to write clean code: REST Assured's given() - when() - then() style makes tests easy to write and understand. Preconditions go under given(), the condition under test is specified under when(), and the verification is done under then().

Complete Code Base:

The complete example can be seen in my Git repository: Rest Assured Test Complete Example