Coding Conventions in Java

1. Avoid using printStackTrace() to log exceptions

With printStackTrace(), the trace is written to System.err, which is hard to route elsewhere, and filtering is difficult. The end user has little control over how the messages are shown.

Solution: Use a Logger to log the exception, for the following reasons:

a) With a Logger, exceptions can be written to different locations (console, file) based on user preference.

b) Log statements can be filtered by severity (error, warning, info, debug, etc.) and by origin (class or package).

Example: Prefer this:

catch (Exception e) {
    // logger is assumed to be a java.util.logging.Logger instance
    String message = String.format("Exception occurred because of divide by zero %s", reader);
    logger.log(Level.SEVERE, message, e);
}

over this:

catch (Exception e) {
    e.printStackTrace();
}

2. Avoid using catch clause with Throwable or Error

Throwable is the superclass of all errors and exceptions in Java, while Error is the superclass of all errors, and ideally neither should be caught by applications. Catching Throwable or Error will also catch OutOfMemoryError and InternalError, from which an application should not attempt to recover.

Throwable catches really everything, even ThreadDeath, which is thrown by default to stop a thread via the now-deprecated Thread.stop() method. So by catching Throwable you can be sure that you will never leave the try block, and you should be prepared to also handle OutOfMemoryError, InternalError or StackOverflowError.

Hence, the best practice would be:

Example: Prefer this:

catch(Exception e)

over this:

catch(Throwable e), unless the code demands it.

3. Define Constants instead of repeating String literals 

The compiler will report an error if a constant name is mistyped, whereas a typo in a repeatedly used String literal can go unnoticed.

Another advantage: the coding style is more consistent when constants are used.

Example: Prefer this

final String RAW_BYTES = "RawBytes";

map.put(RAW_BYTES, 123);

map.put(RAW_BYTES, 234);

map.put(RAW_BYTES, 546);

map.put(RAW, 345);       // Compiler will immediately report an error as RAW is not defined. So using constants prevents us from using wrong literals.

over this:

map.put("RawBytes", 123);

map.put("RawBytes", 234);

map.put("RawBytes", 546);

map.put("Raw", 345);   // This won't be detected as an error even though the intended key was "RawBytes"; there is no way the compiler can detect this.



Sprint Retrospective Techniques

Sprint Retrospective is a term well known to anyone using Scrum. It is a technique used to understand:

  • what went well during the past Sprint,
  • what the team could have done better during the Sprint &
  • what are the action points.

Sprint Retrospective Techniques being used by most of the teams out there working in Scrum Methodology are:

  1. Traditional (Standard) Way
  2. Oscars Awards
  3. Two Truths and a Lie

Traditional Way

The Scrum Master generally gives out post-its to all the team members, and each team member writes down a few things that went well and a few things that could have been improved during the sprint. The Product Owner or Scrum Master is responsible for recording them, for instance in Confluence, discussing the "things that could have been improved" and deriving action points from them.

My opinion: This is one of the most commonly used techniques, and Scrum Masters should come up with new techniques to extract the best out of the team during retrospectives.

Oscars Award Technique

The Product Owner or Scrum Master displays all the stories completed in the last iteration on a board and creates two award categories on the whiteboard:

  • Best story
  • Worst story

Team members are given post-its and nominate stories as best and worst on them. The post-its are then pasted under the corresponding category on the whiteboard.

The team then votes for the best story out of all the stories under the best category, and similarly votes for the worst story out of all the stories under the worst category.

Ask the team why they think a user story won in its category, and let the team reflect on the process of completing the tasks: what went well or wrong.

My Opinion: It is a very good technique for team members who don't speak up in the traditional format, and it normally brings out more concrete retrospective points than the general points raised with the traditional technique.

Two Truths and a Lie Technique

On three post-its, each member of the sprint team writes three statements about the past Sprint, of which two are true and one is a lie.

Statements should be related to any likes/dislikes, experiences, skills or habits.

Each team member presents their three statements to the group. The group then votes on which statement they believe is the lie, after which the presenter reveals whether the group guessed correctly.

Once each team member has presented, there will be a list of "false" statements and a bunch of "true" statements. Discuss the collection of "false" statements with the team, identifying what could have been done better to avoid these points.

My Opinion: It is a very good alternative to use when the team gets bored with the standard (traditional) retrospective technique and team collaboration turns stagnant.

Note: The ultimate intention of all these techniques is to extract the three things mentioned at the start of the blog:

  what went well, what could have been improved and the action points 🙂

Agile Estimation Techniques

Agile Estimation Techniques are used to estimate work effort in agile projects. The commonly used techniques are

  • Planning Poker
  • T-shirt sizing
  • Dot Voting etc

Planning Poker

Planning Poker is used to assign a story point to a feature or item during sprint planning. The following values are used for estimating an item: 1, 2, 3, 5, 8, 13, 20, 40, 100. The process during sprint planning is:

  • Each team member gets a set of cards.
  • The Product Owner explains the item to be estimated.
  • Each team member chooses a card that represents his/her estimate.
  • Everyone shows their card at the same time.
  • If every team member selected the same card, its value is the estimate.
  • If the cards are not the same, the team discusses the item and estimates again.
  • Repeat until the estimates converge.
  • Repeat for each item.


To speed up estimation, two shortcuts are common:

  • When a few members estimate 2 and a few estimate 3 (adjacent values), choose the bigger one.
  • ONLY hold a discussion when the cards shown are far apart, for example a few 2s and a few 8s.
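These shortcut rules can be sketched as a small helper; the class and method names here are made up for illustration and are not part of any Scrum tooling:

```java
import java.util.List;

public class PlanningPoker {
    // the standard planning poker card deck mentioned above
    static final List<Integer> CARDS = List.of(1, 2, 3, 5, 8, 13, 20, 40, 100);

    /**
     * If all shown cards are equal, or adjacent in the deck (e.g. 2 and 3),
     * take the larger value; otherwise return -1 to signal that the team
     * should discuss the item and estimate again.
     */
    static int quickEstimate(List<Integer> shown) {
        int min = shown.stream().min(Integer::compare).orElseThrow();
        int max = shown.stream().max(Integer::compare).orElseThrow();
        if (min == max) {
            return max; // everyone agreed
        }
        if (CARDS.indexOf(max) - CARDS.indexOf(min) == 1) {
            return max; // adjacent cards (like 2 and 3): choose the bigger one
        }
        return -1; // cards far apart (like 2 and 8): discuss again
    }
}
```

For instance, cards 2, 3, 2 converge quickly to 3, while cards 2 and 8 trigger another discussion round.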

T-shirt sizing

T-shirt sizing is a QUALITATIVE estimation technique and does not include a QUANTITATIVE aspect. Normally this technique is used to estimate EPICS (it can be used for stories too), and it gives a rough estimate.

Where can T-shirt sizing be used?

We had a project to be finished in 2 months, but none of us (Product Owner, Scrum Master, developers) were sure we could really finish it within 2 months. The only thing we had were the REQUIREMENTS defined in the backlog. We therefore used this technique to go quickly through all the EPICS and arrive at a fair enough estimate to decide whether we could do the project or NOT.

How does T-shirt sizing technique work?

Here a requirement is classified as XS, S, M, L or XL, meaning a requirement can be extra small, small, medium, large or extra large.

  • Each team member gets a set of cards with XS, S, M, L, XL written on them.
  • The Product Owner explains the item to be estimated.
  • Each team member chooses a card that represents his/her estimate.
  • Everyone shows their card at the same time.
  • If every team member selected the same card, its value is the estimate.
  • If the estimates for an item are far apart (say one person estimated XS while another estimated XL), estimate it again; if the majority estimated M or L, go with L.
  • Repeat until each item is estimated.
  • Once all the items have been estimated, map the t-shirt sizes to quantitative values.

In our case we mapped it as

XS -> 3, S -> 5, M -> 8, L -> 13, XL -> 20.

This way the team came up with the number of story points needed to finish the project, and based on our average sprint velocity we derived how many sprints we would need to finish it.
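The mapping and the sprint calculation above can be sketched as follows; the class name is illustrative, and the point values are the ones used in this example:

```java
import java.util.List;
import java.util.Map;

public class TshirtSizing {
    // the size -> story point mapping used in this example
    static final Map<String, Integer> SIZE_TO_POINTS =
            Map.of("XS", 3, "S", 5, "M", 8, "L", 13, "XL", 20);

    /** Sum the story points of all estimated epics. */
    static int totalStoryPoints(List<String> sizes) {
        return sizes.stream().mapToInt(SIZE_TO_POINTS::get).sum();
    }

    /** Number of sprints needed, rounded up to whole sprints. */
    static int sprintsNeeded(int totalPoints, int averageVelocity) {
        return (totalPoints + averageVelocity - 1) / averageVelocity;
    }
}
```

For instance, epics sized XS, M and XL add up to 3 + 8 + 20 = 31 points; with an average velocity of 20 points per sprint, two sprints would be needed.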

Dot Voting Technique

It is basically a ranking method used to order the Product Backlog from the highest-priority stories to the lowest-priority ones, and to select the most important stories to take forward.

  • The Product Owner puts all the user stories on the wall using post-its.
  • Team members are given 4 to 5 dots (mostly in the form of a marker).
  • Everyone votes for the user stories they prefer (the maximum number of dots per person is fixed and must not be exceeded).
  • The Product Owner then orders the product backlog items from the most preferred (most dots) to the least preferred (fewest dots).
  • A discussion can be held if a team member is unhappy with a specific story having a higher or lower priority.
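The tallying step can be sketched like this; the class and story names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class DotVoting {
    /**
     * Order stories from most dots to fewest, which is how the Product
     * Owner would reorder the product backlog after the voting.
     */
    static List<String> orderBacklog(Map<String, Integer> dotsPerStory) {
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(dotsPerStory.entrySet());
        entries.sort(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()));
        List<String> ordered = new ArrayList<>();
        for (Map.Entry<String, Integer> e : entries) {
            ordered.add(e.getKey());
        }
        return ordered;
    }
}
```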

Node.js Tutorial – Part 1 (Basic)

What is Node.js?

Node.js is an open source JavaScript runtime environment. Prior to the release of Node.js, server-side programming was done mainly using Java, Python, etc.

With the release of Node.js, JavaScript can be used both on the client (frontend) and on the server side (backend).

Why Node.js?

Node.js has the following features:

  1. Single Threaded & Highly Scalable: Node.js uses a single-threaded model with an event loop, which permits the server to respond in a non-blocking way.
  2. Asynchronous and Event Driven: a Node.js-based application never waits for an API to return data. All APIs of the Node.js library are non-blocking (asynchronous). This asynchronous nature allows the application to handle several requests concurrently.
  3. Callback Functionality: Node.js uses callback functions extensively; they are called at the completion of a given task. This prevents blocking and allows other code to run in the meantime.

Simple Node.js Program

Node.js uses the require directive to load the http module:

var http = require("http");

In this program, the http.createServer() method is used to create a server instance, which is then bound to port 8081 using the listen method associated with the server instance. createServer() is passed a function with parameters request and response.

http.createServer(function (request, response) {
   // Send the HTTP header
   // HTTP Status: 200 : OK
   // Content Type: text/plain
   response.writeHead(200, {'Content-Type': 'text/plain'});
   // Send the response body as "My first program in Node.js"
   response.end('My first program in Node.js\n');
}).listen(8081);

// Console will print the message
console.log('Server running at http://127.0.0.1:8081/');

Save the file as firstprog.js and execute the program to start the application as shown below:

node firstprog.js

You can verify the output by visiting http://127.0.0.1:8081/ in a browser.

Callback Usage

Blocking Code Example

Create a file called input.txt and write the following in it: Hi, My first node program!

Open an editor (say Visual Studio Code) and create a Node.js file called display.js:

var fs = require("fs");

var text = fs.readFileSync('input.txt');
console.log(text.toString());

console.log("Program Finished");

The output of this program will look like:

Hi, My first node program!
Program Finished

This example shows that the program blocks until it finishes reading the file; only then does it proceed to end the program.

Non Blocking Code Example

Replace the code in the display.js file with the following:

var fs = require("fs");

fs.readFile('input.txt', function (err, data) {
   if (err) {
      return console.error(err);
   }
   console.log(data.toString());
});

console.log("Program Finished");

The output of this program will look like:

 Program Finished
 Hi, My first node program!


The second example shows that the program does not wait for the file read and proceeds to print "Program Finished"; when the file read completes, the callback prints the file's content.

Spring Transaction Management

What is a Transaction?

A transaction is a set of one or more statements that is executed as a unit: either all of the statements execute successfully or none of them do. Transactions follow the ACID principle, where

Atomicity means either all changes happen or none do.

Consistency means changes leave the data in a consistent state.

Isolation means changes do not interfere with other concurrent changes.

Durability means changes, once committed, remain committed.

Flashback: JDBC Transaction Management

In JDBC API, the following steps are required to carry out transaction management:

Step 1: Disable auto-commit mode by passing false to the setAutoCommit() method.

Step 2: Call the commit() method to commit the transaction if all the statements are executed as expected.

Step 3: Call the rollback() method to cancel the transaction if any of the statements does not execute as expected.

Hence, transaction management code using the JDBC API will look like:

Connection conn = null;
try {
    // get the connection object
    conn = DBConnection.getConnection();

    // Step 1: set auto commit to false
    conn.setAutoCommit(false);

    // business logic goes here
    // ....
    // ....

    // Step 2: commit the transaction
    conn.commit();
} catch (SQLException e) {
    try {
        // Step 3: roll back the transaction
        if (conn != null) {
            conn.rollback();
        }
    } catch (SQLException e1) {
        // ....
    }
}
Pros of JDBC Transaction Management:

Scope of the transaction is very clear in the code.

Cons of JDBC Transaction Management:

A lot of repeated code: for every transaction the same commit and rollback lines need to be written again, which is error prone.

Spring Declarative Transaction Management

The easiest way to carry out transaction management using Spring Framework is through Spring’s @Transactional annotation over a method or a class.

Imagine a service class containing methods that are supposed to insert and read records in a database. The methods would look like:

@Transactional
public void insertData(Employee e) {
   // business logic to insert data.
}

public void readData(int empId) {
   // business logic to read data.
}

Things to notice in the above code:

  1. Only the business logic needs to be written; Spring takes care of the complete transaction. There is no need to write the transaction commit and rollback explicitly, as this is done internally by Spring.
  2. Only the methods that change the state of the database need the @Transactional annotation. For example, the second method only reads from the database and does not change its state, so no @Transactional annotation is needed there.
  3. When a class is annotated with @Transactional, all of its methods become transactional.
  4. Transaction management should not be done in the data access layer (DAO) but in the service layer, so that the DAOs perform the actions related to the database while the transactional logic is kept separate in the service layer.

Spring QBE Feature


Query by Example is a Spring feature that allows dynamic queries to be created without writing any queries by hand.


In order to use the Spring QBE feature, the repository interface needs to extend the QueryByExampleExecutor interface in addition to CrudRepository. An example can be seen here: Extending Query By Example Executor

Query by Example API

The API mainly gives the following:

a) Example: an Example takes a data object (usually the entity object or a subtype of it) and a specification of how to match its properties.

b) Example Matcher: The ExampleMatcher carries details on how to match particular fields. It can be reused across multiple Examples.


Imagine we have an Employee class with id, name and position fields. In order to search for all the employees whose position contains the word "Dev", how do we do it using Spring Data JPA?

Step1: We create our probe, i.e. the filter condition (we need employees whose position contains the word "Dev"):

Employee emp1 = new Employee();
emp1.setPosition("Dev");   // assumes a setter for the position field

Step2: We create our matching condition:

ExampleMatcher matcher = ExampleMatcher.matching()
      .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING)
      .withIgnorePaths("employeeName", "employeeId");

// CONTAINING makes string properties match with "contains" semantics;
// the ignored paths are skipped while searching for employees
// whose position contains "Dev"

Example<Employee> example = Example.of(emp1, matcher);

The complete example can be found on my GitHub repo: Spring QBE example

Introduction to AJAX: Part 2

What is XML HTTP Request ?

Just like AJAX, the XHR object can be used to exchange data between a web browser and a web server and to update a webpage without reloading it; jQuery's $.ajax() is just a cross-browser-compatible wrapper around XMLHttpRequest.

XMLHttpRequest Object Methods

Method | Description
new XMLHttpRequest() | Creates a new XMLHttpRequest object
abort() | Cancels the current request
getAllResponseHeaders() | Returns header information
getResponseHeader() | Returns specific header information
open(method, url, async, user, psw) | Specifies the type of request, where
    method: the request type, GET or POST
    url: the file location
    async: true (asynchronous) or false (synchronous)
    user: optional user name
    psw: optional password
send() | Sends the request to the server (used for GET requests)
send(string) | Sends the request to the server (used for POST requests)

XMLHttpRequest Object Properties

Property | Description
onreadystatechange | Defines a function to be called when the readyState property changes
readyState | Holds the status of the XMLHttpRequest:
    0: request not initialized
    1: server connection established
    2: request received
    3: processing request
    4: request finished and response is ready
responseText | Returns the response data as a string
responseXML | Returns the response data as XML data
status | Returns the status number of the request, e.g.
    200: "OK"
    403: "Forbidden"
    404: "Not Found"
statusText | Returns the status text (e.g. "OK" or "Not Found")

Example of XML HTTP Request

// object used to exchange data with the server
var xhttp = new XMLHttpRequest();

// defines a function to be called when the ready state changes
xhttp.onreadystatechange = function() {

    if (this.readyState == 4 && this.status == 200) {
         if (this.responseText == "OK") {
             // operation: change to success image
         } else if (this.responseText == "ERROR") {
             // operation: change to error image
         } else {
             // operation: change to unknown image
         }
    } else {
         // operation: for example, the server hasn't sent a response yet
    }
};

xhttp.open("GET", "http://localhost:8080/", true);
xhttp.send();


Synchronous XMLHttpRequest (async = false) is not recommended, because JavaScript will stop executing until the server response is ready. If the server is busy or slow, the application will hang or stop.


Introduction to AJAX: Part 1

What is AJAX?

It is a web development technique that is used to create interactive web applications. It stands for Asynchronous JavaScript and XML. What AJAX basically does is load data from the server without a page refresh. In other words, it allows the page to communicate with the backend server, capture the backend response and, based on that response, perform actions on the frontend.

How does AJAX work?

The jQuery $.ajax() function is used to perform asynchronous HTTP requests. Its syntax is as follows:

a) $.ajax(url [, options])

  • The url parameter indicates the URL the AJAX call should reach, while
  • options can include different parameters (explained below) describing the configuration of the AJAX request.

b) $.ajax([options])

  • The url is not a separate argument; it is specified within the options parameter, or it can be skipped to indicate a request to the current page itself.

What does "options" contain in the AJAX call?

  • contentType: indicates the content type of the data sent to the server.
  • crossDomain: set to true if the request is sent to another domain.
  • data: indicates the data to be sent to the server.
  • dataType: the type of data expected from the server.
  • url: a string URL to which the AJAX request should be sent.
  • timeout: a number that specifies a timeout (in milliseconds) for the request.
  • type: indicates the type of request to be made (GET or POST).
  • headers: an object of additional headers to send to the server.

The above is a partial list of the option parameters that can be specified in AJAX call.

Example of $.ajax()

The following is a link to a jQuery project where $.ajax() is used: jQuery Project Link

Pros and Cons of AJAX

Pros:

  • Better and quicker interactivity between users and websites, as pages are not reloaded for content to be displayed.
  • Compact: several multi-purpose applications and features can be handled using a single web page.

Cons:

  • Built on JavaScript: some website surfers prefer to turn JavaScript off in their browser, rendering an AJAX application useless, even though JavaScript is secure and heavily used.



Multi Tenant Architecture

What is Multi Tenancy?

Multi tenancy is a software architecture in which a single instance of a piece of software runs on a server and serves multiple tenants. A tenant here refers to a group of users who share common access, with specific privileges, to the software instance.

Multi Tenancy != Multiple Instance Architecture

Why is Multi Instance Architecture not the same as Multi Tenant Architecture?

In a multi instance architecture (or single tenant architecture), as the name suggests, there are multiple instances of the software running, and each instance serves one tenant.

Advantages of Multi Tenancy?

a) Multi tenancy provides cost savings, as multiple instances need not be run on different servers; the cost of deploying on different machines, the cost of maintenance, etc. are reduced.

b) Upgrades are easy, because a single upgrade gives all clients access to the latest version, as there is just one copy of the schema available to all clients.

Multi Tenant Models

a) Separate Database Model: in this scenario, each tenant has its own dedicated database. Data from one tenant will always be directed to that tenant's database.


b) Separate Schema Model: in this scenario, there exists only one database but one schema per tenant. In other words, each tenant has a dedicated schema, so data from one tenant will always be directed to that tenant's schema within the same database, which is shared with the other tenants.


c) Single Database, Single Schema: in this scenario, there exists only one database and one schema; all the tables within the schema need to include an extra column holding a tenant ID. This column differentiates the data of the different tenants.

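A minimal in-memory sketch of the single-database, single-schema idea (the row type and names are hypothetical; a real implementation would add a WHERE tenant_id = ? clause to every query):

```java
import java.util.ArrayList;
import java.util.List;

public class SharedSchemaExample {
    // A row of the shared table: every row carries a tenant discriminator.
    static class OrderRow {
        final String tenantId;
        final String orderNo;
        OrderRow(String tenantId, String orderNo) {
            this.tenantId = tenantId;
            this.orderNo = orderNo;
        }
    }

    /**
     * Every lookup against the shared table must filter on the tenant id,
     * the in-memory equivalent of "WHERE tenant_id = ?".
     */
    static List<String> ordersForTenant(List<OrderRow> table, String tenantId) {
        List<String> result = new ArrayList<>();
        for (OrderRow row : table) {
            if (row.tenantId.equals(tenantId)) {
                result.add(row.orderNo);
            }
        }
        return result;
    }
}
```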

Comparisons of different models

Characteristics | Separate Database | Separate Schema | Single Database, Single Schema
Scalability | Not favorable | Favorable | So-so
Security | Favorable | So-so | Not favorable
Data Customization | Favorable | Favorable | Not favorable
New Customer Addition | Not favorable | So-so | Favorable


You can check my GitHub account on Multi Tenancy based on Different Schema per Tenant approach here: Spring-MultiTenancy

Docker for Developers

What is Docker?

Docker is a tool for creating, deploying and running applications using containers. Containers allow a developer to package an application with its libraries, dependencies, etc. and deliver it as one package. This package will then run on any Linux machine, regardless of the configuration differences between machines.

What is a Docker Container?

Docker containers are based on Docker images. A Docker image is thus a binary that includes all the information needed for running a single container. Each image has a unique ID, and Docker images can be stored in and retrieved from a Docker registry. A Docker registry contains Docker image repositories, and each repository can hold one or more Docker images. The official registry provided by Docker is Docker Hub.


My Spring Boot REST API project on GitHub can be run using Docker, and the explanation of images and containers below refers to this project. Information on how to build the Dockerfile can be found in the README file of my GitHub project.

Docker images

Building a Docker Image:

A Docker image is built from a Dockerfile. Once the Dockerfile is written (docker file example), run the docker build command at the location where the Dockerfile is present in order to build the image.

      docker build -t <name of the docker image> .

For example, we run the command "docker build -t spring1.2 ." and upon success a confirmation message appears.


Listing Docker Images created:

docker images

The image ID associated with an image can also be determined using the docker images command, and an image can be removed using its ID.


Removing a Docker image:

docker rmi <image id>

A Docker image can be removed easily, if it is not yet associated with any container, using the command mentioned above. However, if it is associated with a container, the container first needs to be removed using docker rm <container id>, and then the image can be removed using docker rmi <image id>.

Docker containers

Running a docker container:

Once a Docker image has been built successfully, a Docker container can be run from it.

For example, for a Spring Boot application that needs to run on port 8080, with the Docker image named spring1.2, the container can be run using:

docker run -p 8080:8080 -t spring1.2


Show currently running containers:

docker ps


Removing a docker container: 

docker rm <docker container id>


Apache Flink: Stream Processing

Apache Flink is a framework for distributed stream processing. At a very high level, a Flink program is made up of the following stages:

Data Source -> Transformation -> Data Sink

where

Data Source: the input data to Flink for processing.

Transformation: the processing stage where different algorithms may be applied.

Data Sink: the stage where Flink sends the processed data. This could be a Kafka queue, Cassandra, etc.

Flink's capability to compute accurate results on unbounded data sets is based on the following features:

  1. Exactly-once semantics for stateful computations: stateful means the application can maintain a summary of the data that has been processed, and Flink's checkpointing mechanism ensures exactly-once semantics in the case of a failure. In other words, checkpointing allows Flink to recover state and positions in the stream after a failure.
  2. Flink supports stream processing with event-time semantics, where event time refers to the time at which each individual event occurred on the device that produced it. Event-time semantics make it easy to compute accurate results over streams even when the events arrive out of order or with delay. Since the time at which an event occurred is carried in the event itself, it is easy to group and process events by assigning them to their corresponding hour window: an hourly event-time window contains all records whose event timestamp falls into that hour, regardless of when, and in what order, the records arrive.


  3. Flink supports flexible windowing, where windows can be based on time, sessions or counts. Apache Flink supports different types of windows, such as tumbling windows, sliding windows, global windows and session windows. A time-based window is created as soon as the first event belonging to it arrives, and the window is removed when the time (event time or processing time) passes its end timestamp plus the user-specified allowed lateness.


  4. Flink's savepoints are a mechanism for updating an application or reprocessing historic data with minimum downtime. Savepoints are externally stored checkpoints that can be used to update the Flink program. They use Flink's checkpointing mechanism to create a snapshot of the state of the streaming program and write the checkpoint metadata to an external file system.
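The hourly event-time grouping described in point 2 can be sketched without any Flink dependencies; this toy helper only illustrates how a window is derived from the event timestamp carried by each record:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class EventTimeWindows {
    static final long HOUR_MS = 60 * 60 * 1000L;

    /** Start of the hour window that an event timestamp falls into. */
    static long windowStart(long eventTimeMs) {
        return eventTimeMs - (eventTimeMs % HOUR_MS);
    }

    /**
     * Group event timestamps by their hour window. Because the window is
     * derived from the event time carried by each record, arrival order
     * does not matter: late or out-of-order events still land in the
     * correct hour.
     */
    static Map<Long, Integer> countPerHourWindow(List<Long> eventTimesMs) {
        Map<Long, Integer> counts = new TreeMap<>();
        for (long t : eventTimesMs) {
            counts.merge(windowStart(t), 1, Integer::sum);
        }
        return counts;
    }
}
```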



A simple word count algorithm using the Apache Flink DataSet API can be found in the GitHub project: Apache Flink Git Hub Project