Monday, July 31, 2017

Using Quartz for scheduling with MongoDB

I am sure most of us have used the Quartz library to handle scheduled activity within our projects. Although I have interacted with the library quite often in the past, this was the first time I had to use Quartz with MongoDB.

By default, Quartz only provides support for traditional relational databases. Browsing around, I stumbled upon this GitHub repository by Michael Klishin, which provides a MongoDB-backed job store for Quartz that works in a clustered environment.

We will use a Spring Boot application to show how to integrate the Quartz library for scheduling in a clustered environment using MongoDB.

The GitHub repository with the code shown in this article can be found here.

All Quartz-related configuration is stored in a properties file. The attributes we will be using are as follows;


 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Quartz Job Scheduling
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~


# Use the MongoDB store
org.quartz.jobStore.class=com.quartz.mongo.intro.quartzintro.scheduler.CustomMongoQuartzSchedulerJobStore


# --- Note that all the MongoDB configuration is set in the CustomMongoQuartzSchedulerJobStore.java class ---
# MongoDB URI (optional if 'org.quartz.jobStore.addresses' is set)
#org.quartz.jobStore.mongoUri=mongodb://localhost:27017

# Comma separated list of mongodb hosts/replica set seeds (optional if 'org.quartz.jobStore.mongoUri' is set)
#org.quartz.jobStore.addresses=localhost


# Will be used to create collections like quartz_jobs, quartz_triggers, quartz_calendars, quartz_locks
org.quartz.jobStore.collectionPrefix=quartz_

# Thread count setting is ignored by the MongoDB store but Quartz requires it
org.quartz.threadPool.threadCount=1

# Skip running a web request to determine if there is an updated version of Quartz available for download
org.quartz.scheduler.skipUpdateCheck=true

org.quartz.jobStore.isClustered=true

# The instance ID will be auto-generated by Quartz for all nodes running in a cluster.
org.quartz.scheduler.instanceId=AUTO

org.quartz.scheduler.instanceName=quartzMongoInstance


Let us look at some of these properties. Others are self-explanatory with the comments provided.


  • org.quartz.jobStore.class : This defines the job store class that handles storing job-related details in the database. By default, the GitHub project mentioned before provides the MongoDBJobStore. For the purposes of this article, however, we will extend this class with our own implementation, which handles the MongoDB configuration based on Spring profiles.

  • org.quartz.jobStore.mongoUri : This is where you would define the comma-separated MongoDB URIs if you wanted to use the default MongoDBJobStore class. In this implementation, however, since we are defining a custom job store, we will not be using this property. An example of how you would define this is mongodb://<ip1>:<port>,<ip2>:<port>

  • org.quartz.jobStore.collectionPrefix : This property defines the prefix for the collections created to store Quartz-specific details.


Let us first see what our JobStore configuration class looks like;


 
package com.quartz.mongo.intro.quartzintro.scheduler;

import org.apache.commons.lang3.StringUtils;
import org.quartz.impl.StdSchedulerFactory;
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean;
import org.springframework.core.io.ClassPathResource;

import com.novemberain.quartz.mongodb.MongoDBJobStore;
import com.quartz.mongo.intro.quartzintro.constants.SchedulerConstants;
import com.quartz.mongo.intro.quartzintro.constants.SystemProperties;

/**
 * 
 * <p>
 * We extend the {@link MongoDBJobStore} because we need to set the custom mongo
 * db parameters. Some of the configuration comes from system properties set via
 * docker and the others come via the application.yml files we have for each
 * environment.
 * </p>
 * 
 * <p>
 * These are set as part of initialization. This class is initialized by
 * {@link StdSchedulerFactory} and defined in the quartz.properties file.
 * </p>
 * 
 * @author dinuka
 *
 */
public class CustomMongoQuartzSchedulerJobStore extends MongoDBJobStore {

 private static String mongoAddresses;
 private static String userName;
 private static String password;
 private static String dbName;
 private static boolean isSSLEnabled;
 private static boolean isSSLInvalidHostnameAllowed;

 public CustomMongoQuartzSchedulerJobStore() {
  super();
  initializeMongo();
  setMongoUri("mongodb://" + mongoAddresses);
  setUsername(userName);
  setPassword(password);
  setDbName(dbName);
  setMongoOptionEnableSSL(isSSLEnabled);
  setMongoOptionSslInvalidHostNameAllowed(isSSLInvalidHostnameAllowed);
 }

 /**
  * <p>
  * This method will initialize the mongo instance required by the Quartz
  * scheduler.
  * 
  * The use case here is that we have two profiles;
  * </p>
  * 
  * <ul>
  * <li>Development</li>
  * <li>Production</li>
  * </ul>
  * 
  * <p>
  * So when constructing the mongo instance to be used for the Quartz
  * scheduler, we need to read the various properties set within the system
  * to determine which would be appropriate depending on which spring profile
  * is active.
  * </p>
  * 
  */
 private static void initializeMongo() {
  /**
   * The use case here is that when we run our application, the property
   * spring.profiles.active is set as a system property during production.
   * But it will not be set in a development environment.
   */
  String env = System.getProperty(SystemProperties.ENVIRONMENT);
  env = StringUtils.isNotBlank(env) ? env : "dev";
  YamlPropertiesFactoryBean commonProperties = new YamlPropertiesFactoryBean();
  commonProperties.setResources(new ClassPathResource("application.yml"));
  /**
   * The MongoDB user name and password are only passed as command-line
   * parameters in the production environment; in the development
   * environment they will be null, which is why we use
   * StringUtils#trimToEmpty so that we can pass empty strings for the
   * user name and password, since we do not have authentication enabled
   * in the development environment.
   */
  userName = StringUtils.trimToEmpty(commonProperties.getObject().getProperty(SystemProperties.SERVER_NAME));
  password = StringUtils.trimToEmpty(System.getProperty(SystemProperties.MONGO_PASSWORD));
  dbName = commonProperties.getObject().getProperty(SchedulerConstants.QUARTZ_SCHEDULER_DB_NAME);

  YamlPropertiesFactoryBean environmentSpecificProperties = new YamlPropertiesFactoryBean();

  switch (env) {
  case "prod":
   environmentSpecificProperties.setResources(new ClassPathResource("application-prod.yml"));
   /**
    * By default, in the production mongo instance, SSL is enabled and
    * SSL invalid host name allowed property is set.
    */
   isSSLEnabled = true;
   isSSLInvalidHostnameAllowed = true;
   mongoAddresses = environmentSpecificProperties.getObject().getProperty(SystemProperties.MONGO_URI);
   break;
  case "dev":
   /**
    * For the development profile, we just read the mongo URI that is
    * set.
    */
   environmentSpecificProperties.setResources(new ClassPathResource("application-dev.yml"));
   mongoAddresses = environmentSpecificProperties.getObject().getProperty(SystemProperties.MONGO_URI);
   break;

  }

 }

}


In the above implementation, we retrieved the MongoDB details pertaining to the active profile. If no profile is defined, it defaults to the development profile. We used the YamlPropertiesFactoryBean here to read the application properties pertaining to different environments.
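The profile fallback in initializeMongo() boils down to a blank check on the spring.profiles.active system property. A minimal standalone sketch of that logic (the class and method names here are ours, purely for illustration):

```java
public class ProfileResolver {

    // Mirrors the fallback in initializeMongo(): use the active profile
    // if it is set and non-blank, otherwise default to "dev".
    static String resolveEnv(String activeProfile) {
        return (activeProfile == null || activeProfile.trim().isEmpty())
                ? "dev"
                : activeProfile;
    }

    public static void main(String[] args) {
        // In the real application this value comes in via -Dspring.profiles.active
        System.out.println(resolveEnv(System.getProperty("spring.profiles.active")));
    }
}
```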

Moving on, we then need to let Spring manage the creation of the Quartz configuration using the SchedulerFactoryBean;


 
package com.quartz.mongo.intro.quartzintro.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

/**
 * This class will configure and setup quartz using the
 * {@link SchedulerFactoryBean}
 * 
 * @author dinuka
 *
 */
@Configuration
public class QuartzConfiguration {

 /**
  * Here we integrate quartz with Spring and let Spring manage initializing
  * quartz as a spring bean.
  * 
  * @return an instance of {@link SchedulerFactoryBean} which will be managed
  *         by spring.
  */
 @Bean
 public SchedulerFactoryBean schedulerFactoryBean() {
  SchedulerFactoryBean scheduler = new SchedulerFactoryBean();
  scheduler.setApplicationContextSchedulerContextKey("applicationContext");
  scheduler.setConfigLocation(new ClassPathResource("quartz.properties"));
  scheduler.setWaitForJobsToCompleteOnShutdown(true);
  return scheduler;
 }

}

We define this as a Configuration class so that it will be picked up when we run the Spring Boot application.

The call to the setApplicationContextSchedulerContextKey method here is to get a reference to the Spring application context within our job class, which is as follows;


 
package com.quartz.mongo.intro.quartzintro.scheduler.jobs;

import org.quartz.DisallowConcurrentExecution;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.PersistJobDataAfterExecution;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.core.env.Environment;
import org.springframework.scheduling.quartz.QuartzJobBean;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

import com.quartz.mongo.intro.quartzintro.config.JobConfiguration;
import com.quartz.mongo.intro.quartzintro.config.QuartzConfiguration;

/**
 * 
 * This is the job class that will be triggered based on the job configuration
 * defined in {@link JobConfiguration}
 * 
 * @author dinuka
 *
 */
@PersistJobDataAfterExecution
@DisallowConcurrentExecution
public class SampleJob extends QuartzJobBean {

 private static Logger log = LoggerFactory.getLogger(SampleJob.class);

 private ApplicationContext applicationContext;

 /**
  * This method is called by Spring since we set the
  * {@link SchedulerFactoryBean#setApplicationContextSchedulerContextKey(String)}
  * in {@link QuartzConfiguration}
  * 
  * @param applicationContext
  */
 public void setApplicationContext(ApplicationContext applicationContext) {
  this.applicationContext = applicationContext;
 }

 /**
  * This is the method that will be executed each time the trigger is fired.
  */
 @Override
 protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
  log.info("This is the sample job, executed by {}", applicationContext.getBean(Environment.class));

 }
}


As you can see, we get a reference to the application context when the SchedulerFactoryBean is initialised. The part of the Spring documentation I would like to draw your attention to is as follows;

In case of a QuartzJobBean, the reference will be applied to the Job
instance as bean property. An "applicationContext" attribute will
correspond to a "setApplicationContext" method in that scenario.



Next up, we configure the job along with the frequency at which the scheduled activity should run.



 
package com.quartz.mongo.intro.quartzintro.config;

import static org.quartz.TriggerBuilder.newTrigger;

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.Date;

import javax.annotation.PostConstruct;

import org.quartz.JobDetail;
import org.quartz.JobKey;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerKey;
import org.quartz.impl.JobDetailImpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

import com.quartz.mongo.intro.quartzintro.constants.SchedulerConstants;
import com.quartz.mongo.intro.quartzintro.scheduler.jobs.SampleJob;

/**
 * 
 * This will configure the job to run within quartz.
 * 
 * @author dinuka
 *
 */
@Configuration
public class JobConfiguration {

 @Autowired
 private SchedulerFactoryBean schedulerFactoryBean;

 @PostConstruct
 private void initialize() throws Exception {
  schedulerFactoryBean.getScheduler().addJob(sampleJobDetail(), true, true);
  if (!schedulerFactoryBean.getScheduler().checkExists(new TriggerKey(
    SchedulerConstants.SAMPLE_JOB_POLLING_TRIGGER_KEY, SchedulerConstants.SAMPLE_JOB_POLLING_GROUP))) {
   schedulerFactoryBean.getScheduler().scheduleJob(sampleJobTrigger());
  }

 }

 /**
  * <p>
  * The job is configured here where we provide the job class to be run on
  * each invocation. We give the job a name and a value so that we can
  * provide the trigger to it on our method {@link #sampleJobTrigger()}
  * </p>
  * 
  * @return an instance of {@link JobDetail}
  */
 private static JobDetail sampleJobDetail() {
  JobDetailImpl jobDetail = new JobDetailImpl();
  jobDetail.setKey(
    new JobKey(SchedulerConstants.SAMPLE_JOB_POLLING_JOB_KEY, SchedulerConstants.SAMPLE_JOB_POLLING_GROUP));
  jobDetail.setJobClass(SampleJob.class);
  jobDetail.setDurability(true);
  return jobDetail;
 }

 /**
  * <p>
   * This method will define the frequency at which we will be running the
   * scheduled job, which in this instance is every minute, starting three
   * seconds after start-up.
  * </p>
  * 
  * @return an instance of {@link Trigger}
  */
 private static Trigger sampleJobTrigger() {
  return newTrigger().forJob(sampleJobDetail())
    .withIdentity(SchedulerConstants.SAMPLE_JOB_POLLING_TRIGGER_KEY,
      SchedulerConstants.SAMPLE_JOB_POLLING_GROUP)
    .withPriority(50).withSchedule(SimpleScheduleBuilder.repeatMinutelyForever())
    .startAt(Date.from(LocalDateTime.now().plusSeconds(3).atZone(ZoneId.systemDefault()).toInstant()))
    .build();
 }

}


There are many ways you can configure your scheduler, including cron-based configuration. For the purposes of this article, we will define a simple trigger that runs every minute, starting three seconds after start-up. We define this as a Configuration class so that it will be picked up when we run the Spring Boot application.
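If a cron schedule suited your use case better, a trigger along the same lines could be built with Quartz's CronScheduleBuilder instead of the simple schedule; a sketch under the same Quartz 2.x API (the identity strings and cron expression here are illustrative, not from the project):

```java
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.CronScheduleBuilder;
import org.quartz.Trigger;

public class CronTriggerSketch {

    // Fires at second 0 of every minute; similar in frequency to
    // repeatMinutelyForever(), but aligned to wall-clock minutes.
    static Trigger sampleCronTrigger() {
        return newTrigger()
                .withIdentity("sampleJobCronTrigger", "sampleJobGroup")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 * * * * ?"))
                .build();
    }
}
```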


That is about it. When you now run the Spring Boot application class found in the GitHub repository with a running MongoDB instance, you will see the following collections created;


  • quartz_calendars
  • quartz_jobs
  • quartz_locks
  • quartz_schedulers
  • quartz_triggers

Thank you for reading, and if there are any comments, improvements, or suggestions, do kindly leave a comment, which is always appreciated.




Friday, July 28, 2017

Spring Boot with the Justice League

Dark times are ahead for the Justice League with the formidable Darkseid coming over to conquer humankind. Batman, with the help of Wonder Woman, is on a quest to get the league together, with one critical aspect missing: a proper Justice League member management system. As time is not on their side, they do not want to go through the cumbersome process of setting up a project from scratch with all the things they need. Batman hands over this daunting task of building a rapid system to his beloved, trusted Alfred (as Robin is so unpredictable), who tells Batman that he recalls coming across something called Spring Boot, which helps set up everything you need so you can get to writing code for your application rather than being bogged down with the minor nuances of setting up configuration for your project. And so he gets into it. Let's get onto it with our beloved Alfred, who will utilize Spring Boot to build a Justice League member management system in no time. Well, at least the back-end part for now, since Batman likes dealing directly with the REST APIs.

There are many convenient ways of setting up a Spring Boot application. For this article, we will focus on the traditional way of downloading the package (Spring CLI) and setting it up from scratch on Ubuntu. Spring also supports getting a project packaged online via their tool. You can download the latest stable release from here. For this post, I am using the 1.3.0.M1 release.

After extracting your downloaded archive, first off, set the following parameters on your profile;


 
SPRING_BOOT_HOME=<extracted path>/spring-1.3.0.M1

PATH=$SPRING_BOOT_HOME/bin:$PATH


Afterwards in your "bashrc" file, include the following;


 
. <extracted-path>/spring-1.3.0.M1/shell-completion/bash/spring


That last line gives you auto-completion on the command line when you are dealing with the spring-cli to create your Spring Boot applications. Please remember to "source" both the profile and the "bashrc" files for the changes to take effect.

Our technology stack which is used in this article will be as follows;
  • Spring REST
  • Spring Data
  • MongoDB

So let us start off by creating the template project for the application by issuing the following command. Note that the sample project can be downloaded from my GitHub repository found here;


 
spring init -dweb,data-mongodb,flapdoodle-mongo  --groupId com.justiceleague --artifactId justiceleaguemodule --build maven justiceleaguesystem

This will generate a Maven project with Spring MVC and Spring Data with an embedded MongoDB.

By default, the spring-cli creates a project with the name set as "Demo", so we will need to rename the generated application class accordingly. If you checked out the source from my GitHub repository mentioned above, this renaming will already have been done.

With Spring Boot, running the application is as easy as running the jar file created by the project, which essentially invokes the application class annotated with @SpringBootApplication that boots up Spring. Let us see what that looks like;




 
package com.justiceleague.justiceleaguemodule;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * The main spring boot application which will start up a web container and wire
 * up all the required beans.
 * 
 * @author dinuka
 *
 */
@SpringBootApplication
public class JusticeLeagueManagementApplication {

 public static void main(String[] args) {
  SpringApplication.run(JusticeLeagueManagementApplication.class, args);
 }
}


We then move onto our domain classes, where we use Spring Data along with MongoDB to define our data layer. The domain class is as follows;


 
package com.justiceleague.justiceleaguemodule.domain;

import org.bson.types.ObjectId;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

/**
 * This class holds the details that will be stored about the justice league
 * members on MongoDB.
 * 
 * @author dinuka
 *
 */
@Document(collection = "justiceLeagueMembers")
public class JusticeLeagueMemberDetail {

 @Id
 private ObjectId id;

 @Indexed
 private String name;

 private String superPower;

 private String location;

 public JusticeLeagueMemberDetail(String name, String superPower, String location) {
  this.name = name;
  this.superPower = superPower;
  this.location = location;
 }

 public String getId() {
  return id.toString();
 }

 public void setId(String id) {
  this.id = new ObjectId(id);
 }

 public String getName() {
  return name;
 }

 public void setName(String name) {
  this.name = name;
 }

 public String getSuperPower() {
  return superPower;
 }

 public void setSuperPower(String superPower) {
  this.superPower = superPower;
 }

 public String getLocation() {
  return location;
 }

 public void setLocation(String location) {
  this.location = location;
 }

}

As we are using Spring Data, it is fairly intuitive, especially if you are coming from a JPA/Hibernate background. The annotations are very similar. The only new thing would be the @Document annotation, which denotes the name of the collection in our MongoDB database. We also have an index defined on the name of the superhero, since most queries will revolve around searching by name.

Spring Data also makes it easy to define repositories that support the usual CRUD operations and some read operations straight out of the box, without you having to write them. So we utilise the power of Spring Data repositories in our application as well; the repository class is as follows;



 
package com.justiceleague.justiceleaguemodule.dao;

import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

import com.justiceleague.justiceleaguemodule.domain.JusticeLeagueMemberDetail;

public interface JusticeLeagueRepository extends MongoRepository<JusticeLeagueMemberDetail, String> {

 /**
  * This method will retrieve the justice league member details pertaining to
  * the name passed in.
  * 
  * @param superHeroName
  *            the name of the justice league member to search and retrieve.
  * @return an instance of {@link JusticeLeagueMemberDetail} with the member
  *         details.
  */
 @Query("{ 'name' : {$regex: ?0, $options: 'i' }}")
 JusticeLeagueMemberDetail findBySuperHeroName(final String superHeroName);
}


The usual saving operations are implemented by Spring at runtime through the use of proxies and we just have to define our domain class in our repository.

As you can see, we have only one method defined. With the @Query annotation, we are trying to find a superhero with the use of regular expressions. The option "i" denotes that we should ignore case when trying to find a match in MongoDB.
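That $options: 'i' flag behaves much like Java's own case-insensitive regex matching; a small standalone illustration of the semantics (the helper class below is ours, not part of the project):

```java
import java.util.regex.Pattern;

public class CaseInsensitiveMatch {

    // MongoDB's {$regex: ?0, $options: 'i'} is an unanchored,
    // case-insensitive match, much like CASE_INSENSITIVE with find().
    static boolean matchesIgnoringCase(String pattern, String candidate) {
        return Pattern.compile(pattern, Pattern.CASE_INSENSITIVE)
                .matcher(candidate)
                .find();
    }
}
```

So a search for "barry allen" would still match a stored member named "Barry Allen".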

Next up, we move onto implementing our logic for storing new Justice League members through our service layer.


 
package com.justiceleague.justiceleaguemodule.service.impl;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.justiceleague.justiceleaguemodule.constants.MessageConstants.ErrorMessages;
import com.justiceleague.justiceleaguemodule.dao.JusticeLeagueRepository;
import com.justiceleague.justiceleaguemodule.domain.JusticeLeagueMemberDetail;
import com.justiceleague.justiceleaguemodule.exception.JusticeLeagueManagementException;
import com.justiceleague.justiceleaguemodule.service.JusticeLeagueMemberService;
import com.justiceleague.justiceleaguemodule.web.dto.JusticeLeagueMemberDTO;
import com.justiceleague.justiceleaguemodule.web.transformer.DTOToDomainTransformer;

/**
 * This service class implements the {@link JusticeLeagueMemberService} to
 * provide the functionality required for the justice league system.
 * 
 * @author dinuka
 *
 */
@Service
public class JusticeLeagueMemberServiceImpl implements JusticeLeagueMemberService {

 @Autowired
 private JusticeLeagueRepository justiceLeagueRepo;

 /**
  * {@inheritDoc}
  */
 public void addMember(JusticeLeagueMemberDTO justiceLeagueMember) {
  JusticeLeagueMemberDetail dbMember = justiceLeagueRepo.findBySuperHeroName(justiceLeagueMember.getName());

  if (dbMember != null) {
   throw new JusticeLeagueManagementException(ErrorMessages.MEMBER_ALREDY_EXISTS);
  }
  JusticeLeagueMemberDetail memberToPersist = DTOToDomainTransformer.transform(justiceLeagueMember);
  justiceLeagueRepo.insert(memberToPersist);
 }

}


Again, quite trivial: if the member already exists, we throw an error; else we add the member. Here you can see we are using the already-implemented insert method of the Spring Data repository we just defined.

Finally, Alfred is ready to expose the new functionality he just developed via a REST API using Spring REST, so that Batman can start sending in the details over HTTP, as he is always travelling.


package com.justiceleague.justiceleaguemodule.web.rest.controller;

import javax.validation.Valid;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

import com.justiceleague.justiceleaguemodule.constants.MessageConstants;
import com.justiceleague.justiceleaguemodule.service.JusticeLeagueMemberService;
import com.justiceleague.justiceleaguemodule.web.dto.JusticeLeagueMemberDTO;
import com.justiceleague.justiceleaguemodule.web.dto.ResponseDTO;

/**
 * This class exposes the REST API for the system.
 * 
 * @author dinuka
 *
 */
@RestController
@RequestMapping("/justiceleague")
public class JusticeLeagueManagementController {

 @Autowired
 private JusticeLeagueMemberService memberService;

 /**
  * This method will be used to add justice league members to the system.
  * 
  * @param justiceLeagueMember
  *            the justice league member to add.
  * @return an instance of {@link ResponseDTO} which will notify whether
  *         adding the member was successful.
  */
 @ResponseBody
 @ResponseStatus(value = HttpStatus.CREATED)
 @RequestMapping(method = RequestMethod.POST, path = "/addMember", produces = {
   MediaType.APPLICATION_JSON_VALUE }, consumes = { MediaType.APPLICATION_JSON_VALUE })
 public ResponseDTO addJusticeLeagueMember(@Valid @RequestBody JusticeLeagueMemberDTO justiceLeagueMember) {
  ResponseDTO responseDTO = new ResponseDTO(ResponseDTO.Status.SUCCESS,
    MessageConstants.MEMBER_ADDED_SUCCESSFULLY);
  try {
   memberService.addMember(justiceLeagueMember);
  } catch (Exception e) {
   responseDTO.setStatus(ResponseDTO.Status.FAIL);
   responseDTO.setMessage(e.getMessage());
  }
  return responseDTO;
 }
}


We expose our functionality as a JSON payload, as Batman just cannot get enough of it, although Alfred is a bit old school and prefers XML sometimes.

The old guy Alfred still wants to test out his functionality, as TDD is just his style. So finally, we look at the integration tests written up by Alfred to make sure the initial version of the Justice League management system is working as expected. Note that we are only showing the REST API tests here, although Alfred has actually covered more, which you can check out in the GitHub repo.


 
package com.justiceleague.justiceleaguemodule.test.util;

import java.io.IOException;
import java.net.UnknownHostException;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.justiceleague.justiceleaguemodule.domain.JusticeLeagueMemberDetail;

import de.flapdoodle.embed.mongo.MongodExecutable;
import de.flapdoodle.embed.mongo.MongodStarter;
import de.flapdoodle.embed.mongo.config.IMongodConfig;
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder;
import de.flapdoodle.embed.mongo.config.Net;
import de.flapdoodle.embed.mongo.distribution.Version;

/**
 * This class will have functionality required when running integration tests so
 * that individual classes do not need to implement the same functionality.
 * 
 * @author dinuka
 *
 */
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public abstract class BaseIntegrationTest {

 @Autowired
 protected MockMvc mockMvc;

 protected ObjectMapper mapper;

 private static MongodExecutable mongodExecutable;

 @Autowired
 protected MongoTemplate mongoTemplate;

 @Before
 public void setUp() {
  mapper = new ObjectMapper();
 }

 @After
 public void after() {
  mongoTemplate.dropCollection(JusticeLeagueMemberDetail.class);
 }

 /**
  * Here we are setting up an embedded mongodb instance to run with our
  * integration tests.
  * 
  * @throws UnknownHostException
  * @throws IOException
  */
 @BeforeClass
 public static void beforeClass() throws UnknownHostException, IOException {

  MongodStarter starter = MongodStarter.getDefaultInstance();

  IMongodConfig mongoConfig = new MongodConfigBuilder().version(Version.Main.PRODUCTION)
    .net(new Net(27017, false)).build();

  mongodExecutable = starter.prepare(mongoConfig);

  try {
   mongodExecutable.start();
  } catch (Exception e) {
   closeMongoExecutable();
  }
 }

 @AfterClass
 public static void afterClass() {
  closeMongoExecutable();
 }

 private static void closeMongoExecutable() {
  if (mongodExecutable != null) {
   mongodExecutable.stop();
  }
 }

}



 
package com.justiceleague.justiceleaguemodule.web.rest.controller;

import org.hamcrest.beans.SamePropertyValuesAs;
import org.junit.Assert;
import org.junit.Test;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;
import org.springframework.test.web.servlet.result.MockMvcResultMatchers;

import com.justiceleague.justiceleaguemodule.constants.MessageConstants;
import com.justiceleague.justiceleaguemodule.constants.MessageConstants.ErrorMessages;
import com.justiceleague.justiceleaguemodule.domain.JusticeLeagueMemberDetail;
import com.justiceleague.justiceleaguemodule.test.util.BaseIntegrationTest;
import com.justiceleague.justiceleaguemodule.web.dto.JusticeLeagueMemberDTO;
import com.justiceleague.justiceleaguemodule.web.dto.ResponseDTO;
import com.justiceleague.justiceleaguemodule.web.dto.ResponseDTO.Status;

/**
 * This class will test out the REST controller layer implemented by
 * {@link JusticeLeagueManagementController}
 * 
 * @author dinuka
 *
 */
public class JusticeLeagueManagementControllerTest extends BaseIntegrationTest {

 /**
  * This method will test if the justice league member is added successfully
  * when valid details are passed in.
  * 
  * @throws Exception
  */
 @Test
 public void testAddJusticeLeagueMember() throws Exception {

  JusticeLeagueMemberDTO flash = new JusticeLeagueMemberDTO("Barry Allen", "super speed", "Central City");
  String jsonContent = mapper.writeValueAsString(flash);
  String response = mockMvc
    .perform(MockMvcRequestBuilders.post("/justiceleague/addMember").accept(MediaType.APPLICATION_JSON)
      .contentType(MediaType.APPLICATION_JSON).content(jsonContent))
    .andExpect(MockMvcResultMatchers.status().isCreated()).andReturn().getResponse().getContentAsString();

  ResponseDTO expected = new ResponseDTO(Status.SUCCESS, MessageConstants.MEMBER_ADDED_SUCCESSFULLY);
  ResponseDTO receivedResponse = mapper.readValue(response, ResponseDTO.class);

  Assert.assertThat(receivedResponse, SamePropertyValuesAs.samePropertyValuesAs(expected));

 }

 /**
  * This method will test if an appropriate failure response is given when
  * the member being added already exists within the system.
  * 
  * @throws Exception
  */
 @Test
 public void testAddJusticeLeagueMemberWhenMemberAlreadyExists() throws Exception {
  JusticeLeagueMemberDetail flashDetail = new JusticeLeagueMemberDetail("Barry Allen", "super speed",
    "Central City");
  mongoTemplate.save(flashDetail);

  JusticeLeagueMemberDTO flash = new JusticeLeagueMemberDTO("Barry Allen", "super speed", "Central City");
  String jsonContent = mapper.writeValueAsString(flash);
  String response = mockMvc
    .perform(MockMvcRequestBuilders.post("/justiceleague/addMember").accept(MediaType.APPLICATION_JSON)
      .contentType(MediaType.APPLICATION_JSON).content(jsonContent))
    .andExpect(MockMvcResultMatchers.status().isCreated()).andReturn().getResponse().getContentAsString();

  ResponseDTO expected = new ResponseDTO(Status.FAIL, ErrorMessages.MEMBER_ALREDY_EXISTS);
  ResponseDTO receivedResponse = mapper.readValue(response, ResponseDTO.class);
  Assert.assertThat(receivedResponse, SamePropertyValuesAs.samePropertyValuesAs(expected));
 }

 /**
  * This method will test if a valid client error is given if the data
  * required are not passed within the JSON request payload which in this
  * case is the super hero name.
  * 
  * @throws Exception
  */
 @Test
 public void testAddJusticeLeagueMemberWhenNameNotPassedIn() throws Exception {
  // The super hero name is passed in as null here to see whether the
  // validation error handling kicks in.
  JusticeLeagueMemberDTO flash = new JusticeLeagueMemberDTO(null, "super speed", "Central City");
  String jsonContent = mapper.writeValueAsString(flash);
  mockMvc.perform(MockMvcRequestBuilders.post("/justiceleague/addMember").accept(MediaType.APPLICATION_JSON)
    .contentType(MediaType.APPLICATION_JSON).content(jsonContent))
    .andExpect(MockMvcResultMatchers.status().is4xxClientError());

 }

}



And that is about it. With the power of Spring Boot, Alfred was able to get a bare-minimum Justice League management system with a REST API exposed in no time. We will build upon this application in the articles to come and see how Alfred gets it deployed via Docker to an Amazon AWS instance managed by Kubernetes. Exciting times ahead, so stay tuned.

Thursday, January 5, 2017

Bidding Adieu To My South African Family


It is 6pm on the first day of 2017, and I am here on my laptop writing this one final goodbye letter, with a heavy heart, to one of the most amazing teams I have had the privilege of working with. I am going to take this to a personal level and mention each individual on the team and how they have impacted my life at a personal and a professional level.

First off, a little bit about the journey to South Africa. It was the year 2014 when I first began working on the MTN (the second largest telecommunications provider in South Africa) South Africa project as a contractor via CSG (Cable Service Group) International through my company in Sri Lanka, Virtusa Polaris. It was a challenging few months as we worked tirelessly to get through the knowledge transfer sessions on the systems we were taking over. Some valuable lessons were learned here which I will take forward with me for life.

South Africa then became my second home on the 11th of July 2015, when I finally arrived permanently in South Africa to work on the MTN project. I am not going to bore you with the details of the work, as this is not about the work but about the people I am going to be saying goodbye to. So let us take this show on the road, shall we?

Starting off with the person who was the reason I got the opportunity to travel to South Africa, Peter Hebden (a.k.a Pete). Pete is the lead architect for the MTN engagement. I first spoke to Pete during the initiation of the project while I was in Sri Lanka. He and I got off to a great start from that first call we ever had. A very gregarious person by nature, which made it an absolute pleasure to work with him. When it comes to work, he is 100% committed and there is no slipping past mediocre work with Pete. He expects a level of quality from his team and nothing less will get you his approval. This was just fabulous, as I now had someone who was as passionate about quality as I was, which meant I had to be on my “A” game always if I were to get his approval on the work we carried out as part of the MTN engagement. Pete comes from a civil engineering background, which I must say kept us on our toes at times. One specific moment I remember like it was yesterday: my team and I were working on the architecture and detailed technical design documents for the systems we were taking over. When the time came to review those documents with Pete, I remember him asking, “Why are those boxes in the diagrams not aligned and of different sizes?”. Sadly we did not have a plausible answer to provide, and we got back to working on those diagrams until they were perfect. At that moment I realized how important even the most minuscule details are to the overall success of the project. Integrity and honesty are two traits that Pete expects from each member of his team, and he is the kind of person who will go out of his way to help anyone on his team even if the consequences are detrimental to his own career. That, I must say, is the kind of leader I admire and respect. He personally stood up for me when I had a few issues along the way, at moments when I was flustered and down.
Although you will not see us giving high fives around the office, I consider Pete to be a very good friend rather than just a boss I report to on a daily basis. He was always there for me professionally and personally. Showing me around South Africa, helping me out with apartment hunting, and inviting my wife and me to his house for Christmas are just a few moments I would like to mention. It was not something he had to do, though he was thoughtful enough to do all of those things. For me, Pete is the epitome of a great leader.

Moving on, the next person I know I am going to miss dearly is my partner in crime, 006 (long story to this name, which I will skip for now) and my sister from another mother, Nkateko Makhuvele (a.k.a Kat). Kat currently serves as a business analyst for the MTN engagement as part of CSGI. Oh my, where do I even start to describe this beautiful soul? I first met Kat when I arrived in South Africa in July 2015. She and I just hit it off from the first day we met, and she became even better friends with my wife. A very religious and God-fearing person who is always lending a helping hand to the poor and the needy. My dance partner for our year-end functions, where we would simply bring the house down ;). You will never see her being acrimonious to anyone, even if you caught her on her worst day. She always has a smile on her face and was, and still is, there for me whenever I needed her. She took me and my wife on our very first game drive in South Africa. I see Kat as a very strong-minded person who is independent, career-driven, kind-hearted and just a pure blessing to this world we live in. I will miss you so dearly, Kat, though this is surely not a goodbye, as I am sure our paths will cross one day.

My geek counterpart, Yeshkal Nanhoo (a.k.a Yesh). He serves as a Solutions Architect as part of CSGI for the MTN engagement. When you first see him, the thought that comes to your mind is “You should not pick a fight with this guy”. But when you actually get to know him, that statement invalidates itself. He is such a kind and good-hearted guy and you will never see him being hostile to anyone, no matter how much people get on his nerves. A very calm and collected person. Both of us are DC Comics fanboys and that was the common ground from which we built our friendship. The only other person who loved Batman v Superman: Dawn of Justice just as much as I did for the artistic value in that masterpiece. Hacking is his passion and you can see his eyes light up when he is presented with a new challenge, which most times he would have resolved within a few days. Professionally, he is a person who is always approachable. Even if he is inundated with work, he will just stop what he is doing and help you out. I will surely miss our fruitful conversations about the multiverse :). Thank you, Yesh, for being such an amazing friend and lending a helping hand whenever I needed it. I am sure you will do even greater things in the time to come.

Rupin Mehta (a.k.a Rupz) is the equivalent of the “road runner” :). He serves as a Solutions Architect for CSGI as part of the MTN engagement. An architect by day and a professional marathon runner by, well, early morning. Persistence is something that I admire about Rupin. If he sets his heart on something, he will work towards it through all the obstacles. Can you imagine that this guy has run almost 160km? And that too while being a father of two daughters. Next time you have a reason to back out of achieving your goals, remember this guy. A very calm person by nature who never takes anything said to him personally. Willing to help anyone who needs his assistance any time you approach him. It was an absolute pleasure working with you, Rupin, and I am sure you will achieve even bigger and better things in the future.

Gareth Hall, the youngest lad and the only Jew on the team. He serves as a business analyst for CSGI as part of the MTN engagement. I must say this guy is quite sharp and is one of the best performers in the CSGI team. The go-to guy when issues arise. His approach to solving problems is impeccable. Another very gregarious person who is always approachable. Being the youngest in the team is no barrier to this guy, as he effortlessly leads off-shore teams with amazing results. His passion to learn is a very admirable and commendable quality. You will see him tirelessly work with people who need his help until the issue is resolved. For his age, what he has achieved is simply amazing, and I am sure, Gareth, that you will reach the pinnacle of your career in the time to come with ease. Oh, and congratulations once again on the engagement; I wish you both a blessed wedded life ahead.

Justin Serra, the football fanatic. He served as the test manager as part of CSGI for the MTN engagement before he left us for greener pastures :). Justin and I became friends right after our very first and last heated argument with regard to a change my team had just made. What I love about Justin is his relentless tenacity to always achieve the best. He will never compromise the quality of a deliverable even if the Devil himself ordered it. That, in essence, raised the quality of each deliverable, which in the end pleased the client. He was always there to crack a joke during difficult times to boost the morale of the team. Building relationships was more important to him than simply getting the work done, and that inadvertently gained him the respect and the trust of his team members. Thank you, Justin, for everything, and I wish you nothing but the best in the time to come.


A few others that I wanted to mention, but not in detail just to maintain the brevity of this goodbye, are Simon Dobbin, Renita Govendar, Chris Wakeman, Tony Ballard, Maesi Mpeko, Tristan Hannaford, Keressa Jeevarathanam, Ridwaan Catterall, Itumeleng Ntshoe, Mawabo Nkewana and Hugo Meyer. Thank you, all of you, for the immense support and guidance provided during my stay here in South Africa.

As I leave this wonderful set of people, I will always cherish the amazing moments we all shared, and if I ever annoyed, irritated or even offended you in any way unintentionally, please accept my sincere apologies. I wish all of you the very best, with God’s blessings being showered upon you always.

Although my stint at CSGI has come to an end, I am sure we will remain friends; quoting Buzz Lightyear from Toy Story, “to infinity and beyond”.

The fondest memories that I shared with each and every one of you are moments that I will cherish for the rest of my life, and until we meet again, this is Dinuka (a.k.a Dinu) signing off from CSGI.




Saturday, December 3, 2016

Is it just about being the best coder?

Having worked in the software development industry for nearly a decade, I wanted to take a step back and reflect on the journey so far. When I initially began my career, it was, for me, about getting on board with the latest technological trends, learning new things I was interested in, experimenting, and learning everything I could about the programming languages I was fascinated by. This was very interesting stuff for a young lad just out of university, and I loved every moment of it. I am still an enthusiastic technical geek and will never stop learning, as it is not just a career but a passion.

Pondering on the question of whether it is just about being the best coder you can be, I have to say no. Being good at what you do is just the beginning. One of the important skill sets to build up as you progress in your career is your soft skills, encompassing your reading, writing and, very importantly, speaking skills. As you progress in your career, it is of paramount importance that you learn the art of communicating effectively with your peers and clients. Some of the best coders I have met during my career struggle when it comes to expressing their thoughts about what they are working on to the outside world, only because they never gave much thought to improving their soft skills.

My father always said that if you want to improve your language, make reading a daily habit. This was something inculcated in me and my sister from our younger days. I remember my father handing me a copy of the “Reader’s Digest” one day. To be quite honest, I initially read just the “Laughter is the best medicine” and “All in a day’s work” sections, because that was where the humor was. As the days went by, reading became a daily routine in my life. I always made it a point to keep a dictionary with me (digital, of course, once the smartphone era began), so when I came across a word I did not know, I stopped and learned it. Then I would find a way of remembering it by using it in an appropriate context.

Financial literacy is another important skill to possess as you progress in your career. Your software development career will make much more sense if you understand how your work contributes to the bottom line of your company. I am no expert in the financial domain, yet there are a few books out there that will help you understand the fundamentals you need. “The Ten-Day MBA” is a very clear and concise book that explains the key points in a straightforward manner.

Although most of the time the work of a developer is done in isolation, it is always best to be a bit more gregarious and maintain a personal relationship with your colleagues. Learning more about the people you work with will enable you to understand them better, which in turn will help you maintain a better working relationship. I have the pleasure of working with an amazingly astute team currently. Having built a personal relationship with each one of them has enabled me to work better with them, as they trust me on that personal level and we work together as one because of that relationship. There is no blame game played. If we fail, we fail as one. That is a strong bond to hold. Even after you leave a company, these relationships will remain.

In ending this short article, I would like to say that these are just my personal opinions; I am sure people will have their own interpretations, and I would definitely love to hear your views. No matter where you are in your career, always remember the following quote as you progress.




Wednesday, August 10, 2016

An introduction to working with JAXB

I am in the process of migrating a few modules that are dependent on Apache XMLBeans to JAXB. It has been an exciting and challenging few days, and I thought of jotting down a few important things I came across, for anyone who might find them useful in the future.

First of all, let us look at setting up the Maven plugin for the JAXB code generation. As of the time of writing this post, I came across two Maven plugins: the jaxb2-maven-plugin and the maven-jaxb2-plugin.
I ended up using the first one as I found its configuration to be quite straightforward.

Your maven project structure will be as follows;
Project Folder->src->main->xsd
This will hold all the XSD files from which you would want to generate the JAXB objects.

Project Folder->src->main->xjb
This will hold your “bindings.xml” file, which is the data binding file used for any customization required as part of running the JAXB generation task (xjc).

The plugin configuration for maven will be as follows;
 
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>jaxb2-maven-plugin</artifactId>
    <version>2.2</version>
    <executions>
        <execution>
            <id>xjc</id>
            <goals>
                <goal>xjc</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <target>2.1</target>
        <sources>
            <source>src/main/xsd</source>
        </sources>
    </configuration>
</plugin>


  • One thing that we were quite used to with XMLBeans was the “isSet” type of methods for all optional elements, which check whether the element is set. By default, JAXB does not generate these methods, and you end up using a not-null check on each element. Thankfully, the binding configuration allows for this with the following;

<jxb:bindings xmlns:xs="http://www.w3.org/2001/XMLSchema"
              xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
              xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
              jxb:extensionBindingPrefixes="xjc"
              version="2.1">
    <jxb:globalBindings generateIsSetMethod="true">
    </jxb:globalBindings>
</jxb:bindings>
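
To see what this buys you in client code, here is a minimal Java sketch of the pattern; the `Endpoint` class below is a hypothetical stand-in for what xjc would generate, not actual generated output.

```java
// Hypothetical stand-in for a class generated with generateIsSetMethod="true".
// Class and field names are illustrative only.
public class Endpoint {

    protected String protocol; // an optional element

    public String getProtocol() {
        return protocol;
    }

    public void setProtocol(String value) {
        this.protocol = value;
    }

    // With generateIsSetMethod="true", xjc emits a guard method like this,
    // so callers do not need explicit null checks.
    public boolean isSetProtocol() {
        return this.protocol != null;
    }
}
```

Client code can then read `if (endpoint.isSetProtocol()) { ... }` instead of comparing against null, which is the XMLBeans-style behavior we were after.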


  • By default, JAXB does not generate Java enumerations for the enumerations defined in the XSD files. The sad part is that I could not find a way to apply this generation at a global level and could only handle it per XSD, whereas with XMLBeans this was done automatically. In order to generate Java enumerations, the following should be done;
Sample XSD:

<xs:complexType name="EndpointType">
  <xs:attribute name="protocol">
   <xs:simpleType>
    <xs:restriction base="xs:string">
     <xs:enumeration value="HTTP"/>
     <xs:enumeration value="HTTPS"/>
     <xs:enumeration value="PAYLOAD"/>
    </xs:restriction>
   </xs:simpleType>
  </xs:attribute>
 </xs:complexType>


JAXB binding:
 
<jxb:bindings xmlns:xs="http://www.w3.org/2001/XMLSchema"
              xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
              xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
              jxb:extensionBindingPrefixes="xjc"
              version="2.1">
    <jxb:bindings schemaLocation="../xsd/testconfig.xsd">
        <jxb:bindings node="//xs:complexType[@name='EndpointType']/xs:attribute[@name='protocol']/xs:simpleType">
            <jxb:typesafeEnumClass name="Protocol" />
        </jxb:bindings>
    </jxb:bindings>
</jxb:bindings>

schemaLocation – This is the relative path to the XSD I want to refer to. Since my “bindings.xml” resides in the “xjb” directory, I had to go one level up and into the “xsd” directory to get the required XSD file.

node – Here you need to provide the XPath expression for the simple type that has the enumeration defined. If you cross-check this with the XSD provided, you will see how the XPath expression retrieves the given element.

Note: If your XPath ever returns multiple elements with the same name, you can still handle this by introducing the attribute multiple="true" on the <jxb:bindings> element.
E.g.: <jxb:bindings node="//xs:complexType[@name='EndpointType']/xs:attribute[@name='protocol']/xs:simpleType" multiple="true">


typesafeEnumClass – On this element you can provide the Java enumeration name to be generated.
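
For the XSD above, the generated enumeration ends up shaped roughly like the following sketch; the `value()`/`fromValue()` pair is the usual xjc convention, but treat the exact shape as an approximation rather than the literal generated source (annotations are omitted for brevity).

```java
// Approximate shape of the enum xjc generates for the typesafeEnumClass
// binding named "Protocol".
public enum Protocol {

    HTTP,
    HTTPS,
    PAYLOAD;

    // Returns the lexical value as it appears in the XML document.
    public String value() {
        return name();
    }

    // Resolves a lexical value from the XML back to the enum constant.
    public static Protocol fromValue(String v) {
        return valueOf(v);
    }
}
```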

  • XMLBeans by default converts all XSD date and dateTime elements to a Java Calendar object. With JAXB, however, XMLGregorianCalendar is used by default. Yet again the global bindings came to the rescue, and this was handled with the configuration below, which converts all XSD date, dateTime and time elements to a Java Calendar object.


<jxb:bindings 
   xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
    xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
    jxb:extensionBindingPrefixes="xjc"
    version="2.1">

<jxb:globalBindings>

   <jxb:javaType name="java.util.Calendar" xmlType="xs:dateTime"
            parseMethod="javax.xml.bind.DatatypeConverter.parseDateTime"
            printMethod="javax.xml.bind.DatatypeConverter.printDateTime"/>

        <jxb:javaType name="java.util.Calendar" xmlType="xs:date"
            parseMethod="javax.xml.bind.DatatypeConverter.parseDate"
            printMethod="javax.xml.bind.DatatypeConverter.printDate"/>

        <jxb:javaType name="java.util.Calendar" xmlType="xs:time"
            parseMethod="javax.xml.bind.DatatypeConverter.parseTime"
            printMethod="javax.xml.bind.DatatypeConverter.printTime"/>
    </jxb:globalBindings>

</jxb:bindings>
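
To appreciate what the binding saves you: without it, every date you read comes back as an XMLGregorianCalendar and you convert by hand, along the lines of this small sketch (the helper name is mine, not part of JAXB).

```java
import java.util.Calendar;
import javax.xml.datatype.DatatypeConfigurationException;
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

public class DateBindingDemo {

    // Manual conversion from the default JAXB date type to Calendar;
    // the javaType binding above makes this step unnecessary.
    public static Calendar toCalendar(String lexicalDateTime) {
        try {
            XMLGregorianCalendar xgc = DatatypeFactory.newInstance()
                    .newXMLGregorianCalendar(lexicalDateTime);
            return xgc.toGregorianCalendar();
        } catch (DatatypeConfigurationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

With the global binding in place, the generated getters simply return `java.util.Calendar` and this conversion disappears.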


  • If there is a need to make your JAXB objects serializable, this can be achieved with the following global binding configuration;

<jxb:bindings 
   xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
    xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
    jxb:extensionBindingPrefixes="xjc"
    version="2.1">

 <jxb:globalBindings >
 <xjc:serializable />
  
  </jxb:globalBindings>
 
 
</jxb:bindings>




The element that does the trick is the “<xjc:serializable/>” element.
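
A quick way to convince yourself the flag worked is to round-trip an instance through Java serialization. The `Member` class below is a hand-written stand-in for a generated class, since with `<xjc:serializable/>` the real generated classes implement `Serializable` in the same way.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // Stand-in for a JAXB-generated class; <xjc:serializable/> makes the
    // real generated classes implement Serializable like this one.
    public static class Member implements Serializable {
        private static final long serialVersionUID = 1L;
        public String name;
    }

    // Serializes the object to bytes and reads it back again.
    public static Member roundTrip(Member in) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(in);
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return (Member) ois.readObject();
            }
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```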


  • With JDK 1.8, I faced an issue whereby if one of your XSDs had an import of another schema retrieved via HTTP, the access was blocked. An excerpt of the error thrown was “because 'http' access is not allowed due to restriction set by the accessExternalDTD property”. The workaround in this case was to use the following Maven plugin to set the system properties required to bypass this restriction. More information on this issue can be found here.

<plugin>
    <!-- We use this plugin to ensure that our usage of the
    maven-jaxb2-plugin is JDK 8 compatible in absence of a fix
    for https://java.net/jira/browse/MAVEN_JAXB2_PLUGIN-80. -->
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>properties-maven-plugin</artifactId>
    <version>1.0.0</version>
    <executions>
        <execution>
            <id>set-additional-system-properties</id>
            <goals>
                <goal>set-system-properties</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <properties>
            <property>
                <name>javax.xml.accessExternalSchema</name>
                <value>file,http</value>
            </property>
            <property>
                <name>javax.xml.accessExternalDTD</name>
                <value>file,http</value>
            </property>
        </properties>
    </configuration>
</plugin>


That is about it. I will keep updating this post as I go along. As always, your feedback is much appreciated.

Thank you for reading, and have a good day everyone.

An introduction to the Oracle Service Bus

We are in the process of designing a new system for a telecommunications provider where we have looked at the Oracle Service Bus (OSB) to be used as the enterprise service bus. One of the first plus points for me was the amazing tooling support it encompasses. Oracle has integrated its entire enterprise integration software stack into a cohesive whole by bundling it up as the Oracle SOA Suite. In this article, the focus will be on the Oracle OSB 11g, which is part of the Oracle SOA Suite 11g. There are considerable changes that have been made in the new Oracle SOA Suite 12c, which we will not delve into in this article. However, one feature I love about the new Oracle SOA Suite 12c is the fact that developers can use JDeveloper to develop BPEL (Business Process Execution Language) and OSB code in one IDE (Integrated Development Environment).

A couple of the main components one needs to be aware of with the OSB are as follows;

Proxy Service
A proxy service, as its name implies, is a service that is exposed to external parties and acts as a facade for an internal service. By having a proxy service, you have more control over changes in your internal services, as the proxy service can do the required transformations if your internal services ever change.

Business Service
A business service, in terms of the OSB, represents an internal application service. It can be a WebService, JMS queue/topic, REST service, FTP service and many more. The business service will encompass the functionality to call the actual service.

So the scenario we will focus on in this article is as follows;
  1. We have an internal service that returns subscriber information if the user passes in either the MSISDN or the SIM Card number and depending on the input, data will be fetched and returned.
  2. This service will have to be exposed to the external party in a more meaningful manner by making use of a proxy service.
The sample project can be downloaded here.

First of all, we create the business service, which will act as the facade to our internal service. In your OSB project, create the following four folders;
  • proxy
  • business
  • transformation
  • wsdl
Then we need to copy the internal service WSDL and the proxy service WSDL created for this example into the “wsdl” folder.

Configuring the business service
Right click on the “business” folder and select New->Business Service. When the business service is created, you will first be presented with the “General” tab. In this tab we do the following;

  • Select “WSDL Web Service” and click on “Browse”. Then select the WSDL file and you will be presented with two options. Select the one ending with “(port)”.


  • Then go to the “Transport” tab and change the URI to http://localhost:8088/mockInstalledBaseSubscriberClassificationQueryWSServiceSoapBinding. This is because we will use the SOAPUI mock service feature to test this out, and the URI represents the SOAPUI mock service endpoint for the service represented by the WSDL.
  • The SOAPUI project used for this example can be downloaded from here.
That is all we need to do to configure our business service. Then we move onto our proxy service where all the action takes place.

Configuring the proxy service
  • Right click on the “proxy” folder created, select New->Proxy Service and provide a valid name. 
  • In the “General” tab, select “WSDL Web Service” and click on browse.
  • Now in the proxy service, you need to select the proxy WSDL file we have created which will be exposed to the external clients.
  • Go to the “Message Flow” tab. In that tab, first drag a “Route” element from the “Design Palette” on the right side. 
  • Afterwards, drag a “Routing” element into the “Route” element.
  • Click on the “Routing” element and in the bottom pane, go into the “Properties” tab where you will provide the business service that this proxy service will access and the operation name.

  • The result will be as follows;
  • Then drag a “Replace” action into the “Request Action” component.
  • Before we provide the information on the “Properties” tab for the “Replace” action, we need to create the XQuery transformation files which will map the proxy service request to the business service request and then the business service response back to the proxy service response.
  • Right click on the “transformation” folder and select New->XQuery Transformation. Enter a valid name. This should be done for both the request and response transformation files.
  • The request transformation file used is as follows;

 
(:: pragma bea:global-element-parameter parameter="$fetchSubscriber1" element="ns2:FetchSubscriber" location="../wsdl/SubscriberProxyService.wsdl" ::)
(:: pragma bea:local-element-return type="ns1:InstalledBaseSubscriberClassificationQuery/ns0:InstalledBaseSubscriberClassificationQuery" location="../wsdl/subscriber_classfication.wsdl" ::)

declare namespace ns2 = "http://www.example.org/SubscriberProxyService/";
declare namespace ns1 = "http://www.openuri.org/";
declare namespace ns0 = "http://mtnsa.co.za/si/IB/IBSubscriberClassificationQuery";
declare namespace xf = "http://tempuri.org/OSB%20training%201/transformation/subscriberrequest/";

declare function xf:subscriberrequest($fetchSubscriber1 as element(ns2:FetchSubscriber))
    as element() {
     <ns1:InstalledBaseSubscriberClassificationQuery>
        <ns0:InstalledBaseSubscriberClassificationQuery>
            <ns0:Request>
              
                    {
                        if (data($fetchSubscriber1/EquipmentType) = "MSISDN") then
                           <ns0:MSISDN>  { (data($fetchSubscriber1/EquipmentValue))}</ns0:MSISDN>
                        else 
                           <ns0:SIMCard> { data($fetchSubscriber1/EquipmentValue)}</ns0:SIMCard>
                    }

            </ns0:Request>
        </ns0:InstalledBaseSubscriberClassificationQuery>
        </ns1:InstalledBaseSubscriberClassificationQuery>
};

declare variable $fetchSubscriber1 as element(ns2:FetchSubscriber) external;

xf:subscriberrequest($fetchSubscriber1)

Here, as you can see, we check whether the equipment type is equal to “MSISDN” and then set the appropriate element on the business service request.
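
For readers more comfortable with Java than XQuery, the branching above boils down to something like this purely illustrative sketch (the class, method and element strings are my own, not OSB API):

```java
public class RequestMapper {

    // Mirrors the XQuery above: route the equipment value into either the
    // MSISDN or the SIMCard element depending on the equipment type.
    public static String mapEquipment(String equipmentType, String equipmentValue) {
        if ("MSISDN".equals(equipmentType)) {
            return "<ns0:MSISDN>" + equipmentValue + "</ns0:MSISDN>";
        }
        return "<ns0:SIMCard>" + equipmentValue + "</ns0:SIMCard>";
    }
}
```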

  • The response transformation file used is as follows;
 
(:: pragma bea:global-element-parameter parameter="$installedBaseSubscriberClassificationQueryResponse1" element="ns1:InstalledBaseSubscriberClassificationQueryResponse" location="../wsdl/subscriber_classfication.wsdl" ::)
(:: pragma bea:global-element-return element="ns2:FetchSubscriberResponse" location="../wsdl/SubscriberProxyService.wsdl" ::)

declare namespace ns2 = "http://www.example.org/SubscriberProxyService/";
declare namespace ns1 = "http://www.openuri.org/";
declare namespace ns0 = "http://mtnsa.co.za/si/IB/IBSubscriberClassificationQuery";
declare namespace xf = "http://tempuri.org/OSB%20training%201/transformation/subscriberresponse/";

declare function xf:subscriberresponse($installedBaseSubscriberClassificationQueryResponse1 as element(ns1:InstalledBaseSubscriberClassificationQueryResponse))
    as element(ns2:FetchSubscriberResponse) {
        <ns2:FetchSubscriberResponse>
            <TradeCustomerCode>{ data($installedBaseSubscriberClassificationQueryResponse1/ns0:InstalledBaseSubscriberClassificationQuery/ns0:Response/ns0:Subscriber/@ServiceProviderCode) }</TradeCustomerCode>
            <PackageCode>{ data($installedBaseSubscriberClassificationQueryResponse1/ns0:InstalledBaseSubscriberClassificationQuery/ns0:Response/ns0:Subscriber/ns0:Package/@ProductCode) }</PackageCode>
            <PaymentOption>{ data($installedBaseSubscriberClassificationQueryResponse1/ns0:InstalledBaseSubscriberClassificationQuery/ns0:Response/ns0:Subscriber/@PaymentOption) }</PaymentOption>
        </ns2:FetchSubscriberResponse>
};

declare variable $installedBaseSubscriberClassificationQueryResponse1 as element(ns1:InstalledBaseSubscriberClassificationQueryResponse) external;

xf:subscriberresponse($installedBaseSubscriberClassificationQueryResponse1)

This is a simple transformation where we map the response elements to the proxy response elements as required.
    Now we move back to our proxy service, click on the “Replace” action, and go to the “Properties” tab.
    • In the “In Variable” insert the value “body”.
    • Click on the “Expression” link. Go to the “XQuery Resources” tab, click on “Browse” and select the request transformation file.
    • In the “Variable Structures” component on the right side, expand the “body” element, and then select the request element and drag and drop it into the “Binding” text box as follows;
    • Then select “OK” which will take you back to the “Properties” tab.
    • Select “Replace node contents” radio button. The end result will look as follows;


    • Now let us drag and drop a “Replace” action to the “Response Action” component.
    • Same as before, select the response transformation file, with the binding “$body/ins:InstalledBaseSubscriberClassificationQueryResponse”.
    • You will now get an error stating that the “ins” namespace is not recognized.
    • In order to resolve that, in the same “Properties” tab, select the tab “Namespaces” and click on add. Enter the prefix as “ins” and the URI as “http://www.openuri.org/”.
    And that is it. Now we can test out the functionality. Before you do, remember to first start the mock service created on SOAP UI.
    Now let us log into the service bus console, go to the proxy service and launch the test console. This is the result that I got by running a sample;
    You can see a trace of what exactly happened if you go further down on the same screen within the “Invocation Trace” section. The request and response transformation done by the OSB can be seen as follows;
    That ends our introduction to the Oracle Service Bus. If you have any queries, please do not hesitate to leave a comment and I will respond to it as soon as possible. Also, if there are any areas of improvement you see, kindly leave your feedback as well, which is always much appreciated.