DevConf.cz 2020 in Brno

I was invited to DevConf.cz this year and I used this great opportunity to learn more about new trends in software development, especially in areas like microservices, cloud and continuous delivery.

Friday

Quarkus: Java development turned into delight

Unfortunately I arrived a bit late. Still, the main takeaway for me was that I should use code.quarkus.io to start a new project. I will take a look. I saw in a demo that Quarkus native compilation of Java code may take more time and the resulting executable file is bigger than an ordinary JAR file. However, the executable is native, it does not need a JRE and it runs very fast.
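
Just to illustrate what such a project contains, here is a minimal JAX-RS resource of the kind a project generated on code.quarkus.io starts with (the class, path and build command are my own illustration, not from the talk):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A made-up resource; running `./mvnw package -Pnative` would compile the
// whole application, including this endpoint, into a native executable.
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello from Quarkus";
    }
}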

Keep Your Secrets Secret – Kerberos in Java

Kerberos basics were explained, with an example captured in Wireshark. Then the talk focused on the GSS-API. I learned that the GSS-API is a framework somewhat similar to TLS. GSS stands for Generic Security Service, so the underlying security mechanism is pluggable. We also saw an example of Kerberos inside a JAAS client-server application.
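
To make the 'pluggable mechanism' idea more concrete, here is a minimal client-side sketch of the Java GSS-API (not code from the talk); it assumes a Kerberos ticket is already available, for example from a JAAS login with Krb5LoginModule, and the service principal name is made up:

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class GssKerberosClientSketch {

    public static void main(String[] args) throws GSSException {
        // Kerberos v5 mechanism OID – the pluggable "service" behind GSS
        Oid krb5Mechanism = new Oid("1.2.840.113554.1.2.2");

        GSSManager manager = GSSManager.getInstance();
        // Hypothetical service principal of the server we want to talk to
        GSSName serverName = manager.createName("HTTP@server.example.com",
                GSSName.NT_HOSTBASED_SERVICE);

        GSSContext context = manager.createContext(serverName, krb5Mechanism,
                null, GSSContext.DEFAULT_LIFETIME);
        context.requestMutualAuth(true);

        // The token would be sent to the server, e.g. in an HTTP Negotiate header
        byte[] token = context.initSecContext(new byte[0], 0, 0);
        System.out.println("Initial context token: " + token.length + " bytes");

        context.dispose();
    }
}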

CodeReady Containers: Run OpenShift 4 locally

Unfortunately I did not get much from this presentation. We saw how to set up an OpenShift console on a localhost machine. It is quite a complex task.

Cloud native CI/CD: Tekton and Jenkins X

Excellent talk by Paolo Carta! The complex topic was explained in a very clear way and there were many funny moments too 🙂 I learned that classic Jenkins CI is not well suited for the cloud. A new project called Tekton was introduced. Tekton was designed with the intention of consuming fewer resources and being Kubernetes native.

Tekton pipelines are cloud native, orchestrated by Kubernetes and decoupled. The demo showed a Kubernetes cluster setup. Tekton configuration is written in YAML, like other Kubernetes resources. We saw several recommendations and best practices, for example keeping separate repositories for code and configuration. Tekton is console based; we saw how to list tasks and how to work with logs.

Defining Kubernetes pipelines is complex and error prone. That is why the Jenkins X project emerged. It is a new project, an abstraction on top of Tekton. We were introduced to GitOps, which means every configuration lives in Git. The approach is opinionated: there are jx-staging and jx-production environments by default and a single Kubernetes cluster is used. Multiple clusters are not supported by Jenkins X yet.

Do we need a Dockerfile? Not necessarily, we can use Kaniko or Skaffold.

Overall excellent presentation, clear, illustrative and practical.

Dogfooding Tekton project with Tekton

Tekton is based on declarative pipelines with Kubernetes custom resources (CRDs). It is possible to scale pipelines on demand with containers in Kubernetes. Images can be built with Kubernetes tools. It is possible to deploy applications to multiple platforms like serverless, virtual machines and Kubernetes. Tekton has a powerful command line tool.

The pipeline concepts were introduced: Step, Task (runs in a pod), Pipeline, PipelineResource, PipelineRun, TaskRun.

To sum it up, Tekton is a platform for doing CI on top of Kubernetes. Tekton runs inside Kubernetes, but it can be deployed anywhere.

Deploy Complex Application Stacks with Ansible

Ansible basics were summed up, together with its main idea of striving for idempotency. We saw a demo deploying a complex stack: Elasticsearch, Logstash, Kibana and Nginx.

In the demo we were shown how the deployment is done, how to work with Ansible system facts, conditional variables and vars_files, and how to use the ansible-lint project to check your playbooks.

Most useful browser APIs

A talk a bit different from the others, but I saw several concepts useful for frontend developers, like file drop handling options, long polling, WebSockets, mouse cursor locking and a bit of canvas.

Saturday

Unfortunately I missed the Quarkus workshop by Daniel Oh. However I’ll definitely take a look once it’s available online.

Progressive migration from Jakarta EE to MicroProfile

I liked the example with Italian cuisine and its comparison to code writing practices – spaghetti code in the 90s, monoliths (lasagne) in the 00s, microservices (tortelli) in the 10s.

Two migration methods were introduced: progressive migration and migration by adding MicroProfile to the monolithic app.

I saw a simple example using MicroProfile. The output of the MicroProfile metric (invocation count) was quite verbose, so we were told that it can be effectively consumed by Prometheus (with its configuration in YAML). Internal application calls can be measured by a metric too, not just REST APIs (the findById method example).
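
For illustration, such an invocation counter can be added with a single MicroProfile Metrics annotation; the class and method below are my own made-up example, not the speaker’s code:

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.metrics.annotation.Counted;

@ApplicationScoped
public class CarRepository {

    // Every call increments the counter, which is exposed on the /metrics
    // endpoint in a Prometheus-friendly text format.
    @Counted(name = "findByIdInvocations")
    public String findById(String id) {
        return "car-" + id;
    }
}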

Another interesting example was OpenTracing with Jaeger. It is used to trace calls to services and internal methods in a microservices architecture. Tracing is similar to debugging, but we cannot see the call data; optionally we can set our own ‘value’ argument on the @Traced annotation. With Fault Tolerance we can add a retry mechanism and set the maximum number of retries. Retries work on REST API calls, but they are also supported in CDI. Other features introduced were OpenAPI documentation, the REST Client and health checks.
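
Roughly, the two annotations are used like this (a made-up service, not the code from the talk):

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.opentracing.Traced;

@ApplicationScoped
@Traced
public class CarStatusService {

    // @Traced reports a span for every invocation (visible in Jaeger),
    // @Retry re-invokes the method up to three times if it throws.
    @Retry(maxRetries = 3)
    public String changeStatus(String carId, String status) {
        return callRemoteService(carId, status);
    }

    private String callRemoteService(String carId, String status) {
        // placeholder for a REST client call that may fail intermittently
        return status;
    }
}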

I have already worked a bit with MicroProfile, so it was nice to see its features in practice and to hear several best practices. I asked many questions.

Building reactive microservices with MicroProfile

The reactive concept was demonstrated on a coffee shop example. Customers were ordering their coffee and their coffees were asynchronously prepared by baristas. We could see asynchronous (reactive) behavior very well on a real web page example.

Technically the talk was very interesting and it contained a lot of live coding and a lot of logic implementation, including integration with Apache Kafka.
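
The barista idea can be expressed very compactly with MicroProfile Reactive Messaging. This is my own sketch of the concept; the channel names are made up and would be mapped to Kafka topics in the application configuration:

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class Barista {

    // Consume an order from the "orders" channel, prepare the coffee and
    // publish the result to the "coffees" channel, asynchronously.
    @Incoming("orders")
    @Outgoing("coffees")
    public String prepare(String order) {
        return "coffee for " + order;
    }
}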

Kogito workshop – From zero to cloud ready

This workshop was a bit special for me. I directly contributed to Drools and jBPM 5 and jBPM 6 in the past. The new Kogito project is another evolution of these two projects and I would describe it as a cloud-ready combination of both jBPM and Drools.

Simply said, BPMN process models are now understood and can be implemented as microservices.

This was an interactive workshop, so my hands ‘got dirty’: I downloaded VS Code and a few extensions and used them to implement a business process model as a cloud-ready microservice. Kogito uses Quarkus internally to compile the microservices. The resulting microservices are native and cloud-ready.

Summary

The information technology world is changing very quickly. Last week I saw a presentation by a modern bank about its cloud architecture, which allows it to deliver fast and scale well. At DevConf.cz I deepened my understanding of what the cloud makes possible and learned a lot of new stuff. All the talks were very interesting for me.

Review of Clean Code

Clean Code is a book by Robert Cecil Martin, a pioneer of agile software craftsmanship. This book was recommended to me last year at an agile software development training by Cegeka in Belgium. I have read it and I must admit that it has had an impact on my work as a Java developer. Of course many similar books are available, but this is the first one of its kind that I have read.

In my company we had one-hour sessions in which we read two chapters and discussed the content of the book together with our practical experience. It was a good approach and I liked it. I think I got more from the book this way than I would have learned alone just by reading it from start to finish.

I will do my best to review Clean Code for you. I am going to start with a little motivation, then what I learned and still use after one year, and then a summary of pros and cons.

“I am a good programmer, I can do my new task fast, it is normal that code is messy…”

I sometimes deal with such statements. They are true or false depending on the perspective. For a typical startup developer, I would say yes: there is no high demand for good design, and time and the number of implemented prototypes are what count. But if we look, for example, at a complex middleware system of a large business, then I think such a statement would be wrong. Let’s focus on these large enterprise systems. Based on my experience, some programmers are not able to foresee the consequences or are not capable of long-term thinking. Unfortunately I have already spent too much time analyzing someone else’s badly written code, because it was written without considering that another programmer would later need to work on it and understand it. The original goal was to finish the task as soon as possible (and unfortunately without seeing the consequences). Of course the later task is expected to be finished as soon as possible too. But now it is different, now it is more difficult.

Let’s admit it: most development projects are about teamwork. If we want to consider ourselves good team players, we should write our code with respect for our valuable colleagues, make their job as easy as possible, and keep in mind that they will most likely read our code one day. They will have to understand it, change it and refactor it.

Now from the management perspective. I understand how important deadlines are. However, I think it is beneficial to support developers and let them write clean code; the subsequent software maintenance costs will be much lower. A good manager should care not only about the next approaching deadline, but should always plan the project in a way that leaves space for improvements and clean code. In other words, I think a good manager must treat deadlines and long-term project sustainability equally to produce the best value for their company. What I mean is that the first barely working solution is not the same as a working solution that is already easy to understand and easy to change.

Clean Code principles as I remember them after one year

The book contains quite a lot of code chunks the reader can train his skills on. I read the book about one year ago, so I do not remember everything exactly, but I’ll try to summarize what I remembered and applied in my projects. I am going to list the principles as I remember them and order them by how often I use them.

Responsibility principle

Every class or method should have one clear responsibility. If I find a class or method that deals with more responsibilities, I separate them by creating a new class or method.

A class or method that is too long is usually an issue as well. In such long implementations we can usually identify more responsibilities and decide which of them should be separated. It also means that operations should not have side effects.
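
A tiny made-up illustration of the idea: instead of one class that both computes and prints a report, each concern gets its own class.

import java.util.List;

// Computes the numbers – one reason to change.
class ReportCalculator {
    int totalSales(List<Integer> sales) {
        return sales.stream().mapToInt(Integer::intValue).sum();
    }
}

// Presents the numbers – a separate reason to change. Mixing both into a
// single class would give it two responsibilities.
class ReportPrinter {
    void print(int totalSales) {
        System.out.println("Total sales: " + totalSales);
    }
}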

Distinctive class and method names

This rule applies more often to method names. Simply make the name as distinctive as possible. A long distinctive name is better than a shorter one that is ambiguous or unclear. A long descriptive method name is also better than a short name with a piece of Javadoc.

Understandable method calls

A very common problem. It is very difficult to read and understand method calls with too many parameters. I try to use one argument, two less often and three only exceptionally. It really helps. Another thing to mention is that it is better to write methods that do not need to accept arguments like null (what is this null?!) or true/false (what is actually true or false?!).
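
A small made-up example of what I mean; at the call site, shipExpress() reads much better than ship(true, null):

class Order {

    // Two intention-revealing methods instead of a boolean flag argument.
    void shipExpress() {
        ship(Speed.EXPRESS);
    }

    void shipStandard() {
        ship(Speed.STANDARD);
    }

    private void ship(Speed speed) {
        System.out.println("Shipping with speed " + speed);
    }

    private enum Speed { EXPRESS, STANDARD }
}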

 Keep classes small, single purpose and unit testable

Small classes are not just about lines of code. The number of class fields and methods also needs to be reasonable.

Testability is very important. Always design with testability in mind and write unit tests for your classes. Then you may freely refactor without fear of breaking something. Such fear is very common and many programmers prefer to stay out of trouble, i.e. they don’t dare to change code they do not understand. I think this is a bad habit and also a sign of badly written code. Another common problem is a unit test that tries to test more than is expected from the tested class; in such a case the test needs to be moved to the appropriate place.
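
For example, a small focused unit test of the hypothetical ReportCalculator shown earlier (assuming JUnit 5) is enough to make later refactoring much safer:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Arrays;

import org.junit.jupiter.api.Test;

class ReportCalculatorTest {

    // Tests only what ReportCalculator is responsible for, nothing more.
    @Test
    void totalSalesSumsAllEntries() {
        ReportCalculator calculator = new ReportCalculator();
        assertEquals(60, calculator.totalSales(Arrays.asList(10, 20, 30)));
    }
}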

Pros and cons

I have taken another look at the book. Of course it covers more topics, like concurrency, error handling, etc., and it contains a lot of practical examples that force you to think. I did not cover everything here. Let’s summarize the pros and cons.

Pros

  • A lot of useful information for your developer career. Senior developers are basically those who are able to work independently, but I believe the knowledge from a book like this belongs to their skill set as well.
  • Experience from the experts in the field.
  • Good structure.
  • Clear argumentation.
  • Practical examples.

Cons

  • Examples are sometimes too long and it’s quite difficult to focus on them.
  • Our reading group at work agreed that not everything can be easily applied. Some of the recommended rules seemed unnecessarily strict to us.

Conclusion

This book is about opinions on how good code should look. It carries the knowledge and experience of the author and his colleagues. You may not agree with him, but it is still definitely worth reading. At least you can compare your development style with the one offered by the author.

I liked this book. It helped me improve my Java coding abilities and I am sure it helped our team as well. I would recommend it to every Java developer who works in a team. As a bonus, I would recommend organizing a reading group in your company as we did. It forces you to read the book regularly, and the discussions with my colleagues afterwards were very beneficial.

If you have something to add or you disagree, feel free to leave a comment!

How to call DB procedure with MyBatis and Java annotations

According to MyBatis pages: “MyBatis is a first class persistence framework with support for custom SQL, stored procedures and advanced mappings. MyBatis eliminates almost all of the JDBC code and manual setting of parameters and retrieval of results. MyBatis can use simple XML or Annotations for configuration and map primitives, Map interfaces and Java POJOs (Plain Old Java Objects) to database records.”
I often encounter MyBatis configuration hidden in mapping .xml files and I wanted to try something different. As I learned from Spring, configuration can be stored not only in XML files but also in Java annotations; Spring enabled this possibility in its later versions. I found on the net that MyBatis configuration may be stored that way as well, but I lacked good tutorials. That is why I decided to write a blog post about this topic. I found the information I needed on stackoverflow.com and gathered the pieces together here.
Let’s imagine a stored procedure that can change the status of a car entity; for example, we want to change the status of the car to ‘sold’. First we define our own Java annotation, which is registered in the Spring configuration file. Then we define a mapper interface. If you are already familiar with MyBatis, you should have an idea of how this looks, but this time we create no .xml file in resources – we define everything in annotations. Let’s see an example:

@MyBatisCarProceduresMapper
public interface ChangeCarStatusMapper {
    
    @Insert(value = { 
            "{ call DATABASE.PACKAGE.CHANGE_CAR_STATUS ( #{car_id, mode=IN, jdbcType=VARCHAR, javaType=String },  #{car_status, mode=IN, jdbcType=VARCHAR, javaType=String },  #{result_code, mode=OUT, jdbcType=NUMERIC, javaType=Integer},  #{error_description, mode=OUT, jdbcType=VARCHAR, javaType=String} ) }"
            })
    @Options(statementType = StatementType.CALLABLE)
    void changeCarStatus(Map<String, Object> parameters);
}

This deserves an explanation. To call a procedure successfully we need an @Insert annotation; we can see two input and two output parameters. We also need to define the statement type as CALLABLE. That is all. It is a more compact solution, but it also has some disadvantages. For procedures with large interfaces it may become confusing. Another big problem is Java’s lack of friendliness towards multiline String constants – usually you have to split such a long String across lines and concatenate the pieces. The fact that you are combining multiple languages in one source file may also be questionable from the clean code and design perspective. But I think that despite these disadvantages it can still be advantageous for small procedure interfaces.
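
For completeness, calling the mapper might look like the following sketch. The mapper instance would typically be injected by Spring, and MyBatis writes the OUT parameters back into the parameter map; the surrounding class is my own illustration:

import java.util.HashMap;
import java.util.Map;

public class ChangeCarStatusExample {

    private final ChangeCarStatusMapper changeCarStatusMapper;

    public ChangeCarStatusExample(ChangeCarStatusMapper changeCarStatusMapper) {
        this.changeCarStatusMapper = changeCarStatusMapper;
    }

    public void sellCar(String carId) {
        Map<String, Object> parameters = new HashMap<String, Object>();
        parameters.put("car_id", carId);
        parameters.put("car_status", "sold");

        changeCarStatusMapper.changeCarStatus(parameters);

        // OUT parameters are available in the same map after the call
        Integer resultCode = (Integer) parameters.get("result_code");
        String errorDescription = (String) parameters.get("error_description");
        System.out.println("Result: " + resultCode + ", " + errorDescription);
    }
}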

Devconf.cz 2016 impressions

This weekend I travelled to Brno to attend the Devconf.cz conference. In past years I gave jBPM presentations there or helped with a workshop. This time I went to Brno only as a visitor and concentrated on attending some interesting presentations on Saturday.
In the following lines I’ll try to sum up my thoughts and impressions from the conference talks.

Docker for Java EE developers

OK, Docker rocks. I was wondering if it was possible to just prepare a Docker image with a dev-ready application server for use in development. I face the following situation – setting up a development environment takes quite a lot of time, hours or days. So it would be perfect if one could just use a Docker image to set up a development environment in a couple of minutes. But when Docker is running the prepared image, you often need to replace the .war file with the application being developed, and from my point of view that requires knowing several unnecessarily complex Docker commands. So it is not so easy to use, and ease of use is very important for me. It’s possible that I am missing something, so any comment below the article will be very much appreciated!

C# on Linux

The open sourcing of the .NET framework by Microsoft looks really interesting and promising. I look forward to the future – perhaps one day the C# and .NET ecosystem will replace Java? 🙂 As a Java developer I have to admit that Java has some design issues that were addressed well by the newer C# language. On the other hand, you do not have to use the problematic parts of Java’s syntax; you can stick to the latest and/or proven techniques to overcome these minor shortcomings – I mean, for example, checked exceptions, getters and setters, etc.

Reactive extensions/programming

This talk was really interesting – a good topic and a good presentation with simple, nice examples. Now I am motivated to try this stuff in an example project. I have also heard the opinion that doing concurrency in a reactive way can substantially reduce the risk of race conditions. So, time to learn it!
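
To remind myself of the basic idea later, this is the kind of minimal pipeline reactive extensions are about (my own sketch assuming RxJava 1.x on the classpath, not code from the talk):

import rx.Observable;

public class ReactiveHello {

    public static void main(String[] args) {
        // Emit values, transform them, and react to them as they arrive.
        Observable.just("reactive", "extensions")
                .map(String::toUpperCase)
                .subscribe(System.out::println);
    }
}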

Let’s encrypt Best Practices

I always find security topics interesting, not only because I studied IT security, but also because I have always been fascinated by the inventiveness of both the people who develop security technologies and algorithms and those who misuse security flaws.

Go for Java developers

The main focus was put solely on particular differences between various Java and Go language constructs. Some of them I liked and some I did not. I also heard that it is troublesome to debug in Go, because there is no debugger available. It was difficult to imagine how Go development works in reality.

Gentle introduction to Node.js (not only) for Java devs

I had already attended a talk about Node.js the previous year. This time I at least learned something new once more. An interesting technology to try out!

Install and run Freenetis in a Docker container

According to Wikipedia, Docker is an open-source project that automates the deployment of applications inside software containers by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux.

Freenetis is an open source information system for managing community networks.

Why to learn Docker and container technologies in general?

I am fascinated by the idea behind Docker. It enables you to create your own Docker image with your application without having to care about deployment details at all. It can be deployed locally, on your server or directly into a cloud. So the first benefit is a significant reduction of deployment requirements.

You get isolation benefits similar to virtual machines, but with significantly smaller overhead and memory footprint. Docker allows you to use configuration as code – the so-called Dockerfile – which is lightweight and makes it easy to share container definitions. It also offers a simple way to distribute ready-to-deploy containers.

There are also disadvantages: it is currently targeted mostly at Linux hosts and images, but this may change in time.

Current possibilities to install Freenetis

Freenetis can currently be installed in two ways: either directly, by checking out the sources and configuring everything from scratch, or automatically, using the provided deb packages. The first option is substantially more difficult and requires deep knowledge of the Freenetis system. The second option, using the prepared deb packages, is much more user friendly.

I currently use Fedora on my laptop, which uses the rpm packaging system. So there is no simple way to install Freenetis on my machine, as the only option for me would be to install manually from the sources. However, there is a solution. I can install Docker, pull a Debian image, start the image as a container and install Freenetis in it using the provided steps (currently available only in Czech). That’s it! I don’t have to install a heavyweight full virtualization of Debian.

Installing Docker

The first thing is to install Docker. The following commands were tested on Fedora 20. You need to install the docker-io package. To make further work easier, it is convenient to add your system user to the docker group, so you will not have to work with Docker as the superuser all the time.

# yum install docker-io
# usermod -a -G docker user_id

We can search for all Debian images available in the central repository and pull the basic one with the second command:

$ docker search debian
$ docker pull debian

I have to mention several useful commands here. The first one lists the currently installed Docker images and the second one removes unnecessary images. This will be useful, as after each update of the upstream Docker image the old one is preserved.

$ docker images
$ docker rmi c9fa20ecce88

The last command checks whether the image works properly. It should print the contents of the default directory:

$ docker run -t debian ls

Installing Freenetis

First, the container needs to be running; we can achieve this by starting it in interactive mode:

$ docker run -it -p 12345:80 debian /bin/bash

Remember that all changes made after starting the container are lost once the container is removed. To save them, we need to create a container snapshot and save it as a new image. That’s why the number of container images may grow quickly and it is useful to use Dockerfiles or to run commands in batches. Let’s return to the command itself. The container will run in interactive mode, with a new console open in the same terminal window. Port 80 of the container will be mapped to port 12345 of the host system. The console of the container is started by the last argument, which tells Docker to start a bash shell in the container.

Let’s install the tools needed for the installation:

root@fa644d58b14:/# apt-get update && apt-get install vim wget

We have to add a new Debian package repository:

root@fa644d58b14:/# vim /etc/apt/sources.list
deb http://repository.freenetis.org/debian/stable/ wheezy main

Then we can install Freenetis itself:

wget -O - http://repository.freenetis.org/debian/freenetis_repo.gpg.key | apt-key add -
apt-get update && apt-get install freenetis

During the Freenetis installation I chose option one, to install Freenetis at http://localhost/freenetis.

In the end I had to manually start the Apache 2 httpd server:

root@5949a66e334c:/# service apache2 start

Now I can access the installed Freenetis from my host computer at http://localhost:12345/freenetis.

Summary

In this article I’ve tried to sum up the benefits of container technologies and Docker in particular. My current aim was to install Freenetis easily on an unsupported non-Debian system like Fedora. However, the implications of Docker are bigger; for example, a cloud that supports Docker should easily be able to consume images containing Freenetis. So try it now and let me know your feedback, I’ll appreciate it!

Upgrade of jbpm-6-examples to jBPM 6.2 services

In my previous post I described an example implementation of a web application which demonstrates usage of the jBPM business process suite as an embedded workflow library. The focus was on the CDI services provided by the jBPM libraries and their integration with producers and consumers in the web application itself.

The latest version, jBPM 6.2, contains services which are better designed, provide a completely new API and are significantly simpler to use. There is a basic kie-services implementation, which is framework agnostic (it no longer uses CDI internally), plus specific services for a particular bean framework – EJB 3.1 or CDI 1.0.

This article describes the current state of the jbpm-6-examples demos and the upgrade from version 6.0 to 6.2. The article is split into two parts, because the two examples are slightly different. The new jbpm-6-examples can be found on the current master branch on GitHub.

rewards-basic web example migration to jBPM 6.2 EJB services

This example application combines jBPM 6.2 EJB services with servlets and JavaServer Pages (JSP). Here is the list of new services used in the example. All of them extend the corresponding core service and are already annotated with the EJB @Local annotation.

  • DeploymentServiceEJBLocal – used to deploy/undeploy kjar artifacts to/from runtime
  • ProcessServiceEJBLocal – used to start/signal/abort process instance and get their process instance data
  • RuntimeDataServiceEJBLocal – provides information about runtime data, process instances, user tasks, process variables, etc.
  • UserTaskServiceEJBLocal – provides operations to work on user tasks

So what has changed? Many things! First of all, the rewards process with two human tasks is not loaded from the application classpath, but is part of a kjar artifact with the Maven GAV “org.jbpm.examples:rewards:1.0”. That means the Maven client inside jBPM 6.2 has to be able to resolve this artifact from Maven repositories. For this purpose you can just clone and build the rewards project from jbpm-6-examples-assets:

git clone https://github.com/jsvitak/jbpm-6-examples-assets.git
cd jbpm-6-examples-assets/rewards
mvn clean install

You also no longer have to take care of the session strategy and the runtime manager. The session strategy can be defined in a deployment descriptor file inside the kjar, or it can be decided at deployment time.

StartupBean class

The EJB annotation @Startup marks this bean for eager initialization. Before the application is ready, we register our custom UserGroupCallback class here and use the injected deploymentService to deploy the kjar artifact from Maven, which contains the business process definition that we want to use.
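
Leaving out the UserGroupCallback registration, the core of such a startup bean might look roughly like this; it is a sketch from memory of the jBPM 6.2 API, not the exact example code:

import javax.annotation.PostConstruct;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import org.jbpm.kie.services.impl.KModuleDeploymentUnit;
import org.jbpm.services.ejb.api.DeploymentServiceEJBLocal;

@Singleton
@Startup
public class StartupBeanSketch {

    @EJB
    private DeploymentServiceEJBLocal deploymentService;

    @PostConstruct
    public void init() {
        // Deploy the kjar with GAV org.jbpm.examples:rewards:1.0; the Maven
        // client inside jBPM resolves it from the local/remote repositories.
        deploymentService.deploy(
                new KModuleDeploymentUnit("org.jbpm.examples", "rewards", "1.0"));
    }
}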

ProcessServlet class

This is a simple servlet that handles requests coming to the /process context. It gets a recipient variable from a POST request and uses the injected processService to start a process instance.
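
In a simplified form it might look like the following sketch; the servlet class, the process id and the way the deployment id is built are my own illustration:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.jbpm.services.ejb.api.ProcessServiceEJBLocal;

@WebServlet("/process")
public class ProcessServletSketch extends HttpServlet {

    // Deployment id of the kjar deployed by the startup bean
    private static final String DEPLOYMENT_ID = "org.jbpm.examples:rewards:1.0";

    @EJB
    private ProcessServiceEJBLocal processService;

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("recipient", req.getParameter("recipient"));

        // "rewards" stands in here for the real process id inside the kjar
        long processInstanceId = processService.startProcess(DEPLOYMENT_ID, "rewards", params);
        resp.getWriter().println("Started process instance " + processInstanceId);
    }
}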

TaskServlet class

TaskServlet is bound to the /task context. It overrides the doGet() method to handle GET requests. Based on the command it performs two different operations.

  • The first one uses runtimeDataService to retrieve all tasks for a particular user. The rewards process has two human tasks: the first one is for user jiri and the second one for user mary.
  • The second operation is to approve a user task. An approve operation does not exist in jBPM, or in the userTaskService API in particular, so it is implemented as a single composite operation consisting of a StartTaskCommand and a CompleteTaskCommand – the human task is started and immediately completed (see the sketch below).
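
Expressed with plain UserTaskService calls instead of the composite command, the approve operation boils down to something like this (a made-up class, same idea):

import java.util.Collections;

import javax.ejb.EJB;
import javax.ejb.Stateless;

import org.jbpm.services.ejb.api.UserTaskServiceEJBLocal;

@Stateless
public class ApproveTaskExample {

    @EJB
    private UserTaskServiceEJBLocal userTaskService;

    // Start the human task and complete it right away – the "approve" operation.
    public void approve(long taskId, String userId) {
        userTaskService.start(taskId, userId);
        userTaskService.complete(taskId, userId, Collections.<String, Object>emptyMap());
    }
}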

RewardsUserGroupCallback class

A UserGroupCallback is necessary not only for mapping task service users to their groups, but also simply to register users in the runtime TaskService.

Summary

That’s all! Only four classes are now necessary for the rewards-basic example, which uses the provided jBPM 6.2 EJB services.

rewards-cdi-jsf web example with jBPM 6.2 CDI services

This example combines jBPM 6.2 services with the Contexts and Dependency Injection (CDI) and JavaServer Faces (JSF) frameworks. The services share the same API, so the description of the services and their purpose is the same as in the previous example.

CDI is a powerful and very flexible framework for building Java EE 6 applications; however, in contrast to EJB, it does not provide transactions and may have other limitations. In our example, CDI requires a bit more code to get jBPM 6.2 services working properly.

StartupBean class

The purpose is the same as in the previous example; it uses CDI mechanisms to run the initialization code on application startup.

ProcessBean and TaskBean classes

Again, the purpose is the same as in the previous example. Both classes are annotated with the CDI stereotype @Model, which means they are instantiated per request (the @RequestScoped annotation) and their public methods can be used in the expression language (EL) of the JSF frontend (@Named).

RewardsApplicationScopedProducer class

This class is necessary, because it contains several important producers.

  • The first one produces an EntityManagerFactory, which is consumed inside the jBPM library and used to set up persistence (a minimal sketch follows after this list).
  • The second one produces a deploymentService. The producer class itself consumes a qualified injection (using the @Kjar qualifier), but it is important to also produce it as a default injection for the StartupBean to consume.
  • The last instance to produce is a TaskLifeCycleEventListener, which is necessary because when a human task is completed, it triggers the process engine again and so advances the process flow.
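
As an illustration of the first producer, a minimal version could look like the sketch below; the persistence unit name is the one commonly used by the jBPM examples and may differ in your setup:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

@ApplicationScoped
public class EntityManagerFactoryProducerSketch {

    // Consumed by the jBPM services to set up persistence.
    @Produces
    @ApplicationScoped
    public EntityManagerFactory produceEntityManagerFactory() {
        return Persistence.createEntityManagerFactory("org.jbpm.domain");
    }
}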

RewardsIdentityProvider

Since the example does not implement authentication, this is just a placeholder class. However, it needs to be there.

WebUtil

This class is not that important; it serves more as a demonstration that with CDI you can initialize Logger and FacesContext instances in a single place.

Summary

OK, that’s everything for now! In this article you have seen the new jBPM 6.2 services in action. These services remove much of the code that was necessary before, like working with RuntimeEnvironment, RuntimeManager, RuntimeEngine and other plumbing. Clone jbpm-6-examples, read the README.md instructions and try the examples. I am looking forward to your comments and feedback!

MariaDB tips for Fedora 20

Installation

When I installed MySQL on Fedora 20, it automatically installed the MariaDB database. To install MariaDB now, the commands are for example:

$ sudo su -
# yum install mariadb mariadb-server phpmyadmin
# systemctl enable mariadb.service
# systemctl start mariadb.service

To set up the root password for MariaDB after installation, do the following:

$ mysql -u root
MariaDB [(none)]> set password for root@localhost = password('your_password');
MariaDB [(none)]> exit;

Now you can log in to http://localhost/phpmyadmin as the root user using the defined password. To log in to the MariaDB console again, you have to change the command to request password authentication:

$ mysql -u root -p

Usage

The database has been installed and set up to start after every reboot. What else can be done? To use it as a developer, we also have to create a new database inside it. The simplest way is to log in to phpMyAdmin, select the ‘Users’ tab and click the ‘Add user’ button. Fill in the user name and password, repeat the password and check the option ‘Create database with same name and grant all privileges.’. That’s it! You’ll see all the SQL commands that were executed. If you prefer typing the commands into the console, you can do that as well, for example:

MariaDB [(none)]> create database jbpm;
MariaDB [(none)]> use jbpm;

Now we can deploy and run an arbitrary application that uses the jbpm schema for persistence. The schema will be generated automatically by Hibernate using DDL scripts. Later we can inspect the content of the database using other SQL commands.

jBPM 6 at jeeconf.com 2014 in Kiev

Hello!

Two weeks ago I was in Kiev at jeeconf.com. A colleague of mine recommended this conference to me and I submitted a talk there. There were many internationally known Java gurus, so it was an honor for me to give a talk there too!

My presentation was called “Streamline your processes with jBPM 6”. I focused on three things:

  • why are business process models useful in information systems
  • what are the most important jBPM 6 features in kie workbench and engine
  • how to embed jBPM into a web application (CDI, EJB, JSF) as a workflow engine library

So the third point was a demonstration of the example application I have already blogged about.


The conference was well organized. I really enjoyed its atmosphere. I attended several talks in English:

  • Tooling of a test driven developer by Pawel Lipinski (basically about JUnitParams and AssertJ and some Java 8 hints)
  • Mobile functional testing with Arquillian Droidium by Stefan Miklosovic (how you can create complex testing scenarios which may include several application servers and Android devices)
  • Holding down your Technical Debt with SonarQube by Patroklos Papapetrou (code quality management tool)

I also ‘tried to attend’ a presentation in Russian, but my very basic level of the language prevented me from understanding it. The majority of the talks were in Russian, which is the only drawback of this conference for me.

To sum it up, despite the current crisis, Kiev was safe to visit and I really enjoyed the city and the conference! Kudos to the organizing team.

jBPM 6 presentation at devconf.cz 2014

Hello to all of you interested in jBPM 6! I gave a presentation called ‘Integration with jBPM 6’ at the Developer Conference organized by Red Hat Czech Republic in Brno.

The main focus of the presentation was to briefly describe the new major features in the jBPM 6 business process suite and to demonstrate its capabilities when it comes to embedding jBPM as a library into a web application.

The presented web application was the rewards-jsf example, which demonstrates usage of the new RuntimeManager API and CDI interfaces together with the JavaServer Faces web framework.

My talk was recorded and you can see it on YouTube.

jBPM 6 web application examples

After a long time, let’s see some new technology in action. jBPM 6 was released at the end of last year, so it’s quite fresh and still lacks good examples to start with easily. I’ll focus on my recent example projects, which demonstrate jBPM 6 as a workflow engine embedded inside a web application.

rewards-basic application

So far the Git repo contains just two projects. The first one is called rewards-basic. It was developed by Toshiya Kobayashi, who created the original application using jBPM 5 and Java Enterprise Edition (EE) 5. I’ve rewritten his application to work with the new jBPM 6 and the later standard, Java EE 6. There are many useful improvements in the new engine. For example, now you can use runtime managers with advanced session strategies to better separate process contexts or to gain better performance, especially in a concurrent environment. Seamless integration with the Java EE world is done through Contexts and Dependency Injection (CDI). Many people still don’t like CDI, but once you learn it, you may come to appreciate its advantages. You may even continue to use jBPM without CDI and initialize your environment using ordinary Java constructors or factories.

Both applications are built around one simple business process. After it starts, it creates a task for user John, and after he approves his task, the process creates a second task for user Mary. After she approves her task, the process finishes. The example applications provide web interfaces for these operations – in particular to start a business process, to list the available human tasks and to approve them. More information, including steps to run the programs, can be found at the GitHub link above.

ProcessBean class

This article focuses on the internal structure, which is of interest to software developers. Let’s start by taking a look at a code snippet from a process service Enterprise Java Bean (EJB):

@Inject
@Singleton
private RuntimeManager singletonManager;

This demonstrates CDI and how it can be used. You can just inject an object of the defined class into your bean and you don’t have to care about how and where it is initialized. Don’t worry, we’ll get to that! The second annotation is interesting as well. You may also choose others for advanced session strategies, like @PerProcessInstance and @PerRequest. Their names are self-explanatory; just to be sure – the first one keeps a session (context) per process instance and the second one is stateless, no state is kept. What can be done with the runtime manager then?

RuntimeEngine runtime = singletonManager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = runtime.getKieSession();
...
ProcessInstance processInstance = ksession.startProcess("com.sample.rewards-basic", params);

Keep in mind that the runtime manager usually follows the singleton design pattern in your application. For each request you should get an instance of a runtime engine from it based on the context. Note that the runtime engine is just an encapsulation of a KIE session and a task service. The session can be used to start a process instance. The task service is automatically linked to this session and can be used for human-task-related queries and commands. The last line just starts a process instance, so the business process is finally running.

The last thing worth noting is user transactions. You don’t have to use them, but they are useful in many cases. For example, you may want to save some information into a corporate database at the same time as the process engine executes. This way you ensure that all operations are either committed or rolled back together.
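
A rough sketch of the idea; the two operations passed in stand for a JPA persist and the ksession.startProcess() call:

import javax.annotation.Resource;
import javax.transaction.UserTransaction;

public class TransactionalStartSketch {

    // Injected by the container in a servlet or CDI/EJB bean
    @Resource
    private UserTransaction userTransaction;

    // Both operations commit or roll back together.
    public void saveAndStart(Runnable saveBusinessData, Runnable startProcessInstance) throws Exception {
        userTransaction.begin();
        try {
            saveBusinessData.run();
            startProcessInstance.run();
            userTransaction.commit();
        } catch (Exception e) {
            userTransaction.rollback();
            throw e;
        }
    }
}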

TaskBean class

Similarly, you can simply inject a task service. Do you remember jBPM 5? The task service was decoupled from the process engine and had to be connected to the engine for each process run in order to support human tasks. jBPM 6 integrated the task service back into the engine.

@Inject
TaskService taskService;
...
List<TaskSummary> list = taskService.getTasksAssignedAsPotentialOwner(actorId, "en-UK");

As you can see in TaskBean, you can easily run an arbitrary task service method.

CDI producer classes

In order to get CDI working you have to provide producers for the injected class instances where necessary. One of the advantages of CDI is loose coupling. For example, you may use a service in your beans whose functionality depends on a running application server. This is fine, but how do you easily unit test your application without setting up an awkward application server? You can easily write a mock service for unit testing without even modifying the Java code. All you have to do is change an alternative class in the CDI configuration file beans.xml.

On the other hand, debugging a CDI application may be difficult, as unsatisfied dependency errors are mostly thrown at run time, which slows down the development process. If you know how to cope with this, please leave me a comment, thanks!

Application scoped producers

These producers can be found in the ApplicationScopedProducer class. The @ApplicationScoped annotation tells the application server (or more precisely the CDI container) that instances produced here should live for the whole life cycle of the web application deployment. It’s logical – for example, the persistence unit (database connection) won’t change during the runtime of the web application.

Also very important is the RuntimeEnvironment producer, which provides the whole setup for the runtime manager, for example registering our “com.sample.rewards-basic” process definition. The runtime manager is injected from a jBPM library, but this same library in turn internally injects the RuntimeEnvironment instance to obtain our environment, which it doesn’t know about.
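
For orientation, such a producer typically looks roughly like this; it is a sketch from memory and the persistence unit name and .bpmn file name are placeholders:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.kie.api.io.ResourceType;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.internal.io.ResourceFactory;

@ApplicationScoped
public class RuntimeEnvironmentProducerSketch {

    @Produces
    public RuntimeEnvironment produceEnvironment() {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");
        // Register persistence and the process definition used by the example
        return RuntimeEnvironmentBuilder.Factory.get()
                .newDefaultBuilder()
                .entityManagerFactory(emf)
                .addAsset(ResourceFactory.newClassPathResource("rewards-basic.bpmn"),
                        ResourceType.BPMN2)
                .get();
    }
}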

You may also notice the RewardsUserGroupCallback class. It is a simple user group callback and is defined as a CDI alternative. This demonstrates that by using alternatives you may plug in your own classes if you are not satisfied with the prepared implementations coming from the jBPM libraries.

Presentation layer

The web presentation layer is really simple. It uses JavaServer Pages (JSP) and Java servlets to handle the presentation logic. The web user interface is just plain HTML, because the focus was on demonstrating the internal service logic.

rewards-jsf

The second example application stays internally mostly the same and is based on the same concepts. However, JavaServer Faces (JSF) technology together with CDI is used for the presentation layer.

Summary

I have briefly described two simple web applications demonstrating jBPM 6 together with Java EE 6. Please write a comment if you liked them or if you have some feedback; I would be happy to hear your opinion. Thanks.