DevConf.cz 2020 in Brno

I was invited to DevConf.cz this year and I used this great opportunity to learn more about new trends in software development, especially in areas like microservices, cloud and continuous delivery.

Friday

Quarkus: Java development turned into delight

Unfortunately I arrived a bit late. Still, the main takeaway for me was that when starting a new project I should use code.quarkus.io. I will take a look. The demo showed that Quarkus native compilation of Java code may take more time and the resulting executable file is bigger than an ordinary JAR file. However, the executable is native, needs no JRE and starts very fast.
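As a hedged illustration (the class name and path below are my own placeholders, not from the talk), a project generated on code.quarkus.io starts from roughly this kind of minimal JAX-RS resource:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Minimal REST resource of the kind a generated Quarkus project contains;
// the class name and path are illustrative placeholders.
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }
}
```

The same source can then be built either as an ordinary JAR or, in native mode, as the bigger but JRE-free executable mentioned above.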

Keep Your Secrets Secret – Kerberos in Java

Kerberos basics were explained, including an example captured with Wireshark. Then the talk focused on the GSS-API (Generic Security Service API). I have learned that the GSS-API is a framework somewhat similar to the TLS framework, so the security mechanism implementation is pluggable. We also saw an example of Kerberos used inside a JAAS client-server application.
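To give an idea of what the GSS-API looks like on the Java side, here is a hedged client-side sketch that starts a Kerberos-secured context. The service principal name is a placeholder, and the JAAS login (e.g. via Krb5LoginModule) is assumed to have happened already; this is not the code shown in the talk.

```java
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

// Client-side sketch: produces the first Kerberos token for a service.
// "HTTP@server.example.com" is a placeholder service principal.
public class GssClientSketch {

    public static byte[] firstToken() throws GSSException {
        Oid krb5Mechanism = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 OID
        GSSManager manager = GSSManager.getInstance();
        GSSName serverName = manager.createName("HTTP@server.example.com",
                GSSName.NT_HOSTBASED_SERVICE);
        GSSContext context = manager.createContext(serverName, krb5Mechanism,
                null, GSSContext.DEFAULT_LIFETIME);
        context.requestMutualAuth(true);
        // The returned token is sent to the server, which feeds it into its own
        // acceptSecContext call; the handshake may take several round trips.
        return context.initSecContext(new byte[0], 0, 0);
    }
}
```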

CodeReady Containers: Run OpenShift 4 locally

Unfortunately I did not get much from this presentation. We saw how to set up an OpenShift console on a local machine. It is quite a complex task.

Cloud native CD/CI: Tekton and Jenkins X

Excellent talk by Paolo Carta! The complex topic was explained in a very clear way and there were many funny moments too 🙂 I have learned that classic Jenkins CI is not well suited for the cloud. A new project called Tekton was introduced; it was designed to consume fewer resources and to be Kubernetes native.

Tekton pipelines are cloud native, orchestrated by Kubernetes and decoupled. We saw a demo that included a Kubernetes cluster setup. Tekton's YAML configuration is based on the Kubernetes resource model. Several recommendations and best practices were mentioned, for example keeping separate repositories for code and configuration. Tekton is driven from the command line; we saw how to list tasks and how to work with logs.

Defining Kubernetes pipelines is complex and error prone, which is why the Jenkins X project emerged. It is a new project, an abstraction on top of Tekton. We were introduced to GitOps, which means every piece of configuration lives in Git. Jenkins X is opinionated: it has jx-staging and jx-production environments by default and uses a single Kubernetes cluster. Multiple clusters are not supported by Jenkins X yet.

Do we need a Dockerfile? Not necessarily; we can use Kaniko or Skaffold instead.

Overall, an excellent presentation: clear, illustrative and practical.

Dogfooding Tekton project with Tekton

Tekton is based on declarative pipelines defined as Kubernetes custom resources (CRDs). It is possible to scale pipelines on demand with containers in Kubernetes. Images can be built with Kubernetes tools. Applications can be deployed to multiple platforms such as serverless, virtual machines and Kubernetes. Tekton also has a powerful command line tool.

The pipeline concepts were introduced: Step, Task (runs in a pod), Pipeline, PipelineResource, PipelineRun and TaskRun.

To sum it up, Tekton is a platform for running CI on top of Kubernetes. Tekton itself runs inside Kubernetes, but it can be deployed anywhere.

Deploy Complex Application Stacks with Ansible

Ansible basics were summed up, along with its main idea of striving for idempotency. We saw a demo deploying a complex example stack: Elasticsearch, Logstash, Kibana and Nginx.

The demo showed how the deployment is done, how to work with Ansible system facts, conditional variables and vars_files, and how to use the ansible-lint project to test your playbooks.

Most useful browser APIs

This talk was a bit different from the others, but I saw several concepts useful for frontend developers, like file drop handling options, long polling, WebSockets, mouse cursor locking and a bit of canvas.

Saturday

Unfortunately I missed the Quarkus workshop by Daniel Oh. However, I'll definitely take a look once it's available online.

Progressive migration from Jakarta EE to Microprofile

I liked the example with Italian cuisine and its comparison to code writing practices: spaghetti code in the 90s, monoliths (lasagne) in the 00s and microservices (tortelli) in the 10s.

Two migration methods were introduced: progressive migration and migration by adding MicroProfile to the monolithic application.

We saw a simple example using MicroProfile. The output of the MicroProfile metric (invocation count) was quite verbose, so we were told it can be effectively consumed by Prometheus (configured in YAML). Internal application calls can be measured by a metric too, not just REST APIs (the findById method example); see the sketch below.
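A minimal sketch of how such an invocation-count metric might look with MicroProfile Metrics; the repository and entity names are my own placeholders, not code from the talk.

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.metrics.annotation.Counted;

// Placeholder domain class, not from the talk.
class Customer {
    final Long id;
    Customer(Long id) { this.id = id; }
}

@ApplicationScoped
public class CustomerRepository {

    // MicroProfile Metrics counts every invocation of this method, so an
    // internal application call is measured, not just a REST endpoint.
    @Counted(name = "findByIdInvocations",
             description = "Number of findById calls")
    public Customer findById(Long id) {
        // real lookup logic would go here
        return new Customer(id);
    }
}
```

The counter then appears in the metrics output that Prometheus can scrape.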

Another interesting example was OpenTracing with Jaeger. It is used to trace calls to services and internal methods in a microservices architecture. Tracing is similar to debugging, but we cannot see the call data; optionally we can set our own 'value' argument on the @Traced annotation. With Fault Tolerance we can add a retry mechanism and set the maximum number of retries. Retries work on REST API calls, but they are also supported on CDI beans. Other features introduced were OpenAPI documentation, RestClient and health checks.
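A hedged sketch of how tracing and retries can be combined on a CDI bean; the class and method names are my assumptions and not taken from the talk.

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.opentracing.Traced;

// Hypothetical service bean; the names here are placeholders.
@ApplicationScoped
@Traced // every business method of this bean produces a tracing span
public class InventoryService {

    // Fault Tolerance retries the call up to three times when it fails;
    // this works on plain CDI beans, not only on REST endpoints.
    @Retry(maxRetries = 3)
    public int stockLevel(String productId) {
        // a real implementation would call an unreliable remote service here
        return 42;
    }
}
```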

I have already worked a bit with MicroProfile, so it was nice to see its features in practice and to hear several best practices. I asked many questions.

Building reactive microservices with MicroProfile

The reactive concept was demonstrated on a coffee shop example: customers ordered their coffee and baristas prepared it asynchronously. We could see the asynchronous (reactive) behavior very well in a live web page example.

Technically the talk was very interesting: it contained a lot of live coding and logic implementation, including integration with Apache Kafka.
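A minimal sketch of how such a reactive flow can look with MicroProfile Reactive Messaging over Kafka; the channel and class names are my assumptions, not the code shown in the talk.

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

// Hypothetical barista bean; the channel names are placeholders that would be
// mapped to Kafka topics in the application configuration.
@ApplicationScoped
public class Barista {

    // Consumes coffee orders from the "orders" channel and asynchronously
    // emits prepared coffees to the "coffees" channel.
    @Incoming("orders")
    @Outgoing("coffees")
    public String prepare(String order) {
        return "coffee for " + order;
    }
}
```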

Kogito workshop – From zero to cloud ready

This workshop was a bit special for me. I directly contributed to Drools and to jBPM 5 and jBPM 6 in the past. The new Kogito project is another evolution of these two projects, and I would describe it as a cloud-ready combination of both jBPM and Drools.

Simply put, BPMN process models are now understood directly and can be implemented as microservices.

This was an interactive workshop so my hands ‘got dirty’ as I downloaded VS Code and a few extensions and used them to implement a business process model as a cloud ready microservice. Kogito uses Quarkus internally for compilation of the microservices. The resulting microservices are native and cloud-ready.

Summary

The information technology world is changing very quickly. Last week I saw a presentation by a modern bank about its cloud architecture, which lets it deliver fast and scale well. At DevConf.cz I deepened my understanding of what the cloud makes possible and learned a lot of new things. All the talks were very interesting for me.