
Flagger – Get Started with Istio and Kubernetes

Fabian Piau | Saturday May 2nd, 2020 - 06:40 PM


Update
October 17th, 2020: Updated to use newer versions (Helm 3, Kubernetes 1.18, Istio 1.7).

This series of articles is dedicated to Flagger, a tool that integrates with Kubernetes, the popular container orchestration platform. Flagger automates deployments and brings you one step closer to a continuous deployment process.

This article is the first of the series, and the only one where we won’t use Flagger yet… It will walk you through running a Kubernetes cluster on your local environment and deploying an application that is accessible via an Istio gateway.

Note
This is a hands-on guide that can be followed step by step on macOS; it will require some adjustments if you are using a Windows or Linux PC. It is important to note that this article does not go into detail and only touches on the concepts & technologies, so if you are not familiar with Docker, Kubernetes, Helm or Istio, I strongly advise you to check some documentation yourself before reading on.


Docker

Install Docker via the Docker Desktop for Mac application; you can refer to the official installation guide. For Windows users, the equivalent “Docker for Windows” application exists.

In the next part, we will also use Docker for Mac to set up our local Kubernetes cluster. Note that this tutorial has been tested with Docker for Mac 2.4.0.0, which includes a Kubernetes cluster in version 1.18.8, the latest at the time of writing.

Technology moves fast, so if you use a different version, I cannot guarantee that the commands used in this series will work without adjustment.


Mirror HTTP Server

First, a few words about Mirror HTTP Server, the application we will use in this series of articles.

MHS is a very simple Node.js application built with the Express framework; it lets you control the HTTP response you receive by setting specific HTTP headers on the request. The Docker image is publicly available on Docker Hub. You can check the GitHub repo of the project to find out more; please note that I am not the author.

This little app is exactly what we need to test the capabilities of Flagger, as it can simulate both 200 OK and 500 Internal Server Error responses.

Let’s pull the Docker image:

docker pull eexit/mirror-http-server

And run a new container that uses it:

docker run -itp 8080:80 eexit/mirror-http-server

Then let’s make sure it is functioning properly:

curl -I 'http://localhost:8080'

You should receive an HTTP 200 OK response:

HTTP/1.1 200 OK
X-Powered-By: Express
Date: Fri, 01 May 2020 17:57:17 GMT
Connection: keep-alive

While:

curl -I -H X-Mirror-Code:500 'http://localhost:8080'

will return an HTTP 500 response:

HTTP/1.1 500 Internal Server Error
X-Powered-By: Express
Date: Fri, 01 May 2020 17:57:45 GMT
Connection: keep-alive

For simplicity, we use the curl command, but you can use your favourite tool, e.g. Postman.


Kubernetes

Now that you’ve installed Docker for Mac, having a Kubernetes cluster running locally will be a simple formality. You just need to check a box!

Enable Kubernetes with Docker for Mac

If the light is green, then your Kubernetes cluster has successfully started. Please note, this requires a significant amount of resources, so don’t panic if the fan is running at full speed and it takes a bit of time to start…


Kube dashboard

We will install our first application in our Kubernetes cluster.

Kubernetes via Docker does not come with the dashboard by default; you have to install it yourself. The dashboard is very practical: it provides a graphical view of what is going on in your cluster and saves you from typing many kubectl commands.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

The dashboard is protected, but you can access it with the default user. You can generate its token via this command:

kubectl -n kube-system describe secret default | grep token: | awk '{print $2}'

Copy it.

You will need to re-run this command and/or re-copy the token if your session has expired, which happens when you don’t interact with the dashboard for a little while.
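The grep/awk pipeline in the token command above is plain text processing. As a sketch (against made-up sample output, with a fake token value), here is exactly what it extracts:

```shell
# Sample text mimicking 'kubectl -n kube-system describe secret default'
# (the secret name and token value are made up for illustration)
sample_output='Name:         default-token-abcde
Namespace:    kube-system
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiJ9.sample-token'

# Same extraction as the real command: keep the token: line, print the 2nd field
echo "$sample_output" | grep token: | awk '{print $2}'
```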

Finally, create a proxy to access the dashboard from the browser (this command will need to run indefinitely):

kubectl proxy

If you access http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login and use the token that you copied to authenticate, you should see this screen.

Kube Dashboard


Helm

We will use Helm to install Istio and the MHS application in our cluster. Helm is a bit like Homebrew, but for Kubernetes: it saves you from typing many kubectl apply commands. We are using version 3. For the installation of Helm itself, we use Homebrew, a handy package manager for Mac.

Let’s install Helm 3 with:

brew install helm@3

To verify that Helm has been installed:

helm version

You should have a similar output (note that Helm 3.3.4 is the latest version at the time of writing):

version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"dirty", GoVersion:"go1.15.2"}


Istio & Prometheus

Now, we are going to install the Istio Service Mesh. For full explanations of the benefits of using a Service Mesh, I invite you to read the official documentation.

First of all, you must increase the memory allocated to your Kubernetes via Docker, otherwise you will run into deployment issues. Your laptop’s fans will recover, don’t worry…

Here is my configuration:

Kubernetes Configuration in Docker for Mac for Istio

I followed the Docker Desktop recommendations for Istio.

Let’s go and install Istio 1.7.3 (the latest version at the time of writing). First, download the source:

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.7.3 sh -

cd istio-1.7.3

Add the istioctl client to your path:

export PATH=$PWD/bin:$PATH

Install Istio with the provided client; we use the demo profile:

istioctl install --set profile=demo

After a few minutes, you should get a message confirming that Istio has been installed. And voilà!

To install the latest version of Istio, you can simply replace the first line with curl -L https://istio.io/downloadIstio | sh -.

Add Prometheus, as it is required by Flagger:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/addons/prometheus.yaml

From the Kube dashboard, verify that a new istio-system namespace has been created and that it contains the Istio tools, including Prometheus.

Istio is deployed in your cluster

Why is Prometheus important? Because it is an essential component for Flagger: it provides the metrics showing whether the new version of your application is healthy, so Flagger knows when to promote or roll back a version. I will come back to this in detail in the next article.


Deploying Mirror HTTP Server

Before deploying MHS, let’s create a new namespace called application; we don’t want to use the default one at the root of the cluster (this is good practice). The name is a bit generic, but sufficient for this tutorial; in general, you would use the name of the team or of a group of features.

kubectl create ns application

Do not forget to activate Istio on this new namespace:

kubectl label namespace application istio-injection=enabled

To deploy MHS, I created a Helm chart.

This chart was created with the helm create mhs-chart command, then updated to use the latest image of MHS. I also added a gateway.yaml file to configure the Istio gateway so the application is accessible from outside the cluster.
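I am not reproducing the exact file here (check gateway.yaml in the repo for the real thing), but an Istio gateway definition for a host like mhs.example.com typically looks along these lines, with illustrative resource names:

```yaml
# Illustrative sketch only – see gateway.yaml in the mhs-chart repo for the real file
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mhs-gateway
spec:
  selector:
    istio: ingressgateway        # use Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "mhs.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mhs
spec:
  hosts:
    - "mhs.example.com"
  gateways:
    - mhs-gateway
  http:
    - route:
        - destination:
            host: mhs            # the MHS Kubernetes service
            port:
              number: 80
```

The Gateway opens port 80 on the Istio ingress for that host, and the VirtualService routes the matching traffic to the MHS service, which is why the curl commands later in this article set the Host header.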

Clone the chart repo:

git clone https://github.com/fabianpiau/mhs-chart.git

And install MHS:

cd mhs-chart
helm install mhs --namespace application ./mhs

After a few moments, if you look at the dashboard, you should see 1 replica of MHS in the namespace application.

MHS is deployed in your cluster

You now have 1 MHS pod running in your Kubernetes cluster. The pod is exposed to the outside world via an Istio gateway.

To test it, use commands similar to those we ran against the Docker container earlier:

curl -I -H Host:mhs.example.com 'http://localhost'

You should receive an HTTP 200 OK response that was handled by Envoy, the proxy used by Istio:

HTTP/1.1 200 OK
x-powered-by: Express
date: Fri, 01 May 2020 17:37:19 GMT
x-envoy-upstream-service-time: 17
server: istio-envoy
transfer-encoding: chunked

And:

curl -I -H Host:mhs.example.com -H X-Mirror-Code:500 'http://localhost'

should return an HTTP 500 response:

HTTP/1.1 500 Internal Server Error
x-powered-by: Express
date: Fri, 01 May 2020 17:38:34 GMT
x-envoy-upstream-service-time: 2
server: istio-envoy
transfer-encoding: chunked

Congratulations, you’ve come to the end of this first tutorial!

For your information, you can also access MHS with your favourite browser if you first run a port-forward command to expose the pod:

export POD_NAME=$(kubectl get pods --namespace application -l "app.kubernetes.io/name=mhs,app.kubernetes.io/instance=mhs" -o jsonpath="{.items[0].metadata.name}")

kubectl port-forward --namespace application $POD_NAME 8080:80

Then, navigate to http://localhost:8080/.

You should see a… blank page. This is normal: MHS does not return a body in the response, so there is no HTML output!


Cleaning up resources

You can delete the MHS application and its namespace.

helm delete mhs --namespace application

kubectl delete namespaces application

We don’t remove Istio / Prometheus because we will need them in the next article, but if you want to free up some resources, you can use these commands:

kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/addons/prometheus.yaml

istioctl manifest generate --set profile=demo | kubectl delete -f -

kubectl delete namespaces istio-system


What’s next?

The next article will focus on installing Flagger and using different versions of MHS to try canary deployments. Stay tuned! In the meantime, you can stop the Kubernetes cluster by unchecking the box and restarting Docker Desktop. Your computer deserves a break.


A Java 11 migration successful story

Fabian Piau | Thursday December 27th, 2018 - 11:45 AM


This post summarizes the work we have achieved within my team to migrate our microservices from Java 8 to Java 11 for the website Hotels.com.

In summary, for each of the services we own, we followed these steps:

  • Make the code compile with Java 11
  • Run the Java 11 compatible service on Java 8
  • Run the service on Java 11

In reality, we had some extra steps, because when we began the migration, Java 11 was not released yet; we could only use Java 10.
The assumption was that if the code compiled on Java 10, there would not be much work left to migrate to Java 11, as the biggest change, modularity, was introduced with Java 9 and the Jigsaw project. Thankfully, that was the case!


Star Wars lightsaber upgrade...


1. Make the code compile with Java 11

This was the longest part. Indeed, we had to bump the versions of most of the frameworks and tools we are using. In particular, we had to handle the migration from Spring Boot 1 to 2 and Spring 4 to 5. As these are major versions, we had to fix a couple of breaking changes.

Spring Boot

For Spring Boot 2, the official migration guides for Spring Boot 2.0 and Spring Boot 2.1 are well written and detailed.

  • The profile loading has evolved
  • The relaxed binding of properties is a bit less relaxed
  • Some properties were renamed and others made unavailable (e.g. the security.basic.enabled property has to be replaced with a WebSecurityConfigurerAdapter)
  • Some endpoints were renamed (e.g. the actuator healthchecks)
  • Bean overriding is now disabled by default, which is something we were relying on in our integration tests; we had to re-enable it with the new spring.main.allow-bean-definition-overriding property.
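For that last point, re-enabling bean overriding is a one-liner in the configuration, e.g. in application.properties:

```properties
# Spring Boot 2.1+: bean definition overriding is disabled by default,
# re-enable it (here, for our integration tests)
spring.main.allow-bean-definition-overriding=true
```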

Spring

The migration to Spring 5 was quite straightforward from a code point of view, with a few minor changes. The hard part was that the project is legacy: we had to deal with a complex Spring XML configuration and the migration to Dropwizard Metrics 4.

Misc

A few frameworks are still not compatible with Java 9+, as they are no longer actively maintained by the community. In our case, we had to find a workaround for Cassandra Unit. We did not want to invest time in changing testing framework as we are planning to move to DynamoDB.

We also had to deal with Maven dependency hell, because some required dependencies were pulling in old dependencies not compatible with Java 9+. In most cases, adding some exclusions in the POM solved it.

Local environment

Locally, we added a simple set of aliases to our bash profile to switch between Java versions.

alias setjava8="export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_172.jdk/Contents/Home/"
alias setjava10="export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-10.0.2.jdk/Contents/Home/"
alias setjava11="export JAVA_HOME=/Library/Java/JavaVirtualMachines/openjdk-11.jdk/Contents/Home/"
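Each alias simply points JAVA_HOME at a different JDK. As a quick sketch (with made-up paths, and functions instead of aliases, since aliases are not expanded in non-interactive shells), you can see the effect:

```shell
# Sketch with made-up JDK paths: each "alias" just swaps JAVA_HOME
setjava8()  { export JAVA_HOME=/fake/jdk1.8.0_172/Contents/Home; }
setjava11() { export JAVA_HOME=/fake/openjdk-11/Contents/Home; }

setjava8
echo "$JAVA_HOME"    # the Java 8 home
setjava11
echo "$JAVA_HOME"    # now the Java 11 home
```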

In IntelliJ, changing the Java version can be done from the project settings (“Project SDK” dropdown).

CI environment

For the Continuous Integration side (we are using Bamboo), we updated the agent to use Java 11.
We noticed that it is not possible to have different agent versions for the branch plans and the master plan, as the plan configuration is global. It means updating the agent to Java 11 would break master if someone else pushed a change to master (e.g. a new feature or a bug fix totally independent from the Java 11 migration).
To mitigate this issue and avoid a red build, it was important for us to make sure the project compiled with Java 11 and all tests passed locally before updating the agent, so we could merge the Java migration pull request quickly. Another option would have been to temporarily set the agent back to Java 8 once the branch plan was green on Java 11, without forgetting to set it back to Java 11 just before the merge.


2. Run the Java 11 compatible service on Java 8

Once everything was fixed, merged and the master build was green, we had to ensure the Java 11 compatible version was running fine in our test environments, basically make sure nothing was broken… We had unit, integration and end-to-end tests, so our level of confidence was quite high. Just to be safe, we did some extra exploratory and manual testing on the API with some exotic and edge case requests to make sure it was behaving correctly. We also ensured the logs and Grafana dashboards were fine.

The next step was to push the new version to production. The service was still running on Java 8 even if the code was Java 11 compatible (and compiled); we did not want to introduce too many changes at the same time, we don’t like risky releases after all. We handled this release with extra care because of the multiple refactorings and version bumps. After looking at the Grafana dashboards for a few days, comparing metrics before and after the migration, we confirmed that all went well.


3. Run the service on Java 11

The ultimate goal was to run the service on Java 11. In theory, it would be as simple as updating the Dockerfile to use the Java 11 image and pushing the artifact to production. However, in practice it was not that simple…

First of all, we had to update the JVM arguments (the -d64 option is deprecated and will prevent the service from starting; we also had to update the GC logging arguments).

Then, we quickly realized that the service logs had disappeared from Splunk in our on-premises production environment: the logs were actually showing up in the future, while everything was fine on AWS. We fixed this “temporal distortion” ;) by updating the logback date pattern from %d{ISO8601} to %d{yyyy-MM-dd'T'HH:mm:ss.SSSZ,UTC}.
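As an illustration (the rest of the pattern is made up, only the %d part matters), the change in a logback encoder looks like this:

```xml
<!-- Illustrative logback encoder: the fix is the explicit date format
     with a UTC timezone instead of the local-time ISO8601 shorthand -->
<encoder>
    <!-- Before: <pattern>%d{ISO8601} %-5level %logger - %msg%n</pattern> -->
    <pattern>%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ,UTC} %-5level %logger - %msg%n</pattern>
</encoder>
```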

We had another weird error, VerifyError: Bad type on operand stack, that arose during the deployment to production: AppDynamics was preventing some instances from starting due to some exotic bytecode manipulation. For some reason, it was fine on prod-canary, then started to fail after a successful deployment on a couple of instances! We had to disable AppDynamics, which was fine as we are not using this tool in our team.

As we were moving to Java 11, we also had to update some of our Grafana dashboards to reflect the use of a new Garbage Collector, G1.


Conclusion

Today, 3 services providing user notifications (using Spring Boot 2) and 1 service providing the website header & footer (using Spring 5) have been running smoothly on Java 11 for several weeks now. They use the default G1 Garbage Collector, and we did not encounter any weird behavior related to memory footprint or any other performance issue. On the other hand, we did not see any improvement in our response times either. But we are now on a Java LTS (Long Term Support) release; the migration was a success.

What’s next? Java 12 is going to be released in March 2019. At this time, we still don’t know if we will use this version or wait for the next Java LTS. It will probably depend on which features are included.
