Workload Reachability 2: Service Types

The Service object is the foundation for DNS and load balancing in Kubernetes, and different Service types are available to fit various use cases, such as allowing external traffic to reach cluster-internal workloads or making an external workload reachable from within the cluster.

The previous blog post introduced you to the basics of the Service object and the problem it solves in Kubernetes, so let’s now spice things up a little and introduce the various Service types to the party!
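
For a first taste of where this is going, here is a minimal, purely illustrative sketch of a Service that uses type NodePort to make a workload reachable from outside the cluster; the name, label, and port numbers are assumptions made up for this example, not values from the post:

apiVersion: v1
kind: Service
metadata:
  name: checkout-nodeport      # hypothetical name, for illustration only
spec:
  type: NodePort               # other types: ClusterIP (the default), LoadBalancer, ExternalName
  selector:
    app: checkout              # traffic goes to ready Pods carrying this label
  ports:
    - port: 80                 # port the Service exposes inside the cluster
      targetPort: 8080         # port the selected Pods actually listen on
      nodePort: 30080          # port opened on every node; must fall into the NodePort range (30000-32767 by default)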

Workload Reachability 1: The Service Object

Kubernetes’ dynamic nature makes it somewhat hard to communicate with Pods running a workload – those Pods can come and go quickly and frequently, so simply using their IPs won’t work very well. This begs for a kind of abstraction layer between the workload Pods and those wishing to consume them – ideally one that provides a DNS name and some load-balancing capabilities, too…

This blog post will introduce you to the Service object, Kubernetes’ way of implementing service discovery and load balancing that is both reliable and easy for clients to use.
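
As a rough sketch of that abstraction layer (all names and ports below are invented for illustration), a plain Service gives a set of Pods a stable virtual IP, a DNS name, and simple load balancing across the Pods matched by its label selector:

apiVersion: v1
kind: Service
metadata:
  name: orders                 # hypothetical name; clients can resolve it via cluster DNS
spec:
  selector:
    app: orders                # requests are load-balanced across all ready Pods with this label
  ports:
    - port: 80                 # port clients talk to
      targetPort: 8080         # port the workload containers listen on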

On Modelling Clay And Glue

Labels are a fundamental concept in Kubernetes – they give users the flexibility to group their applications as they see fit, and they are the reason Kubernetes can be a decoupled system of many components working together. This blog post will introduce you to why that flexibility is needed, the basics of labels, and how to use them.

Worry not, dear reader: despite the heading of this blog post indicating otherwise for added catchiness, we are still in the Kubernetes world! The following blog post will introduce you to something called labels (and a bit about annotations, too), and in so doing uncover the mystery of just what Kubernetes might have to do with modelling clay and glue.
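
As a small, purely hypothetical sketch (the label keys and values are assumptions, not taken from the post), labels are nothing more than key-value pairs attached to an object’s metadata:

apiVersion: v1
kind: Pod
metadata:
  name: billing-api            # hypothetical Pod, for illustration only
  labels:
    app: billing               # arbitrary key-value pairs chosen by the user
    tier: backend
    environment: staging
spec:
  containers:
    - name: billing
      image: example.com/billing:1.0.0   # placeholder image reference

A label selector then picks those objects back out again – for example, kubectl get pods -l app=billing,tier=backend – and the very same mechanism is what lets other components, such as a Service, find “their” Pods.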

Giving GraphQL A Closer Look

In this blog post, we’ll build a small demo application to explore and highlight the advantages of GraphQL using Spring Boot, Hibernate, and some very handy GraphQL dependencies.

We’ve established previously that GraphQL’s emphasis on types and fields constitutes a profound paradigm shift compared to REST, one that makes GraphQL APIs fantastically easy for clients to consume: clients can (a) ask the API for precisely the data they need, and (b) traverse arbitrary nesting levels, provided the server’s implementation of the business domain’s data model permits it.
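
To make that concrete with a purely hypothetical schema (the author and book fields below are invented for this example and are not taken from the demo application), a client could fetch exactly the fields it cares about across two nesting levels in a single request:

query {
  authors {
    name                # only the fields listed here are returned
    books {             # nested level: each author’s books arrive in the same round trip
      title
      yearPublished
    }
  }
}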

Meet GraphQL!

The server-centric approach of REST sometimes makes REST APIs difficult to query elegantly. GraphQL and its Schema Definition Language encourage thinking differently about data exchange by placing much more emphasis on the client’s perspective, thereby addressing REST’s drawbacks.

We have examined elsewhere that REST’s server-focused approach can make APIs built in adherence to it unnecessarily clunky and inelegant for clients to handle. In particular, we’ve established that it’s sometimes hard for clients to elegantly access nested resources and to maintain – specifically when attempting the former – good “response efficiency”, i.e. a good ratio between the amount of information sent back by the server and the amount actually used.

To REST Or Not To REST

REST has become the de-facto standard for building modern APIs, but it’s not without its drawbacks. Now that alternatives are available, it seems reasonable to question that standard.

In many teams I’ve worked with so far, the question of whether or not to use the Representational State Transfer (REST) architectural style to build the API of some new application or service wasn’t even raised – using REST was just obvious. To REST, then, seemed almost as natural as breathing, and seriously questioning it would probably have been considered something like heresy.

The Pod: Not Only A Group Of Whales

In the whale world, a pod is a group of whales, and carrying on Docker’s whale theme, Kubernetes calls a group of containers a Pod. Pods are the smallest units that can be deployed to a Kubernetes cluster, and in this blog post, we’ll get to know their fundamentals.

In this blog post, we’ll deploy our first workload to a hosted Kubernetes cluster using an abstraction called a Pod.
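
For orientation, a minimal Pod manifest might look like the following sketch; the name and image are placeholders chosen for illustration, not values from the post:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod              # hypothetical name
spec:
  containers:
    - name: hello
      image: nginx:1.25        # any container image works; nginx is just a stand-in

Handing this manifest to the cluster, for instance with kubectl apply -f pod.yaml, asks Kubernetes to schedule the Pod onto a suitable node and run its container there.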

Meet Your Helmsman!

Kubernetes, from an Ancient Greek word meaning helmsman or pilot, has become the de-facto standard for running containerized application workloads in the cloud. This blog post provides a short introduction and explains why it’s such a powerful platform for microservices.

The following section will provide an introduction to what Kubernetes is and how it helps realize the advantages of a microservices architecture. But, first and foremost: Kubernetes is awesome! Therefore, I’m very delighted you joined me on this journey to learn how we can leverage its powers.

Guidelines To Reduce Coupling And Increase Cohesion

Orthogonality improves a software system’s quality attributes, and low coupling and high cohesion are cornerstones for achieving orthogonality. Therefore, being aware of the most important guidelines to reduce coupling and increase cohesion is crucial for one’s work as a software developer.

The previous blog post talked about the importance of building loosely coupled, highly cohesive classes and modules as a means to achieve orthogonality and hence improve the overall quality attributes of the resulting software system. In the following sections, we’re going to build on that knowledge by introducing guidelines for achieving low coupling and high cohesion. As always, you can find the full source code for the given examples in this GitHub repository.

Google Jib And Why You Will Love It

Google Jib is a great tool for building Docker images from Maven and Gradle applications – it integrates easily with both and it’s super simple to set up and use. Therefore, it is the perfect companion on anyone’s journey to assemble an application from containerized microservices.

In the context of microservices, you’ll often find yourself building Docker images or, ideally, operating some tool that does it for you. By far the best build tool I’ve encountered is Google Jib – it’s very convenient to use, you don’t need a Docker daemon running locally (although you can pass the build to a local Docker daemon), and its smart arrangement of Docker filesystem layers ensures that only the layer containing modified source code has to be rebuilt, thus speeding up the overall build-and-push process. In this short blog post, we’ll take a look at how to use Google Jib in a Maven project to build and push a Docker image.
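
To illustrate how little setup that typically requires, here is a sketch of the relevant pom.xml excerpt; the image name is a placeholder, and the version should be whatever the current Jib release is:

<!-- hypothetical pom.xml excerpt: registers the Jib Maven plugin -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>                 <!-- example version; use the current release -->
  <configuration>
    <to>
      <image>registry.example.com/my-team/my-service</image>  <!-- placeholder target image -->
    </to>
  </configuration>
</plugin>

With that in place, mvn compile jib:build builds the image and pushes it straight to the configured registry without a local Docker daemon, while mvn compile jib:dockerBuild loads it into a local daemon instead.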
