in Hazeltest on Hazeltest, Hazelcast, Testing, Chaos monkey
In case you’ve always wanted a pet monkey, this one is definitely not what you’re looking for, because this particular specimen is one noisy fellow indeed. Designed to wreak havoc among the unsuspecting members of your Hazelcast cluster, it will let you test your cluster’s resilience before your production environment gets a chance to do it.
The concept of a Chaos Monkey was first introduced, in the context of Hazeltest, in the previous blog post as a means to "spice up" the primary job of Hazeltest's runners, namely, to generate load on the Hazelcast cluster under test. The Chaos Monkey was described as an automated actor within Hazeltest whose goal is to deliberately wreak havoc among Hazelcast cluster members in order to test the cluster's resilience to member failures.
in Hazeltest on Hazeltest, Hazelcast, Testing, Dev update
Curious about the news on Hazeltest and the two additional load scenarios previously hinted at? Then this new blog post has you covered!
The previous blog post on Hazeltest featured, in its more practical part, a basic How to Hazeltest, briefly demonstrating how the application can be configured, as well as the first of three load generation scenarios. The following paragraphs, then, will pick up the action from there and walk you through the remaining two load generation scenarios. Beyond that, we’re going to spend some time looking at the improvements made to Hazeltest since the previous blog post was published, as well as at the (likely) road ahead for Hazeltest.
in Hazeltest on Hazeltest, Hazelcast, Livestream
What if you wanted to check out the contents of the first livestream on Hazeltest beforehand? Or revisit some concept introduced therein afterwards? If such is the case, then this blog post might serve you well.
The content accompanying livestream 1 was published on hazelcast.com:
in Kubernetes on Kubernetes, RKE, Cluster, Bare-metal
What’s better than a certain number of RKE cluster nodes? Well, more cluster nodes, of course!
For an upcoming livestream about Hazeltest, the RKE cluster I use needs a bit more juice (a lot more, actually). In the following sections, you’ll see that introducing additional Linux machines as new nodes to an existing RKE cluster is very easy, and you’ll also get acquainted with a cool Kubernetes dashboard called Skooner. Finally, we’re going to use said Hazeltest to put the new nodes to the test.
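For readers unfamiliar with RKE: adding a node essentially boils down to appending an entry to the `nodes` list in the cluster’s `cluster.yml` and running `rke up` again. The following is a minimal sketch of what that might look like — the addresses, user name, and SSH key path are placeholders for illustration, not values from the actual cluster:

```yaml
# cluster.yml (excerpt) -- one existing node plus one newly added worker
nodes:
  - address: 192.168.8.10          # existing node
    user: rke-user
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 192.168.8.11          # newly introduced Linux machine
    user: rke-user
    role: [worker]
    ssh_key_path: ~/.ssh/id_rsa
```

With the file updated, running `rke up` reconciles the cluster state so the new machine joins as a worker node.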
in Hazeltest on Hazeltest, Golang, Refactoring, Software engineering
The focus with Hazeltest in the past weeks has been to implement a certain feature set so I can use the application as early as possible in a project with my current client. While this goal was achieved, there is, of course, still “no free lunch”, and the price tag here was a couple of things I didn’t like about the code. Because I’m still early in my Go learning journey, these things, combined with the new Go-foo I’ve picked up in the past weeks, provided just the perfect opportunity to get out the gardening gloves and do a little refactoring.
Since the previous blog post was published, the Hazeltest source code has seen both a couple of refactorings and some new features. In the following sections, I’d like to talk about the refactorings I’ve made in the code; the new features will be covered in a dedicated blog post.
in Hazeltest on Hazelcast, Hazeltest, In-memory data grid, Helm, Kubernetes, Testing, Map tests
In its current state, Hazeltest can automate the process of generating load in the maps of a Hazelcast cluster. By means of a simple, three-scenario example, this blog post demonstrates how Hazeltest and its configuration options can be used for this purpose with the goal of finding weaknesses in the given Hazelcast map configurations.
What is this Hazeltest you’ve recently talked about and how can I make use of it even in this very early stage of development? Which configuration options do I need to tweak in order to configure the two map-related runners the application offers? And: How does all this help identify incorrect or incomplete Hazelcast map configurations?
in Hazeltest on Hazelcast, Hazeltest, In-memory data grid, Helm, Kubernetes, Testing, Map tests
What if the release candidate whose production fitness you’re supposed to test is a Helm chart describing a Hazelcast cluster? Well, ideally, there’s a little testing application that puts realistic load on a Hazelcast cluster, thus facilitating the discovery of misconfigurations or other errors that might have crept into the chart, helping you to assess the release candidate’s fitness more effectively and more comfortably.
If you’re reading this, it’s likely you work in IT like me, and so you may have faced a situation like the following: Something – let’s call it the release candidate – needs to be properly tested before it’s released. Sound familiar? If so, then you also may have asked yourself a question akin to the following: How can I make sure the release candidate is actually fit for release?
in Kubernetes on Kubernetes, ReplicaSet, Scaling
The ReplicaSet is a very useful basic building block in Kubernetes that other objects, like the Deployment object, rely on. As a kind of Pod manager running in your cluster, a ReplicaSet makes sure the desired number of replicas of a certain Pod is always up and running. Its functionality is based on the notion of desired vs. observed state, so it also provides a fantastic opportunity to talk about the basics of reconciliation loop awesomeness.
In case you have taken a look at some of the manifest files used in the scope of the previous blog posts (such as this one, for example), you’ll no doubt have noticed the object employed to run the sample workload is the Deployment object. The way it’s set up – having a Pod template baked into it – may seem to imply the Deployment manages these Pods directly, but that’s not the case – in fact, the Deployment manages and configures a ReplicaSet, and it is the ReplicaSet that manages the Pods. As it turns out, in Kubernetes, the ReplicaSet is a basic building block for running and managing workloads that other, higher-level objects – such as the Deployment object – rely upon. In order to lay the foundation for covering the latter in future content, the following sections will introduce you to the ins and outs of the ReplicaSet object – the problem it solves, how it works, its specification, and how to interact with it.
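To make the desired-state idea concrete, here’s a minimal sketch of a ReplicaSet manifest (names and image are illustrative, not taken from the post): the `replicas` field expresses the desired state, the embedded Pod template describes what to run, and the `selector` tells the ReplicaSet which Pods it owns.

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3                  # desired state: three Pods at all times
  selector:
    matchLabels:
      app: nginx               # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```

Whenever the observed state drifts from these three replicas – say, a node dies and takes a Pod with it – the ReplicaSet controller’s reconciliation loop creates a replacement Pod to close the gap.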
in Kubernetes on Kubernetes, RKE, Cluster, Bare-metal
My notebook’s mousepad breaking created the perfect opportunity to finally put into practice a long-held plan: to install my very own, bare-metal Kubernetes cluster. The Rancher Kubernetes Engine turned out to be a great fit for that because, since it’s all Docker-based, its installation and operation are comparatively simple. In this blog post, you’ll get introduced to the installation process as well as the preparation work preceding it.
Recently, I’ve done a little experiment: How useful is a fake Chinese MacBook Air called an AirBook? Not very, as it turns out, because very shortly after I had started using it, its mousepad stopped working. So what could you do with a laptop with a dysfunctional mousepad? Obvious: You could install an operating system on it that does not require a mousepad, such as a Linux server operating system. And then you could go one step further and take two other machines on top to create a three-node, RKE-based Kubernetes cluster, in order to then write a blog post on the installation of RKE plus the necessary preparation steps…
in Kubernetes on Kubernetes, Ingress, Ingress controller, HTTP load balancing, Traefik
One entry point to reach them all, one consolidated set of rules to find them, one Service exposed to bring them all, and in the cluster distribute them; in the Land of Kubernetes where the workloads lie.
In the upcoming sections, you’ll get introduced to the concept of Ingress – as you might have guessed from the slightly re-interpreted version of the famous One ring to rule them all Lord of the Rings quote in this blog post’s description, Ingress is a means to expose many workloads using only a single exposed Service. This will be an interesting journey, so get a fresh mug of coffee and buckle up!
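As a small taste of what’s ahead, here’s a minimal, hypothetical Ingress manifest (the host, Service name, and port are made up for illustration): a rule maps requests for one host to a backing Service, and many such rules can share the single entry point provided by the Ingress controller.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com          # requests for this host...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service    # ...are routed to this Service
                port:
                  number: 80
```

An Ingress controller such as Traefik watches objects like this one and does the actual routing – the Ingress resource itself is only the consolidated set of rules.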