What’s the first thing any application has to do before it can work with a Hazelcast cluster? That’s right – tell the application how to connect to that cluster.
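For a plain Hazelcast client, the connection details typically boil down to a cluster name and a set of member addresses. As a minimal sketch – the cluster name and addresses below are placeholders, not values Hazeltest ships with – a `hazelcast-client.yaml` might look like this:

```yaml
hazelcast-client:
  cluster-name: hazelcastplatform    # placeholder; use your cluster's name
  network:
    cluster-members:
      - 10.0.0.21:5701               # placeholder member addresses
      - 10.0.0.22:5701
```

In a Kubernetes setup, the member addresses would usually be replaced by a service name resolving to the Hazelcast Pods.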
Here it is – the first video on Hazeltest! (Well, the second, if you count the livestream that took place quite a while ago, but the first as far as the video series I’d like to do is concerned.)
The concept of a Chaos Monkey was first introduced in the context of Hazeltest by the previous blog post as a means to “spice up” the primary job of Hazeltest’s runners, namely, to generate load on the Hazelcast cluster under test. The Chaos Monkey was described as an automated actor within Hazeltest whose goal is to deliberately wreak havoc among Hazelcast cluster members in order to test the cluster’s resilience to member failures.
The previous blog post on Hazeltest featured, in its more practical part, a basic How to Hazeltest, briefly demonstrating how the application can be configured, as well as the first of three load generation scenarios. The following paragraphs, then, will pick up the action from there and walk you through the remaining two load generation scenarios. Beyond that, we’re going to spend some time looking at the improvements made to Hazeltest since the previous blog post was published, as well as at the (likely) road ahead for Hazeltest.
The content accompanying livestream 1 was published on hazelcast.com:
For an upcoming live stream about Hazeltest, the RKE cluster I use needs a bit more juice (a lot more, actually). In the following sections, you’ll see that introducing additional Linux machines as new nodes to an existing RKE cluster is very easy, and you’ll also get acquainted with a cool Kubernetes dashboard called Skooner. Finally, we’re going to use Hazeltest to put the new nodes to the test.
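With RKE, adding nodes essentially amounts to declaring them in the `cluster.yml` and re-running `rke up`. As a hedged sketch – the addresses, SSH user, and role assignments below are placeholders for whatever your environment uses:

```yaml
nodes:
  - address: 10.0.0.10              # existing node
    user: rke
    role: [controlplane, etcd, worker]
  - address: 10.0.0.11              # newly added worker node
    user: rke
    role: [worker]
```

After saving the file, running `rke up` reconciles the cluster against the updated node list and joins the new machine as a worker.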
Since the previous blog post was published, the Hazeltest source code has seen both a couple of refactorings and some new features. In the following sections, I’d like to talk about the refactorings I’ve made in the code, and cover the new features in a dedicated blog post.
What is this Hazeltest you’ve recently talked about, and how can I make use of it even at this very early stage of development? Which configuration options do I need to tweak in order to configure the two map-related runners the application offers? And: How does all this help identify incorrect or incomplete Hazelcast map configurations?
If you’re reading this, it’s likely you work in IT like me, and so you may have faced a situation like the following: Something – let’s call it the release candidate – needs to be properly tested before it’s released. Sounds familiar? If so, then you may also have asked yourself a question akin to the following: How can I make sure the release candidate is actually fit for release?
In case you have taken a look at some of the manifest files used in the scope of the previous blog posts (such as this one, for example), you’ll no doubt have noticed that the object employed to run the sample workload is the Deployment object. The way it’s set up – having a Pod template baked into it – may seem to imply the Deployment manages these Pods directly, but that’s not the case: in fact, the Deployment manages and configures a ReplicaSet, and it is the ReplicaSet that manages the Pods. As it turns out, in Kubernetes, the ReplicaSet is a basic building block for running and managing workloads that other, higher-level objects – such as the Deployment object – rely upon. In order to lay the foundation for covering the latter in future content, the following sections will introduce you to the ins and outs of the ReplicaSet object: the problem it solves, how it works, its specification, and how to interact with it.
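To make the Deployment → ReplicaSet → Pod relationship concrete, here’s a minimal ReplicaSet manifest – the name, labels, and image are illustrative placeholders, not taken from the posts’ sample workload:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: sample-workload        # illustrative name
spec:
  replicas: 3                  # the ReplicaSet keeps exactly 3 Pods running
  selector:
    matchLabels:
      app: sample-workload     # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: sample-workload
    spec:
      containers:
        - name: app
          image: nginx:1.25    # illustrative image
```

Notably, a Deployment’s spec looks almost identical – applying one simply creates and manages a ReplicaSet like this behind the scenes, adding rollout capabilities on top.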