How To Hazeltest 1: Connecting To A Hazelcast Cluster

After having watched the introduction to Hazeltest, you might now be scratching your head and wondering just how to connect Hazeltest to the target Hazelcast cluster under test…

What’s the first thing you’ll have to do with any application doing anything on a Hazelcast cluster? That’s right – tell the application how to connect to that cluster.
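
To make this concrete, here is a minimal sketch of what connecting to a cluster looks like with the Hazelcast Go client (Hazeltest is written in Go). The cluster name and member address below are placeholders, not Hazeltest's actual configuration values, and this is not Hazeltest's connection code:

```go
package main

import (
	"context"
	"log"

	"github.com/hazelcast/hazelcast-go-client"
)

func main() {
	// Cluster name and member address are hypothetical placeholders;
	// substitute the values of the cluster under test.
	config := hazelcast.Config{}
	config.Cluster.Name = "hazelcastplatform"
	config.Cluster.Network.SetAddresses("10.0.0.1:5701")

	// Blocks until the client has connected to the cluster
	// or the attempt has failed.
	ctx := context.Background()
	client, err := hazelcast.StartNewClientWithConfig(ctx, config)
	if err != nil {
		log.Fatalf("unable to connect to cluster: %v", err)
	}
	defer client.Shutdown(ctx)

	log.Printf("connected as client %s", client.Name())
}
```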

Hazeltest In A Nutshell

Have you ever wondered how load-testing your Hazelcast clusters could be simplified, helping you make sure that the “release candidates” describing these clusters are fit for your production environment?

Here it is – the first video on Hazeltest! (Well, the second, if you count the livestream that took place quite a while ago, but the first as far as the video series I’d like to do is concerned.)

Chaos Monkey

In case you’ve always wanted a pet monkey, this one is definitely not what you’re looking for, because this particular specimen is one noisy fellow indeed. Designed to wreak havoc among the unsuspecting members of your Hazelcast cluster, it will let you test your cluster’s resilience before your production environment gets a chance to do it.

The concept of a Chaos Monkey was first introduced in the context of Hazeltest by the previous blog post as a means to “spice up” the primary job of Hazeltest’s runners, namely, to generate load on the Hazelcast cluster under test. The Chaos Monkey was described as an automated actor within Hazeltest whose goal is to deliberately wreak havoc among Hazelcast cluster members in order to test the cluster’s resilience towards member failures.
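
The linked post has the full details, but the core idea fits into a few lines. The following is a purely conceptual sketch of such a monkey built with the Kubernetes Go client, not Hazeltest's actual implementation; the namespace and label selector are hypothetical:

```go
package main

import (
	"context"
	"log"
	"math/rand"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the monkey runs inside the cluster it attacks.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	for {
		// Find Hazelcast member pods (label selector is a placeholder)
		// and delete one of them at random.
		pods, err := clientset.CoreV1().Pods("hazelcastplatform").List(ctx, metav1.ListOptions{
			LabelSelector: "app.kubernetes.io/name=hazelcastimdg",
		})
		if err != nil || len(pods.Items) == 0 {
			log.Printf("no victims found: %v", err)
		} else {
			victim := pods.Items[rand.Intn(len(pods.Items))]
			if err := clientset.CoreV1().Pods(victim.Namespace).Delete(ctx, victim.Name, metav1.DeleteOptions{}); err != nil {
				log.Printf("unable to delete %s: %v", victim.Name, err)
			} else {
				log.Printf("deleted member pod %s", victim.Name)
			}
		}
		time.Sleep(60 * time.Second)
	}
}
```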

Dev Update, More Load Scenarios

Curious about the news on Hazeltest and the two additional load scenarios previously hinted at? Then this new blog post has you covered!

The previous blog post on Hazeltest featured, in its more practical part, a basic How to Hazeltest, briefly demonstrating how the application can be configured, as well as the first of three load generation scenarios. The following paragraphs, then, will pick up the action from there and walk you through the remaining two load generation scenarios. Beyond that, we’re going to spend some time looking at the improvements made to Hazeltest since the previous blog post was published, as well as at the (likely) road ahead for Hazeltest.

Hazeltest Livestream 1: Let There Be Load

What if you wanted to check out the contents of the first livestream on Hazeltest beforehand? Or revisit some concept introduced therein afterwards? If such is the case, then this blog post might serve you well.

The content accompanying livestream 1 was published on hazelcast.com:

More Cattle

What’s better than a certain number of RKE cluster nodes? Well, more cluster nodes, of course!

For an upcoming live stream about Hazeltest, the RKE cluster I use needs a bit more juice (a lot more, actually). In the following sections, you’ll see that introducing additional Linux machines as new nodes to an existing RKE cluster is very easy, and you’ll also get acquainted with a cool Kubernetes dashboard called Skooner. Finally, we’re going to use said Hazeltest to put the new nodes to the test.

A Bit Of Gardening

The focus with Hazeltest in the past weeks has been to implement a certain feature set so I can use the application as early as possible in a project with my current client. While this goal was achieved, there is, of course, still “no free lunch”, and the price tag here was a couple of things I didn’t like about the code. Because I’m still early in my Go journey, these things, combined with the new Go skills I’ve picked up in the past weeks, provided just the perfect opportunity to get out the gardening gloves and do a little refactoring.

Since the previous blog post was published, the Hazeltest source code has seen both a couple of refactorings and some new features. In the following sections, I’d like to talk about the refactorings made to the code; the new features will be covered in a dedicated blog post.

Working With Hazeltest

In its current state, Hazeltest can automate the process of generating load in the maps of a Hazelcast cluster. By means of a simple, three-scenario example, this blog post demonstrates how Hazeltest and its configuration options can be used for this purpose with the goal of finding weaknesses in the given Hazelcast map configurations.

What is this Hazeltest you’ve recently talked about, and how can I make use of it even at this very early stage of development? Which configuration options do I need to tweak in order to configure the two map-related runners the application offers? And: How does all this help identify incorrect or incomplete Hazelcast map configurations?
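
To give a flavor of what “generating load in the maps” boils down to, here is a stripped-down sketch in Go using the Hazelcast Go client. The map name and entry count are arbitrary, and Hazeltest’s actual runners are considerably more sophisticated:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/hazelcast/hazelcast-go-client"
)

func main() {
	ctx := context.Background()
	// Default config connects to localhost:5701; a placeholder assumption.
	client, err := hazelcast.StartNewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown(ctx)

	// Fetch (or implicitly create) a map and write a batch of entries
	// to it; the map name is hypothetical.
	m, err := client.GetMap(ctx, "ht_load-1")
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 10000; i++ {
		if _, err := m.Put(ctx, fmt.Sprintf("key-%d", i), fmt.Sprintf("value-%d", i)); err != nil {
			log.Printf("put failed: %v", err)
		}
	}
	size, _ := m.Size(ctx)
	log.Printf("map now holds %d entries", size)
}
```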

Introducing Hazeltest

What if the release candidate whose production fitness you’re supposed to test is a Helm chart describing a Hazelcast cluster? Well, ideally, there’s a little testing application that puts realistic load on a Hazelcast cluster, thus facilitating the discovery of misconfigurations or other errors that might have crept into the chart, helping you assess the release candidate’s fitness more effectively and more comfortably.

If you’re reading this, it’s likely you work in IT like me, and so you may have faced a situation like the following: Something – let’s call it the release candidate – needs to be properly tested before it’s released. Sound familiar? If so, then you also may have asked yourself a question akin to the following: How can I make sure the release candidate is actually fit for release?

The Power Of Many: ReplicaSets

The ReplicaSet is a very useful basic building block in Kubernetes that other objects, like the Deployment object, rely on. As a kind of Pod manager running in your cluster, a ReplicaSet makes sure the desired number of Pods of a certain type is always up and running. Its functionality is based on the notion of desired vs. observed state, so it also provides a fantastic opportunity to talk about the basics of reconciliation loop awesomeness.

In case you have taken a look at some of the manifest files used in the scope of the previous blog posts (such as this one, for example), you’ll no doubt have noticed the object employed to run the sample workload is the Deployment object. The way it’s set up – having a Pod template baked into it – may seem to imply the Deployment manages these Pods directly, but that’s not the case – in fact, the Deployment manages and configures a ReplicaSet, and it is the ReplicaSet that manages the Pods. As it turns out, in Kubernetes, the ReplicaSet is a basic building block for running and managing workloads that other, higher-level objects – such as the Deployment object – rely upon. In order to lay the foundation for covering the latter in future content, the following sections will introduce you to the ins and outs of the ReplicaSet object – the problem it solves, how it works, its specification, and how to interact with it.
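
As a small teaser, the following toy Go sketch illustrates the desired-vs.-observed-state idea at the heart of the ReplicaSet controller. The real controller watches the Kubernetes API server rather than polling, and the stand-in functions below are, of course, hypothetical:

```go
package main

import (
	"log"
	"time"
)

// reconcile is a toy reconciliation loop: it repeatedly compares the
// observed number of running Pods against the desired number and acts
// to close the gap. The observe, createPod, and deletePod functions
// are stand-ins for calls the real controller makes against the API server.
func reconcile(desired int, observe func() int, createPod, deletePod func()) {
	for {
		observed := observe()
		switch {
		case observed < desired:
			log.Printf("observed %d < desired %d, creating pod", observed, desired)
			createPod()
		case observed > desired:
			log.Printf("observed %d > desired %d, deleting pod", observed, desired)
			deletePod()
		}
		time.Sleep(5 * time.Second)
	}
}

func main() {
	running := 0
	reconcile(3,
		func() int { return running },
		func() { running++ },
		func() { running-- },
	)
}
```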
