Speeding up your build in the cloud with Nexus Operator

Ricardo Zanini
6 min read · Nov 20, 2020

In this article, you will learn how to deploy a Nexus Server on a Kubernetes cluster using the Nexus Operator. We assume that you’re familiar with Kubernetes’ basic concepts.

Sonatype Nexus is a well-known artifact repository, especially among Java developers, for its capability to manage and serve libraries for Maven projects.

Deploying services like Nexus in containerized environments can be tricky and can lead to many operational tasks, such as updating the service to the latest version or restoring it to its desired state in a new zone.

Kubernetes Operators are specialized applications that act as a human operator would to perform such tasks. There are many operators out there, provided by the community, to help developers and administrators run their applications.

The Nexus Operator was designed to deploy and manage a Nexus server instance in Kubernetes clusters. Being a specialized operator, it knows how to update the container image and how to interact with the Nexus server REST API. This interaction allows it to auto-create popular repositories, configure the default Maven Central mirror, and much more.

In the next sections, you will see how to deploy the Nexus Operator and a Nexus server instance in your Kubernetes cluster.

Deploying the Nexus Operator

To start using the Nexus Operator, you will need to install it in your cluster. You can install it manually or use the Operator Lifecycle Management (OLM) platform.

Deploying with OLM

OLM is a platform to help users install, update, and manage the lifecycle of all operators and their associated services running in Kubernetes clusters.

On the OperatorHub page, you will find the instructions to deploy OLM in your cluster and install the Nexus Operator bundle. Click the “Install” button to read the detailed instructions, which will guide you through installing OLM as well as the Nexus Operator.

OLM is already installed on OpenShift 4.x clusters by default. All you have to do is go to the left menu in the OpenShift Web Console, click on “Operators” and search for “Nexus” in the Community Catalog:

Nexus Operator on OpenShift OperatorHub

Follow the on-screen instructions and you should be able to install the Nexus Operator with just two clicks. There are a few installation options to choose from; we recommend installing it in all namespaces:

Installing the Nexus Operator on “All Namespaces”

Manual deployment

If you don’t want to install OLM in your cluster or if you’re just trying the operator in a local environment, you can use our installation script available for Unix environments:
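
The commands below are only a sketch; the script name and location inside the m88i/nexus-operator repository are assumptions, so check the project README for the exact installer before running it:

$ git clone https://github.com/m88i/nexus-operator.git
$ cd nexus-operator
$ ./hack/install.sh   # script path assumed; see the repository for the actual file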

Alternatively, you can use `kubectl` to apply our YAML installer available on the Nexus Operator release page:
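
The command below is illustrative; replace the version and file name with the ones published on the release page:

$ # assumed release tag and asset name, check the latest release first
$ kubectl apply -f https://github.com/m88i/nexus-operator/releases/download/v0.5.0/nexus-operator.yaml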

To verify if the operator has been correctly installed, run:
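
The exact deployment name and namespace depend on how you installed the operator, so the command below simply filters for it across all namespaces:

$ kubectl get deployments --all-namespaces | grep nexus-operator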

You should see the numbers “1/1” in the “READY” column.

Deploying a Nexus instance with Nexus Operator

Having deployed the Nexus Operator, it’s now time to deploy your Nexus instance. The first thing to do is create a new namespace for it.

Deploying the instance in its own namespace makes it easier to administer, for example by reserving a dedicated zone in your cluster for the instance to run in or by limiting resource usage in the namespace. To create a new namespace, run:

$ kubectl create ns nexus

To deploy the Nexus instance simply run:
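
The manifest below is a minimal sketch of a Nexus custom resource matching the defaults described next; the field names follow the apps.m88i.io/v1alpha1 API, so double-check them against the operator’s examples directory:

$ kubectl apply -n nexus -f - <<EOF
apiVersion: apps.m88i.io/v1alpha1
kind: Nexus
metadata:
  name: nexus3
spec:
  replicas: 1
  persistence:
    persistent: false    # no persistence, data is lost when the pod restarts
  networking:
    expose: true
    exposeAs: NodePort   # exposed via NodePort, as discussed later in this article
EOF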

The operator will deploy the latest version of Nexus 3 in your cluster, limiting its resources to 2 CPUs and 2GB of RAM, with no persistence enabled. You can follow the status of your deployment by running:
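
$ kubectl get nexus -n nexus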

Notice that we are not using kubectl get pods or kubectl get deployments; we are telling the cluster to give us the status of the “nexus” type. By installing the Nexus Operator in your cluster, you’ve registered a new Kubernetes type! Now you can perform any operation that you would normally do on a standard Kubernetes resource, but with “nexus” instead.

To edit the deployed instance, run:

$ kubectl edit nexus -n nexus

Your terminal’s default text editor will open, and you can edit any parameter of the deployed instance. Try adding more resources to the instance by updating the values in the YAML file, like this:
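
The fragment below assumes the limits live under a standard Kubernetes resources block in the spec; adjust it to match the fields you see in your editor:

spec:
  resources:
    limits:
      cpu: "2"
      memory: 4Gi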

Here we are increasing the memory limit to 4GB. After a few minutes, the instance will be redeployed with the new parameters, thanks to the operator. You can explore the examples directory in the Nexus Operator GitHub repository to see other configuration options for deploying the Nexus instance.

Exposing the Nexus service externally

The example provided in the previous section deployed the Nexus instance exposed via “NodePort”. This means that your service will be accessible via the main endpoint of your Kubernetes cluster on a dedicated port reserved for the service.

If you’re running on Minikube, you can get the external endpoint with the following command:
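
The service name below assumes the Nexus resource was named “nexus3”, as in the earlier sketch:

$ minikube service nexus3 -n nexus --url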

Use your browser to navigate to the displayed URL and you should see the Nexus Server start page. Click on “login” and use the default credentials: admin/admin123.

This is fine for testing or nonproduction environments, but in real-world scenarios you will want to expose the service via a more reliable approach. The next sections describe how to do this on Kubernetes and OpenShift.

Exposing the Nexus service with NGINX Ingress on Kubernetes

IMPORTANT! If you’re running on OpenShift, skip to the next section.

By default, the Nexus Operator can create an NGINX Ingress for you. If you’re running on Minikube, read the article “Set up Ingress on Minikube with the NGINX Ingress Controller” to enable NGINX Ingress locally before continuing.

Edit the Nexus instance deployed before and change the following parameters:
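
The snippet below assumes the Ingress settings live under the networking section of the Nexus spec; verify the exact field names against the operator’s networking documentation:

spec:
  networking:
    expose: true
    exposeAs: Ingress
    host: nexus.example.com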

The Nexus instance won’t be accessible via “NodePort” anymore. If you’re running on Minikube, you will need to add an entry in your “/etc/hosts” file to match the service address. First, take the Minikube IP:

$ minikube ip

Then add an entry in your hosts file like this, using the IP returned by the previous command:

192.168.32.5 nexus.example.com

Access the URL https://nexus.example.com and you should be able to navigate in the Nexus web console again.

Exposing the Nexus service with OpenShift routes

Alternatively, if you’re running on OpenShift (or CRC), you can expose the service via Routes.

Simply edit the deployed Nexus instance and change the following parameters in the YAML file:
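
As with the Ingress example, the field names below are assumed to live under the networking section of the Nexus spec:

spec:
  networking:
    expose: true
    exposeAs: Route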

To see your exposed URL, just run:
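
$ kubectl get routes -n nexus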

Use your browser to access the service using the given endpoint.

We have an extensive guide about networking in our GitHub repository that you can read to troubleshoot your configuration or to go beyond the simple scenarios demonstrated here.

Integrating the Nexus Server with other Kubernetes Services

Having a Nexus Server deployed in your Kubernetes cluster opens up many possibilities for integration with other services, especially those related to building and deploying applications. When you run the following command:
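
The command below assumes the instance is named “nexus3” and that the operator publishes the Maven public URL in the resource status; if the field name differs in your version, inspect the whole status with “-o yaml” instead:

$ kubectl get nexus nexus3 -n nexus -o jsonpath='{.status.mavenPublicURL}'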

You will see the Maven public group URL, which is accessible only to services running inside the Kubernetes cluster. This URL can be injected into any Kubernetes resource to give it access to the Maven Central mirror repository. This will decrease the build time of your applications considerably, since the services won’t have to download the entire internet every time you run a build.

Here are some examples of use cases when a Nexus Server can help speed up your builds on Kubernetes or OpenShift:

  1. Running Jenkins Maven Slave on OpenShift 4.x
  2. Running Maven builds with Tekton
  3. Configure Camel-K to run with a Maven mirror
  4. Configure Kogito builds to use an external Maven mirror server
  5. Configure OpenShift s2i builds with a Maven mirror

Whenever you search for ways to speed up builds on OpenShift or Kubernetes clusters, you will see someone recommending the creation of persistent storage to hold your libraries between build cycles. Having a Nexus server manage these libraries for all of your builds is a no-brainer for any real-world CI/CD architecture.

Conclusion

As we’ve seen, having a Kubernetes Operator to deploy and manage a Nexus Server comes in handy and avoids many cumbersome and tedious tasks, like maintaining templates or Helm charts to keep an application in its desired state.

Having a Nexus server available within a Kubernetes cluster also helps in various scenarios where application builds take place, mainly CI/CD pipelines. In the future, we intend to add more features to help administrators, such as automatic backup/restore of managed libraries and service discovery to integrate the Nexus Operator with Tekton, Jenkins, Camel-K, Kogito, and so on.

If you’re keen to learn more about Kubernetes Operators or wish to help with the Nexus Operator development, we are always looking for contributions! Please visit us on GitHub (and star the project!) to learn more.


Ricardo Zanini

Yogi, open source lover, software engineer @ Red Hat. Opinions are my own.