
This article is part of an ongoing series that has already covered quite a bit of ground.

Modernizing an existing code base comes down to one of two choices: either embark on a total rewrite (essentially creating a whole new application), or buckle down and refactor your existing code and configuration.

The decision to rewrite is a huge one, a choice that comes down to budget, talent and time, all of which vary by enterprise. If there is a push to rewrite, a later article in this series on feedback loops and opinionated workflows for driving product development might be of interest.

In this article our focus will be on refactoring an existing code base.

By this point, we have determined the desired future state driving the modernization effort. For example, it could be moving to a less expensive application server, or maybe moving off of a deprecated version of Java.

Regardless of the desired future state, if the code is being changed, the following two strategies can provide lots of benefits:

  1. Make the code more testable and write tests

  2. Update the code and configuration to be more container friendly

The order of these may vary. It could be that making the code container friendly is trivial and non-breaking. Conversely, the code might be in a state where test coverage cannot be easily added (perhaps it's manually tested). In this case, it might make sense to get the code running in a container environment first for improved feedback loops, then make it more testable.

The high level goals of these strategies are:

  1. Make the code safer to change without breaking functionality

  2. Get the applications more "feedback loop"-friendly to drive innovation—essentially this means making it easier to deploy and start getting feedback (logs and metrics)

Let’s dig into these points. And if you are new to containers, not to worry, an explanation is provided below. Also, if your modernization goal IS to move the code into containers, hopefully some of the talking points here can be of use (or at least reinforce your own project goals).

Refactoring: The pathway to improving current state

The definitive guide on this subject is Refactoring: Improving the Design of Existing Code by Martin Fowler. I recommend this book to anyone interested in this subject.

Fowler defines refactoring as, “change made to the internal structure of software to make it easier to understand and cheaper to modify without changing the observable behavior.”

Changing a code base to be more testable and more container friendly, if done well, will result in the code being easier to understand and cheaper to change. As we have decided to move from the current state to a beneficial future one (modernize), this is highly desirable.

The drive to make code more testable will produce cleaner, more modular code. This means code that is easier to change and easier for new developers to understand. Of course, having test coverage means you can make changes and quickly get feedback on whether the changes have broken anything, in lower level environments and without the need for expensive manual testing teams.

Making the code container friendly will also necessitate some best practices around making the code cleaner to read and easier to deploy (see the Twelve-Factor App practices below). When code becomes container friendly, the application will also be able to operate on a container platform, which opens the door to some interesting operational possibilities (which we'll get into later) as well as improved feedback loops.

The importance of testing 

Modernization means change, and given that the changes are to an existing application, verification is required to ensure that the changes aren't breaking anything.

One approach is to deploy the refactored application into a lower level environment, pay an army of human testers to hammer on it, and then get a report about what works and what's broken. The issue with this strategy is that it's expensive, slow and labor-intensive. You also miss out on the benefits that come with making the code more testable.

There is a whole hierarchy of testing, and each level provides a benefit. In these articles I focus on writing tests at the unit test level. These are tests that can be self-executed during the build phase and written by the developers as part of the code base.

Mocking can help deal with entanglement

I strongly believe that all functionality related to downstream service operations should be mocked. If you are new to the term, mocking means creating objects that mimic the behavior of real external dependencies so that our own classes and methods can be tested in isolation.

When you have third-party dependencies, as much as possible these should be mocked in the self-running test cases. This will provide a lot of flexibility for teams writing tests.

I once had a debate with a person who, when building an artifact for an application, insisted that a database be spun up to run test suites against. They argued that if there was an issue with the database logic, the application should not be built. Efficiencies aside, this person’s concern is understandable. However, unless the database they were testing against was exactly the same as the one in production, their sense of security around a working test could be false. Not to mention, the build could potentially become brittle should the database it tests against run into issues. Now, there is a case where you can include an external service as part of the test, but I would like to discuss mocking prior to introducing this.

If you mock all functionality related to downstream service operations, including simulating potential outputs, good and bad, the database can still blow up—however, if you have done your mocking and testing well, you will at least know your code can handle it.

Projects like Mockito are particularly helpful for this. When combined with a framework like the Spring Framework or Quarkus, they provide easy ways to intercept your logic and return:

  • expected results to test the happy path,

  • incorrect data to test the unhappy path, and

  • errors in place of results to test error handling and logging.

You can also spy on certain components to ensure they are being called when expected. Mocking can make your tests portable and performant. That said, it can be a lot of work to mock all the good and bad responses from a third-party service.
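
To make this concrete, here is a minimal sketch of this style of test using JUnit 5 and Mockito. The PaymentService and GatewayClient names are hypothetical stand-ins for your own class and its downstream dependency, not from any real code base: the mock returns an expected result for the happy path, throws an error for the unhappy path, and verify() confirms the downstream call actually happened.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class PaymentServiceTest {

    // Hypothetical downstream dependency we do not want to call for real
    interface GatewayClient {
        String charge(String accountId, long amountCents);
    }

    // Hypothetical class under test that wraps the downstream call
    static class PaymentService {
        private final GatewayClient gateway;

        PaymentService(GatewayClient gateway) {
            this.gateway = gateway;
        }

        String pay(String accountId, long amountCents) {
            try {
                return gateway.charge(accountId, amountCents);
            } catch (RuntimeException e) {
                return "FAILED"; // error handling we want test coverage for
            }
        }
    }

    @Test
    void happyPathReturnsConfirmation() {
        GatewayClient gateway = mock(GatewayClient.class);
        when(gateway.charge("acct-1", 500L)).thenReturn("OK-123");

        PaymentService service = new PaymentService(gateway);

        assertEquals("OK-123", service.pay("acct-1", 500L));
        verify(gateway).charge("acct-1", 500L); // confirm the downstream call was made
    }

    @Test
    void gatewayErrorIsHandled() {
        GatewayClient gateway = mock(GatewayClient.class);
        when(gateway.charge(anyString(), anyLong())).thenThrow(new RuntimeException("timeout"));

        PaymentService service = new PaymentService(gateway);

        assertEquals("FAILED", service.pay("acct-1", 500L)); // unhappy path, no real gateway needed
    }
}
```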

A happy medium between mocking and testing against a live service is Testcontainers. The team describes the project as a "Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container."

Essentially this means your JUnit test can easily spin up a container with an instance of a database or cache that is under the control of the test, so you can populate or configure it with a good (or bad) state depending on what you are looking to test.

The Testcontainers team has thought of pretty much everything in terms of managing the container and the ingress/egress to the service. Testcontainers even includes modules for existing databases, caches, message brokers and more. The only requirements are that you use Maven or Gradle to build your application, and JUnit or Spock as your test framework.
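
As a rough sketch of what this looks like in practice (assuming the Testcontainers JUnit 5 and PostgreSQL modules plus a PostgreSQL JDBC driver on the test classpath; the table and class names here are illustrative), a test can declare a throwaway database container and talk to it like any other database:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class CustomerRepositoryIT {

    // A throwaway PostgreSQL instance, started and stopped by the test lifecycle
    @Container
    static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>(DockerImageName.parse("postgres:15-alpine"));

    @Test
    void canPopulateAndQueryTheDatabase() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             Statement stmt = conn.createStatement()) {

            // Put the database into exactly the state this test needs
            stmt.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            stmt.execute("INSERT INTO customer VALUES (1, 'Ada')");

            try (ResultSet rs = stmt.executeQuery("SELECT name FROM customer WHERE id = 1")) {
                assertTrue(rs.next());
                assertEquals("Ada", rs.getString("name"));
            }
        }
    }
}
```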

Of course, if your code base has no tests or does not use Maven or Gradle, adding mocks is easier said than done. I will come back to this in a future article when I discuss specific testing strategies.

Container-friendly code: Leveraging the "Twelve-Factor App" methodology

The second of our goals is to make the code more container friendly. As previously stated, getting the code base into containers might be the desired end state for your modernization project. If so, you can skip this section. However, if it's not, the following outlines some of the dividends that moving to containers will pay to any code base that is undergoing change.

An app that becomes more "container friendly" is generally easier to deploy. This means that you can get a version of the application running quickly, which is good for verifying functionality and getting user feedback. Once the application can start and run in a container, it can also make use of a Kubernetes platform to really power our development feedback loop.

A Kubernetes platform (such as Red Hat OpenShift) greatly improves not only your ability to deploy, but also your ability to observe the deployment, as many of these platforms provide easily accessible workflows for getting logs and metrics. We will talk in detail about feedback loops and safely changing software in a later article.

The Twelve-Factor App is a very useful methodology for building easy-to-change software. Following these factors can guide developers in building code that can run in any environment (including containers). 

I will detail particularly useful aspects of the Twelve-Factor App in a future blog. However, it should be noted that trying to follow all twelve factors is unrealistic, especially when it comes to legacy code. You can follow the factors enough to get your application closer to its future state and to be able to iterate on both the product and operations loops.

For example, if the goal is to make the code container friendly so it can run in a Kubernetes distribution, then getting the application to deploy and run in the container, with some observability around it, might suffice. To achieve this, only a few of the factors from the twelve are required. As stated, I'll be discussing this at length in a future blog post.
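
As one small example, factor III (config) says configuration should live in the environment rather than in the code or a bundled file. Below is a minimal Java sketch of that idea; the DATABASE_URL variable name is an assumption, and on a container platform the value would typically be injected by something like a Kubernetes ConfigMap or Secret.

```java
// A minimal sketch of factor III (config in the environment): the database URL is
// read from an environment variable rather than hardcoded or packaged in the image,
// so the same build can run unchanged in dev, test, and production.
public class DataSourceConfig {

    public static String jdbcUrl() {
        // DATABASE_URL is a hypothetical variable name chosen for illustration
        String url = System.getenv("DATABASE_URL");
        if (url == null || url.isBlank()) {
            throw new IllegalStateException("DATABASE_URL is not set");
        }
        return url;
    }
}
```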

Having reviewed testing options, containers and the Twelve-Factor App methodology, you now have some useful strategies to help get your project started. The type of work you are doing is starting to take shape. But before you can begin, a few essential tools need to be in place for the team to do that work effectively. This will be the focus of my next article.

Container-friendly blockers: Tight coupling to middleware

Sometimes a code base will be tightly coupled to middleware or a service that is not container friendly. For example, while pursuing a modernization goal to move away from an Enterprise Java application server that is too costly, you may find your code base contains annotations that couple it to that application server. Simple things like relying on the application server for connection pooling to a database, or for JNDI lookups, can be untangled through a refactoring process (see the sketch below). However, something like message-driven beans (MDBs) might prove trickier to decouple from.
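
As a hedged sketch of that kind of untangling (the OrderDao class and the jdbc/AppDS JNDI name are illustrative, not taken from any particular code base), the refactoring typically replaces a lookup against the application server's JNDI tree with an injected DataSource that a framework, a container platform, or a test can supply:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Before: tightly coupled to the application server's JNDI tree
class OrderDaoJndi {
    DataSource lookup() throws NamingException {
        return (DataSource) new InitialContext().lookup("java:comp/env/jdbc/AppDS");
    }
}

// After: the DataSource is injected, so the class no longer cares who provides the pool
class OrderDao {
    private final DataSource dataSource;

    OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }
    // ... queries use dataSource.getConnection() exactly as before
}
```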

This is where our previous article on picking the right patterns to start with can be critical, both in establishing success early in the project and in avoiding wasting valuable resources on work that cannot be completed in a timely manner (or at all).

What about containers?

Linux containers are technologies that allow you to package and isolate applications with their entire runtime environment—all of the files necessary to run. This makes it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality.

From a modernization point of view, moving code to run in containers can be a modernization goal because it opens the door for Kubernetes-based container platforms, such as Red Hat OpenShift. Container platforms can provide huge benefits to teams working on a modernization project.

A brief overview of containers

Containers have been around a while

Back in 1979, the chroot system call was introduced, changing the root directory of a process and its children to a new location in the filesystem. This was the beginning of process isolation. With process isolation, an application or service could run in an operating system (OS) and be isolated from the other applications and services running on the same OS, as well as from the OS's own processes. Over the years, FreeBSD jails, Solaris Zones, Process Containers and LXC continued to refine the concept.

Docker makes it easier

In 2013, Docker came along and provided an easy and powerful way to determine what gets set up in the container environment before the application starts running in it. Dockerfiles (describing what goes into the Linux image and how it should be built) and image registries like Docker Hub (holding the images that result from building those Dockerfiles) have become staples in most software projects' required tools.

Containers versus Virtual Machines

Unlike virtual machines (VMs), which need to be managed by a hypervisor and require an operating system (OS) to be set up before being usable, containers do not require an OS to be installed for each instance. Containers have the benefit of being able to pack more workloads into a single host machine without the overhead of the hypervisor and guest OS.

Container orchestration platforms

Containers are not as reliable as VMs—they can fail, or their host OS can fail, which results in the disappearance of all the containers the host OS spawned.

To deal with the transient nature of containers, many container-provisioning platforms have been created to manage all the efforts around keeping container workloads up and running and managing the traffic in and out. Examples that are based on the Kubernetes project include Red Hat OpenShift, Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Non-Kubernetes options include Google Cloud Run and Amazon Elastic Container Service (ECS).

How do containers affect development?

Containers come and go. They can be scaled up, meaning there could suddenly be three instances of the same workload (each running in its own container), and they can be moved from one resource pool to another to satisfy redundancy requirements.

This means that getting an application to run by manually executing a series of steps (as might be done for apps running in a VM) becomes very difficult, if not impossible.

Also, certain middleware that an application depends upon (e.g., certain application servers) might not run in a container. To help develop applications that will succeed in such an environment, the Twelve-Factor App was created as a set of principles for a development team to follow.

Stay focused on the goal

Here we've discussed some high-level intentions that can help make a code base more changeable. That said, there is a trap here. Fixating on test coverage or over-rotating on the Twelve Factors could result in failure. At the end of the day, as pretty as passing tests might be, progress needs to be shown on moving the application to the future state (proving out the value promised).

It will be up to the Project Lead to juggle these high-level intentions with the concrete work needed to move the application to the desired future state. It might not be easy, but this is why putting together the right team is so important.

In the next article I'll discuss getting all the tools and resources in place to get the team working in the most efficient way.


About the author

Luke Shannon has 20+ years of experience getting software running in enterprise environments. He started his IT career creating virtual agents for companies such as Ford Motor Company and Coca-Cola. He has also worked in a variety of software environments - particularly financial enterprises - and has experience that ranges from creating ETL jobs for custom reports with Jaspersoft to advancing PCF and Spring Framework adoption at Pivotal. In 2018, Shannon co-founded Phlyt, a cloud-native software consulting company with the goal of helping enterprises better use cloud platforms. In 2021, Shannon and team Phlyt joined Red Hat.
