
Testing Strategies

Building a comprehensive suite of automated tests for your Mule project is the primary factor that ensures its longevity: you gain the security of a safety net catching any regression or incompatible change in your applications before they even leave your workstation.

We look at testing from three different angles:

  • Unit testing: these tests are designed to be fast, with a very narrow system under test. Mule is typically not run for unit tests.

  • Functional testing: these tests usually involve running Mule, though with a limited configuration, and should run fast enough to be executed on each build.

  • Integration testing: these tests exercise a full Mule application with settings that are as close to production as possible. They are usually slower to run and not part of the regular build.

In practice, unit and functional testing are often merged and executed together.

Unit Testing

In a Mule application, unit testing is limited to the code that can realistically be exercised without the need to run it inside Mule itself. As a rule of thumb, code that is Mule-aware (for example, code that relies on the registry) is best exercised with a functional test.

With this in mind, the following are good candidates for unit testing:

  • Custom transformers

  • Custom components

  • Custom expression evaluators

  • All the Spring beans that your Mule application uses. Typically, these beans come as part of a dependency JAR and are tested while being built, alleviating the need to retest them in your Mule application project.

Mule provides abstract parent classes to help with unit testing; see the unit testing documentation for more information about them.
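For illustration, here is a minimal sketch of a unit test for a plain, Mule-unaware helper class. The WorkPayloadInspector class is hypothetical (it is not part of the example application) and simply stands in for your own code; the test is an ordinary JUnit test and does not start Mule:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class WorkPayloadInspectorTestCase
{
    // WorkPayloadInspector is a hypothetical, Mule-unaware helper class
    private final WorkPayloadInspector inspector = new WorkPayloadInspector();

    @Test
    public void acceptsXmlPayloads()
    {
        assertTrue(inspector.looksLikeXml("<valid_xml />"));
    }

    @Test
    public void rejectsNonXmlPayloads()
    {
        assertFalse(inspector.looksLikeXml("not_xml"));
    }
}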

Functional Testing

Functional tests are those that most extensively exercise your application configuration. In these tests, you have the freedom and tools for simulating happy and unhappy paths.

The paths you should cover include:

  • Message flows

  • Rule-based routing, including validation handling within these flows

  • Error handling

If you’ve modularized your configuration files, you’ve put yourself in a great position for starting functional testing.

Let’s see why:

  • Imported configurations can be tested in isolation. This means that you can create one functional test suite for each of the imported configurations. This reduces the size of the system under test, making it easier to write tests for each of the cases that need to be covered.

  • Side-by-side transport configuration allows transport switching and failure injection. This means you do not need to use real transports (say, HTTP, JMS, or JDBC) when running your functional tests, but can run everything through in-memory VM queues. You can also create stubs for the target services and make them fail in order to easily simulate unhappy paths.

Real transports or not? That is a question you may be asking, and it is a valid one, as many in-memory alternatives exist for the different infrastructures your Mule application connects to (for example, ActiveMQ for JMS, HSQLDB for JDBC). The real question is: what are you really testing? Is it relevant for your functional tests to exercise the actual transports, knowing that they are already tested by MuleSoft and that the integration tests take care of exercising them?

Mule provides many supporting features for implementing functional tests. Let’s look at an example and discover them as we go. The flow we are testing works as follows:

This flow accepts incoming messages over HTTP, validates them and dispatches them to JMS if they are acceptable. For the actual implementation, we use the Validator configuration pattern and check that the incoming message payload is XML. Keep in mind that the same testing principles and tools apply if you’re testing a flow.

Testing With Side-By-Side Configurations

Let’s look at the configuration files for this application. First, we have the configuration file that contains the Validator:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:mule-xml="http://www.mulesoft.org/schema/mule/xml"
    xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/xml http://www.mulesoft.org/schema/mule/xml/current/mule-xml.xsd">

  <validator name="WorkAcceptor"
             inboundEndpoint-ref="NewWorkEndpoint"
             ackExpression="#[string:OK:#[message:id]]"
             nackExpression="#[string:NOT_XML]"
             outboundEndpoint-ref="AcceptedWorkEndpoint">
    <mule-xml:is-xml-filter/>
  </validator>
</mule>

Note how the inbound and outbound endpoints are actually references to global ones. These global endpoints are configured in a separate configuration file designed to be loaded side by side with the above one. Here is its content, with the JMS connector configuration omitted for brevity:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:jms="http://www.mulesoft.org/schema/mule/jms"
    xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
        http://www.mulesoft.org/schema/mule/jms http://www.mulesoft.org/schema/mule/jms/current/mule-jms.xsd">

    <http:endpoint name="NewWorkEndpoint"
                   host="${web.host}"
                   port="8080"
                   path="api/work">
      <object-to-string-transformer/>
    </http:endpoint>

    <!-- JMS connector configuration omitted -->

    <jms:endpoint name="AcceptedWorkEndpoint"
                  queue="work"
                  connector-ref="WorkQueueJmsConnector" />
</mule>

Note how this configuration provides the actual definitions of the global endpoints used by the other configuration. To functionally test this, create an alternative configuration that provides global endpoints with the same names but that use the VM transport. Here it is:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:vm="http://www.mulesoft.org/schema/mule/vm"
    xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd">

    <vm:endpoint name="NewWorkEndpoint"
                 path="work.new"
                 exchange-pattern="request-response" />

    <vm:endpoint name="AcceptedWorkEndpoint"
                 path="work.ok"
                 exchange-pattern="one-way" />
</mule>

Now let’s write two tests: one for each possible path (the message is XML or it is not). Both tests subclass Mule’s FunctionalTestCase, an abstract class designed to be the parent of your functional tests.

The FunctionalTestCase class is a descendant of JUnit’s TestCase class.

Here is the test class, without the Java import declarations:

public class WorkManagerFunctionalTestCase extends FunctionalTestCase
{
    @Override
    protected String getConfigResources()
    {
      return "mule-workmanager-config.xml,mule-test-transports-config.xml";
    }

    public void testValidJob() throws Exception
    {
      MuleClient client = new MuleClient(muleContext);
      MuleMessage result = client.send("vm://work.new", "<valid_xml />", null);
      assertTrue(result.getPayloadAsString().startsWith("OK:"));

      MuleMessage dispatched = client.request("vm://work.ok", 5000L);
      assertEquals("<valid_xml />", dispatched.getPayloadAsString());
    }

    public void testInvalidJob() throws Exception
    {
      MuleClient client = new MuleClient(muleContext);
      MuleMessage result = client.send("vm://work.new", "not_xml", null);
      assertTrue(result.getPayloadAsString().startsWith("NOT_XML"));

      MuleMessage dispatched = client.request("vm://work.ok", 5000L);
      assertNull(dispatched);
    }
}

Notice in testValidJob() how we ensure we receive the expected synchronous response to our valid call (starting with "OK:"), but also how we check that the message has been correctly dispatched to the expected destination by requesting it from the target VM queue. Conversely, in testInvalidJob() we verify that nothing has been dispatched to the accepted work endpoint.

Because these are standard JUnit tests, you can run them either from Eclipse or from the command line with Maven.
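For example, Maven’s standard test phase picks them up alongside your unit tests (the class name ends in TestCase, which matches Surefire’s default test includes):

mvn test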

Using a VM queue to accumulate messages and subsequently requesting them (as we did with vm://work.ok) can only work with the one-way exchange pattern. Using a request-response pattern would make Mule look for a consumer of the VM queue, as a synchronous response is expected. So what do we do when we have to test request-response endpoints? We use the Functional Test Component!

Stubbing Out with the Functional Test Component

The Functional Test Component (FTC) is a programmable stub that can be used to consume messages from endpoints, accumulate these messages, respond to them and even throw exceptions. Let’s revisit our example and see how the FTC can help us, as our requirements are changing.

We have decided to use a Validator feature that wasn’t used previously: it ensures that the message has been successfully dispatched to the accepted work endpoint and otherwise returns a failure message to the caller. Here is its new configuration:

<validator name="WorkAcceptor"
           inboundEndpoint-ref="NewWorkEndpoint"
           ackExpression="#[string:OK:#[message:id]]"
           nackExpression="#[string:NOT_XML]"
           errorExpression="#[string:SERVER_ERROR]"
           outboundEndpoint-ref="AcceptedWorkEndpoint">
  <mule-xml:is-xml-filter/>
</validator>

The only difference is that an error expression has been added. This addition yields the following changes:

  • The Validator now behaves fully synchronously, preventing us from using an outbound VM queue as an accumulator of dispatched messages: we use the FTC to play the role of accumulator instead.

  • We need to test a new path, as we want to check the behavior of the system when dispatching fails. We also use the FTC here, configuring it to throw an exception upon message consumption.

Let’s see how introducing the FTC has changed our test transports configuration:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:vm="http://www.mulesoft.org/schema/mule/vm"
      xmlns:test="http://www.mulesoft.org/schema/mule/test"
    xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd
        http://www.mulesoft.org/schema/mule/test http://www.mulesoft.org/schema/mule/test/current/mule-test.xsd">

    <vm:endpoint name="NewWorkEndpoint"
                 path="work.new"
                 exchange-pattern="request-response" />

    <vm:endpoint name="AcceptedWorkEndpoint"
                 path="work.ok"
                 exchange-pattern="request-response" />

    <simple-service name="WorkQueueProcessorStub"
                    endpoint-ref="AcceptedWorkEndpoint">
      <test:component />
    </simple-service>
</mule>

As you can see, the FTC manifests itself as a <test:component /> element. We used the convenience of the Simple Service pattern to make it consume the messages sent to the AcceptedWorkEndpoint.

The FTC supports plenty of configuration options. Read more about them in Functional Testing.

Now that we have this in place, let’s see first how we can test the new failure path. Here is the source code of the new test method added to our previously existing functional test case:

public void testDispatchError() throws Exception
{
  FunctionalTestComponent ftc =
      getFunctionalTestComponent("WorkQueueProcessorStub");
  ftc.setThrowException(true);

  MuleClient client = new MuleClient(muleContext);
  MuleMessage result = client.send("vm://work.new", "<valid_xml />", null);
  assertTrue(result.getPayloadAsString().startsWith("SERVER_ERROR"));
}

Note how we get hold of the particular FTC we’re interested in: we use getFunctionalTestComponent, a protected method provided by the parent class, to locate the component that sits at the heart of our Simple Service (located by its name).

Once we have gained a reference to the FTC, we configure it for this particular test so it throws an exception anytime it is called. With this in place, our test works: the exception that is raised makes the Validator use our provided error expression to build its response message.

Now let’s look at how we’ve refactored the existing test methods to use the FTC:

public void testValidJob() throws Exception
{
  MuleClient client = new MuleClient(muleContext);
  MuleMessage result = client.send("vm://work.new", "<valid_xml />", null);
  assertTrue(result.getPayloadAsString().startsWith("OK:"));

  FunctionalTestComponent ftc =
      getFunctionalTestComponent("WorkQueueProcessorStub");
  assertEquals("<valid_xml />", ftc.getLastReceivedMessage());
}

public void testInvalidJob() throws Exception
{
  FunctionalTestComponent ftc =
      getFunctionalTestComponent("WorkQueueProcessorStub");
  ftc.setThrowException(true);

  MuleClient client = new MuleClient(muleContext);
  MuleMessage result = client.send("vm://work.new", "not_xml", null);
  assertTrue(result.getPayloadAsString().startsWith("NOT_XML"));
}

In testValidJob(), the main difference is that we now query the FTC for the dispatched message instead of requesting it from the outbound VM queue.

In testInvalidJob(), the main difference is that we configured the FTC to fail if a message gets dispatched despite the fact that it is invalid. This approach actually leads to better test performance because, previously, requesting a nonexistent message from the dispatch queue blocked until the five-second timeout kicked in.

Integration Testing

Integration tests are the last layer of tests we need to add to be fully covered. These tests actually run against Mule with your full configuration in place. We are limited to testing the paths that can be explored when exercising the system as a whole, from the outside. This means that some failure paths, like the one above that simulates a failure of the outbound JMS endpoint, are not tested.

Though it is possible to use Maven to start Mule before running the integration tests, we recommend that you deploy your application to the same type of container that you run in production (either Mule standalone or a Java EE container).

Since integration tests exercise the application as a whole with actual transports enabled, external systems are affected when these tests run. For example, in our case a JMS queue receives a message: we ensure this message has been received, which implies that no other system consumes it (or else we would have to check in these systems that they have received the expected message).

In shared environments, this is tricky to achieve and usually requires the agreement of all systems about the notion of test messages. These test messages exhibit certain characteristics (properties or content) so other systems realize they should not consume or process them.
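As an illustration only, a downstream JMS consumer could skip such test messages with a message selector. The testMessage property name here is hypothetical and would have to be agreed upon by all systems involved:

private MessageConsumer createProductionWorkConsumer(Session session) throws JMSException
{
  // Only consume messages that do not carry the agreed-upon test-message marker property
  return session.createConsumer(session.createQueue("work"), "testMessage IS NULL");
}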

To learn more about test messages, and for more testing strategies and approaches, see LaSalle University’s Test-Driven Development in Enterprise Integration Projects PDF.

Another very important aspect is the capacity to trace a message as it progresses through Mule flows and reaches external systems: this is achieved by using unique correlation IDs on each message and consistently writing these IDs to log files. As you see later on, we also rely on unique correlation IDs for integration testing. For now, here is our inbound HTTP endpoint refactored to ensure that the Mule correlation ID is set to the same message ID value that is returned in the OK acknowledgement message:

<http:endpoint name="NewWorkEndpoint"
               host="${web.host}"
               port="8080"
               path="api/work">
  <object-to-string-transformer/>
  <message-properties-transformer>
    <add-message-property key="MULE_CORRELATION_ID"
                          value="#[message:id]" />
  </message-properties-transformer>
</http:endpoint>

Mule does the rest: it ensures that the correlation ID that has been set with the message-properties transformer shown above gets propagated to any internal flow or external system receiving the message.

Maven Failsafe to Feel Safe

In order to keep our example simple, we assume that no other system attempts to consume the messages dispatched on the target JMS queue: they sit there until we consume them.

To show that no specific tooling is needed to build integration tests, we build them in Java as JUnit test cases and run them with Maven’s Failsafe plug-in. Feel free to use any tool you’re more familiar with instead.

For our current needs, soapUI used in conjunction with HermesJMS would give us a nice graphical environment for creating and running integration tests. See Introduction to JMS Testing for more information. Also note that soapUI can itself be run from Maven.

Since the main entry point of our application is exposed over HTTP, we use HttpUnit in our tests. Let’s look at our test case for invalid work submissions:

@Test
public void rejectInvalidWork() throws Exception
{
    String testPayload = "not_xml";
    ByteArrayInputStream payloadAsStream = new ByteArrayInputStream(testPayload.getBytes());

    WebConversation wc = new WebConversation();
    WebRequest request = new PostMethodWebRequest(WORK_API_URI, payloadAsStream, "text/plain");
    WebResponse response = wc.getResponse(request);

    assertEquals(200, response.getResponseCode());
    String responseText = response.getText();
    assertTrue(responseText.startsWith("NOT_XML"));
}

In this test, which is a JUnit 4 annotated test, we send a bad payload to our work manager and ensure that it gets rejected as expected. The WORK_API_URI constant points, of course, to the Mule instance being tested.
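For reference, WORK_API_URI could be defined as follows, assuming the tests run against a local Mule instance exposing the HTTP endpoint configured earlier (the localhost host name is an assumption; the port and path come from the endpoint configuration):

private static final String WORK_API_URI = "http://localhost:8080/api/work";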

The test for valid submissions is slightly more involved:

@Test
public void acceptValidWork() throws Exception
{
  String testPayload = "<valid_xml />";
  ByteArrayInputStream payloadAsStream = new ByteArrayInputStream(testPayload.getBytes());

  WebConversation wc = new WebConversation();
  WebRequest request = new PostMethodWebRequest(WORK_API_URI, payloadAsStream, "application/xml");
  WebResponse response = wc.getResponse(request);

  assertEquals(200, response.getResponseCode());
  String responseText = response.getText();
  assertTrue(responseText.startsWith("OK:"));

  String correlationId = responseText.substring(3);
  Message jmsMessage = consumeQueueMessageWithSelector("work", "JMSCorrelationID='" + correlationId + "'", 5000L);

  assertTrue(jmsMessage instanceof TextMessage);
  assertEquals(testPayload, ((TextMessage) jmsMessage).getText());
}

private Message consumeQueueMessageWithSelector(String queueName,
                                                String selector,
                                                long timeout) throws JMSException
{
  ConnectionFactory connectionFactory = getConnectionFactory();
  Connection connection = connectionFactory.createConnection();
  connection.start();

  // Consume a single message matching the selector, waiting up to the given timeout
  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
  MessageConsumer consumer = session.createConsumer(session.createQueue(queueName), selector);
  Message result = consumer.receive(timeout);

  // Closing the connection also closes the session and consumer created from it
  connection.close();
  return result;
}

Note that getConnectionFactory() is specific to the JMS implementation in use and, as such, hasn’t been included in the above code snippet.
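For example, assuming ActiveMQ is the JMS provider, a minimal sketch could look like this (the broker URL is hypothetical and must match the broker used by the Mule instance under test):

private ConnectionFactory getConnectionFactory()
{
  // Uses org.apache.activemq.ActiveMQConnectionFactory; the broker URL is an assumption
  return new ActiveMQConnectionFactory("tcp://localhost:61616");
}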

The important takeaway is that we use the correlation ID returned by the Validator as a means to select and retrieve the dispatched message from the target JMS queue. As you can see, Mule has propagated its internal correlation ID to the JMS-specific one, opening the door to this kind of characterization and tracking of test messages.

It’s time to run these two tests with the Failsafe plug-in. By convention, integration test classes are named IT*, *IT, or *ITCase and are located under src/it/java. This path is not on the build path of a standard Maven project by default, so we need a little bit of jiggery-pokery to make sure these classes are compiled and loaded. Because we do not want to add the integration test source path to all builds, we create a Maven profile (named it) and store all the necessary configuration in it.

Here is the profile configuration:

<profile>
  <id>it</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>add-test-source</id>
            <phase>generate-test-sources</phase>
            <goals>
              <goal>add-test-source</goal>
            </goals>
            <configuration>
              <sources>
                <source>src/it/java</source>
              </sources>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>failsafe-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>integration-test</id>
            <goals>
              <goal>integration-test</goal>
            </goals>
          </execution>
          <execution>
            <id>verify</id>
            <goals>
              <goal>verify</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>httpunit</groupId>
      <artifactId>httpunit</artifactId>
      <version>1.7</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</profile>

With this configuration in place in your pom.xml, you can run the following command to execute your first automated Mule integration tests:

mvn -Pit verify

See Also