JAX London Blog

JAX London, 09-12 October 2017
The Conference for JAVA & Software Innovation

Sep 22, 2016

In this article, JAX London speaker Andrew Morgan introduces testing patterns you can use for test automation in applications with external dependencies.

The original post was published on OpenCredo.

Sometimes it can be difficult to write automated tests for parts of your application because of complexities introduced by an external dependency. It may be flaky, impose some sort of rate limiting, or require sensitive information which we don’t want to expose outside of our production environment. To get around this, teams might take the approach of manually stubbing the service or using mocks – but the former is tedious and error prone, whereas the latter doesn’t test the collaboration at all.

To explain a testing pattern which we can use to help us, I’ve chosen to create a fictional web application with a requirement to sign in with Github. This would involve clicking on a login link, authorising Github to access your basic user details, and then creating a session for that user. Behind the scenes we are using the OAuth2 Authorisation Code Grant, which is complicated and involves lots of redirects in the browser, along with server-side exchanges of authorisation codes for access tokens.

Testing against the real Github OAuth2 flow

The first thing we can do is write some automated Webdriver tests against the real Github and get them to pass. I’m not going to explain how to implement OAuth2, but we can assume it’s done correctly. The tests may look something like this:

@Test
public void shouldBeAbleToLogInWithGithub() {
    // When
    // Log-in

    // Then
    // Check for user on landing page
}

As I previously stated, it’s not great for our tests to be dependent on the real Github. Some of the reasons are:

1. We don’t have isolation.
2. We need a real Github account to get it to pass – this means storing usernames and passwords in the project, or having to pass them in each time we run the test.
3. We can’t keep re-running the test, because eventually Github will pick up on the number of authorisation attempts for the user and display a warning message – essentially a rate limit.

Recording our interactions

Now that we have a green build, we can record all our Github interactions in order to play them back in future test runs. A simple way to do this is with the Hoverfly JUnit Rule – simply include the following at the top of your test:

@Rule
public HoverflyRule hoverflyRule = HoverflyRule.inCaptureMode("src/test/resources/github-oauth2-login.json").build();


Behind the scenes, the rule starts a Hoverfly process on an unused port in capture mode, and then sets the JVM proxy system properties to use it. In my case, I’m writing a Spring @WebIntegrationTest, which means the Webdriver code runs in the same JVM as the application, so the server-side exchange of tokens between the application and Github will be automatically intercepted. If you want to run the test and application separately (which is quite a reasonable thing to do), then you can run your own Hoverfly externally and set the JVM proxy yourself to use it when starting the application. Either way, what has been captured will be written to the given github-oauth2-login.json file.
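If you do run your own Hoverfly externally, pointing the application's JVM at it only requires the standard proxy system properties. A minimal sketch – the host and port here are illustrative, and would be whatever your external Hoverfly is listening on:

```java
public class ProxyConfig {

    /**
     * Route the JVM's HTTP and HTTPS traffic through an externally
     * started Hoverfly instance. The port is illustrative -- use
     * whichever port your Hoverfly process is listening on.
     */
    public static void useHoverflyProxy(String host, int port) {
        System.setProperty("http.proxyHost", host);
        System.setProperty("http.proxyPort", String.valueOf(port));
        System.setProperty("https.proxyHost", host);
        System.setProperty("https.proxyPort", String.valueOf(port));
        // Keep requests to the application under test off the proxy
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1");
    }

    public static void main(String[] args) {
        useHoverflyProxy("localhost", 8500);
    }
}
```

The same values can equally be passed as `-Dhttp.proxyHost=...` flags when starting the application.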

We also want to record the various Github pages and redirects in Firefox – which in this case does not respect the JVM proxy settings. We can configure it explicitly as follows:

private void configureFireFoxProxy() {
    final DesiredCapabilities desiredCapabilities = new DesiredCapabilities();
    final Proxy proxy = new Proxy();
    proxy.setHttpProxy("localhost:" + hoverflyRule.getProxyPort());
    proxy.setSslProxy("localhost:" + hoverflyRule.getProxyPort());
    proxy.setNoProxy("localhost");
    desiredCapabilities.setCapability(CapabilityType.PROXY, proxy);
    desiredCapabilities.setCapability(CapabilityType.ACCEPT_SSL_CERTS, true);
    driver = new FirefoxDriver(desiredCapabilities);
    driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
}

Unfortunately this is a tad clunkier, but to summarise:

1. We create a proxy configuration to route traffic through the Hoverfly instance created at runtime.
2. We set noProxy to localhost, because the pages served from there belong to the application under test – it’s only the external dependencies that we want to capture.
3. We set ACCEPT_SSL_CERTS, because Hoverfly uses a self-signed certificate which we would like Firefox to trust.

If you run the test again, it’ll still go green – but this time it will create a github-oauth2-login.json file containing the requests and responses made to Github. It’s just JSON, so you can open it up and take a look.

Simulating Github OAuth2 flow

As we’ve captured the data during our previous test run, we can switch the rule into simulate mode and point it at the classpath location of our file:

@Rule
public HoverflyRule hoverflyRule = HoverflyRule.buildFromClassPathResource("github-oauth2-login.json").build();


We can re-run the tests and our build will still be green, only this time Hoverfly is responding with the pre-recorded data. There’s no interaction with the real Github, and by doing this we’ve resolved a couple of problems:

1. We have isolation.
2. No more issues with rate limiting – we’ll always get the same response.

We still can’t push, though, because we have some secrets in our source code, such as a Github username and password, and an access token generated by the OAuth2 handshake. Fortunately, all we need to do is open up the JSON file and replace any sensitive information. Then we can update our test to reflect our changes:

@Test
public void shouldBeAbleToLogInWithGithub() {
    // When
    // Log-in (using the sanitised credentials)

    // Then
    // Check for user on landing page
}

The data generated by the automated tests will now match the simulation, so we’ll still get a green test run – just with a user that doesn’t actually exist in the real Github.
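The sanitisation step itself can be as simple as a find-and-replace over the captured file. A sketch – the secret values and their fake replacements here are illustrative; in practice you’d list the real username, password and access token that were captured:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SimulationSanitiser {

    /**
     * Replace sensitive values in a captured Hoverfly simulation file
     * with fake placeholders. The values below are illustrative only.
     */
    public static String sanitise(String json) {
        return json
                .replace("real-github-user", "fake-user")
                .replace("s3cr3t-password", "fake-password")
                .replace("realAccessToken123", "fake-access-token");
    }

    public static void main(String[] args) throws IOException {
        Path simulation = Paths.get("src/test/resources/github-oauth2-login.json");
        String sanitised = sanitise(new String(Files.readAllBytes(simulation)));
        Files.write(simulation, sanitised.getBytes());
    }
}
```

Running this once after a capture keeps the checked-in simulation free of real credentials.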

The only issue now is whether the Github simulation becomes stale. I don’t think they’re going to make a breaking change to OAuth2 any time soon, but assuming they might, we can intermittently run our test against the real service and update our simulations accordingly. It could be a pain with data sanitisation, but it means we would eventually pick up any breaking changes.
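One way to structure that intermittent re-record is to pick the rule’s mode from a flag at runtime, so a scheduled CI job can periodically flip the build back into capture mode. A sketch – the `refresh.simulations` property name is made up for illustration:

```java
public class SimulationRefresh {

    /** Modes mirroring the two HoverflyRule factory methods used above. */
    public enum Mode { CAPTURE, SIMULATE }

    /**
     * Decide whether this run should re-record against the real Github
     * or play back the stored simulation. The "refresh.simulations"
     * property name is illustrative -- a nightly CI job could set it
     * to true to keep the simulation from going stale.
     */
    public static Mode chooseMode() {
        return Boolean.getBoolean("refresh.simulations") ? Mode.CAPTURE : Mode.SIMULATE;
    }
}
```

The test setup would then build the rule in capture mode when the flag is set and simulate mode otherwise, with the sanitisation step re-run after each fresh capture.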


I hope this opens up a few ideas on how to get around the problems of integration testing with external dependencies. Rather than going through a tedious process of setting up stubs (which can take a long time and be error prone), or even manual testing, we can just write against the real thing, record, sanitise, and then play back.
