Mark59 Banner

P&V Framework

Image courtesy Wikipedia: Exorcism_of_the_Gerasene_demoniac

Mark59 Version 4.0.0-rc-1 is now Available

This release is targeted at demonstrating and reporting Selenium’s new ability to access the Network Layer. A few extra lines of code in your test allow for exciting possibilities!
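For a flavour of what those extra lines can look like, here is a minimal sketch using the standard Selenium 4 CDP (Chrome DevTools Protocol) bindings. Note this is plain Selenium rather than Mark59-specific code, and the versioned devtools package (v85 below) changes between Selenium releases:

  import java.util.Optional;
  import org.openqa.selenium.chrome.ChromeDriver;
  import org.openqa.selenium.devtools.DevTools;
  import org.openqa.selenium.devtools.v85.network.Network;

  public class NetworkCaptureSketch {
      public static void main(String[] args) {
          ChromeDriver driver = new ChromeDriver();
          DevTools devTools = driver.getDevTools();
          devTools.createSession();
          // turn on network events (no buffer size limits)
          devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));
          // print the status and URL of every response the browser receives
          devTools.addListener(Network.responseReceived(), event ->
                  System.out.println(event.getResponse().getStatus() + " "
                          + event.getResponse().getUrl()));
          driver.get("https://example.com");
          driver.quit();
      }
  }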

Overview

"Mark59 enables development teams to perform their own performance testing by allowing the re-use of Java Selenium test automation scripts within JMeter"

Mark59 is an open-source, Java-based framework, or more accurately a set of tools, that aims to make performance testing of applications, particularly web-based applications, achievable without purely specialist performance test skills.

In fact we don’t really like calling our Mark59 tooling a ‘framework’. Too often in the world of automation we have seen ‘clever’ frameworks hide or overlay the underlying technology, so the people using them don't get a fair chance to learn the skills they need in the industry. Our goal is the opposite. We hope our simple integration of two of the most popular products in automation today, JMeter and Selenium, along with the work we have put into our examples and documentation, will give Performance Testers and Automation Testers a way into each other’s world.

Our tooling is intended to work in an environment where performance tests are maintainable and repeatable, perhaps being triggered from a Continuous Integration server such as Jenkins. We hope our Docker samples and guide can assist with picking up these important skills. Mark59 is well suited to a project team working with the practices that, since we first started this project, have become popularised as DevOps. It is built to run on Windows or Linux-based operating systems.
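To give a feel for the JMeter side of the integration, the sketch below shows a Selenium flow exposed as a JMeter Java Request sampler using JMeter's standard AbstractJavaSamplerClient. Mark59 itself supplies richer base classes and transaction-timing helpers (see the User Guide); the class name HomePageSampler and the APP_URL parameter here are purely illustrative:

  import org.apache.jmeter.config.Arguments;
  import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
  import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
  import org.apache.jmeter.samplers.SampleResult;
  import org.openqa.selenium.WebDriver;
  import org.openqa.selenium.chrome.ChromeDriver;

  // A bare-bones Java Request sampler that drives a Selenium flow from JMeter.
  public class HomePageSampler extends AbstractJavaSamplerClient {

      @Override
      public Arguments getDefaultParameters() {
          Arguments args = new Arguments();
          args.addArgument("APP_URL", "https://example.com");  // illustrative parameter
          return args;
      }

      @Override
      public SampleResult runTest(JavaSamplerContext context) {
          SampleResult result = new SampleResult();
          result.setSampleLabel("load_home_page");
          WebDriver driver = new ChromeDriver();
          result.sampleStart();                                 // start timing the transaction
          try {
              driver.get(context.getParameter("APP_URL"));
              result.setSuccessful(true);
          } catch (Exception e) {
              result.setSuccessful(false);
          } finally {
              result.sampleEnd();                               // stop timing
              driver.quit();
          }
          return result;
      }
  }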

Sample Screens:

Trend Analysis

Visualise your test against trends, highlight failed SLAs (and more)

Metrics

Create your own bespoke metrics

Reports

Split your test reports into logical groupings

Selenium

Easily convert your Java Selenium test automation for use in JMeter

Downloads

Select the appropriate zip version to download the executable jar files and samples. Linux and Windows compatible.

As of Mark59 v3+ all projects are contained in a single zip file.

If you are looking for the source code, see: https://github.com/mark-5-9/mark59

 

Current Release:

mark59-4.0.0-rc-1.zip (280 MB)

Release Summary:

Version 4.0.0-rc-1

Significant changes for this release:

Documentation for the release can be found in the "Documentation" section.

 

Previous Release:

mark59-3.3.zip (206 MB)

Documentation

Our Mark59 User Guide documentation can be found below:

The Mark59 User Guide for Version 4.pdf (4 MB)

Get a feeling of how it works in a few minutes by going through the 'Quick Start' chapters.

About

The Background to Mark59

Mark59 started from ideas conceived over five years ago, and has since developed into our latest Version 4 release. It was developed by a team of working Performance and Volume testers at the Australian insurance company IAG in Melbourne. Over time, largely out of the necessity of maintaining multiple and varied applications, our team changed practices from a traditional way of testing to something very similar to what is now called DevOps, and along the way created the set of tools that has become Mark59.

A core team has worked on the project for most of its life, but many, many ideas came from the excellent Performance Testers who have been part of the team over the years, not to mention the (sometimes rather blunt but valuable) feedback we have received from our client projects and others. Great ideas and suggestions have come to us from so many people that we fear we have missed some from the acknowledgements.

Contact Us

 You can contact us to give suggestions or feedback at mark59ptf@gmail.com. We are a small working team, but we will do our best to respond.

The People
The Core Team:
Major Contributors:
Grateful Acknowledgements:
Our Banner and the name 'Mark59'

The banner is part of a medieval illumination of the Gerasene demoniac exorcism, courtesy Wikipedia: Exorcism_of_the_Gerasene_demoniac

The biblical story of the 'Exorcism of the Gerasene demoniac' appears in the New Testament in all of the synoptic gospels (Matthew, Mark and Luke), but the best-known account is from Mark's gospel. At a critical point in the story Jesus challenges the demon in a possessed man to name itself, and discovers he is not facing one demon but many when the famous reply comes "My name is Legion, for we are many" (Mark 5:9).

We couldn't help relating our (admittedly trivial) struggles with turning a single Selenium script into many to this wonderful story, and so 'Mark59.com'.

Blog

Check out our blog entries below for recent Mark59 discussion:

28 Aug 2020 - Mark59 v3.0 Release and User Guide

The v3.0 release is a major release of Mark59; it includes the changes summarised below:

  • New project: mark59-server-metrics-web. A significant upgrade of server metric capture in Mark59.
  • Project rename: dataHunterPVTest to dataHunterPerformanceTestSamples
  • Renamed the MySQL database pvmetrics to metricsdb (naming consistency across projects)
  • All projects can use the H2, MySQL and Postgres databases (enables quick start-up for demo and learning)
  • Aligned bat files to the new download structure (a single zip file with all projects)
  • Now using OpenCSV for csv reads/writes (some edge-case issues were found with the previously used methods)
  • Updated to JMeter 5.3, with multiple dependency jar updates (confirmed working up to chromedriver 85)
  • Display Mark59 build info on JMeter Java Request panels
  • Multiple small changes and code clean-up

Download the release via the "Downloads" section.

Documentation for the release can be found within the "Documentation" section.

 

05 Feb 2020 - Using Continuous Integration for Performance Testing. Is it worth the effort?

By: Philip Webb

The short answer is a definitive yes. But, as you may suspect, there’s a little more to it.

During my #NeotysPAC talk, I described the long and winding road my team took developing our “Mark59” open-sourced solution which enables us to execute performance tests daily via a Jenkins server, and how we automated SLA results analysis.

In this blog, I’m going to discuss some of the experiences we’ve had running Performance Testing via CI/CD (Continuous Integration/Continuous Delivery). We’ve been running a CI server in one form or another for at least three years now, so I’d like to think we’ve learned some (not always easy) lessons along the way. A lot of this relates to our Mark59 framework, but the principles are more general, so I hope at least some of our ideas may be useful to someone thinking about a CI/CD approach in their workplace.

How my team runs a CI/CD pipeline.

We run CI/CD using Jenkins, on which we deploy and run JMeter, and until recently LoadRunner, performance tests in a mix of daily and weekly runs. We also work with the development and automation test teams, particularly with Selenium script development and deployment. DevOps is the new buzzword for this, I hear; I’ve just read Stijn Scheper’s PAC blog, where he talks about principles similar to those we have adopted.

Selenium scripting targeted for JMeter has become a core component of our work. From a DevOps perspective, I’d like to add an extra dot point to Stijn’s framework suggestions:

  • keep the application (script) logic self-contained: that is, it can be run stand-alone

For example, this week I’ve been working with an application delivery team that is using our Selenium scripts in a CI run to verify production environments. They do have their own regression suite, but it’s complex and the application logic is difficult to extract. DevOps in action!
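As a hypothetical sketch of that dot point (not Mark59 API code): keep the application flow in a plain method with no JMeter or framework dependencies, and give it a main() entry point, so the same logic can be called from a JMeter sampler or run stand-alone, for example to verify a production environment.

  import org.openqa.selenium.WebDriver;
  import org.openqa.selenium.chrome.ChromeDriver;

  public class HomePageFlow {

      // The application logic: plain Selenium, no JMeter or framework types.
      public void run(WebDriver driver, String appUrl) {
          driver.get(appUrl);
          // ... page interactions and checks go here ...
      }

      // Stand-alone entry point, usable for debugging or environment verification.
      public static void main(String[] args) {
          WebDriver driver = new ChromeDriver();
          try {
              new HomePageFlow().run(driver, "https://example.com");
          } finally {
              driver.quit();
          }
      }
  }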

Our Experiences: The Good Stuff

So, what are the pros of continuous testing we have found? I’d like to discuss something that happened to several of our tests this month. Have a look at this graph.

Graph Image

It shows injector CPU % utilization over the last 50 or so days for one of our most important tests. Test days along the bottom running left to right, past to present, with CPU % the dependent axis. As an aside, this graphic comes from the Mark59 Trending Analysis tooling – the ability to display historical run data graphically was a game-changer for us. Anyway, this is a test that runs Selenium scripts very heavily, so when CPU utilization on this injector went from 40-something to over 60 it affected transaction times (over 50% is the point transaction times tend to get hit). In this case, the CPU hit wasn’t quite enough to break the transaction SLAs, we actually picked it up via our metric SLAs:

Metric SLA Failed Warning : metric out of expected range for CPU_UTIL Average on {server name}. Range is set as 1.0 to 55.0, actual was 63.754
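Conceptually, the check behind that warning is just a range test on an averaged metric. The sketch below is hypothetical (the actual Mark59 metric SLA checks are driven by the ranges you configure and are more involved), but it shows the idea:

  public class MetricSlaCheck {

      // Return a warning message if the averaged metric falls outside its configured range,
      // or null if it is within range.
      static String checkRange(String metric, String server, double min, double max, double actual) {
          if (actual < min || actual > max) {
              return String.format(
                      "Metric SLA Failed Warning : metric out of expected range for %s on %s. "
                      + "Range is set as %.1f to %.1f, actual was %.3f",
                      metric, server, min, max, actual);
          }
          return null;
      }

      public static void main(String[] args) {
          System.out.println(checkRange("CPU_UTIL Average", "injector01", 1.0, 55.0, 63.754));
      }
  }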

So, we raised a problem request on which we were able to state the day the issue started. Even so, as happens at large corporate sites, it took a week or so to identify the team and change responsible. It turned out our injector, a Windows Virtual Server, had been moved off one physical cluster onto another. We asked for it to be moved back, but in fact, the Server Admin team moved it to a newer physical cluster, with the result that CPU utilization dropped dramatically. We got the same CPU specs in terms of the number of cores and reported clock speed, but the newer CPU stack is much better at handling concurrent processes. A critical factor when running multiple concurrent chrome browsers being driven by Selenium tests (in case you were wondering, we’re on Xeon Gold servers now).

Key learning: The ability of a CPU to handle concurrent processes is an important requirement for the Selenium component of our framework.

But what if we hadn’t been running the test via CI? What if we’d been using a traditional project-by-project approach to testing? Well, there is a good chance we might not have tested this application until next year, so it would have been an extremely difficult task to track down this change. As we would most likely have been making script changes for the new project, we would very probably have been confused about what had happened, assumed it was a script issue – and missed our key learning.

The takeaway from this is that we can identify changes that impact performance as they occur. It’s proven critical several times now.

There are many other pros that I could talk about as well. In my PAC talk, I discussed some, but you can really summarise them as the advantages you get from automating a process, and so reducing the risk of human error compared with a more hit-and-miss manual approach.

Our Experiences: The Challenges

Of course, there can be cons to Continuous Integration testing as well. A judgement call needs to be made about the importance of an application, and about the nature of the application itself, to determine whether it should be included in a CI/CD testing pipeline. Is this application ever going to require more than a few performance test runs? If this application fails in production, what are the consequences? Can it be down for a few weeks while it’s being fixed, or is it mission-critical? What is the appetite of the application team or application owner for load testing? Do they just see it as a box that needs ticking, or is it of importance to them? How stable is the application data? Is it controllable, or too dynamic to keep running the same scripts against?

Probably the nastiest issues come when addressing the interfaces or dependencies of the application in question, and determining the consequences to downstream systems of running the test continuously. We have largely mitigated these issues by using mocked responses, but it can be a complex problem.

Get any of these judgements wrong, and you can find yourself trying to run a test that no-one cares about, that is troublesome to maintain, and basically a big waste of time and money. On the other hand, if you don’t run a test in CI that you should have, and it incurs a spectacular performance failure one day, you will know there was a risk and a cost you could have avoided.

Mitigation: Where we have Improved

One question you may be wondering about is why we decided to use Selenium scripting. Originally, we used LoadRunner with VuGen Web scripting for our CI/CD pipeline. As this technology works at the HTTP level, we found that we were constantly having to update the scripts even for the most minor application changes. Script maintenance became a major headache. So, we made the call to use Selenium via JMeter. Both use Java, and our most critical systems are Java-based, so it was a natural fit for us. I won’t go into the implementation details; suffice it to say that the maintenance effort for our scripts dropped dramatically. In fact, our team size has halved from our peak. But no need to stress about job loss – as people in our team have picked up extra skills outside pure performance testing, they’ve got work in all sorts of interesting areas.

By the way, our Mark59 framework can still cater for a CI/CD pipeline using LoadRunner as well as JMeter. Generally, if a performance test tool produces well-defined output, it should be possible to load and process its results in a CI/CD pipeline. Hint, hint for anyone wanting to give it a go with NeoLoad.

Bottom line, the problem with CI/CD is…

Complexity. It doesn’t really matter how much gloss, marketing, or hype you want to put on it: with the current technology and tools available, building a CI/CD DevOps pipeline is complex. Perhaps by its nature, it’s never going to be an easy thing.

In our CI/CD solution, we have adopted a few practices to keep things manageable. We try to create as few different types of jobs and job streams as possible, using a template approach to our jobs with easy parameterisation to achieve this. We break up our application streams into different Jenkins tabs, so we can see at a glance the state of our applications. We send out a daily results email with the appropriate job links so we and the application teams are aware of issues without digging through Jenkins. Within our Mark59 framework, we have tried very hard to make problem resolution as easy as we can by having various options and types of logging available.

But at the end of the day, there is a steep learning curve to overcome with CI/CD. One way we hope to improve things for anyone using our framework is that we will document the major jobs and job steps involved, and as much as possible provide samples to help with setup.

Also, we hope to create a publicly available AWS AMI (Amazon Machine Image) on which we will place all our tooling and sample CI/CD pipelines. It's so much easier to build things if you can start from a working example.

Anyway, I hope that gives you a few things to think about, good luck!

 

03 Feb 2020 - Neotys PAC to the Future of Performance - Mark59

By: Philip Webb

View our Mark59 presentation from the "Neotys PAC to the Future of Performance" conference.

YouTube link: https://youtu.be/2x_BOF0SINM