Ten years ago, in the summer of 2005, I was part of a team that was developing the first generation of an Anubex automated testing tool for migrated applications, called Record and Play. This development was crucial for a large BS2000 mainframe-to-UNIX migration project that we had started earlier in the year. The architecture of Record and Play (the recording part, to be precise) relied heavily on the fact that each interactive program passed through one fixed point in the application code.
I have very fond memories of those days, when we discussed how best to implement different testing strategies in the tool, sitting around whiteboards and coming up with algorithms such as the faster-incorrect-replay, which sounds worse than it actually was. While the tool helped bring the customer live successfully, limiting both the number of customer testing resources involved and the issues discovered, all of us knew that much more could be achieved in the realm of automated testing…
Now fast forward to today: Spring of 2015 as I’m writing this article. These days I’m a part of the Anubex Migration Factory, and we use TestMatch to reduce testing efforts, from both Anubex and customer staff, in all of our projects. TestMatch can best be described as Record and Play on steroids.
One of the main changes under the hood is that TestMatch no longer relies on the structure of the legacy application to create the recording that will be replayed: the dependency on all code passing through a common entry point no longer exists. Instead, TestMatch operates on the low-level protocols used by the different legacy platforms (IBM, BS2000, Unisys, UNIX).
This (better) approach not only makes the solution independent of the application's structure, but also reduces the system resources required to produce the recording (or trace file, as it is currently implemented).
But that is just the tip of the iceberg. As we have come to learn the hard way, a legacy application rarely has only terminal emulators connected to it during the daily operational window. Typically one or more message-based interfaces are supported as well (MQ, TIBCO, EntireX, …), and it turns out that no environment is ever really free from its share of batch processing during that same window (started from a job scheduler or otherwise). All of these more complicated interfaces now have their place in TestMatch, and new technologies can easily be supported thanks to a pluggable design.
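As a rough illustration of what such a pluggable design can look like (all names here are hypothetical, not TestMatch's actual API), each interface type registers a handler for its protocol, so a new technology only needs to supply a new handler:

```python
# Hypothetical sketch of a pluggable protocol-handler registry.
# Names and decoders are illustrative only, not TestMatch internals.
from typing import Callable, Dict

HANDLERS: Dict[str, Callable[[bytes], str]] = {}

def register(protocol: str):
    """Decorator that registers a decoder under a protocol name."""
    def wrap(fn: Callable[[bytes], str]) -> Callable[[bytes], str]:
        HANDLERS[protocol] = fn
        return fn
    return wrap

@register("3270")
def decode_3270(raw: bytes) -> str:
    # Terminal streams from the mainframe arrive in EBCDIC (cp500 here).
    return raw.decode("cp500", errors="replace")

@register("mq")
def decode_mq(raw: bytes) -> str:
    # A message-based interface carrying UTF-8 payloads.
    return raw.decode("utf-8", errors="replace")

def decode(protocol: str, raw: bytes) -> str:
    """Dispatch a raw payload to whichever handler is plugged in."""
    return HANDLERS[protocol](raw)
```

Supporting an additional interface (TIBCO, EntireX, a job scheduler, …) would then amount to registering one more handler, without touching the dispatch logic.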
One of the most recent steps forward that we have made is that TestMatch can now also deal, to some extent, with the EBCDIC to ASCII sorting differences that are inherent when you make the transition from mainframe to distributed systems. Where previously such differences resulted in the termination of the executing session, the latest versions of TestMatch can recover from such differences and continue to replay the remainder of the session.
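To make the sorting difference concrete: in EBCDIC, lowercase letters sort before uppercase letters, which in turn sort before digits, while ASCII uses the opposite order. A minimal illustration (using Python's standard cp500 EBCDIC codec; this is not TestMatch code):

```python
# The same strings sort differently under ASCII and EBCDIC byte order.
items = ["a1", "A1", "1a"]

# ASCII: digits < uppercase < lowercase
ascii_order = sorted(items, key=lambda s: s.encode("ascii"))
print(ascii_order)   # ['1a', 'A1', 'a1']

# EBCDIC (cp500): lowercase < uppercase < digits
ebcdic_order = sorted(items, key=lambda s: s.encode("cp500"))
print(ebcdic_order)  # ['a1', 'A1', '1a']
```

Any migrated program that relies on database or file sort order can therefore legitimately return rows in a different sequence after the move to ASCII, which is exactly the kind of difference a replay tool has to tolerate rather than treat as a failure.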
From a reporting point of view, we have added a wealth of statistics that go beyond a simple good-or-bad verdict: they also include the names of transactions that were recorded but never executed, and average performance statistics that help identify potential candidates for performance optimization. In that last category, stress-testing functionality is also available to monitor the system under loads beyond the standard level and determine the system's breaking points.
Aside from these strides forward in the underlying core TestMatch technology, we have also picked up a few best practices along the way. Practices that we now share with our customers and partners.
For example, one of the most common rookie mistakes is made while defining the project’s testing strategy. Many people want to start with a massive production recording (a full day is a common request) and expect it to be replayed 100% identically. There are two issues with this approach. The first is that achieving 100% is typically not possible, due for example to the aforementioned EBCDIC to ASCII sorting issues, and to the possible impact of speed differences in the migrated programs, which may affect the remainder of the replay operation.
The second issue is more fundamental, and it relates to the misconception that such a massive production recording has more value than a well-scoped, smaller recording made by key test users. The latter approach typically takes a bit more effort to set up, but it yields a more valuable recording: redundancy within the recording is reduced (after all, what is the value of executing the same program 5,000 times?), and its smaller size means it can be manipulated much faster than the large version…
Combining this wealth of power and flexibility in TestMatch with our extensive experience using automated testing tools in legacy migration projects, we have significantly shortened the turnaround time in our projects while at the same time limiting the effort required to achieve it. Going back to the early days of Record and Play would be a great trip down memory lane, but it would feel very much like getting back into the car my dad bought me when I was first learning how to drive: no air conditioning, four gears, noisy, and no power steering. On the other hand, I once taped the bumper back on after a small incident, but that is another story.