How do you test core enterprise applications that are being migrated from a mainframe to a private cloud over the course of a single weekend? An Anubex migration project shows how it can be done successfully.

The challenge

The goal of the migration was to move the core application, its data, and its users to a service-oriented architecture supported by a Java/J2EE framework and an infrastructure with an RDBMS running on Linux in the company’s private cloud. The drivers of the migration included cost reduction, lower technology risk, and increased business flexibility and system availability.

Moving such a system over a weekend is already an enormous technical challenge, but ensuring that all functions work as they did before is a gigantic task. A real challenge is to limit, as much as possible, the work required to test and accept applications that are moved to a completely different OS. Two important concerns companies need to address before they put their transformed system into production and try to run their business with it are: “Does it all work?” and “How can we be sure?”

Standard procedure

Application software and system testing is a standard part of application design, development, maintenance and implementation, and likewise of migration projects. It is performed to give users an accurate view of the stability and quality of the application software and system, helping them decide whether to proceed to acceptance and implementation.

Elements of importance evaluated during testing are:

  • Fitness – design requirements are met
  • Performance – the functions are executed within an acceptable time
  • Usefulness – the application can be used in the intended environment
  • Correctness – the expected results are achieved

To support application software and system testing, a test plan is usually developed or updated, including very detailed test scenarios. A test scenario describes the method the tester follows to verify the application software and system against the user requirements. A typical test scenario contains details of what is being tested, the test data to use, and what is expected to happen when the test is executed. Selecting or creating test data and test systems is quite intricate, because the developer of the test scenario needs to think about the many diverse aspects of the test’s intended purpose.

The number of potential test cases for even a simple application function can be very high. Consequently, developing a new test scenario, or maintaining an existing one, is labor intensive, because it contains details of every single application function that needs to be evaluated, and each test case requires a very precise specification, such as:

  • Do the application programs open and close properly?
  • Can data be entered, read from and written to the database?
  • Can data be sent to and retrieved from external systems?
  • Is invalid data rejected?
  • When something goes wrong, does the correct error message appear?
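The kind of precise specification described above can be sketched in code: each test case pairs an input with the result the tester expects, and a runner compares expected against actual results automatically. This is a minimal illustrative sketch only; the names (`TestCase`, `functionUnderTest`, `runAll`) are hypothetical and the validator is a trivial stand-in, not part of any tool mentioned in this article.

```java
import java.util.List;
import java.util.Objects;

public class TestCaseSketch {

    // A test case: what is being tested, the test data, and the expected result.
    record TestCase(String description, String input, String expected) {}

    // Trivial stand-in for an application function under test:
    // it rejects blank input with an error message.
    static String functionUnderTest(String input) {
        if (input == null || input.isBlank()) {
            return "ERROR: invalid data rejected";
        }
        return "OK: " + input.trim();
    }

    // Run every case and report pass/fail by comparing expected vs. actual,
    // so no visual inspection of results is needed.
    static boolean runAll(List<TestCase> cases) {
        boolean allPassed = true;
        for (TestCase tc : cases) {
            String actual = functionUnderTest(tc.input());
            boolean passed = Objects.equals(tc.expected(), actual);
            System.out.println((passed ? "PASS " : "FAIL ") + tc.description());
            allPassed &= passed;
        }
        return allPassed;
    }

    public static void main(String[] args) {
        List<TestCase> cases = List.of(
            new TestCase("valid data is accepted", "customer-42", "OK: customer-42"),
            new TestCase("invalid data is rejected", "   ", "ERROR: invalid data rejected")
        );
        System.out.println(runAll(cases) ? "ALL PASSED" : "SOME FAILED");
    }
}
```

Encoding each case as data rather than as manual steps is what makes the later point about automation possible: the same runner can execute thousands of such cases without a human checking each screen.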

In this traditional approach to testing, companies often enlist end users to test the application software and system. In other cases, companies have a specialized team conduct the testing, or they engage external specialists in testing services. There are nevertheless some important drawbacks:

  • Testing can only be successful when up-to-date scenarios are available for all cases that require testing
  • As mentioned before, creating test scenarios, and keeping the scenarios up-to-date, is a huge task
  • Human testers who rely on visual inspection can miss errors, so the testing method is not 100% reliable

As application software and system complexity increases, a smart testing strategy is required to select a feasible set of test cases. The more test cases there are, the more complex the test strategy becomes. And more time and resources required for testing means increased cost.

The case at hand

Take the system of Société Générale Private Banking in Luxembourg: approximately 10 million lines of Software AG’s Natural in close to 20,000 application programs, accessing hundreds of gigabytes of online data stored in Software AG’s pre-relational database management system ADABAS. The system is used by thousands of users, executes 7,000 scheduled batch processing jobs daily, receives some 1,000 files transferred per day from external sources and handles more than 50,000 service calls over Entire-X each day.

The project involves the use of tools to automate the migration of this core system in a single big-bang step, to avoid business interruption while the project is in progress. A key benefit of these tools is the speed with which converted code can be made available for testing. The functionality of the system is entirely preserved, which makes this a project of a highly technical nature. Consequently, the user community is hardly involved, except during final acceptance of the system.

Even though the functionality of the system is not touched by the migration project, testing is still indispensable. In this case, responsibility for all tests before acceptance of the system by the user community lies with the migration team, which is not necessarily acquainted with the workings of the system. Ensuring that all common cases are documented in test scenarios ready for use by the migration team is an immense task for a system of this size, let alone running and verifying all cases until the business users can accept that the migrated system is production ready.

Many companies refrain from performing such a migration project because they realize that the volume of work related to preparing, executing and validating tests undermines the ROI and destroys the business case. How can the burden of manually developing and executing test scenarios be removed, and how can the human interaction needed to validate the test results be minimized?

If it were possible to extract and store test scenarios from the day-to-day system activities, in other words from all transactions generated by the users and by other sources, for later use during testing, that would already be very helpful.
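The idea of capturing live transactions and replaying them against the migrated system can be sketched as follows. This is a hedged illustration, not Anubex’s actual tooling: the names (`Recording`, `capture`, `replay`) are hypothetical, and both “systems” are trivial stand-ins for the real mainframe and migrated applications.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.function.UnaryOperator;

public class CaptureReplaySketch {

    // One recorded transaction: the input plus the output the source system produced.
    record Recording(String input, String observedOutput) {}

    // Capture phase: pass live inputs through the source system and keep a tape
    // of every transaction, turning day-to-day activity into test scenarios.
    static List<Recording> capture(UnaryOperator<String> sourceSystem, List<String> liveInputs) {
        List<Recording> tape = new ArrayList<>();
        for (String input : liveInputs) {
            tape.add(new Recording(input, sourceSystem.apply(input)));
        }
        return tape;
    }

    // Replay phase: feed the recorded inputs to the migrated system and flag
    // every output that differs from what the source system produced.
    static List<String> replay(UnaryOperator<String> migratedSystem, List<Recording> tape) {
        List<String> mismatches = new ArrayList<>();
        for (Recording r : tape) {
            if (!Objects.equals(r.observedOutput(), migratedSystem.apply(r.input()))) {
                mismatches.add(r.input());
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        // Trivial stand-ins: both systems format an account balance lookup.
        UnaryOperator<String> mainframe = s -> "BAL:" + s.toUpperCase();
        UnaryOperator<String> migrated  = s -> "BAL:" + s.toUpperCase();

        List<Recording> tape = capture(mainframe, List.of("acct-1", "acct-2"));
        List<String> mismatches = replay(migrated, tape);
        System.out.println(mismatches.isEmpty()
            ? "migrated system matches recorded behavior"
            : "mismatches: " + mismatches);
    }
}
```

Because the expected outputs come from the running production system rather than from hand-written scenarios, both the scenario-authoring burden and the visual verification step largely disappear: validation reduces to an automated comparison.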

This approach made it possible for Société Générale to significantly reduce the human effort required to test this large and complex system, to the point where it was stable enough to enter final acceptance testing before the switch-over to production was made.