How we help

After more than 50 years of developing mission-critical systems, the IT industry reached a tipping point at some point during the last decade, with an interesting characteristic: almost every organization on the planet now has systems, application code, and databases that can be labeled “legacy”.

Enterprise IT has witnessed the eras of the mainframes (IBM, Siemens, Bull, ICL, Unisys, etc.), of the minis (Wang, DEC, HP, etc.), of terminal processing, of client-server architectures and GUI interfaces, of the Web and the Internet, and now, without doubt, it is entering the era of the Cloud. With each of these shifts, TP-processing and application-server technologies, database technologies, programming languages, and development environments evolved in a quite disruptive way. However, a new generation most often did not replace the previous one; instead it was put into use on top of it. To complicate things even further, all kinds of complex interfaces had to be built between these systems of varying age and vintage.

As a result, almost every company on the planet, large or small, public or private sector, now faces a “legacy” issue. At the same time, there is no slowdown in the need to support the business with new functionality; quite the contrary.

The following conclusions are inescapable:

  • Enterprises currently spend on average between 60% and 80% of their IT budget on maintaining existing systems rather than on new development, and half of these costs relate to the time and resources spent analyzing and adapting legacy systems.
  • There is a growing understanding that the problem, if not dealt with properly, will increase rather than decrease: every newly adopted innovation risks creating a legacy issue of its own.

Apart from the budgetary implications, legacy environments are considered problematic for several other reasons:

  • Mere lack of platform support from the (original) vendor, or dramatically increased support and maintenance costs.
  • A diminishing skill base (an estimated 40% of legacy IT workers will retire within the next five years).
  • A potential lack of agility, demonstrated by a reduced level of new functionality (against an accelerating pace of technology obsolescence), a lack of flexibility toward new business initiatives (digital enablement creating new demands), or a lack of native support for modern enhancements (e.g. web services, mobility).
  • A very serious obstacle to moving to a cloud-based solution, and to saving money.

Many approaches to solving the problem have been tried in the past (see Figure).

A very widely used approach has been to replace the legacy mainframe environment by rewriting the applications or by implementing packaged software. Too often, however, these projects, once they reach a certain size, have failed: the business processes had to be adapted to the supporting application instead of the other way around, the correct specifications could never be documented properly or agreed upon, or the implementation simply became too costly. Many such projects were eventually cancelled outright.

A limited number of vendors also offer an approach that reverse engineers legacy artifacts into development frameworks such as the Unified Modeling Language (UML), giving rewriting a head start. This is a promising alternative and may be the correct choice for companies willing to take a managed risk, with the objective of converting the legacy language properly right away and fully exploiting, for example, all the object-oriented capabilities of the new environment.

Companies have also often tried to componentize and wrap or package the existing legacy code in such a way that it can be exposed to other, newer technologies via a Service Oriented Architecture (SOA). While this solution enables flexibility and the opportunity to implement modern enhancements, it will not decrease support and maintenance costs, nor allow an unsupported or poorly supported platform to be decommissioned, nor address the most pressing issue of all: the diminishing skill base.

Several competitors also offer ‘re-hosting via emulation’ solutions. These enable a company to move its existing legacy environment to an ostensibly lower-cost platform with as little change as possible; the original environment is then “emulated” on the new platform. Such solutions are typically perceived to generate more immediate cost savings, but for many organizations the existing applications are insufficient to meet business needs, and preserving them in this way is not desirable. Again, this approach does not solve the skill-evaporation issue (which may be acceptable in the short term, but is often a serious problem in the longer term), and it frequently introduces new recurrent (emulation) costs and vendor lock-in. It brings one certainty: a second project will one day be needed to deal with the underlying issue. In addition, a well-funded company (LzLabs) has entered this market, offering the value proposition of “re-defining” the mainframe and providing bit-compatibility with an x86 platform.

Finally, ‘re-fresh’ solutions, which convert the legacy artifacts natively to the new environment, are often the proper approach to mainframe migration, and this is the approach Anubex stands for. It overcomes all of the legacy issues above and can be executed as a two-phase process, in which the initial mainframe transformation is followed by a subsequent modernization phase that can target full object-oriented refactoring. At no extra cost, this process solves most of the skills issue, allows for continuous delivery of new functionality, and offers full flexibility and native support for the desired modern enhancements, in the Cloud if desired. Two flavors can be distinguished: one in which the programming language is kept and adapted to a new compiler (e.g. COBOL to COBOL), labelled ‘re-hosting’, and one in which the programming language itself is also converted (e.g. COBOL to C#, or Natural to COBOL), called ‘re-platforming’.