Monday, February 25, 2013

Explain the levels of the decision-making process in an organization

source: http://businesscasestudies.co.uk/cima/improving-strategic-decision-making/levels-of-decision-making.html#axzz2LvzkSi5j
-----

Decisions are made at different levels in an organisation's hierarchy:
Strategic decisions are long-term in their impact. They affect and shape the direction of the whole business. They are generally made by senior managers. The managers of the bakery need to take a strategic decision about whether to remain in the cafe business. Long-term forecasts of business turnover set against likely market conditions will help to determine if it should close the cafe business.
Tactical decisions help to implement the strategy. They are usually made by middle management. For the cafe, a tactical decision would be whether to open earlier in the morning or on Saturday to attract new customers. Managers would want research data on likely customer numbers to help them decide if opening hours should be extended.
Operational decisions relate to the day-to-day running of the business. They are mainly routine and may be taken by middle or junior managers. For example, a simple operational decision for the cafe would be whether to order more coffee for next week. Stock and sales data will show when it needs to order more supplies.

What is Information Technology?

source: http://www.merriam-webster.com/dictionary/information%20technology
-----
the technology involving the development, maintenance, and use of computer systems, software, and networks for the processing and distribution of data

What is Decision Making?

source: http://www.businessdictionary.com/definition/decision-making.html
-----

The thought process of selecting a logical choice from the available options.
When trying to make a good decision, a person must weigh the positives and negatives of each option and consider all the alternatives. For effective decision making, a person must also be able to forecast the outcome of each option and, based on all these items, determine which option is the best for that particular situation.
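
A toy illustration of that weighing process in Java (the options and scores are invented, loosely echoing the cafe example earlier in this post): each option gets a net score of forecast positives minus negatives, and the best-scoring option wins.

    import java.util.Map;

    public class DecisionMaking {
        public static void main(String[] args) {
            // Net score per option: forecast positives minus forecast negatives.
            // The figures are invented for illustration.
            Map<String, Integer> netScores = Map.of(
                    "Extend cafe opening hours", 7 - 3,
                    "Close the cafe", 2 - 6,
                    "Keep things as they are", 4 - 4);

            // Pick the option whose forecast outcome is best.
            String best = netScores.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .get().getKey();
            System.out.println("Best option: " + best);
        }
    }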

What is the difference between Data and Information?

source: http://wiki.answers.com/Q/What_is_the_difference_between_data_and_information_in_computer_terms
-----

Data is the raw material for data processing. Data relates to facts, events and transactions; it is unprocessed.
Information is data that has been processed in such a way as to be meaningful to the person who receives it; it is anything that is communicated.

For example, researchers who conduct a market research survey might ask members of the public to complete questionnaires about a product or a service. These completed questionnaires are data; they are processed and analysed in order to prepare a report on the survey. The resulting report is information.
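
To make the distinction concrete, here is a small illustrative Java sketch (the answers and wording are invented): the raw questionnaire answers are the data; the computed summary a manager can act on is the information.

    import java.util.List;

    public class SurveyReport {
        public static void main(String[] args) {
            // Data: raw questionnaire answers, meaningless in isolation.
            List<String> answers = List.of("yes", "no", "yes", "yes", "no", "yes");

            // Processing: aggregate the raw answers.
            long positive = answers.stream().filter("yes"::equals).count();
            double percent = 100.0 * positive / answers.size();

            // Information: a meaningful statement for the report's reader.
            System.out.printf("%d of %d respondents (%.0f%%) liked the product.%n",
                    positive, answers.size(), percent);
        }
    }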

What is Goal Conflict?

source: http://www.ehow.com/list_6802470_types-goal-conflict.html
-----

Goal conflict is a business term that typically refers to strategies or plans that are made but cannot be effectively completed because of inherent differences and conflicts between goals. Some goals are independent and do not affect each other at all, but many goals are interdependent and depend on the same resources, systems or workers to be accomplished. When multiple goals intersect, goal conflict can occur and reduce work efficiency.

What is Goal Congruence?

source: http://www.businessdictionary.com/definition/goal-congruence.html
-----

The integration of multiple goals, either within an organization or between multiple groups. Congruence is a result of the alignment of goals to achieve an overarching mission.

What are the typical components of an Information System?

source: http://en.wikipedia.org/wiki/Information_system
-----


Components of an Information System

An Information System (IS) consists of five basic resources, namely:
  1. Personnel, which consists of IT specialists (such as a Database Administrator or Network Engineer) and end-users (such as Data Capture Clerks).
  2. Hardware, which consists of all the physical aspects of an information system, ranging from peripherals to computer parts and servers.
  3. Software, which consists of System Software, Application Software and Utility Software.
  4. Networks, which consist of communication media and network support.
  5. Data, which consists of all the knowledge and databases in the IS.

What is Information System?


source: http://en.wikipedia.org/wiki/Information_system
-----

An information system (IS) is any combination of information technology and people's activities that support operations, management and decision making.

What is System?


  sys·tem
    /ˈsistəm/
    Noun
    1. A set of connected things or parts forming a complex whole.
    2. A set of things working together as parts of a mechanism or an interconnecting network.
    Synonyms: method, order, scheme, process



Monday, January 21, 2013

What is a Mainframe computer?
A Mainframe computer is a big computer. IBM builds Mainframe Computers; today, a Mainframe refers to IBM's zSeries computers. Big companies such as banks, insurance companies, travel and retail firms, and telecom companies employ Mainframes for processing their business data. Every day, thousands of people around the globe book flights, make electronic money transfers and swipe their credit cards for purchases. These transactions are processed in a snap by a Mainframe computer.

Companies rely on Mainframe Computers
Today, major businesses trust Mainframe Computers to process their critical business data. What distinguishes a Mainframe from its close cousins, the Micro and Mini computers?

Available: Companies use mainframes for their mission-critical work. If a mainframe system goes offline and access to applications and data is lost, the company could lose millions of dollars of business.

Mainframe computers are always available; they are up and running all the time. They just don't fail. Once a Mainframe computer is started and powered on (IPL'ed), it can run for 5 to 10 years at a stretch without failing. IBM ensures that mainframe systems are available and running 99.999% of the time. Mainframe computers have very good up-times; the Mean Time Between Failures (MTBF) ranges from several months to even years.
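
To put that 99.999% figure in perspective, here is a quick back-of-the-envelope calculation (a simple sketch, not an IBM methodology):

    public class Availability {
        public static void main(String[] args) {
            double availability = 0.99999;            // "five nines"
            double minutesPerYear = 365.25 * 24 * 60; // about 525,960 minutes
            double downtime = (1 - availability) * minutesPerYear;
            // Prints roughly 5.26 minutes of allowed downtime per year.
            System.out.printf("Maximum downtime per year: %.2f minutes%n", downtime);
        }
    }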

Reliable: IBM boasts that you can bet all your money on a Mainframe when it comes to reliability. You must have seen the horrific Blue Screen of Death (BSOD) that crashes Desktop Computers. A Mainframe Computer reliably processes huge volumes of commercial data without crashing.

Serviceable: Faults can be detected early on a Mainframe Computer. When components fail, some of IBM's systems can automatically call the IBM Service Center. Repairs can be done without disrupting day-to-day operations. The RAS (Reliability-Availability-Serviceability) features of a Mainframe Computer give it an edge over many other computing systems.

Are Mainframe computers good at everything?
Well, not quite. Mainframes are not good at number-crunching and don't do scientific calculations. A Mainframe is not a Super-computer. You wouldn't use a Mainframe computer to calculate the value of Pi up to 1000 decimal places. Mainframe computers are not built for raw speed; rather, they process humongous volumes of data reliably. You can't play games like Counter-Strike or Half-Life on a Mainframe.

Mainframe computers don't have a beautiful user interface like the PC at your home. You won't find a desktop wallpaper or icons on a mainframe computer.

Mainframe Hardware
A Mainframe computer has processing units (PUs), memory, I/O channels, control units and peripheral devices. A processing unit (PU) is the brain of the mainframe computer; it executes instructions. A mainframe computer has many processors. All of the processing units are housed inside a cage (frame) called the Central Processor Complex (CPC).

There are specialized PUs capable of performing specific tasks. The main processors are called Central Processors (CPs). There are PUs for encryption and decryption (CPACF), for Linux workloads (IFL), for coupling-facility work that coordinates systems in a cluster (ICF), System Assist Processors (SAPs) that take I/O-subsystem work off the CPs, spares that come in handy when a CP fails, and others for executing Java code (zAAP) and for accelerating eligible DB2 workloads (zIIP).

Buy a z10 Mainframe server, and you'd get 12 central processors, 12 IFLs, 12 ICFs, 6 zAAPs and 6 zIIPs (Source: IBM).

The CPC cage also contains main storage (RAM). A z10 mainframe can provide up to 384 GB of RAM.

A channel is an independent data and control path between I/O devices and memory. Peripheral devices like disk drives, tapes, printers, card readers and card punches are connected to a mainframe computer using channels.

Because peripheral devices are relatively slower than the CPU, the CPU could waste time waiting for data from a peripheral device. An I/O control unit is a self-contained processor with logic to work with a particular type of peripheral device. Channels connect to control units, which in turn manage the peripheral devices.

A personal computer offers different ports to connect peripherals: it has USB ports and a SCSI bus, and the Mac has a high-speed FireWire port. Likewise, on the mainframe, channels can be OSA, ESCON or FICON. OSA Express channels are used to connect to standard networking technologies such as LAN (Ethernet) and Token Ring. ESCON and FICON channels use fiber-optic cables.

Mainframe Peripherals
Just as you use a keyboard, mouse and a CRT display to operate a Personal Computer, in the early days you operated a mainframe computer through a terminal. A terminal had a display and a keyboard. A very popular terminal manufactured by IBM in the 1970s was the 3278 Terminal. Have a look at the photograph below.



Terminals connected to a mainframe computer remotely, over a network, using the IBM 3270 protocol for communication. IBM no longer manufactures terminals; instead, you use a PC running software that mimics a terminal. Terminal emulators such as IBM Personal Communications are quite popular.

Storage devices such as Magnetic Disk drives are used as secondary memory. IBM uses the term DASD (Direct Access Storage Device), pronounced as dazz-dee, for hard disks. Disk drives support random access to data. IBM 3380 and 3390 DASD models are widely in use. The picture below shows a 3380 DASD assembly (Source: IBM Archives).


Tape drives are also used for storage. A tape drive allows sequential access to data; very large files can be stored on tape. Tape drives are often used for data backup. IBM continues to innovate in tape drive technology. IBM 3480 tapes were very popular in the last century. The picture below shows two 3480 tapes (in the front) and two 3420 tapes (the tape reels at the back). Tapes now come in a square cartridge form, instead of a tape reel.

The IBM 3480 (two drives in front) and IBM 3420 (two drives in back) Tape Drives

Punched Cards (Hollerith Cards), although obsolete, were used for over a century to record information. A punched card was a piece of stiff paper that represented information by the presence or absence of holes. Punched cards were used by Joseph Jacquard in the early 19th century in textile looms; by using a particular punched card, you could select a weaving pattern.

Herman Hollerith proposed the use of punched cards to store information about US nationals during the 1890 census. Jacquard's cards were read-only; Hollerith envisioned that punched cards could be used as a read/write technology. A key-punching machine could be used to punch data such as the age, sex or marital status of a citizen. In the picture below, you can see operators preparing data for the census (Source: Computer Science Labs).

 

By the early 20th century, punched cards were being used everywhere in the United States. Your gas bill would arrive every month with a punched card that stored the customer name, the address, the bill amount and the due date. In the 20th century, punched cards were even used as legal documents, such as US government checks.

Hollerith founded a company, The Tabulating Machine Company (1896), which after a few buyouts became known as International Business Machines (IBM) (1911). IBM manufactured the standard 80-column punched card. A deck of cards was fed through a hopper into a card reader. Output data could be punched onto cards using a card punch.

80-column Punched Card

The photograph above (Source: IBM Archives) shows a standard 80-column IBM card. A legacy of the 80-column card is that, even today, terminals connected to a mainframe server have a display of 24 rows by 80 columns. Columns 73-80 contained sequence numbers and were used for sorting the punched cards.
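
That fixed-column heritage still shapes mainframe data processing. As a rough Java sketch (the field layout here is invented for illustration), a program reading an 80-byte card-image record simply slices it by column position, with columns 73-80 reserved for the sequence number:

    public class CardImage {
        public static void main(String[] args) {
            // Build an 80-character card-image record; columns 73-80 hold the sequence number.
            String record = String.format("%-72s%8s", "SMITH     JOHN      M19620114", "00000100");

            // Fields are identified purely by column position (1-based columns in comments).
            String surname = record.substring(0, 10).trim();  // columns 1-10
            String given   = record.substring(10, 20).trim(); // columns 11-20
            String seqNum  = record.substring(72, 80);        // columns 73-80
            System.out.println(surname + ", " + given + " / seq " + seqNum);
        }
    }
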
Mainframe computer industry and the people involved
To oversee the running of a mainframe computer, you need to hire professionals called System Operators (sysops). Mainframe sysops have good knowledge of the mainframe architecture and the operating system. The operating system on the z10 mainframe computer is z/OS (pronounced zee-OS). A Mainframe operator starts (IPL - Initial Program Load) or shuts down the mainframe and various regions (applications). He must monitor workload on the system and reply to any messages. He can issue commands to vary peripheral devices online or offline.

Just as MS-DOS or Linux offers a command-line interface, z/OS too has a command line, at which the system operator issues MVS commands to start or shut down applications (e.g. START DB2) and to monitor the workload (DISPLAY A,L). Console is another term for the command-line interface of z/OS. The photograph below shows how the console looks.


     *13.24.28          *IWM048E WLM RUNNING IN GOAL MODE WITH THE DEFAULT      
     * POLICY                                                                   
     *13.26.56 STC00002 *02 ISTEXC200 - DYN COMMANDS MAY BE ENTERED             
     *13.28.36          *IFB081I LOGREC DATA SET IS FULL,13.28.36,              
     *        DSN=SYS1.LOGREC                                                   
    - 13.30.18 STC00024  +DFHWB1008 CICS CICS Web environment initialization is 
    -  complete.                                                                
    - 13.30.19 STC00024  +DFHSI1517 CICS Control is being given to CICS.        
    - 13.30.19 STC00024  +DFHEJ0102 CICS Enterprise Java domain initialization  
    -  has ended.                                                               
      13.33.26 STC00037  $HASP373 FTPD     STARTED                              
    - 13.33.31 STC00034  +EZY2702I Server-FTP: Initialization completed at      
    -  19:33:31 on 11/13/12.                                                    
    - 13.33.39 STC00037  IEF404I FTPD - ENDED - TIME=13.33.39                   
    - 13.59.47 STC00025  IEF404I BPXAS - ENDED - TIME=13.59.47                  
      14.14.24 TSU00038  $HASP373 SYSADM   STARTED                              
    - 02.24.00 STC00024  +DFHIC0801 CICS CICS time altered from 24.00.000 to    
    -  02.23.590 - date 11/14/12 - relative day 001                             
  00- 00.00.02 STC00024  +DFHIC0801 CICS CICS time altered from 24.00.000 to    
    -  00.00.016 - date 11/15/12 - relative day 002                             
  IEE612I CN=01       DEVNUM=0700 SYS=P390                                      
                                                                                
                                                                                
  IEE163I MODE= RD                                                              

Application programmers design and build application software that runs on Mainframe computers for companies. First, the customers (end-users) specify the business requirements. Business analysts gather these business requirements and, with the experience they have acquired, translate them into functional and technical requirements and brief the application developers. The application developers then create technical designs, write code in high-level languages such as COBOL, C/C++ and PL/1, and test it.

A system programmer performs hardware and software upgrades, does capacity planning, and trains other sysops. He is a watchdog: he installs the operating system, applies patches (called PTFs on the mainframe), and maintains other system software and products running on the mainframe.

Connecting to a Mainframe computer
Typically, at a customer site, mainframe servers are housed in a large area of building space called the data center or the raised floor. People around the world connect to the Mainframe remotely over a network, from their workplace or home, using a dumb terminal or a PC running software that pretends to be a dumb terminal. You don't have to sit physically near the Mainframe box to do your work. This is how the mainframe screen looks when you first connect to it -


   Menu  Utilities  Compilers  Options  Status  Help                            
 ------------------------------------------------------------------------------
                            ISPF Primary Option Menu 
 Option ===>                                                                                                                            
 0  Settings      Terminal and user parameters            User ID . : SYSADM   
 1  View          Display source data or listings         Time. . . : 15:09    
 2  Edit          Create or change source data            Terminal. : 3278     
 3  Utilities     Perform utility functions               Screen. . : 1        
 4  Foreground    Interactive language processing         Language. : ENGLISH  
 5  Batch         Submit job for language processing      Appl ID . : ISR      
 6  Command       Enter TSO or Workstation commands       TSO logon : ISPFPROC 
 7  Dialog Test   Perform dialog testing                  TSO prefix: SYSADM   
 9  IBM Products  IBM program development products        System ID : P390     
 10 SCLM          SW Configuration Library Manager        MVS acct. : ACCT#    
 11 Workplace     ISPF Object/Action Workplace            Release . : ISPF 5.2 
 M  More          Additional IBM Products                    

         
      Enter X to Terminate using log/list defaults

Taming the Beast: Integrating with Legacy Mainframe Applications



Subhajit Bhattacherjee

February 2008
Summary: Distributed midrange technology must not only coexist with, but must also integrate with and leverage, mainframe assets. (9 printed pages)

Introduction

"Program—A set of instructions, given to the computer, describing the sequence of steps the computer performs in order to accomplish a specific task. The task must be specific, such as balancing your checkbook or editing your text. A general task, such as working for world peace, is something we can all do, but not something we can currently write programs to do."
–Unix User's Manual, Supplementary Documents
A long, long time ago, I was working in the R & D division of a major global financial institution. Microsoft Internet Explorer 4.0 had just been released, and we were discovering cool things that could be done with DHTML, ActiveX Controls, and DirectX. My group was working on creating a kiosk-based banking solution, in which people could walk in; chat with a banker over video; fill out a form; open an account; make a deposit in the account; and leave with a valid, working ATM card—all within 15 minutes. We did mock-ups and trial runs. We built the entire front end, stored data that was collected from forms into a Microsoft SQL Server database, and used that to emboss test ATM cards. We wowed the business with our demos.
Everything was going great, until someone asked, "How are you going to get this thing working for real?" At that point, all conversation died. Jaws dropped. An uncomfortable silence took over. The problem was that the account-opening, customer-management, and accounts-receivable platforms were all on IBM 390 mainframe systems. No one on our team knew anything about mainframes. Our prototypes were built without any requirements around sending data to mainframe applications. We had spent a significant amount of money building this prototype, and we knew that if we could not get this working, we were at serious risk of losing our jobs.
Finally, our manager stepped in. He ventured, "This is a proof-of-concept. The plumbing is not all hooked up. As R&D, we have proved that this concept can be done. Now, we will have to work with the project teams to hook up the plumbing, and get this thing talking to the mainframe systems. That is a lot of grunt work, but not rocket science. You will have our estimates by the end of the week."
Some adroit verbal footwork saved the day. However, we had a lot of work to do. After the demo, we had a team meeting. For the next three months, all leaves were cancelled; we were told that we would have to work evenings and weekends.
For our team, the big problem was a total lack of knowledge and awareness of mainframes. To remedy that, a flurry of activity began. Action teams were formed. Consultants were hired. Experts from different parts of the country flew in. The overall design was divided and carved out to different teams composed of people with different backgrounds: distributed, mainframe, and enterprise application integration (EAI).

Mainframe Development and Design

Slowly, a picture of the problem emerged. We (midrange systems) knew when to send data to mainframes. The mainframe experts knew what to do with that data. The missing piece was how data would get to the mainframe applications, and how that data would come back to the midrange systems. Adding to the problem was the fact that large chunks of activity on the mainframe systems happened in end-of-day batches, while our requirement was to issue an ATM card in 15 minutes.
It is surprising how many times otherwise well-planned projects end up being less than successful, due to the lack of consideration of integration issues. It is even more surprising how many such issues involve the integration of mainframe systems with midrange environments. Considering the large mainframe asset base that enterprises have, every architect at some point has to develop solutions that involve mainframe systems.
Given the pressure to deliver that entire project teams—and, by association, architects—feel, mainframe integration often gets the short shrift during project conceptualization and design, which results in costly rework (or even redesign) during integration testing. However, this need not necessarily be the state of affairs. Mainframe integration is like world peace in the preceding quote: a general task that does not automatically happen; it must be planned for. With a little consideration up front, mainframe integration can be made as streamlined as the integration of distributed components—without rework or headaches.

Mainframe Integration

Architecting solutions to enterprise-level requirements in current times requires engaging a variety of components that are deployed in a wide spectrum of environments. In most enterprises, mainframes are workhorse systems that run the majority of business transactions. On the other hand, customer interfaces through Web sites and IVRs (telephone systems), customer-service applications, enterprise resource planning (ERP), customer-relations management (CRM), supply-chain management (SCM) applications, and business-to-business (B2B) interactions are usually on distributed systems.
Any (conscious) activity that is performed to allow data transfer and logic invocation between distributed and mainframe systems is mainframe integration. This article will discuss only IBM mainframes that run a derivative of the MVS operating system.
There are a lot of companies that built and sold mainframe machines. However, IBM mainframes are currently dominant, with approximately 90 percent of the market share. The operating system that is most prevalent on IBM mainframes is MVS or a descendant (OS/390 or z/OS). For a good introductory article on mainframes, check out the Further Study section.
There is some ambiguity over whether IBM systems that run an OS like AS/400 or TPF are mainframe systems, or whether they should be considered midrange systems. These systems will not be explicitly discussed, although techniques that we discuss in this article might be applicable to these systems, in some cases.
In the hype and debate about mainframe and distributed computing, it is easy to forget that mainframe computers are just another computing environment. They are neither easy nor difficult, compared to distributed systems; they just follow a different philosophy. It is also important to remember that mainframe machines are not dinosaurs. Almost every technology that is available on midrange systems is available on mainframes (including XML parsers and Web services). Each mainframe deployment is unique, and features that are available on one environment might not be available on another. That is why this article will focus mostly on basic technology.

Overall Solution Architecture

To begin the process of integration with mainframe applications, it is necessary to look at the integration from the viewpoint of the overall solution architecture. The following process might be adopted for analysis, design, and delivery of integration interfaces.

Identifying Interaction Points and Interfaces

In the overall solution architecture, you should identify the interaction points or interfaces between distributed systems and mainframes. This can be performed at a very early stage, when you are evolving the conceptual solution architecture. This will provide an understanding of the extent of the integration activity that is involved, and will enable you to estimate and focus appropriately on the integration process early on.
It is important to keep these interfaces unidirectional, as much as possible. If an interface is bidirectional in nature, it might be split into two unidirectional interfaces.

Assigning Attributes

For each interface that is identified, you should assign attributes to these interfaces as early on as possible. In some cases, the value of these attributes might not be known; those decisions might not have yet been made. That is okay; this exercise might have an input into that decision process. Some of these attributes can be:
· Direction of interaction (that is, originating and recipient systems).
· Originating-system OS.
· Originating-system application platform.
· Originating-system development environment.
· Originating application type (online or batch).
· Recipient-system OS.
· Recipient-system application platform.
· Recipient-system development environment.
· Recipient application type (online or batch).
· Nature of data that is interchanged:
·    File
·    Message/Event
·    Request (non-transactional)/Request (transactional)
·    Other
· Data volume and usage (high volume at specific times, low volume spread over the day).
· Constraints, if any (such as, this interface has to be MQ-based; or, files must be sent once an hour, on the hour).
· Boundary conditions, if any.
Additional attributes that are important from the perspective of the project can be added to the preceding list. It is important to consult all stakeholders and subject-matter experts (mainframe and distributed), to address appropriate concerns:
· Based on the preceding, it is beneficial to create validation criteria to ensure successful interface development and execution. Special care must be taken to factor-in constraints and boundary conditions.
· During design, for each previously listed interface, and based on the attributes that are captured, appropriate decisions must be taken on how to realize that interface. Again, constraints and boundary conditions must be carefully considered before arriving at the interface design.
· Based on design specs and validation criteria, the interface must be built and tested. Because it usually takes time to build applications, interfaces very often are built and unit-tested last (very often, on the day before testing begins), with the result that interfaces fail and deadlines slip. It is beneficial to perform mock testing of interfaces by using handcrafted artifacts (files, messages, and so on) to ensure that the interface will work long before build finishes.
· The validation criteria should be used to drive integration testing, to ensure that the interfaces are effective and robust.

Interface Design

The other piece of the puzzle is to determine how each interface should be designed. The following is not intended to be a comprehensive list—only a starter kit to get the thought process started. Of course, in certain cases, constraints might automatically define the interaction method. If that is not the case, you might use the following.

File-Based

If the interface requirement is to send high-volume data at specific points of time, and no response is immediately or synchronously required, this data can be sent as a file. There are different ways to transmit files to mainframe systems. The simplest way (if possible) is to send the file via FTP either programmatically or by using shell scripts (provided that the mainframe has the FTP subsystem and that there are no security concerns). There are additional factors of which to be aware when sending data to mainframe systems, such as record length and type (fixed or variable block).
If FTP is not an option, tools such as XCOM or NDM might be used. Both tools facilitate transfer of files between midrange and mainframe environments. Expert note: These tools can also work in systems network architecture (SNA) or SNA/IP environments. A hybrid approach also works well, if available. In this option, the midrange system sends the file via FTP to an intermediate system (for example, a Windows Server or STRATUS system) that is configured to transmit the file to the mainframe by using an appropriate protocol.
There are some issues with transferring files to mainframe systems. It is notoriously difficult to get reliable status notifications on the success state of the file transfer. Additionally, mainframe systems offer the option to trigger jobs when a file is received, to process the file. If such triggers are set up, aborted file transfers or multiple file transfers into generation data groups (GDGs) might cause issues with unnecessary job scheduling or with the same file being processed multiple times. (GDGs provide the ability on mainframes to store multiple files with the same base file name. A version number is automatically added to the file name as it gets stored. Each file can then be referenced by using the base file name or the individual file name.)
If file-based triggers are used, care must be taken to design and develop explicitly, so as to prevent these situations. Regardless, transferring files to mainframe systems remains a reliable, robust interface mechanism.
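
As a minimal sketch of the FTP route, the following Java program pushes a fixed-block file to a z/OS dataset using the open-source Apache Commons Net library. The host, credentials, file and dataset names are placeholders, and a real transfer would need error handling and whatever security arrangements your site requires:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class SendToMainframe {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            ftp.connect("mvs.example.com");          // placeholder host
            ftp.login("USERID", "PASSWORD");         // placeholder credentials
            ftp.setFileType(FTP.ASCII_FILE_TYPE);    // translate ASCII to EBCDIC in transit

            // Tell the z/OS FTP server how to allocate the target dataset:
            // fixed-block records of 80 bytes, as the receiving job expects.
            ftp.sendSiteCommand("RECFM=FB LRECL=80 BLKSIZE=8000");

            try (InputStream in = new FileInputStream("payments.txt")) {
                // A quoted name is a fully qualified dataset name, not a Unix path.
                ftp.storeFile("'PROD.PAYMENTS.DAILY'", in);
            }
            ftp.logout();
            ftp.disconnect();
        }
    }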

Message Queuing

In case the mainframe-application interaction cannot be file-based, due to technical or other considerations, high-volume data can also be interchanged by using message queuing (MQ). Tools are available that will perform file-to-queue and queue-to-file operations on both midrange and mainframe systems. Even if tools are not an option, moderately skilled development teams on midrange and mainframe systems should be able to author their own tools or libraries to perform file-to-queue and queue-to-file operations at either end. It is a good idea to split the file or data stream record by record into individual MQ messages, to ensure responsiveness. (Expert note: On the mainframe side, unit-of-work considerations will have to be addressed carefully, to ensure robustness.)
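
A bare-bones file-to-queue routine along these lines might look like the following sketch, using the standard JMS API. The JNDI names are placeholders for whatever your MQ provider and administrator define, the JNDI environment itself is assumed to be configured, and error handling is omitted:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class FileToQueue {
        public static void main(String[] args) throws Exception {
            // Placeholder JNDI names; your MQ administrator defines the real ones.
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/QCF");
            Queue queue = (Queue) ctx.lookup("jms/MAINFRAME.INPUT");

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);

                // One message per record keeps the mainframe consumer responsive.
                for (String record : Files.readAllLines(Paths.get("payments.txt"))) {
                    producer.send(session.createTextMessage(record));
                }
            } finally {
                connection.close();
            }
        }
    }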

Screen Scraping

If the requirement is to Web-enable existing green-screen applications or to reuse transactions that have been built with green-screen front ends, screen mapping and/or screen scraping might be a viable option. HACL and (E)HLLAPI application programming interfaces (APIs) and tools allow development of these interfaces in C/C++, Java, Visual Basic, and other languages. HACL and HLLAPI provide a reliable way to reuse existing green-screen applications without a major reengineering effort. They provide the protocol for screen scraping. There are numerous tools that facilitate screen scraping by handling the low-level protocols—allowing the developer to focus on business logic. In such cases, Web forms capture data and send that data to the Web server. At the server side, this data is sent to mainframes by using these APIs.
There are issues in using screen-scraping technologies. It is not possible to change these transactions without changing the underlying screens. If existing screen-based behavior must be modified, however, and the screens themselves cannot be modified, due to constraints, screen scraping might not be a viable option. Also of concern are scalability (the number of concurrent connections) and extensibility.
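
Whatever the tool, every screen-scraping integration follows the same loop: fill the input fields, send an attention key, wait for the next screen, and read the output fields. The Java sketch below uses a deliberately invented TerminalSession interface to show the shape of that flow; a real implementation would delegate these calls to a HACL- or HLLAPI-based library, and the screen positions are made up:

    // TerminalSession is an invented wrapper interface; a real implementation
    // would sit on top of a HACL or HLLAPI library.
    interface TerminalSession {
        void putField(int row, int col, String value); // type into a screen field
        void sendEnter();                              // press the Enter attention key
        void waitForScreen(String title);              // block until that screen arrives
        String getField(int row, int col, int length); // read text off the screen
    }

    public class BalanceInquiry {
        // Drive a green-screen inquiry transaction and return its result.
        static String lookUpBalance(TerminalSession session, String accountNo) {
            session.putField(5, 20, accountNo);   // account-number input field (invented position)
            session.sendEnter();
            session.waitForScreen("ACCOUNT DETAIL");
            return session.getField(10, 20, 12).trim(); // balance output field (invented position)
        }
    }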

Connectors and Bridges

Mainframes have myriad transaction-processing systems. CICS (Customer Information Control System) is a transaction server that runs primarily on mainframes and supports thousands of transactions per second or more. CICS applications can be written in COBOL, C, C++, REXX, Java, IBM assembly language, and more. (For more information on CICS, please see the second reference in the Further Study section.)
IBM Information Management System (IMS) is a combined hierarchical database and information-management system that has extensive transaction-processing capability that was originally developed for the Apollo space program. The database-related IMS capabilities are called IMS DB and have a legacy of four decades of evolution and growth. IMS is also a robust transaction manager (also known as IMS TM or IMS DC). IMS TM typically uses IMS DB or DB2 as the back-end database. (For more information on IMS, please see the Further Study section.) A vast majority of mainframe systems use IMS TM or CICS as the transaction-processing environment of choice.
In case the requirements are for the interface to be transactional (that is, to execute transactions at the other end), there are myriad options that are based on the nature of the mainframe transactions to be called. IMS or CICS transactions are relatively easily exposed over MQ by using the open transaction manager access (OTMA) bridge for IMS transactions, or the MQ/CICS bridge for CICS transactions.
In such cases, data can be interchanged in copybook format. Later versions of CICS and IMS also support Web services. (For IMS, Web services support is provided by using an IMS connector). Issues that are involved in using connectors and bridges for transactions concern the time that is required to configure and set up the initial environment, and the effort that is required to ensure data-element mapping from the source to destination formats. IBM's MQSI/WMQI servers provide an easy way to map between source and destination formats, as well as message integration services.
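
Mapping data elements into copybook format mostly means producing fixed-width, padded fields in exactly the record layout the COBOL program expects, usually EBCDIC-encoded. Here is an illustrative Java sketch; the field widths are an invented layout, and it assumes the IBM1047 EBCDIC charset is available in the JRE:

    import java.nio.charset.Charset;

    public class CopybookRecord {
        // Build a fixed-layout request record: PIC X(10) account, PIC X(8) date,
        // PIC 9(9) amount in pence. The layout is invented for illustration.
        static byte[] buildRequest(String account, String date, long amountPence) {
            String record = String.format("%-10s%-8s%09d", account, date, amountPence);
            return record.getBytes(Charset.forName("IBM1047")); // EBCDIC for the bridge
        }

        public static void main(String[] args) {
            byte[] payload = buildRequest("ACCT001234", "20080215", 1999L);
            System.out.println("Record length: " + payload.length); // always 27 bytes
        }
    }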

Wrappers

If the interface is not to IMS/CICS transactions but to custom-built programs, there are myriad options—from creating wrapper transactions in IMS or CICS, to writing helper programs that accept requests over an interface, pass the requests to the target program, and then pass the results to the calling system back over the interface.

Adapters

In the case of packaged solutions on distributed systems (for example, SAP, Siebel, and PeopleSoft), adapters are available to integrate seamlessly with mainframe systems. Adapters simplify the work that is required on midrange systems to interface with mainframes. However, adapters are susceptible to configuration problems and, in some cases, issues of scalability and robustness.

Design Criteria

Important considerations that should govern the choice of interface options are:
· Fire and forget versus request/response. That is, is a response expected back from the interface?
· Synchronous versus asynchronous. That is, is the response expected back in the same sequence flow as the request, or can the response arrive outside of the request sequence flow?
· Interface characteristics:
·    Is the interface transactional?
·    Is it a two-phase transaction?
· What are the transaction-recovery requirements?
· What are the consequences of a failed transaction?
Back to our story: After about two months of rework, we finally came up with a working design that involved a combination of these techniques: file transfers, MQ messages, screen scraping, and creation of online transactions from existing batch jobs. It took us six more months to build a fully working system in the test environment. We did get the kinks out of the system, and everything finally worked. We built a couple of functional prototypes and rolled them out for pilot testing with a sample population. The results from the sampling studies were encouraging. However, due to cost overruns and additional rollout expenses, the project was stopped.
To sum up this article, let's look at some of the key takeaways from our discussion.

Lessons Learned and Takeaways

If you are new to mainframes, the one thing to remember is that mainframes are just computers, albeit with a different philosophy. Mainframe computing is neither hard nor easy; it is just different. Mainframes are not legacy dinosaurs; almost every midrange technology is available on mainframes.
Almost every architect will have to work with mainframes and/or mainframe integration, at some point. There is a huge repository of mainframe code that is not going to be ported to midrange anytime soon. You must ensure that you keep integration in mind as you plan your work. Successful integration is neither automatic nor assured; it is the result of hard work.
There are many different ways to integrate with mainframes. You must analyze your requirements and design judiciously—balancing cost, environment, and performance considerations. Other things to consider are interaction type, synchronization requirements, and transaction type. Ensure that you adequately assess risks and have a risk-mitigation plan in place. Remember:
"The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in; we're computer professionals. We cause accidents."
–Nathaniel Borenstein (1957–)

Critical-Thinking Questions

· Mainframe environments (operating systems, transaction processors, databases, and so on) have been evolving for over four decades and are still going strong. On the other hand, midrange systems usually go through significant paradigm shifts every three to five years. Why do you think that is the case?
· If you were architecting a mission-critical system (say, NASA's moon-base system), would you go for a full-distributed system, a mainframe-based system, or a hybrid approach? Why? If you selected a hybrid approach, what functionality would you keep on the distributed side, and what would you put on the mainframes? What interfacing challenges do you foresee in architecting a three-component messaging system between earth, the International Space Station, and the moon base?
· Imagine that you are architecting a network of ATMs for a bank that will run on Microsoft Windows XP. The core consumer-banking system, however, is on the mainframe and uses IMS. How would you design the ATMs to interface with the mainframe system? What if the network were disrupted—say, by floods, broken cables, or some other disaster? Would you still be able to function? What if one ATM (or even a range of ATMs) is compromised?

Further Study

· Crigler, Rob. "Screen Mapping: Web-Enabling Legacy Systems." eAI Journal. January 2002. [Accessed January 24, 2007.]
· Gilliam, Robert. "IMS and IMS Tools: Integrating Your Infrastructure." IBM Web site. [Accessed January 24, 2007.]
· IBM Corporation. "IMS Connect Strategy: Providing IMS Connectivity for a Millennium." IBM Web site. [Accessed January 24, 2007.]
· Lotter, Ron. "Using JMS and WebSphere Application Server to Interact with CICS over the MQ/CICS Bridge." IBM Web site. November 2005. [Accessed January 24, 2007.]
· Various. "IBM Information Management System." Wikipedia: The Free Encyclopedia. January 4, 2007. [Accessed January 24, 2007.]
· Various. "CICS." Wikipedia: The Free Encyclopedia. January 24, 2007. [Accessed January 24, 2007.]
· Various. "Mainframe computer." Wikipedia: The Free Encyclopedia. January 23, 2007. [Accessed January 24, 2007.]

About the author

Subhajit Bhattacherjee is a software architect with 12 years of experience. He currently works as a principal architect for a world leader in express and logistics in the Phoenix, AZ, region.

This article was published in Skyscrapr, an online resource provided by Microsoft. To learn more about architecture and the architectural perspective, please visit skyscrapr.net.

A study in project failure

Dr John McManus and Dr Trevor Wood-Harper
Research highlights that only one in eight information technology projects can be considered truly successful (failure being described as those projects that do not meet the original time, cost and quality requirements).
Despite such failures, huge sums continue to be invested in information systems projects and then written off. For example, the cost of project failure across the European Union was €142 billion in 2004.
The research looked at 214 information systems (IS) projects; at the same time, interviews were conducted with a selective number of project managers to follow up issues or clarify points of interest. The period of analysis covered 1998-2005. The table below shows the number of IS projects examined across the European Union, by sector.

Number of IS projects examined within European Union

Rank   Sector               No. of projects examined
1      Manufacturing        43
2      Retail               36
3      Financial services   33
4      Transport            27
5      Health               18
6      Education            17
7      Defence              13
8      Construction         12
9      Logistics             9
10     Agriculture           6
       Total               214

Project value in millions of Euros

Value range in millions (€)   Number of projects   Percentage (%)   Accumulative (%)
0 - 1                         51                   23.831            23.831
1 - 2                         20                    9.346            33.177
2 - 3                         11                    5.140            38.317
3 - 5                         33                   15.421            53.738
5 - 10                         4                    1.869            55.607
10 - 20                       87                   40.654            96.261
20 - 50                        6                    2.804            99.065
50 - 80                        2                    0.935           100.000
Totals                       214                  100.00

At what stage in the project lifecycle are projects cancelled (or abandoned as failures)?

Prior research by the authors in 2002 identified that 7 out of 10 software projects undertaken in the UK adopted the waterfall method for software development and delivery. Results from the analysis of cases indicate that almost one in four of the projects examined were abandoned after the feasibility stage; of those projects that completed, approximately one in three had schedule and/or budget overruns.
Project completions, cancellations and overruns

Waterfall method        Number of projects   Number of projects   Number of projects overrun
lifecycle stage         cancelled            completed            (schedule and/or cost)
Feasibility             None                 214                  None
Requirements analysis   3                    211                  None
Design                  28                   183                  32
Code                    15                   168                  57
Testing                 4                    164                  57
Implementation          1                    163                  69
Handover                None                 163                  69
Percentages             23.8%                76.2%

Of the initial 214 projects studied, 51 (23.8 per cent) were cancelled - a summary of the principal reasons why projects were cancelled is given below. Our earlier research elaborated on the symptoms of information systems project failure in three specific areas: frequent requests by users to change the system; insufficient communication between the different members of the team working on the project and the end users (stakeholders); and no clear requirements definitions. Whilst communication between team and end users was still perceived as an issue within some projects, the top three issues from this study are: business process alignment; requirements management; and overspends.
One notable causal factor in these abandonments was the lack of due diligence at the requirements phase; an important factor here was the level of skill in design and poor management judgement in selecting software engineers with the right skill sets. Equally, the authors found some evidence of poor tool-set selection, in that end users found it difficult to sign off design work: they could not relate the process and data model output to their reality and practical knowledge of the business processes.

Key reasons why projects get cancelled

Business reasons

  • Business strategy superseded;
  • Business processes change (poor alignment);
  • Poor requirements management;
  • Business benefits not clearly communicated or overstated;
  • Failure of parent company to deliver;
  • Governance issues within the contract;
  • Higher cost of capital;
  • Inability to provide investment capital;
  • Inappropriate disaster recovery;
  • Misuse of financial resources;
  • Overspends in excess of agreed budgets;
  • Poor project board composition;
  • Take-over of client firm;
  • Too big a project portfolio.

Management reasons

  • Ability to adapt to new resource combinations;
  • Differences between management and client;
  • Insufficient risk management;
  • Insufficient end-user management;
  • Insufficient domain knowledge;
  • Insufficient software metrics;
  • Insufficient training of users;
  • Inappropriate procedures and routines;
  • Lack of management judgement;
  • Lack of software development metrics;
  • Loss of key personnel;
  • Managing legacy replacement;
  • Poor vendor management;
  • Poor software productivity;
  • Poor communication between stakeholders;
  • Poor contract management;
  • Poor financial management;
  • Project management capability;
  • Poor delegation and decision making;
  • Unfilled promises to users and other stakeholders.

Technical reasons

  • Inappropriate architecture;
  • Insufficient reuse of existing technical objects;
  • Inappropriate testing tools;
  • Inappropriate coding language;
  • Inappropriate technical methodologies;
  • Lack of formal technical standards;
  • Lack of technical innovation (obsolescence);
  • Misstatement of technical risk;
  • Obsolescence of technology;
  • Poor interface specifications;
  • Poor quality code;
  • Poor systems testing;
  • Poor data migration;
  • Poor systems integration;
  • Poor configuration management;
  • Poor change management procedures;
  • Poor technical judgement.

What is the average schedule and budget overrun?

In examining the cases, it was noted that the average duration of a project was just over 26 months (115 weeks) and the average budget was approximately €6 million (see the table below). In many instances, news that a project is over schedule and over budget will force senior management to act; however, the search for the underlying factors should begin elsewhere in the project's history.
The pattern that emerges from a synthesis of the case data is complex and multifaceted. In a few of the cases examined, the project commentary and history were ambiguous; however, once a decision had been made to support a project that was over schedule or over budget, the ends usually justified the means, irrespective of the viewpoints of individual project managers or stakeholders.
Cost and schedule overruns (N=69)

Projects from sample (cumulative)    2 (2)        11 (13)      19 (32)      25 (57)      12 (69)
Schedule overrun                     11 weeks     29 weeks     46 weeks     80 weeks     103 weeks
Cost overrun (vs. average budget)    +10%         +25%         +40%         +70%         +90%
Cost overrun (average)               €600,000     €1,500,000   €2,400,000   €4,200,000   €5,400,000

What are the major causal factors contributing to project failure?

Judgements by project stakeholders about the relative success or failure of projects tend to be made early in the project's life cycle. On examination of the project stage reports, it became apparent that many project managers plan for failure rather than success.
If we consider the inherent complexity of risk associated with software project delivery, it is not too surprising that only a small number of projects are delivered to the original time, cost and quality requirements.
Our evidence suggests that the culture within many organisations is often such that leadership, stakeholder and risk management issues are not factored into projects early on; in many instances these issues cannot formally be written down for political reasons, and they are rarely discussed openly at project board or steering group meetings, although they may be discussed at length behind closed doors.
Despite attempts to make software development and project delivery more rigorous, a considerable proportion of delivery effort results in systems that do not meet user expectations and are subsequently cancelled. In our view this is attributable to the fact that very few organisations have the infrastructure, education, training or management discipline to bring projects to successful completion.
One of the major weaknesses uncovered during the analysis was the total reliance placed on project and development methodologies. One explanation for this reliance on methodology is the absence of leadership within the delivery process. Processes alone are far from enough to cover the complexity and human aspects of many large projects, which are subject to multiple stakeholders and to resource and ethical constraints.
Although our understanding of the importance of project failure has increased, the underlying reasons still remain an issue and a point of contention for both practitioners and academics alike. Without doubt there is still a lot to learn from studying project failure.
Going back to the research undertaken, there is little evidence that the issues of project failure have been fully addressed within information systems project management. Based on this research, addressing project failure requires recognition of the influence multiple stakeholders have on projects, and a broad-based view of project leadership and stakeholder management.
Developing an alternative methodology for project management, founded on leadership, stakeholder and risk management, should lead to a better understanding of the management issues that may contribute to the successful delivery of information systems projects.
June 2008