I was involved with a project in Australia for one of the major banks. The whole project was a fiasco. One marketing guy wrote a one-page "wouldn't it be nice if..." proposal. Bank management liked the idea and allocated $10 million to the project. A project manager was assigned. He didn't have a lot of experience, but got permission to hire a company which I will call MT (not their real initials).
MT brought in 5 programmers at $2000 per day each and their own project manager at $3000 a day.
The bank project manager and the MT project manager discussed the one-page proposal and designed (on a whiteboard) a three-tier system: the back end would use the existing IBM mainframe; the front end would be PCs located in the retail branches and used by tellers; and in the middle, a Tandem minicomputer would collect transactions from the PCs and convert them into transactions the mainframe could handle. This took a few days, maybe a week, I forget. In the meantime, the programmers had nothing to do, so they took long lunches, read magazines, and played cards. At $2000 a day.
When the two project managers finalised their concept, the bank ordered the Tandem minicomputer.
Neither project manager thought it necessary to write a system design document. The MT project leader had a one hour talk with his 5 programmers and allocated tasks: you will work on the GUI on the front end PCs running Windows, you will work on the communications with the Tandem, you will work on SQL server, etc.
No documentation was produced.
I was hired to manage a test team of 8 testers. I asked for the system design documents, and was handed the single piece of paper which was the marketing proposal. I knew right then, half an hour into the job, that this project was going to fail.
I cajoled and nagged the MT and bank project leaders for a couple of weeks, while my testers sat around with nothing to do, trying to at least get some high-level design documents and requirement specifications that we could test against.
After a couple of weeks, the MT project manager handed me a single page flowchart showing in reasonable detail what the system would look like. That was the sum total of the documentation. I looked at it, and 30 seconds later, said, "Where's the audit trail?"
He said, "We don't need no steenking audit trail, we're using SQL server."
I said, "This is a financial system. You need audit trails at each step so you can trace transactions from start to finish, and prove that no transaction gets lost, changed, or diverted."
He said, "We don't need no steenking audit trail, we're using SQL server."
Sigh.
After a couple more weeks, the MT team delivered the first version of the GUI, which ran on Windows PCs. We finally had something to test, and my testers were all fired up. We found a number of problems, which were fixed, and the cycle repeated.
MT delivered several versions of the GUI over the next few weeks, and my testers were happy.
The Tandem arrived at the bank's datacentre and was installed and running 24 hours a day, doing nothing.
The MT people working on the communications code, which allows the PCs and the Tandem to talk to each other, had major problems. The communications frequently failed and transactions were lost.
The Tandem support centre was in the UK so there were many phone calls. Eventually, the Tandem support guy flew to Australia to work first hand on this problem, which took a further month.
By this time, we were approaching the deadline for application delivery. The bank had rented a large office, installed 50 phone lines, and set up 50 work stations with PCs for the new call centre. They had also hired a professional trainer plus 20 people to work in the call centre. This all came out of the $10 million budget.
Since there was no software, there was nothing to train with, so all 21 people sat around for a couple of months waiting. Several of the call centre people quit out of boredom. The bank decided not to replace them until the software was delivered.
To make a long story shorter, the deadline arrived and passed with no software delivered. The money ran out, so the bank allocated another $5 million. Eventually, the MT project leader said, "We've got it working. I would like you to test the flow of transactions."
I said, "You mean validate the audit trail, which doesn't exist."
"Yes," he said, "we have no steenking audit trail and now I wish I had one."
I said, "You have SQL Server's logs, but they are inadequate."
"Correcto mundo," he replied.
So he and I sat down in a conference room with stacks of paper a couple of feet high trying to trace transactions. In two hours, we had validated 5 or 6 transactions out of the thousands.
He said, "Piss on this, that's enough. Let's go get a beer."
So that was the end of the audit.
Shortly after that the project was terminated, and everyone but the bank project manager had their contract cancelled.
For more information on Software Testing visit http://www.qa-software-testing.com
Wednesday, November 5, 2008
Poor Software Testing Costs Companies Money
InfoWorld reported in April 2008 that many companies are reluctant to devote adequate resources to the testing phase of a project, even though almost 90% of those same companies realised that poor testing ultimately cost the company money.
I've been programming for 45 years, and have been involved in formal software testing projects since 1988 when I worked on a NASA project. In Australia, I managed a dozen testing projects.
I have found that, quite frequently, development runs over the projected deadline, and to keep up with the promised application delivery date, the testing phase is either shortened or eliminated entirely. This is a huge mistake. User Acceptance Testing, in particular is often cancelled or shortened, but this is the only chance that the business (the people who will use the software) can verify that the software meets the requirements.
Tuesday, November 4, 2008
Automating Website Functional Testing
contributed by Umair Khan; edited by Doug Anderson
You just finished building your company's website. You have tested it yourself and had other company employees test it. The website now goes live. A few weeks later you start getting emails from irate customers who complain that they are unable to place their orders because certain steps in the "Buy Now" process give errors. You quickly fix the problem. A few days later you get complaints about some other issue and you again react quickly to fix the website. This continues for a few months until the complaints finally halt and things stabilize.
At this point, you make some enhancements to your website. A few days later, a customer e-mail alerts you to the fact that, in the process of making this enhancement, you "broke" something else on the website. Again you spend time to find and fix the problem but by now you are perplexed and not a little frustrated.
These issues have cost you many customers in the last few months and potentially spread ill will across the broader customer community. It seems to you that the only way to have detected these issues before they went "live" was to have employed a large army of software testers, something your company is unable to afford.
Enter automated software testing. While nothing can replace good human testers, broad test coverage requires some degree of automation to be economically feasible. Automated testing tools can provide a huge workforce multiplier and do a very good job complementing human testers.
Every change to your website, no matter how small, requires thorough testing to ensure that nothing else was affected. This becomes very time consuming very quickly because of the large number of possible cases to test. A strategy in which tests are automated using software becomes an economic necessity.
Regression Testing
There are two classes of automated testing tools. The first kind, functional and regression testing tools, helps make sure that the website behaves as it should: for example, if a customer clicks on button X, page Y is displayed without errors. Functional and regression testing tools can automate a large number of scenarios to ensure that your website works as intended.
Load Testing
The second type, load testing tools, gauges how well your website performs when subjected to heavy stress, such as a large number of simultaneous users. In this article, I will discuss functional testing only, leaving load testing for a separate article.
Functional Testing Concepts
I will now give you an overview of the basic characteristics of functional testing. Before you can begin any kind of functional test automation, you will need to identify the test scenarios you wish to automate. Once this is done, you will need to generate test scripts that cover these scenarios.
The reason you create a script is so that the same test can be repeated later in exactly the same way. Scripts may be written by hand or recorded automatically by a software program that captures keystrokes and mouse movements around the screen.
An automated functional testing tool will typically record user interactions with a website. As you perform various operations on your website or application, the tool records every step. When you finish recording, it generates an automated script from your interactions with your website. Alternatively you could use the tool to construct the script by hand. Typically testers tend to do a combination of the two. They will use the recorder to generate the basic framework of their scripts and then tweak the scripts by hand to incorporate special cases.
Scripts can be graphical, text based, or both. A good functional testing tool does not require users to have a programming background. Users not proficient in programming will work predominantly with graphical scripts, which in most tools show all interactions in a tree structure whose nodes can be edited to modify the script. Users who do have programming backgrounds may prefer to write their scripts as text, typically in a standard language such as JavaScript or VBScript.
Once you have generated your script, you will need to insert checks to test whether your website is functioning correctly. Such checks are usually called checkpoints. A checkpoint verifies that the value of a property obtained while testing the website matches the expected value. Checkpoints let you set the criteria for comparing expected values with obtained values. The expected value of a property is captured while recording interactions with the web site, and can be viewed and modified in the checkpoint definition. The obtained value is retrieved during replay (i.e. during the execution of the test case).
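As a sketch of the idea (the function and property names here are illustrative, not from any particular tool), a checkpoint boils down to comparing a property's recorded expected value against the value obtained at replay:

```python
def checkpoint(name, expected, obtained):
    """Compare a property's expected value (captured at record time)
    with the value obtained during replay. Returns True on a pass."""
    passed = (expected == obtained)
    status = "PASS" if passed else "FAIL"
    print(f"checkpoint {name!r}: {status} "
          f"(expected={expected!r}, obtained={obtained!r})")
    return passed

# Expected values were recorded earlier; obtained values come from replay.
checkpoint("page title", "Checkout - Example Store", "Checkout - Example Store")
checkpoint("order total", "$25.00", "$27.50")  # a deliberate mismatch
```

The tool's report at the end of a replay is essentially the accumulated list of these pass/fail results.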
There are many different kinds of checkpoints. A page checkpoint verifies the source of a page or frame as well as its statistical properties. You can check for broken links, verify link URLs, image sources, the hierarchy of HTML tags or even the entire HTML source of the Web page or frame. You can also set thresholds for the loading time of a page. A text checkpoint verifies that a given text is displayed or is not displayed in a specified area on a web page. A web object checkpoint verifies the properties of a web object e.g. the value of an HTML INPUT field. A database checkpoint verifies the contents of a database used by your website.
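To make the text and link checkpoints concrete, here is a minimal sketch using only the Python standard library; real tools go much further, for example actually fetching each link URL to detect breakage:

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Collects link URLs and visible text from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]
    def handle_data(self, data):
        self.text.append(data)

def text_checkpoint(html, expected):
    """Verify that the given text is displayed somewhere on the page."""
    scanner = PageScanner()
    scanner.feed(html)
    return expected in "".join(scanner.text)

def link_checkpoint(html, expected_urls):
    """Verify that the link URLs on the page match the recorded list."""
    scanner = PageScanner()
    scanner.feed(html)
    return scanner.links == expected_urls

page = '<html><body><a href="/buy">Buy Now</a><p>Welcome!</p></body></html>'
print(text_checkpoint(page, "Welcome!"))  # True
print(link_checkpoint(page, ["/buy"]))    # True
```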
When you replay a test script, the testing tool will open the recorded application and perform the recorded steps in the same sequence they were specified in the script. As it replays the script, it will also run through all the checkpoints you have inserted. In addition, you can test your application's behaviour with varying data inputs; for example, you can try to submit a page after entering different values in an edit box. At the end of the replay, a detailed report is typically generated.
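Driving the same steps with varying inputs can be sketched like this; `submit_quantity` is a hypothetical stand-in for entering a value in a page's edit box and submitting it:

```python
def submit_quantity(value):
    """Stand-in for filling in a web form's edit box and submitting it."""
    try:
        qty = int(value)
    except ValueError:
        return "error: not a number"
    if qty < 1:
        return "error: must be at least 1"
    return "ok"

# Replay the same script with different data inputs.
for value in ["1", "100", "0", "-5", "abc", ""]:
    print(f"{value!r} -> {submit_quantity(value)}")
```

Each input exercises the same recorded steps, so one script covers a whole family of test cases.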
Functional test automation lets you automate the repetitive testing of a large number of scenarios across your website. Functional testing tools are an important weapon in your development arsenal: they provide a huge productivity gain and allow small testing groups to accomplish significantly more work. There is a very strong economic case for using functional testing tools as part of the development and deployment cycle of a website.
About the Author:
Umair Khan is Founder and Chairman of Verisium, Inc., a provider of products for automated functional and regression testing, load testing, bug tracking, and test and requirement management.
Verisium is the maker of vTest, an automated functional and regression testing tool for web applications.
Frequently Asked Questions About Software Testing
contributed by Jerry Ruban; edited by Doug Anderson
1. What is the purpose of testing?
Software testing is the process used to help identify the:
- Correctness,
- Completeness,
- Security, and
- Quality
of the developed software. Software Testing is the process of executing a program or system with the intent of finding errors.
2. What is quality assurance?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented towards 'prevention'.
3. What is the difference between QA and testing?
Testing involves operation of a system or application under controlled conditions and evaluating the results. It is oriented towards 'detection'.
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented towards 'prevention'.
4. Describe the Software Development Life Cycle
It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, and phase-out.
5. What are SDLC and STLC and the different phases of both?
SDLC - Software Development Life Cycle
- Requirement phase
- Design phase (High Level Design, Detailed Level Design i.e., Program specifications)
- Coding
- Testing
- Release
- Maintenance
STLC - Software Testing Life Cycle
- System Study
- Test planning
- Writing Test case or scripts
- Review the test case
- Executing test case
- Bug tracking
- Report the defect
6. What is a Test Bed?
A Test Bed is an execution environment configured for software testing, set up so that it does not interfere with any existing application. It consists of specific hardware, network topology, operating system, configuration of the product under test, system software, and other applications. The Test Plan for a project should be developed around the test beds to be used.
A test bed can be, and often is, destroyed and rebuilt in order to totally eliminate a previous version of the application under test. Virtual PCs are often used as Test Beds because they are easily created and destroyed, without affecting the host hardware or software.
7. What is Test data?
Test Data is data that is run through a computer program to test the software. Test data can be used to test the compliance with effective controls in the software. Test data is often designed to test the limits of the application. For example, if the application expects an account number with 8 digits, the test data might include:
- a valid 8-digit account number
- zero
- blanks
- negative numbers
- a 9 digit number
- a 7 digit number
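Assuming the application accepts exactly 8 digits, the test data above could be run through a hypothetical validator like this (the function and rule are invented for the illustration):

```python
import re

def valid_account_number(s):
    """Hypothetical rule: an account number is exactly 8 digits."""
    return bool(re.fullmatch(r"\d{8}", s))

cases = [
    ("12345678", True),    # a valid 8-digit account number
    ("0", False),          # zero
    ("        ", False),   # blanks
    ("-1234567", False),   # a negative number
    ("123456789", False),  # a 9-digit number
    ("1234567", False),    # a 7-digit number
]
for data, expected in cases:
    assert valid_account_number(data) == expected, data
print("all boundary cases behaved as expected")
```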
8. Why does software have bugs?
Miscommunication - poor or missing communication about the details of what an application should or shouldn't do can lead the programmers to write faulty code.
Programming errors - in some cases the programmers make mistakes in logic that seemed reasonable at the time the program was written, but turned out to be faulty in execution.
Changing requirements - the end-user may not understand the effects of changes, or may understand them and request them anyway. The resulting redesign can mean rescheduling engineers, knock-on effects on other projects, and work already completed having to be redone or thrown out.
Time pressure - scheduling software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch hits, mistakes will be made.
9. What is the Difference between Bug, Error, and Defect?
Error: the deviation between the actual value and the expected value.
Bug: It is found in the development environment before the product is shipped to the respective customer.
Defect: It is found in the product itself after it is shipped to the respective customer.
10. What is the difference between validation and verification?
Verification is done by frequent evaluation and meetings to appraise the documents, policy, code, requirements, and specifications. This is done with checklists, code walk-throughs, and inspection meetings.
Validation is done during actual testing and it takes place after all the verifications have been done.
11. What is the difference between structural and functional testing?
Structural testing is a "white box" testing and it is based on the algorithm or code. The tester knows how the application works internally.
Functional testing is a "black box" (behavioral) testing where the tester verifies the functional specification. The tester does not know how the application works internally.
12. What is the difference between bottom-up and top-down approaches?
Bottom-up approach: In this approach, testing is conducted from sub-module to main module. If the higher-level modules are not yet developed, a temporary program is used to simulate that module. These dummy modules are often called Drivers.
Top-down approach: In this approach, testing is conducted from main module to sub-module. If the sub module is not developed, a temporary program, often called a Stub, is used to simulate the sub-module. Quite often a stub will set a constant into a variable rather than actually performing its function.
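A minimal illustration of both ideas (the module names are invented for the example): the stub returns a constant instead of doing real work, and the driver exercises a sub-module before the main module exists.

```python
# Top-down: the real exchange-rate sub-module is not written yet,
# so a Stub simulates it by setting a constant.
def get_exchange_rate_stub(currency):
    return 1.5  # constant stands in for the real lookup

def convert(amount, currency, rate_lookup=get_exchange_rate_stub):
    """Main-module logic under test, wired to the stub for now."""
    return amount * rate_lookup(currency)

# Bottom-up: the main module is not written yet, so a Driver
# feeds inputs to the sub-module and reports the results.
def driver():
    for amount in (10, 250, 0):
        print(amount, "->", convert(amount, "AUD"))

driver()
```

When the real sub-module arrives, it replaces the stub without changing the main-module code under test.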
13. What is Re-test? What is Regression Testing?
Re-test - testing only a certain part of an application again, without considering how the change affects any other parts or the whole application.
Regression Testing - Testing the application after a change in a module or part of the application to verify that the code change will not adversely affect the rest of the application.
14. What is the difference between Load Testing, Performance Testing, and Stress Testing?
Load Testing and Performance Testing are commonly said to be positive testing, whereas Stress Testing is said to be negative testing.
Say, for example, there is an application which can handle 25 simultaneous user logins. In load testing, we test the application with 25 users and check how it behaves under that load; in performance testing, we concentrate on the time taken to perform the operations. In stress testing, we test with more than 25 users, increasing the number until we find the point at which the application cracks.
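The 25-user example can be sketched with a toy in-process model; a real load test would drive the live application over the network, but the shape of the measurement is the same (`CAPACITY` and `login` are invented for the illustration):

```python
import threading

CAPACITY = 25   # the application handles 25 simultaneous logins
_active = 0
_lock = threading.Lock()
_failed = []

def login(user_id):
    """Toy stand-in for a login request against the application."""
    global _active
    with _lock:
        if _active >= CAPACITY:
            _failed.append(user_id)  # the application "cracks" past capacity
            return
        _active += 1

def run_load(n_users):
    """Fire n_users simultaneous logins and return how many succeeded."""
    global _active, _failed
    _active, _failed = 0, []
    threads = [threading.Thread(target=login, args=(i,)) for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return n_users - len(_failed)

print(run_load(25))  # load test at capacity: 25 succeed
print(run_load(30))  # stress test beyond capacity: only 25 succeed
```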
15. What is User Acceptance Testing?
User Acceptance Testing (UAT) is carried out from the user's perspective and is usually done before the release. The testers come from the client organization (the people who will actually be using the new software) and are not programmers.
-----------------------------------
For more FAQs in Software Testing visit : http://softwaretestingguide.blogspot.com