Monday, November 16, 2009

Software Testing & Its Functionality

Contributed by Shyamolima Mutsuddi

Software testing is the technical process used to assess the correctness, completeness, security, and quality of developed computer software. It is performed to reveal quality-related information about the product under test, and it is a vital part of software quality assurance.

Some of the common quality attributes a tester looks for are capability, reliability, efficiency, portability, maintainability, compatibility and usability. A good test not only brings out errors, it also reveals information that is new and useful to the project community. Software testing plays a crucial strategic role in raising the quality of the product throughout the software development process. It also keeps the customer's requirements in focus all the way through the product cycle.

Some of the vital software testing procedures involved in testing a product are functional testing, negative testing, customer scenario testing, stress testing, performance testing, scalability testing, international testing, and more. The sole purpose of software testing is to ensure that customers receive maximum product quality.

Some of the common types of testing a test engineer considers while testing a product are black box testing, white box testing, incremental integration testing, functional testing, system testing, end-to-end testing, sanity or smoke testing, regression testing, acceptance testing, performance testing, usability testing, install/uninstall testing, recovery testing, failover testing, security testing, exploratory testing, ad-hoc testing, mutation testing and more.

Though all projects benefit from testing, some do not need independent test staff. The need for test staff depends upon the size and context of the project, the risks, the development methodology, the developers' skill and experience, and more. A short-term, low-risk project handled by experienced programmers employing unit testing or test-first development may not need dedicated test engineers. Given the different goals of software testing, different roles are established for software testers: test lead/manager, tester, test designer, test automator/automation developer and test administrator.
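
To make the unit testing and test-first development mentioned above concrete, here is a minimal sketch using Python's standard unittest module; the discount function and test names are hypothetical examples, not code from any particular project.

import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Expected behaviour: 10% off 200.00 is 180.00
        self.assertEqual(apply_discount(200.00, 10), 180.00)

    def test_invalid_percent_rejected(self):
        # Negative testing: out-of-range input should raise an error
        with self.assertRaises(ValueError):
            apply_discount(200.00, 150)

if __name__ == "__main__":
    unittest.main()

In a test-first style, the two test methods would be written before apply_discount itself, and the function implemented until both pass.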

About the Author:

Shyamolima Mutsuddi SEO & Content Writer http://www.xponsewebs.com

For more information on Software Testing, visit Software Testing Concepts.

Wednesday, April 29, 2009

An Example Service Level Agreement for your PBX Department

Contributed by Charles Carter

A Service Level Agreement's primary goal is to establish and manage expectations of customers, thus reducing confusion while defining acceptable service.

Below is an example of a Service Level Agreement. Simply replace the quoted placeholder text with your own information.

SERVICE LEVEL AGREEMENT

BETWEEN

'Company PBX Department'

&

'Internal Customers'

Terms in this document:

PBX is a direct reference to the Private Branch Exchange Department and its personnel.

Telecommunications refers to Voice communications and Voice Mail.

Service refers to the Move, Add or Change of telecommunications sets, fax lines, modems and voicemail programming.

Local Switch refers to the 'Meridian 81C' switch that services the departments at 'Company Name'.

Distant Switch refers to the outlying 'Meridian systems' at 'First Location', 'Second Location'.

Central Office refers to 'Bell South'.

Internal Customer refers to the employees and departments of 'Company Name'.

External Customers refers to those individuals or businesses outside the scope of control of the PBX Department.

Turnaround Time shall be considered a good faith effort of the PBX Department and its staff to remedy a problem.

Business day indicates normal business hours between Monday and Friday, excluding Federal Holidays.

Goal of this Technical Support Service Level Agreement:

The goal of this agreement is to enhance, support and manage our internal customers' telecommunications requirements. A secondary goal is to identify response times, assist our internal customers in achieving maximum proficiency and reliability in their telecommunications environments, and set forth escalation procedures.

Specific Goal Topics covered in this agreement:

- Dependable support for a standard telecommunications platform for core business departments,

- Response times to problems, new users, and other service requests,

- Statement of operational hours,

- Reduction in labor cost via standardized call out procedures,

- Procedures for escalation of service request and/or outage reports.

Technical Support Service Level Agreement:

'Company Name''s Private Branch Exchange Department has identified a standard business approach to providing technical services to the various departments of 'Company Name'. The PBX Department will provide support, to include: new set installations and services, movement of existing telecommunications services, reprogramming of existing telecommunications services and maintenance of existing telecommunications services.

This document is the Service Level Agreement (SLA) that defines the scope of support and the services that 'Company Name' users can expect:

Support:

The PBX Department will provide technical support to our internal customers via the 'PBX call center or trouble desk'.

The 'PBX call center or trouble desk' is responsible for:

- Serving as the initial point of contact for telephone questions and problems,

- Issuing trouble tickets,

- Communicating expected response time,

- Tracking of service/problem ticket.

Hours of Service

8:00am until 5:00pm, Monday through Friday will be the normal hours of service. However, if additional coverage is required outside of these hours, the hours may be expanded.

Turnaround Times

The following turnaround times for services will be in effect:

New Service at local switch - within 1 business day with properly submitted requisition.

New Service at distant switch - within 2 business days with properly submitted requisition.

Move or Change of service at local switch - within 1 business day with proper notification.

Move or Change of service at distant switch - within 2 business days with proper notification.

Outage Report at local switch of non-essential telecommunications* - within 4 hours on business days. The following Monday if trouble is reported after 2:00 PM on Fridays.

Outage Report at distant switch of non-essential telecommunications* - within 6 hours on business days. The following Monday if trouble is reported after 2:00 PM on Fridays.

Outage Report at local switch of essential telecommunications** - within 1 hour on a business day. Within 2 hours on non-business days.

Outage Report at distant switch of essential telecommunications** - within 2 hours on business days. Within 2 hours on non-business days.

'Indicate your call out procedures here.'

The following actions require scheduled turnaround times.

- Custom programming,

- Telecommunications projects,

- Upgrades and patches for software releases,

- Software licensing and maintenance,

- PMIs,

- System backups and maintenance.

Problem Escalation:

Not all problems are emergencies, but problems that are not addressed and resolved expediently can become emergencies. The standard problem reporting mechanism is a trouble service ticket submitted directly to the 'PBX call center or trouble desk'; the user will then have a service ticket that can be used to reference the reported problem. The user can progressively escalate emergency problems in the following manner:

- Obtaining approval for escalation from his/her management,

- Communicating a new acceptable response time.

Recognize, however, that users who set a pattern of problem escalation (attempting to circumvent the problem resolution queue) will be admonished to respect the service queues and established turnaround time guarantees.

Customer Responsibilities:

Customers of 'Company Name" telecommunications services, as part of this SLA in which the services they will receive are detailed, also have some responsibilities:

- Report problems using the problem reporting procedures detailed in this SLA, including a clear description of the problem,

- Provide input on the quality and timeliness of service,

- Recognize when software testing and/or maintenance are causing problems that interfere with standard business functions.

"Ever greening:"

Telecommunication environments and requirements inevitably change, and this SLA needs to define an "evergreening" process to ensure that the support agreement keeps pace with the reality of user requirements. As the telecommunications infrastructure moves from the standard legacy switch to communications servers and local area networks, the PBX department and the Information Technology department will share in the resolution of problems, possibly extending installation times and response to outages.

The management of PBX and IT will need to create a committee with cross-representation to meet quarterly to review technical support service successes, service shortcomings, technology updates, and user requirement changes. Once the support service is initiated, the committee will note the results and recommend changes and improvements.

Acknowledgment:

The PBX Department and Customers both acknowledge and accept the terms and responsibilities required for effective and efficient service delivery. Should there be a need to modify the level of support, this will be done by designated individuals/teams of each party.

*- Indicate a general description of services or departments that are considered non-essential communications (modem lines, offices with multiple sets, etc.)

**- Indicate a general description of services or departments that are considered essential communications (Security Department, Executive Office, offices with only one set, etc.)

About The Author

Charles Carter is an administrator for the Nortel Portal and Vice President of www.pbxinfo.com. He has 20 years of experience in the telecommunications field, is a software owner/programmer, author of the novel "Chaos Theorem", and is currently the President of CS2Communications (www.cs2communicatons.com), a Southern Mississippi telecommunications LLC specializing in Nortel Meridian programming, Nortel BCM programming, cable plant installations and Nortel Symposium programming.

For more information on Software Testing, visit Software Testing Concepts.

Tuesday, April 28, 2009

When Is A Software Engineer Not A Software Engineer

Contributed by V. B. Velasco Jr

The term "software engineer" has got to be among the most highly abused job titles in the corporate high-tech world. It's also one of the most popular.

And why not? After all, it sounds a lot more impressive than just "computer programmer," and it looks better on one's business card. Unfortunately, it's often inaccurate. Engineering is, after all, the application of sound technical principles to develop systems that are robust, efficient and elegant. I've found that a great many software engineers can develop working programs, but do little or no real engineering design.

Does this sound harsh? Maybe, but I think it's hard to deny. I've encountered very few software engineers, for example, who have clean, crisp and readable coding styles -- an essential element of elegant software design. I've also encountered a preponderance of cryptically written functions, clumsy software abstractions and bizarre spaghetti code. To my dismay, I've discovered that even among computer science graduates, many reduce object-oriented programming to the mere use of private data, public functions and object instantiations. It's enough to break a teacher's heart.
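
To illustrate the point, here is a small, hypothetical Python example (not taken from any real codebase): the first function is the sort of cryptic routine described above, while the second expresses the same logic through readable names and small, single-purpose functions.

# Cryptic: unclear names, a magic number, and tangled responsibilities
def proc(d, f):
    r = []
    for x in d:
        if x[2] > 18 and f == 1:
            r.append((x[0] + " " + x[1]).upper())
    return r

# Clearer: descriptive names, small abstractions, documented intent
ADULT_AGE = 18

def is_adult(person):
    """A person record is (first_name, last_name, age)."""
    return person[2] > ADULT_AGE

def display_name(person, uppercase=False):
    """Build the display name for a person record."""
    name = f"{person[0]} {person[1]}"
    return name.upper() if uppercase else name

def adult_display_names(people, uppercase=False):
    """Return display names for every adult in the list."""
    return [display_name(p, uppercase) for p in people if is_adult(p)]

Both versions produce the same results; only the second can be read, reviewed and extended without guesswork.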

Now, I wouldn't go so far as to say that most programmers write spaghetti code. That would not be fair. However, I do think that relatively few programmers have a deep appreciation for the artistry of software development. That's not to say that they're ignorant of such things; not at all. Rather, it's more that the engineering aspects of elegant code design are all too often neglected.

This happens because modern programming tools have made proper code design seem like a nuisance. In the early years of computing, people were forced to write out their software designs, pondering many fine details before they ever sat down in front of the computer. Nowadays, with our fast compilers and interactive debugging systems, programmers often find it more convenient to simply sit down and start coding, with just a modicum of software design. Mind you, I do understand that this is sometimes more efficient -- when the programming task is fairly routine, for example. However, when such design-as-you-go software development becomes standard practice, then you have the makings of utter chaos.

In part, this problem is also rooted in the flexible nature of computer software. No self-respecting civil engineer would design a bridge by slapping girders together until he has something that works; after all, if the bridge collapses, it could take months to rebuild it. Similarly, no sensible architect would want to build a house without blueprints and floor plans. Yet it is commonplace for programmers to develop software using poorly chosen functions and only the sketchiest of designs. After all, if the software doesn't work, they can always find the bug and fix it -- at least, in theory. In practice, these bugs are often difficult to detect, and fixing them can require extensive surgery. The consequences of an ill-designed software program can be disastrous indeed.

For this reason, I believe that the high-tech industry needs to give software engineering the respect it deserves. It needs to develop a true culture of systematic software design, instead of merely settling for "whatever works." A company that's looking toward the future must pay proper devotion to the principles of software maintainability, proper documentation and elegant, robust design. It must also inculcate a culture of true software engineering among its employees. Failing to do so may work in the short term, but it is a recipe for long-term disaster.

About the Author:

V. B. Velasco Jr is a senior electrical and software engineer at a small immunology biotech company that provides immunogenicity testing, cryopreserved PBMC and elispot analysis software.

For more information on Software Testing, visit Software Testing Concepts.

Sunday, April 26, 2009

Testing Web applications with multiple browsers

Contributed by Tony Patton

One of the messier aspects of delivering Web applications to the Internet is comprehensive testing to ensure a consistent user experience with different browsers. Given the wealth of browsers and versions, along with operating systems, this is easier said than done. Here's a look at various avenues for proper application testing.

Who will use it?

A key ingredient when approaching the testing phase of a Web application is deciding what browser platforms will be used to access it -- or, more appropriately, what browser platforms will be supported. With intranet applications, the browser is more easily controlled, but the public Internet is wide open, as users are free to use what they want.

A quick glance at browser statistics for December 2007 on TheCounter shows Internet Explorer with a commanding lead in browser usage (version 6.x at 44% and version 7.x with a 35% share) and Firefox and Safari with smaller shares. You may examine such statistics and decide to test an application with the top four browsers, or the client may decide what browsers will be supported. (It is worth noting that the growth in the use of handheld devices like cell phones and PDAs means you may need to test these as well -- depending upon the application.) Once you decide what browsers are supported, you need to decide how to actually test with these browsers.

Testing platforms

You need to decide how to properly test with a set of browsers. The simplest, and most costly, solution is to set up test machines with each browser installed. Or, you may choose to install each browser on the same machine; however, this can get hairy when dealing with multiple versions of the same browser platform (like Internet Explorer 6.x and 7.x). One issue with using multiple browser versions is actually getting copies of older browsers. A great resource for locating older browsers is evolt.org.

One browser you may not want to ignore is the text-based Lynx browser, which is still available. It is good for testing how a site looks to nongraphical browsers like search engines. Also, it can help with testing accessibility issues because it shows how the site appears when presented as text -- with this text processed by screen readers and so forth.

Along with using multiple browser versions, you also need to test with the numerous operating systems in use today. You may test Internet Explorer with Windows Vista, Windows XP, and Windows 2000, while using Safari with the various OS X versions like Leopard, Tiger, and Panther. Also, you may test Firefox on these platforms along with Linux.

It's costly to set up individual computers for each browser and operating system configuration. Dual booting and virtualization provide alternatives that allow you to consolidate testing environments and reduce costs. Dual booting can be time consuming because you have to reboot every time you switch to a different operating system. Virtualization allows you to run multiple virtual machines with heterogeneous operating systems at the same time on the same physical machine. You can switch between the machines without the lag time of rebooting. Some popular virtualization platforms are VMware and Virtual PC.
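
One way to drive the same functional check across several locally installed browsers is sketched below, assuming Python with the Selenium WebDriver bindings and the matching browser drivers installed; the URL and expected page title are placeholders.

from selenium import webdriver

# Browsers assumed to be installed locally, each with its driver available
BROWSERS = {
    "firefox": webdriver.Firefox,
    "chrome": webdriver.Chrome,
    "internet explorer": webdriver.Ie,
}

def check_home_page(driver_factory, url="http://intranet.example.com/"):
    """Open the home page and verify the title is consistent in this browser."""
    driver = driver_factory()
    try:
        driver.get(url)
        assert "Example Intranet" in driver.title, driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    for name, factory in BROWSERS.items():
        try:
            check_home_page(factory)
            print(f"{name}: OK")
        except Exception as exc:  # report per-browser failures, keep going
            print(f"{name}: FAILED ({exc})")

The same loop can be pointed at browsers running in the virtual machines described above once that environment is in place.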

You can get the most control by conducting all application testing with multiple platforms in-house, but this may be out of the realm of possibility for smaller organizations. Smaller shops may turn to a set of users or use third-party services.

Another path

I have worked on numerous projects where an established set of users outside of the organization is tapped for application testing. In addition to covering various platforms, this approach provides the opportunity for real-world testing, where users have their own Internet connections and testing does not rely on high-speed corporate connections.

These users can provide valuable feedback on application behavior and performance. In addition, organizations often use this type of setup even if testing is conducted in-house. These users may be viewed as beta testers who offer a second wave of testing to ensure proper functionality in the real world.

Another path that may be followed is using a third-party service to test a Web application via multiple browser platforms. You could choose an offshore company to test with various platforms or use a free service like Browsershots or a paid service like BrowserCam.

The mobile world

The boom in mobile device usage means this ever-expanding user community should not be ignored. Like personal computers, you can assemble a group of mobile devices to use for testing, or you can use third-party services and products to assist with mobile testing. A great resource is the DotMobi Virtual Developer Lab, which provides access to hundreds of mobile devices for testing.

Make sure it works

While most developers think an application is ready once their work is done, you still need to conduct extensive testing to ensure the delivered product actually meets project expectations and behaves consistently within the target set of browsers. There are many ways to go when testing with multiple browsers: you may choose to set up multiple machines, use virtualization, or even go with a third-party service or organization.

The key issue is to test an application so it functions properly within supported browsers.

About the Author:

Tony Patton began his professional career as an application developer earning Java, VB, Lotus, and XML certifications to bolster his knowledge.

For more information on Software Testing, visit Software Testing Concepts.

Testing And Monitoring of Networks for Security

eBay, iTunes, PayPal -- these are just a few of the places where most of us enter our credit card and personal information every day. Since the Internet became an integral part of our lives, the threat of identity theft has been a daily reality for all but the most paranoid of Internet users.

While we assume that the sites listed above are secure, how many of us have in fact checked to see what lengths these companies go to in keeping their users' information safe from hackers? I'm sure very few.

Because we can't count on our registrants to be careful when entering information onto the registration website, as event planners we must do the research to ensure that our registrants' information is safe with our online registration company. We want to send potential registrants to a site that portrays our event in a positive light. This means a website designed to our specifications, with professional quality and ease of use. But most importantly, it means knowing that all information put online for our event will be safe from identity thieves.

One of the most important aspects of a strong security system is frequent testing and monitoring of those systems. To receive the highest rank of Level 1 PCI compliance from Visa, companies must invest a large number of resources to ensure that they are as secure as major banks and credit card companies. As of yet, very few registration companies hold this ranking, but wouldn't it be nice to know that your registration company values your registrants' security enough to make it one of their highest priorities?

Constant monitoring and testing of security is a vital part of maintaining the highest possible level of security. Some methods of monitoring include an independent daily audit of over 3,000 security checks (exceeding the highest government standards, including the FBI "Top Twenty Security Vulnerabilities" test) and separate hourly, daily, weekly, and monthly backups that are archived for at least two years.

Other important factors to look for are the TRUSTe and Thawte logos. These companies monitor the strength and maintenance of privacy policies and information encryption. According to the website, to be certified by TRUSTe, companies must have their privacy policy open for review by TRUSTe, post notice and disclosure of collection and use practices of personally identifiable information, and give users choice and consent over how their information is used and shared.

While TRUSTe ensures that companies hold to their privacy policies and never use information without the user's consent, Thawte verifies SSL (Secure Sockets Layer) encryption, meaning that the encryption of credit card information entered on the site is of the highest level possible. However, to be verified by Thawte, companies must meet a stringent checklist of qualifications, including both authentication and verification processes. For the authentication process, Thawte must confirm that the company registration details are entirely true and that the domain is in fact owned by the requesting party. To complete the verification process, Thawte uses a third-party telephone listing to confirm that the authorized person requesting a certificate is employed by the requesting party.
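
For a rough sense of what an SSL certificate actually carries, the simplified Python sketch below (a technical illustration only, not the Thawte verification process itself, and the hostname is a placeholder) connects to a site over TLS and reports who issued its certificate and when it expires.

import socket
import ssl

def certificate_summary(hostname, port=443):
    """Fetch a site's TLS certificate and return its issuer and expiry date."""
    context = ssl.create_default_context()  # verifies the certificate chain
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    issuer = dict(item for pair in cert["issuer"] for item in pair)
    return issuer.get("organizationName", "unknown"), cert["notAfter"]

if __name__ == "__main__":
    org, expires = certificate_summary("www.example.com")
    print(f"Issued by {org}, valid until {expires}")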

These are just a couple of the certifications to look for when choosing your online event registration system. When you send your attendees to the registration site, you want to be 100% sure that their data will be 100% safe so they won't have to research the security, but if they do, you can be confident that they'll like what they find.

About the Author:

Ryan is a member of the marketing team for RegOnline, a producer of easy-to-use online registration software, and a company dedicated to making event planners' lives easier.

For more information on Software Testing, visit Software Testing Concepts.

Saturday, April 25, 2009

Offshore Website Development

What started as simple website development has taken a large slice of the market. Let me start with the literal meaning of website development. Broken down to its parts, offshore website development means designing and coding your website, making its pages available on the web, and having that work carried out offshore. Put another way, website development means building a website and making it available on the World Wide Web to promote your products and services and contribute to the global information base.

Web development follows the web design phase and spans everything from coding simple static web pages to complex application pages; the complete website development lifecycle needs to be monitored with care. Web development is one of the fastest growing industries in the IT age. Graphic designers, web editors, Flash developers and other designers look into the requirements together -- requirement analysis is a must -- and only then does the team step into the development and coding phase. Development is not design; it is coding against the design template.

Web development cost depends on various factors, such as the complexity of the design and the content of the website. Advances in technology have gifted developers with many free web development tools, such as the LAMP stack (Linux, Apache, MySQL, PHP), that bring down development cost, along with content management systems (Typo3, Joomla) and WYSIWYG editors; all of these help you control, edit and manage content without in-depth knowledge of the underlying software. Microsoft's .NET, likewise, makes it possible to run applications online.

With the digitization of the world and the integration of geographically separated modules, changes are reflected dynamically and fetching information is easy these days. E-commerce is the best example: it lets visitors shop online while all transactions are managed from the back end, so you simply enjoy shopping and placing orders without worrying about what happens on the other side. Emerging social networking sites are another set of web development examples that have influenced the worldwide communication network and helped disseminate useful information among community members.

Moving from a generic web template towards a customized one brings benefits such as more clarity, better placement of web components, more flexibility in the functionality and presentation of the website, and an improved look and feel. Many firms these days specialize in custom website development solutions for businesses of all sizes, helping you promote your products and services and improve traffic. Security and software testing are important aspects that follow website development, and quality web services promise to take your business to a new level.

Web Development covers different areas:

Client-side coding for layout and design includes:

- CSS -- Cascading Style Sheets, used to describe the presentation of a document written in a markup language.

- Flash -- the Adobe Flash player (the most popular plugin) is used to create content for movies, games and mobile phones.

- JavaScript -- a scripting language used for client-side web development.

- XHTML -- Extensible Hypertext Markup Language, an application of XML.

Server Side Coding for website functionality and back end system that includes:

- ASP and MySQL

- ASP.NET and MySQL

- CGI and Perl

- ColdFusion

- Java, J2EE

- PHP and MySQL

- Python

- Ruby

Looking inside the website development phase:

The first step is analysis: studying and understanding the client's requirements, the base requirements of the website, its target market and audience, its benefits over the existing system, and its integration with the existing system, with the help of chats, documents and discussions. Every plan should be realistic and based on real figures such as the resources involved, documents needed, hardware and software requirements, cost, manpower and finally the cost-benefit.

The second step is building the specification; the base specifications are drawn from the requirement analysis report. All of the information gathered in the analysis phase is used to build the requirement specification. Once the preliminary requirement specification document is approved, a written proposal is made and a scope and effort estimate is prepared.

The third step is design and development. Once the requirement specification, the proposal and contract documents, payment, and the graphics and layout specification documents have been received from the client, we move into the design phase.

The customer or client can stay in touch by sending e-mails and feedback, submitting comments through the contact-us form, using facsimile services for very urgent messages, or contacting the team directly by telephone.

Before the actual design is finalized, the design and layout are built as a prototype with different variations, offering the customer a choice. The customer can be offered a full prototype with interactivity, and based on customer feedback many changes may be required. All required changes should be made and all problems fixed before moving ahead.

The test plan is a major milestone in this step; it needs to be developed during the design phase to assure quality. Finally, the site template, design and images are sent for client approval.

The fourth step involves writing quality, theme-related content for the website. This is crucial from the visitor's point of view: the site needs to be informative and on-theme, and this content is what really drives your website, since websites today are driven by quality content. The content writer can use the template finalized in the design step.

Moving to the coding step, it is now the developers' turn. Before proceeding, developers should understand the design code and navigation and work in a way that preserves the design code and the template's look and feel. If needed, developers can interact with designers for proper coordination and a better understanding of the design template. The coding team should also generate test plans to check all forms and fields, keep the development step bug free and maintain integrity between different segments. The development team uses the SRS (Software Requirement Specification) or FSD (Functional Specification Document), prepared by a technical writer and approved by the client, as a guide that speeds up the overall process. At the other end, the coding team can prepare documentation for the end user that can be used for user manuals and help guides.

Testing is an important step in the development of a website. Both automated and manual testing are important and mandatory. Many online testing tools, including free ones, are available for testing various applications. White box, black box and application testing are performed, the application is tested to confirm it runs in different browsers, and similar functionality checks are made. Finally, live testing is carried out after the website is made available online.
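
As a minimal sketch of the kind of automated check that can run once the site is live (the page URLs are placeholders, and the third-party requests library is assumed to be installed), the snippet below verifies that key pages respond successfully and within a time budget.

import requests  # third-party HTTP library, assumed to be installed

PAGES = [
    "http://www.example.com/",
    "http://www.example.com/products",
    "http://www.example.com/contact",
]

def check_page(url, max_seconds=3.0):
    """Return (ok, detail) for one page: status must be 200 and fast enough."""
    response = requests.get(url, timeout=10)
    ok = response.status_code == 200 and response.elapsed.total_seconds() <= max_seconds
    return ok, f"{response.status_code} in {response.elapsed.total_seconds():.2f}s"

if __name__ == "__main__":
    for url in PAGES:
        ok, detail = check_page(url)
        print(f"{'PASS' if ok else 'FAIL'}  {url}  ({detail})")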

The website needs to be promoted so that it reaches potential customers once it is online. With changing search engine strategies, the website design should be search engine friendly (designers should take care of this from the beginning, otherwise it may become difficult to optimize the website later). Website promotion is an ongoing strategy: the first steps are competitor analysis, target market research and keyword selection for the target market, after which the website is submitted to directories, with submission continuing at regular intervals.

Websites need to be maintained and updated, as search engines favor sites that have new information to offer; otherwise your site will gradually fade away. Regular analysis and bug fixing are part of this maintenance process. It also involves educating and training team members so they keep their knowledge of emerging technologies up to date and are ready to meet the challenges of tomorrow.

In the longer view, the highlights include web engineering and re-engineering, with quality and timeliness being the result for your products and services. For more information on website development and offshore website development, feel free to contact us at any time.

Rakhi is an SEO content writer at Development India. Development India is a website development company in India and a leading provider of offshore website development services.

For more information on Software Testing, visit Software Testing Concepts.

Automated Testing to Boost Confidence in IT Systems

Contributed by Sug Sahadevan

In the backroom IT offices of the application development process, software testing -- specifically of the automated type -- is gaining newfound respect and momentum. Fuelled by widespread business expeditions into the Internet economy, testing has surfaced not only as a critical IT issue, but also as an even more critical business issue. This is true in both the private sector and the public sector, where a majority of services are becoming e-enabled.

Companies intent on transacting revenue-generating business or offering enhanced customer services online are increasingly turning to automated testing solutions to gain confidence in their IT systems, fully understand how applications will behave under real-world conditions, uncover and rectify issues, and systematically manage growth.

The worldwide market for automated software quality tools, including mainframe and distributed environments, reached $2.6 billion in 2004, a 23.6% increase over 2003 figures, and the market is slated to double by 2007. Consequently there is a shift in services towards testing. Demand for good quality testers with the right skills is growing at the same rate, with salaries increasing at comparable rates.

IDC recently commented that the widespread growth in the adoption of automated testing solutions is being fuelled by the steps that businesses are taking to leverage the Web. The old paradigm of forgoing a structured quality initiative in exchange for faster deployment, with plans to address quality issues in application updates, doesn't fly in the Internet economy. Businesses must know how applications will perform and behave once they're open to the Web. As businesses move from having isolated front-end Web applications to integrated Web-enabled enterprises, with multiple application interdependencies within or between businesses, the issue of testing is especially crucial. It's a simple matter of mitigating business risk, maintaining integrity, and gathering knowledge and confidence in the IT systems that are so heavily relied upon to transact daily business.

Business acceptance of automated testing as a mainline business practice hasn't come quickly or easily. For traditional client-server environments, it's been more common for testing efforts to be short changed in exchange for more development time or faster deployments. Internal conflicts that result in poor communication between developers and testers have also contributed to weakened efforts, along with a loss of focus on the fact that both facets of IT must work together to achieve the end result. Too often, automated tools have been shelved due to inadequate test process support, lukewarm business management support, and staff turnover.

However, in the ever-widening Internet economy, much is changing. Today, automated testing tools are viewed as a necessary purchase and many organisations have annual budget allocations for them. The majority of these organisations have proven that automated testing solutions help them deliver their Web-based products to market sooner, with more accuracy and fewer user-found errors. In turn, business support continues to strengthen.

The equation is simple: application performance and transaction precision equate to the efficient business services that lead to customer satisfaction, which ultimately boils down to revenue. At the opposite end you have lost revenue, bad publicity and, in the case of the public sector, severe political embarrassment. So it goes that IT managers and business executives are speaking the same language -- "bottom-line revenue" -- and, therefore, business investments in automated testing solutions are more easily justified and understood.

As Web application delivery cycles are so much faster than with traditional client-server or legacy systems, the potential for errors is greater, creating, in turn, a greater need for testing.

The recognition of the value that automated testing brings to the business has changed enormously. With traditional systems, information such as how many database connections were needed and what the maximum load requirements were was known in advance. Building to known requirements was tough, but at least it was manageable and quantifiable. The problem with the Internet is surge activity.

For example, advertising campaigns typically cause traffic to spike to five or six times the daily average, and in some exceptional cases traffic has surged to nearly 20 times the daily average. These types of scenarios are difficult to plan and test for, so the tools are needed in order to know ahead of time what will happen when these traffic surges occur. Will performance slow, or will it grind to a halt? How much business could we potentially be losing? How much frustration are we causing our customers and, as a result, how much damage to our business reputation?
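
A very simplified sketch of such a surge test is shown below; the target URL is a placeholder, the third-party requests library is assumed, and commercial load testing tools do far more, but it illustrates ramping up concurrent virtual users and watching response times.

import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP library, assumed to be installed

TARGET = "http://www.example.com/"

def one_request(_):
    """Issue a single request and return how long it took in seconds."""
    start = time.time()
    requests.get(TARGET, timeout=30)
    return time.time() - start

def run_load(concurrent_users, requests_per_user=5):
    """Simulate a number of concurrent users; return the average response time."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(one_request, range(total)))
    return sum(durations) / len(durations)

if __name__ == "__main__":
    # Ramp from normal load up to a surge several times the daily average
    for users in (5, 25, 50, 100):
        print(f"{users:4d} users -> avg response {run_load(users):.2f}s")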

The "gut instinct" testing mentality that previously existed for determining readiness has been replaced with quantifiable facts about how the application will perform under real-world conditions. Communication with the development team has improved as a result of leveraging automated tools. As problems are uncovered, they can be backed up with quantifiable evidence.

Many an IT manager has pitched a case for purchasing automated testing solutions and been challenged to prove the business value. The cost justification: how long before the return on the investment outweighs the cost? After all, the foray into automating the testing process is expensive. Automated tools are costly, as are the qualified staff and hardware resources necessary to operate and maintain them.

What tends to most easily convince businesses of the need to establish formal automated test processes is a bad experience with an application providing poor service, or one that fails altogether. The upside of that experience is that application quality becomes a bigger issue and gains the business support needed to survive. The downside is that adopting automated tools in response to a crisis can seem to worsen problems if it's assumed that having the tools will immediately equate to better application quality. Tools require process support and qualified people to run them. Typical client consulting engagements require 75% of the training to focus on the test process, with the remainder spent on learning how and where to leverage automated tools.

In addition to the traditional vendors responsible for turning automated testing into a distinct discipline, there exists a new breed of vendors providing automated testing solutions built exclusively for the Web. The industry leader here is Mercury. Mercury, in partnership with Testhouse (www.testhouse.org), a UK-based test consultancy now established in Dubai, offers the complete solution. This allows organisations to approach testing in the correct manner.

Load testing services from Testhouse are gaining popularity. These services are beneficial for companies under pressure to deliver but short on resources and time. In addition, these services give businesses the chance to have their applications evaluated by a third party, which adds an element of objectivity. Also, large volumes of realistic user loads can be leveraged through these services, which take into account realistic Web usage scenarios across geographic locations.

Because these services are geared for a fully functional application, they do detract from the life-cycle testing approach, which requires that testing begins early and continues throughout the development process. Therefore, the best solutions for Web-enabled enterprises will require some combination of in-house tools and outsourced services.

Another trend extending the value of automated testing is application monitoring in the production environment, from the perspective of the end user. For testing-tool vendors, application monitoring stretches quality practices into production. This functionality is critical for Web environments because businesses must constantly be aware of how users experience their applications.

Given the undeniably complex array of technologies and the unpredictability of users' loads inherent in the Web-enabled enterprise, automated testing practices will continue to gain business acceptance for companies participating in the Internet economy.

It will always be a challenge to blend new technologies with existing ones and test that they work together synergistically. Rapid-release cycles and continuous changes make test automation a more practical and reliable way to ensure quality IT systems.

Testing should be regarded as a business investment, not as an optional overhead. What's put into it will, if correctly implemented and well managed, directly correlate to the business value that's derived; unsurprisingly, the opposite is also true. Furthermore, like any good investment, businesses must think of their returns over the long term. Applications are rarely future proof, but fundamental strategy, architecture and component-based development can be durable.

About The Author

Sug Sahadevan has over 20 years in IT, progressing from developer to project management and programme management. sug@testhouse.org

For more information on Software Testing, visit Software Testing Concepts.

7 Tips for Improving Scalability Testing

Contributed by Mark Trellis

Systems that work well during development, deployed on a small scale, can fail to meet performance goals when the deployment is scaled up to support real levels of use.

An apposite example of this comes from a major blue chip company that recently outsourced the development of an innovative high technology platform. Though development was behind schedule, this was deemed acceptable. The system gradually passed through functional elements of the user acceptance testing and eventually it looked like a deployment date could be set. But then the supplier started load testing and scalability testing. There followed a prolonged and costly period of architectural changes and changes to the system requirements. The supplier battled heroically to provide an acceptable system, until finally the project was mothballed.

This is not an isolated case. IT folklore abounds with similar tales. From ambulance dispatch systems to web sites for the electronic submission of tax returns, systems fail as they scale and experience peak demands. All of these projects appear not to have identified and ordered the major risks they faced. This is a fundamental stage of risk-based testing, and it applies equally to scalability testing and load testing as it does to functionality testing or business continuity testing. With no risk assessment they did not recognise that scaling was amongst the biggest risks, far more so than delivering all the functionality.

Recent trends towards Service Oriented Architecture (SOA) attempt to address the issue of scalability but also introduce new issues. Incorporating externally provided services into your overall solution means that your ability to scale now depends upon how these external systems operate under load. Assuring this is a demanding task, and sadly the load testing and stress testing here are often overlooked.

Better practice is to start the development of a large scale software system with its performance clearly in mind, particularly scalability testing, volume testing and load testing. To create this performance testing focus:

1. Research and quantify the data volumes and transaction volumes the target market implies. Some of these figures can be eye openers and help the business users realise the full scale of the system. This alone can lead to reassessment of the priority of many features.

2. Determine the way features could be presented to users and the system structured in order to make scaling of the system easier. Do not try to carry over exactly the functionality you would have in a single-user desktop solution; provide an appropriate scalable alternative instead.

3. Recognise that an intrinsic part of the development process is load testing at representative scale on each incremental software release. This is continual testing, focusing on the biggest risk to the project: the ability to operate at full scale.

4. Ensure load testing is adequate both in scope and rigour. Load testing is not just about measuring response times with a performance test. The load testing programme needs to include other types of load testing including stress testing, reliability testing, and endurance testing.

5. Don't forget that failures will occur. Large scale systems generally include server clusters with fail-over behaviour. Failure testing, fail-over testing and recovery testing carried out on representative scale systems operating under load should be included.

6. Don't forget catastrophic failure could occur. For large scale problems, disaster testing and disaster recovery testing should be carried out at representative scale and loads. These activities can be considered the technical layers of business continuity testing.

7. Recognise external services if you use them. Where you are adopting an SOA approach and are dependent on external services you need to be certain that the throughput and turnaround time on these services will remain acceptable as your system scales and its demands increase. A smart system architecture will include a graceful response and fall-back operation should the external service behaviour deteriorate or fail.
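
As a small sketch of the graceful fall-back behaviour described in tip 7 (the service URL, timeout and cached default are hypothetical, and the third-party requests library is assumed), the snippet below calls an external service with a strict timeout and degrades to a cached value when the service is slow or failing.

import requests  # third-party HTTP library, assumed to be installed

RATE_SERVICE = "http://rates.example.com/latest"  # hypothetical external service
CACHED_DEFAULT = {"rate": 1.0, "stale": True}     # last known good value

def get_rate(timeout_seconds=0.5):
    """Call the external rate service, falling back gracefully if it misbehaves."""
    try:
        response = requests.get(RATE_SERVICE, timeout=timeout_seconds)
        response.raise_for_status()
        return {"rate": response.json()["rate"], "stale": False}
    except (requests.RequestException, ValueError, KeyError):
        # Service slow, down, or returning bad data: degrade rather than fail
        return CACHED_DEFAULT

if __name__ == "__main__":
    print(get_rate())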

About The Author

Mark Trellis is an experienced consultant working in performance testing, scalability testing and load testing. For further information visit: http://www.acutest.co.uk or http://www.acutest.co.uk/performance-testing.html; mailto:software-testing@hotmail.co.uk

For more information on Software Testing, visit Software Testing Concepts.

Software Test Automation For Accuracy And Precision

Contributed by Roy Upton

More and more companies these days are using automated testing tools for accuracy and precision. Testing is really important to make sure their software is working well before it goes into use. In this day and age, you have to guarantee that the applications that have been created for online use by the general public work perfectly.

However, automated software testing can be quite difficult, especially if you are a quality assurance manager or work in the IT department. Handling software testing without losing your wits can therefore be quite a challenge.

The first thing to remember is that automated testing is not fully automated. Test automation gives you automated test execution, but there are other ways computers can help with testing. There is software available that can take on additional jobs and do them well too, including test data generation, installations, file and database comparisons, and analysis of test results.
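
Two of those jobs are sketched below in Python, with hypothetical file names and record formats: generating a small, reproducible set of test data and comparing an output file against an approved baseline.

import csv
import filecmp
import random

def generate_test_data(path, rows=100, seed=42):
    """Write deterministic, randomised customer records to use as test input."""
    random.seed(seed)  # seeded so the generated data is reproducible
    with open(path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["customer_id", "age", "balance"])
        for customer_id in range(1, rows + 1):
            writer.writerow([customer_id, random.randint(18, 90),
                             round(random.uniform(0, 10000), 2)])

def files_match(actual_path, expected_path):
    """Compare an application's output file with the approved baseline."""
    return filecmp.cmp(actual_path, expected_path, shallow=False)

if __name__ == "__main__":
    generate_test_data("test_input.csv")
    generate_test_data("expected_output.csv")  # stand-in baseline for the demo
    print("Files match:", files_match("test_input.csv", "expected_output.csv"))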

You must know the goal of testing before you begin testing. For that purpose it is a good idea to create a schedule and try to stick to it. Of course, you may have to re-evaluate your goals as the project moves along. You also have to use both computer and human strengths to help you figure out and prevent problems. Another good way to keep a project going smoothly is to make sure management is confident in what your team is doing. With all the different types of testing software available out there, it pays to find an application that will serve you well in the long run and test for many different things.

Testing software that includes a whole range of testing capabilities is very practical for testing software properly. However, although you may want to automate everything with such an application, just be aware that some tasks cannot be carried out through the use of an application tool. But sometimes you can still find out how to automate the process using other tools you can find elsewhere. Sometimes it is more practical to test manually using human skills and intelligence, but usually an automated software testing program really does do the trick. Some tasks cannot be automated, but most can.

The secret of success lies in organization. Your aim should be to set up the project in such a manner that, when you work with your team, each member provides value. It is best to start from the ground level, plan for small achievements, and go from there. This is the way to go about it when you are testing software for problems.

Test automation is a good solution for performing load testing, performance testing, functional testing, regression testing, and bug tracking. An automated process can make finding problems in software quicker and more accurate.

About the Author:

Roy Upton is an experienced software developer, who now runs a site providing load testing tools. Click here to go to his bug tracking site.


For more information on Software Testing, visit Software Testing Concepts.

Career Change Time? Consider Software Testing

Contributed by Mikhail Portnov

The profession of software testing emerged in the early nineties, when personal computers became more popular as they became more affordable. The fast-growing population of PC users created new opportunities for software companies as well as strong competition for consumers' business.

The new generation of software users quite naturally expected their applications to work as advertised. At the same time, market forces encouraged the fast release of new software often at the sacrifice of thorough testing. Defective software does not sell.

The software industry soon recognized that, to achieve success, they would have to set quality standards prior to release and create thorough end-user testing procedures in-house.

In 1992, I got my very first job as a Software QA Engineer literally by accident: an old friend introduced me to a small startup company in Newark where he worked at the time. My job there was to identify functionality and performance problems in a client-server database application.

I searched for fellow testers for professional networking; but I found none. I approached over two dozen software developers asking if they knew of anyone who tests software for a living. They had never heard of software testers and could see no use for them since they tested their own software.

I found myself wondering what growth potential, if any, there might be in this career. In particular, I wanted to know how much I could earn as a software tester. I approached our VP of Engineering with this question. He suggested that, if I stayed with the company for five years and did really well, I might hope to make up to $40,000 a year.

A small group of developers who had heard this exchange were clearly skeptical. I read the look on their faces, "That'll be the day!"

In May of 1993 the startup I worked for collapsed. In the course of a week, there were five advertisements in the San Jose Mercury News for software QA positions. I sent a resume to each, which resulted in two job interviews the following week and one on-the-spot job offer.

My new employer was a multimedia startup. And guess what - that job paid 25 percent more than my previous one. Three months later I got a raise, which brought me to a $40,000 salary, exactly the projected five-year target thought to be unrealistic. My new employers were exceptionally successful. They sold the company profitably six months later. The new owners restructured the business and I was back in the job market again.

What I discovered in my new job search amazed me. Where I had found only five software quality assurance listings over the course of a week, I was now finding 10-12 listings a day. I had 3-4 interviews a week, sometimes two interviews a day, and received many offers within a month. The market had grown dramatically within a single year and the demand for software testers far exceeded the supply.

I chose the company that offered me strong exposure to automated testing, my passion at the time; but I could not help mulling over the amazing growth in demand for software testers and the equally amazing lack of supply.

In the mid-90s, software testing was still a new profession. Between 1994 and 1997, half of the QA graduates of many small and large local QA schools became the first person in their company specifically hired as a software tester.

Today, most software companies have a dedicated quality assurance department with one or more managers and a staff ranging from junior testers to senior quality assurance engineers.

Before the recent recession, the starting salary in QA was about $60,000 on average, with 2-3 weeks spent on the job search. Those who liked to change jobs every year or so as they acquired experience saw their salaries grow to $90,000-95,000 within two to three years. When the recession hit the Silicon Valley job market in 2001, there appeared to be no jobs at all for the inexperienced software tester.

But in the year 2007, the recession was over. On average, an entry level QA job seeker in Silicon Valley would get 2 job interviews a week. It seems to take only 3 or 4 interviews to land an offer. Finding a QA job today seems to be no more difficult than it was in the 90s.

Software QA is a unique job niche in many ways. Maturity is an asset in software testing, unlike in other IT fields. Maturity is easily marketed as patience, attention to detail, and tolerance for routine tasks, all of which are highly valued in software QA.

Whatever your prior education or work experience, it is likely to be an asset because there is likely to be software that specializes in your field of expertise. If you have experience in education, accounting, banking, publishing, workflow or contact management, sales, client relations, drafting, stock or bond trading, image processing, to name but a few industries, you will find software companies that target your field.

Testing software is basically about finding the discrepancy between the expected behavior of the application and its actual behavior. If you have an accounting background, for example, you are better positioned to understand what the expected behavior of a software application should be and how an accounting department would use it.

Testing is not a difficult concept to learn. We all have some experience testing something. We test new recipes, test-drive cars, double-check our change at the convenience store. In each case we are testing to see that the actual result meets our expected result.

Entry-level jobs in software QA do not require a computer science degree. The field covers a broad spectrum of technical proficiency. The niche is large enough to accommodate you.

We see individuals of all ages transitioning from H1B visas to green cards, for example, becoming two-income families and homeowners, and establishing themselves in their new country.

Software testing is definitely worth considering for college-educated people of all ages and professional backgrounds who are looking for a career change.

About the Author:

Mikhail Portnov has been helping people change their career path to the software testing field since 1994. He is the founder and CEO of Portnov Computer School in Silicon Valley, which has 2000+ successful graduates. Find out how you can change your career in 4-6 months at http://www.portnov.com

For more information on Software Testing, visit Software Testing Concepts.

Web Design Development And Testing

Contributed by Umair Khan

Many organizations are interested in building web applications for their business but are unaware of the various steps needed to build a compelling web application. In this article I will attempt to put together the various pieces of the puzzle. Application development involves several distinct efforts that need to come together to build a compelling end product: design, development architecture, development implementation, automated regression and functional testing, and performance and load testing.

Design:

People often confuse design with development. Moreover, even within design, user interface design is often confused with graphics design. Web user interface design involves the design of the flow of the website and the layout of the specific web pages within the website. The web user interface designer concentrates on the usability of the application and will typically develop "wireframes" using tools like Adobe Photoshop to convey the design. These are often initially developed as prototypes, and usability testing is carried out with user groups to ensure that the web application will be intuitive and easy to use. Graphics design, on the other hand, relates to the aesthetics of the page. The graphics designer is responsible for the aesthetic layout of the pages and the creation of the various graphical objects inside the pages, such as images and Flash objects. The graphics designer will typically use a combination of tools such as Adobe Photoshop, Adobe Illustrator and Adobe Captivate to create the actual graphics objects. A designer will need to work closely with the other groups to make sure the design does not compromise the performance of the application, keeping graphics objects small so that the various web performance metrics are unaffected.

Development:

This involves converting the design into an actual application. Development typically starts with an architectural phase in which the underlying modules that make up the application are scoped out. If persistent data storage is needed, a database schema should be designed to accommodate the data storage needs. Choices also need to be made about the operating system (e.g. Windows, Linux or Solaris) where the web application will run, the web server (e.g. Microsoft IIS, Apache or Tomcat) which will run the web application, and the back-end database (e.g. Microsoft SQL Server, Oracle, MySQL or Postgres) which stores the data. Various development frameworks are available to build web applications; the most common ones are ASP and ASP.NET from Microsoft, Java Servlets and JSP from Sun, and the open source PHP and Perl. The choice of application framework is typically dictated by the strengths of the members of the development team. The architectural phase is followed by the implementation phase. This is typically the longest part of the project, and during this phase the actual code is written using the design specifications and graphics objects developed by the design team. The programming will typically be done using a combination of the application frameworks mentioned earlier together with HTML, JavaScript and CSS.

Quality Assurance and Testing:

A surprising number of people are of the view that quality assurance and testing are desirable but not actually needed. Unfortunately, this view has its roots in ignorance of the process that is needed to build a good end product. Regardless of how pretty or slick we make the application, if it does not work as expected, users will reject it. Quality assurance and testing involve two different kinds of tasks. Functional and regression testing is used to verify that the developed application does what it is supposed to do; this is achieved by test automation using a functional testing tool. Load and performance testing is used to ensure that the application performs as intended when it is subjected to the typical load of a production environment. Load testing is, practically speaking, impossible to perform without an automated load testing tool, since it involves the simulation of a large number of concurrent virtual users. This effect cannot really be achieved manually; it needs an application designed to subject the system under test to a specified load and then measure its performance under that load. Quality assurance teams will also need to track the bugs or defects in the application using a bug tracking tool, which allows defects to be tracked by all members of the team.
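
To make the distinction concrete, here is a minimal load testing sketch in Python using only the standard library; the target URL, user count and request count are hypothetical placeholders, and a real load testing tool would add ramp-up profiles, think time and far richer reporting.

```
# Minimal load test sketch: N concurrent "virtual users" each fetch a URL
# and we record the response time of every request. Placeholder values only.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # hypothetical application under test
VIRTUAL_USERS = 25                      # concurrent simulated users
REQUESTS_PER_USER = 10

def virtual_user(user_id: int) -> list[float]:
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()                  # consume the body like a browser would
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    all_times = [t for user in results for t in user]
    print(f"requests: {len(all_times)}")
    print(f"average response time: {sum(all_times) / len(all_times):.3f}s")
    print(f"slowest response time: {max(all_times):.3f}s")
```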

The three groups mentioned above tend to be specialized for their skill set. As an example, people often make the mistake of using developers as quality assurance testers. This is not a wise strategy because most developers who are good at writing software are quite poor at finding bugs or defects in their own software.

Writing good and compelling web applications requires an understanding of all phases of the process: design, development and quality assurance. Skipping phases or taking shortcuts will result in low quality software that will generally cost more in the long run.

About the Author:

Umair Khan is Chairman of Verisium, Inc., maker of vPerformer (performance and load testing) and vTest (functional testing).

For more information on Software Testing, visit Software Testing Concepts.

Friday, April 24, 2009

Software Testing Trade Offs

Contributed by Grosha N Fabiola

Running software testing projects is far more difficult than people outside the software testing arena seem to realize. It is not uncommon for senior management, project management and development teams to pressure the test team to cut corners in order to meet delivery deadlines. Yes, everyone wants to release a quality product, on time and on budget. Believe it or not, even the software testing team wants to hit the delivery date with a product that is on budget. Pushing the software testing team to cut corners is not the answer, though.

It is easy to see that everyone involved with a development project wants to achieve the same goal and the same successful release; it is just that the test team is more cautious than most. And for good reason: software testing is difficult! There is no set process that ensures a successful testing project, and there are no software testing tools that guarantee a successful release. Yet despite these obvious facts, senior managers, project managers and development teams always seem to think the software test team can perform some magical act to bring a project back on schedule when delivery schedules start to slip. Well, they can't!

At least they can't if they continue to act professionally, accurately and effectively. The test team is, without question, the last checkpoint before a company potentially releases a product that destroys the company's reputation. That is no small responsibility to take on.

So why does it always fall on the software test team to bring in the schedule when projects start slipping? Well, that isn't a difficult one to answer, although there are a couple of reasons, one of which might surprise you. Firstly, as testing commonly falls at the end of the development cycle, the software testing component is the only area left where it is even possible to make up time. Secondly, and possibly more interestingly, those who have little knowledge of the complexities of software testing (for example, project managers) think that a little less testing will only have a little impact on the quality of the product. How wrong that assumption can be! Releases of products with serious defects usually happen because the software test team was forced to cut corners.

The imprecise nature of software testing, and the pressure to cut corners, means it is very difficult to confidently target the test areas so that you minimize the risk of releasing with serious defects left undiscovered. The very fact that we leave some areas of our testing incomplete means we have no idea what we are leaving uncovered. Software testing tools can help, but as in many walks of life it all comes down to a trade-off between quality and time, and with software testing the consequences of getting the trade-offs wrong can be disastrous.

Software testing is hard enough already, so why make it even harder by not using good, reliable software testing tools, such as the free, open source ones offered by www.softwaretesting.net or www.testmanagement.com?

For more information on Software Testing, visit Software Testing Concepts.

Software Testing Phases

Contributed by Debajyoti Basu

IEEE standards are the most widely accepted in the software testing industry. However, it is not mandatory that all software testing processes follow them. Software testing has many different phases; this article covers the test planning, test specification and test reporting phases.

The test plan is the most important phase in the software testing process. It gets the process rolling and describes the scope of the testing assignment, the approach methodology, the resource requirements for testing and the project plan or time schedule. The test plan outlines the test items, the system features to be tested (i.e., the functionality of the system to be checked), the testing tasks, the responsibility matrix and the risks associated with the process.

The testing task is accomplished by testing with different types of test data. The steps that are followed in system testing are program testing, string testing, system testing, system documentation, and user acceptance testing. I will discuss each of these in my next article, "Software System Testing".

The test specification document helps in refining the test approach that has been planned for executing the test plan. It identifies the test cases, procedures, and the pass/fail criteria for the assignment.

The test case specification document outlines the actual values required as input parameters in the testing process and the expected outputs of the testing results. It also identifies the various constraints related to the test case. It is important to note that test cases are re-usable components and one test case can be used in various test designs. The test procedure outlines all the processes that are required to test the system and implement the test cases.
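
As a hedged illustration (the field names and values below are assumptions, not a prescribed template), a test case specification entry might be captured like this:

```
# Illustrative test case specification entry: input values, expected output
# and a pass/fail criterion for a reusable test case. Field names are assumptions.
test_case = {
    "id": "TC-042",
    "objective": "Verify interest calculation for a standard savings account",
    "preconditions": ["Account exists", "Account balance is 1000.00"],
    "inputs": {"annual_rate": 0.05, "period_months": 12},
    "expected_output": {"interest": 50.00},
    "pass_criteria": "Computed interest matches expected_output within 0.01",
    "constraints": ["Runs only against the test database"],
}

def evaluate(actual_interest: float) -> str:
    """Apply the pass/fail criterion to an observed result."""
    expected = test_case["expected_output"]["interest"]
    return "PASS" if abs(actual_interest - expected) <= 0.01 else "FAIL"

print(evaluate(50.0))   # PASS
print(evaluate(48.7))   # FAIL
```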

During the testing phase, all the activities that occur are documented. There are various reasons why clear documentation is required during testing. It helps the development team to understand the problems and fix them quickly. In case there is a change in the testing team, it will help the new team members to quickly understand the process and help in a quick transition. The overall summary report of the testing process helps the entire project team to understand the initial flaws in design and development and ensure that the same errors are not repeated again.

There are four types of testing documents:

* the transmittal report, which specifies the testing events being transmitted from the development team to the testing team;

* the test log, a very important document used to record the events that happened during execution;

* the test incident report, which lists testing events that require further investigation;

* the test summary report, which summarizes the overall testing activities.
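
As a purely illustrative sketch, and assuming field names that the article does not prescribe, a test log entry and a test incident entry might be recorded like this:

```
# Illustrative structures for two of the documents above: a test log entry
# and a test incident report entry. All field names and values are assumptions.
from datetime import datetime

test_log_entry = {
    "timestamp": datetime(2009, 4, 24, 10, 15).isoformat(),
    "test_case_id": "TC-042",
    "tester": "D. Basu",
    "result": "FAIL",
    "notes": "Interest rounded down instead of to the nearest cent",
}

incident_report = {
    "incident_id": "IR-007",
    "test_case_id": "TC-042",
    "severity": "Medium",
    "summary": "Rounding error in interest calculation",
    "status": "Needs further investigation",
}

print(test_log_entry["result"], "->", incident_report["incident_id"])
```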

Many software testing companies follow the IEEE standard of software testing when executing their testing projects. Software application development companies may have their own testing templates which they use for their testing requirements. Outsourcing the testing requirements to a third-party vendor helps to improve the quality of the software to a great extent, and an unbiased view helps to find many of the loopholes that exist in the software system.

About the Author:

Debajyoti Basu is a management graduate from India who, along with a friend, has started a software testing and quality assurance service company. Their other line of business is SEO and SEM. At IntelligentQ, they have a vision of being a niche company focused on Software Quality Assurance, Testing, and web site marketing services. Their team consists of experienced professionals who believe in delivering quality services, first time, and every time. Working with clients across the globe, they have made an impact on the clients' businesses. Connect with IntelligentQ to feel the difference they can make to your software testing processes and Internet marketing initiatives. http://www.intelligent-q.com

For more information on Software Testing, visit Software Testing Concepts.

In-House Controls in an Organization

Introduction

As technology advances in leaps and bounds, companies, especially IT organizations, pay much attention to safeguarding security. In spite of this advancement, security continues to be a vulnerable area in most organizations. This article throws light on the important aspects of in-house controls: testing security controls, identifying penetration points, assessing security and the attributes of an effective security control.

In-House Control

Interest in in-house control has been heightened by publicized penetrations of security and by the increased importance of information systems and the data they contain. The passage of the Sarbanes-Oxley Act, in particular, heightened interest in in-house control.

The Sarbanes-Oxley Act, sometimes referred to as SOX, was passed in response to numerous accounting scandals such as Enron and WorldCom. While much of the act relates to financial controls, there is a major section relating to in-house controls. Because making a misleading attestation statement is a criminal offense, top corporate executives treat in-house control as a very important topic. Many of those controls are incorporated into information systems, hence the need to test those controls.

The following four key terms are used extensively in in-house control and security: risk, exposure, threat, controls.

Let's look at an example of these terms using a homeowner's insurance policy. Under that policy we will look at one risk: the risk of fire. The exposure associated with the risk of fire would be the value of your home. A threat that might cause that risk to turn into a loss might be an improper electrical connection or children playing with matches. Controls that would minimize the loss associated with the risk include such things as fire extinguishers, sprinkler systems, fire alarms and non-combustible construction materials.

Looking at the same situation in IT, we might consider the risk of someone penetrating a banking system and improperly transferring funds to the perpetrator's personal account. The risk, obviously, is the loss of funds from the account that was penetrated. The exposure is the amount of money in the account, or the amount of money the bank allows to be transferred electronically. The threat is an inadequate security system, which allows the perpetrator to penetrate the banking system. Controls can include passwords limiting access, limits on the amount that can be transferred at any one time, flags on unusual transactions such as transfers to an overseas account, and restrictions on who can transfer money from the account.
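
To make the banking example concrete, here is a minimal sketch of one such control, a per-transfer limit combined with a hold on overseas destinations; the limit, country code and function name are illustrative assumptions.

```
# Illustrative control from the banking example: enforce a per-transfer limit
# and flag transfers to overseas accounts for review. All values are assumptions.
TRANSFER_LIMIT = 10_000.00          # maximum amount allowed per transfer
DOMESTIC_COUNTRY = "US"

def review_transfer(amount: float, destination_country: str) -> str:
    if amount > TRANSFER_LIMIT:
        return "REJECT: amount exceeds per-transfer limit"
    if destination_country != DOMESTIC_COUNTRY:
        return "HOLD: overseas destination requires manual review"
    return "APPROVE"

print(review_transfer(2_500.00, "US"))   # APPROVE
print(review_transfer(2_500.00, "KY"))   # HOLD: overseas destination requires manual review
print(review_transfer(50_000.00, "US"))  # REJECT: amount exceeds per-transfer limit
```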

Testing Security Controls

Security is too important to organizations for the testing of security controls to be ignored. The following tasks can add value to security control testing:

Task 1 -- Determine Where Security is Vulnerable to Penetration

Data and report preparation areas and computer operations facilities, which have the highest concentration of manual functions, are the areas most vulnerable to security penetration. Nine primary IT locations are ranked below, from most to least vulnerable:

1. Data and Report Preparation Facilities
2. Computer Operations
3. Non-IT Areas
4. Online Systems
5. Programming Offices
6. Online Data and Report Preparation
7. Digital Media Storage Facilities
8. Online Operations
9. Central Processors

Task 2 -- Building a Penetration Point Matrix

Interface Activities

  • Technical interface to the computer environment
  • Development and maintenance of application systems
  • Privileged users
  • Vendor interfaces

Development Activities

  • Training
  • Database administration
  • Communications
  • Documentation
  • Program change control
  • Records retention program

Operations Activities

  • Media libraries
  • Error handling
  • Production library control
  • Computer operations
  • Disaster planning
  • Privileged utilities and commands

Task 3 -- Assess Security Awareness Training

The aim of security awareness training is that employees understand their roles and responsibilities related to the organizational mission.

Step 1 -- Create a Security Awareness Policy

Step 2 -- Develop a Security Awareness Strategy

Step 3 -- Assign the Roles for Security Awareness

Task 4 -- Understand the Attributes of an Effective Security Control

When a security control is evaluated, we need to understand what makes a security control effective; these attributes help determine whether or not a given control is effective.

Task 5 -- Selecting Techniques to Test Security

Often, several testing techniques are used together to gain a more comprehensive assessment of the overall network security posture. For example, penetration testing usually includes network scanning and vulnerability scanning to identify vulnerable hosts and services that may be targeted for later penetration, and some vulnerability scanners incorporate password cracking. None of these tests by itself provides a complete picture of the network or its security posture.
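
As a hedged illustration of the network scanning technique mentioned above, the sketch below probes a short list of TCP ports; the host and port list are placeholders, and such a scan should only ever be run against systems you are authorized to test.

```
# Minimal TCP port scan sketch (network scanning). Only run this against
# hosts you are authorized to test. Host and ports are placeholder values.
import socket

HOST = "127.0.0.1"                       # hypothetical target
PORTS = [21, 22, 25, 80, 443, 3306]      # common service ports

def scan(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

for port in PORTS:
    state = "open" if scan(HOST, port) else "closed/filtered"
    print(f"{HOST}:{port} is {state}")
```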

Conclusion

In-house controls and security testing are most effective when an organization knows where it is vulnerable to penetration, builds a penetration point matrix, trains staff for security awareness, understands what makes a control effective and combines several testing techniques rather than relying on any one of them.

By: G.R. Brindha Shivak

For more information on Software Testing, visit Software Testing Concepts.

Wednesday, April 22, 2009

Risky Business -- Testing Security Software

Contributed by Tim Klemmer

Ever ask yourself the following question as you're standing in the aisle at CompUSA or Best Buy: how well will this piece of software work with my other programs? Probably not. There is a high expectation that whatever piece of software you buy will work acceptably on your computer and won't interfere with other programs. Games, word processors, spreadsheets and music players are just the types of self-contained software programs you wouldn't expect any trouble from. And for the most part, you don't experience problems.

Security software, on the other hand, by its very nature is more invasive and more likely to intrude on your way of computing. First and foremost, all good anti-virus software packages install on-access/on-demand scanning. This means that every time you start up a program, every time you access a document or spreadsheet, every time you access a directory in Explorer, the anti-virus program will scan it for viruses.

Unfortunately, the consequence of this is that it slows down your computer. Unfortunately still, all vendors set on-access/on-demand scanning up as the default when you install the software. They have to. When you install security software it has to install itself in such a way that it will always have the upper hand when new programs are run on a PC. Why? For the simple reason that you are installing this software to protect you from bad software. Security software tries to analyze anything you do on your computer and decide if it is a good thing or not.

But will the software make good decisions? Will this software cooperate with other programs? Security vendors have spent years perfecting their testing and testing against enormous suites of commercial software. But they can't test every combination of software, every different version of software (there are still PCs out there running DOS 3.0 programs). They have to concentrate on mainstream.

The problem is they may have no idea that your video card in combination with those two older games you installed will wreak havoc with their detection algorithms. We see this all the time. Users send in emails or write notes in newsgroups complaining that such-and-such a package is preventing them from installing a new game or that such-and-such version is saying that their new game is infected.

Or worse still, things just don't work the same anymore since the software was installed. Downloads become more tedious because instead of just clicking download, now users are forced to answer questions about each download or approve downloads.

So what's the answer? The answer is to move to a more centralized approach. Instead of installing scanning software on your computer, install behavior-based software on an off-site testing server that receives test requests from the email server. All emails are routed through the testing server. This can then be expanded to include web traffic, which runs on a 10-second delay much like talk radio. You connect through the Internet; all subsequent downloads, ActiveX controls, etc. are routed via the testing server and then either arrive on your PC or are halted and removed, in which case you receive an appropriate message.

In the time that it takes to receive a file, it can be tested and troublesome software can be detected.

This approach works for detecting everything from viruses to worms to spyware. You as a user notice no long waiting, no downtime, no drag, and no incompatibilities.

About The Author:

Tim Klemmer is CEO of OnceRed LLC (http://www.checkinmyemail.com/). He has spent the better part of 12 years designing and perfecting the first patented behavior-based solution to malicious software.

For more information on Software Testing, visit Software Testing Concepts.

Practical Measurements For Software Testing

Every software development company focuses on developing quality software. The only way to track software quality is to evaluate it at every stage of development. That requires metrics, which are obtained through effective testing methods; each stage of software testing is monitored as part of software QA.

Software measurements are used for:

1. Deriving a basis for estimates

2. Tracking project progress

3. Determining (relative) complexity

4. Understanding the stage of desired quality

5. Analyzing defects

6. Validating best practices experimentally

Here, some software testing metrics with real-world applications are proposed for black box testing. This article discusses:

Importance of software testing measurement

Different techniques/processes for measuring software testing

Metrics for analyzing testing

Methods for measuring/computing the metrics

Advantages of implementing these metrics

These metrics help in understanding the inadequacies of the different software QA stages and in finding better corrective practices.

What is measurement and why is it required?

The process of assigning numbers or symbols to attributes of real world entities for describing them according to defined rules is called measurement.

For developing quality software, several characteristics like requirements, time and effort, infrastructural cost, requirement testability, system faults, and improvements for more productive resources should be measured.

Measuring software testing is required:

1. To check whether the available test cases cover all aspects of the system

2. To track problems

3. To quantify testing

Choose the suitable metrics

Several metrics can measure the software testing process.

Here, the following types of metrics are identified:

Base metrics:

These are raw data collected during a testing effort and applied in formulae used to derive calculated metrics.

The base metrics comprise the number of test cases passed, failed, under investigation, blocked and re-executed, and the test execution time.

Calculated metrics:

They convert the base metrics data into useful information. Every test effort should implement the following calculated metrics (a small computational sketch follows the list):

% Complete

% Defects Corrected

% Test Coverage

% Rework

% Test Cases Passed & Blocked

% Test Effectiveness & Efficiency

% 1st Run Failures

% Failures

Defect Discovery Rate

Defect Removal Cost
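
A minimal computational sketch of a few of these calculated metrics, assuming made-up base metric counts and the simple definitions shown in the comments:

```
# Derive a few calculated metrics from base metrics gathered during a test
# cycle. The base metric values and the metric definitions used here are
# assumptions chosen purely for illustration.
base = {
    "planned": 200,            # test cases planned for the cycle
    "executed": 160,           # test cases actually executed
    "passed": 130,
    "failed": 22,
    "blocked": 8,
    "first_run_failures": 15,
}

pct_complete = 100 * base["executed"] / base["planned"]
pct_passed = 100 * base["passed"] / base["executed"]
pct_blocked = 100 * base["blocked"] / base["executed"]
pct_first_run_failures = 100 * base["first_run_failures"] / base["executed"]

print(f"% Complete:          {pct_complete:.1f}")
print(f"% Test Cases Passed: {pct_passed:.1f}")
print(f"% Blocked:           {pct_blocked:.1f}")
print(f"% 1st Run Failures:  {pct_first_run_failures:.1f}")
```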

Measurements for Software Testing

The software testing process measures each step of software development to ensure delivery of a quality product.

1. Software Size:

This is determined by the amount of functionality in an application and is calculated by:

Function Point Analysis

Task Complexity Estimation Methodology

2. Requirements review:

Before software development, the software requirement specifications (SRS) are obtained from the client. The SRS must be:

Complete

Consistent

Correct

Structured

Ranked

Testable

Traceable

Unambiguous

Validated

Verified

Review efficiency is a metric that offers insight into review quality and testing.

Review efficiency=100*Total number of defects found by reviews/Total number of project defects
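
For example, if reviews catch 40 defects and a total of 160 defects are eventually logged against the project, review efficiency = 100*40/160 = 25%.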

3. Effectiveness of testing requirements:

This is measured by maintaining a requirement traceability matrix (RTM) and the specification of requirements, which should cover:

SRS Objective, purpose

Interfaces

Functional Capabilities

Performance Levels

Data Structures/Elements Safety

Reliability

Security/Privacy

Quality

Constraints & limitations

Next comes the updating of the crucial requirement traceability matrix (RTM), which determines the number and types of tests.

While mapping test cases, the number and priority of the requirements each test case covers, its execution effort and its requirement coverage must be determined.

The requirement compliance factor (RCF) measures the coverage that a test case provides for a requirement or a set of requirements. Mathematically,

RCF_j = Σ(P_i * X_i) / (max(X_i) * ΣP_i), with the sum taken over the n requirements in the set (i = 1 to n)

where:

j identifies a set of requirements (j = 1 to m);

P_i is the priority of requirement R_i;

X_i = 2 if the test case tests requirement R_i completely, 1 if it tests it partially, and 0 otherwise.

Effectiveness = RCF_j / E_j, where E_j is the time required to execute the test case.
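
For example, suppose a test case covers three requirements with priorities 3, 2 and 1, testing the first completely (X=2), the second partially (X=1) and the third not at all (X=0). Then Σ(P_i*X_i) = 3*2 + 2*1 + 1*0 = 8, ΣP_i = 6 and max(X_i) = 2, so RCF ≈ 8/(2*6) ≈ 0.67; if the test case takes 2 hours to execute, its effectiveness is roughly 0.33.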

4. Evaluating estimation accuracy

Relative error RE = (A - E)/A, where E is the estimated value and A is the actual value.

For a collection of estimates across n projects, the mean relative error is

mean RE = (1/n) * Σ RE_i, with the sum taken over i = 1 to n

For a set of n projects, the mean magnitude of relative error (MRE) is

MRE = (1/n) * Σ |RE_i|, with the sum taken over i = 1 to n

An acceptable level for MRE is less than 0.25.

If K is the number of projects whose magnitude of relative error is less than or equal to q, then the prediction quality is pred(q) = K/n.
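
A minimal sketch of these calculations in Python, using made-up (actual, estimated) effort pairs purely for illustration:

```
# Compute relative error, MRE and pred(0.25) for a set of projects.
# The (actual, estimated) effort pairs below are made-up illustration data.
projects = [(120, 100), (80, 95), (200, 180), (60, 62)]   # (actual, estimate)

rel_errors = [(a - e) / a for a, e in projects]
mre = sum(abs(re) for re in rel_errors) / len(rel_errors)

q = 0.25
k = sum(1 for re in rel_errors if abs(re) <= q)
pred_q = k / len(projects)

print(f"relative errors: {[round(re, 3) for re in rel_errors]}")
print(f"MRE = {mre:.3f}  (acceptable if < 0.25)")
print(f"pred({q}) = {pred_q:.2f}")
```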

5. Measurement of Efficiency in testing process

In software testing, we must keep tabs on what we had planned versus what we have actually achieved in order to measure efficiency. Here, the following attributes play major roles:

Cost: The Cost Variance (CV) factor measures the risk associated with cost.

CV=100*(AC - PC)/PC, where AC=Actual Cost and PC=Planned/Budgeted Cost.

Effort: Effort Variance (EV) measures effort.

EV=100*(AE - PE)/PE, where AE=Actual Effort and PE=Planned Effort.

Schedule: Schedule Variance (SV) is important for project scheduling.

SV=100*(AD-PD)/PD where AD=Actual duration and PD=Planned duration.

Cost of quality: It indicates the total effort expended on prevention, appraisal and rework/failure activities versus all project activities.

Prevention Effort=Effort expended on planning, training and defect prevention. Appraisal Effort=Effort expended on quality control activities.

Failure effort=Effort expended on rework, idle time etc.

COQ=100*(Prevention Effort + Appraisal Effort + Failure Effort)/Total project effort.
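
A minimal sketch that computes the variance metrics and cost of quality from made-up planned and actual figures:

```
# Compute the variance metrics and cost of quality from planned vs. actual
# figures. All numbers are made-up illustration values.
planned_cost, actual_cost = 100_000, 112_000
planned_effort, actual_effort = 400, 430          # person-hours
planned_duration, actual_duration = 60, 66        # days

cv = 100 * (actual_cost - planned_cost) / planned_cost
ev = 100 * (actual_effort - planned_effort) / planned_effort
sv = 100 * (actual_duration - planned_duration) / planned_duration

prevention, appraisal, failure = 30, 60, 25       # person-hours of each effort
total_effort = actual_effort
coq = 100 * (prevention + appraisal + failure) / total_effort

print(f"Cost Variance:     {cv:.1f}%")
print(f"Effort Variance:   {ev:.1f}%")
print(f"Schedule Variance: {sv:.1f}%")
print(f"Cost of Quality:   {coq:.1f}%")
```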

Product metrics:

Size variance: It is the degree of variation between estimated and actual sizes. Size Variance=100*(Actual Software Size - Initial Estimated Software Size)/Initial Estimated Software Size

Defect density: It is the total number of defects in software with respect to its size.

Defect density=Total number of defects detected/software size

Mean Time Between Failures: MTBF is the mean time between two critical system failures or breakdowns.

MTBF=Total time of software system operation/Number of critical software system failures.

Defects: Defects are measured through:

Defect distribution: It indicates the distribution of total project defects. Defect Distribution=100*Total number of defects attributed to the specific phase/Total number of defects.

Defect removal effectiveness: This is approximated by dividing the number of defects removed during a phase by the sum of the defects removed during that phase and the defects found later.
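
For example, if 80 defects are removed during a phase and 20 more defects attributable to that phase are found later, the phase's defect removal effectiveness is approximately 80/(80+20) = 80%.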

Benefits of implementing metrics in software testing:

Improved project planning.

Better understanding of the desired quality achieved.

Improvement of the processes followed.

Better analysis of the associated risks.

Improved defect removal efficiency.


By: RTG Marketing

ReadyTestGo is a professional software testing and QA outsourcing company. For more details, please contact marketing@readytestgo.com


For more information on Software Testing, visit Software Testing Concepts.

Thursday, February 19, 2009

Testing a Disaster Recovery Plan

Contributed by Amy Nutt

Disaster can strike your business at any moment, which is why you should have a good disaster recovery plan (http://www.fusepoint.com) in place. Without one, it is fair to say that your staff will be running around wondering what to do next. That is not a situation you want to find yourself in. You want business to continue as usual while your company works out how to handle a particular disaster, whether that is a data center failure or a natural disaster, so that you don't lose your customers. You would be surprised at how much money a company can lose in just a matter of days after a disaster strikes.

When you have a disaster recovery plan in place, you are doing what is necessary to make sure that your business can serve customers to the best of its ability. Even if it cannot serve all of them, serving some can make an incredible difference. A company that does not have a disaster recovery plan in place can go out of business in a heartbeat, because customers who were not aware of the situation simply conclude that the company is giving them bad service.

Types of Disaster Testing

There are several different types of testing that you can use when testing a disaster recovery plan. You can do walkthrough testing, simulation testing, checklist testing, full interruption testing, and parallel testing.

Many companies decide to start with a checklist test and then proceed to a simulation test. The simulation test is important so that employees know what to do when a disaster actually occurs. The company may also decide to do a full interruption test alongside a simulation test, but that really depends on whether the company's budget allows for this type of testing.

Testing Your Disaster Recovery Plan

There are many different disasters that can take place. You may have a fire in the building, you may have some sort of natural disaster such as an earthquake, or your entire data center can fail. Although data centers are very reliable and outright failures are rare, data center failure still tends to be the most common of these scenarios: suddenly, employees are unable to retrieve customer information. That is why you need to check the following with your disaster recovery plan:
  • Verify the feasibility of your recovery plan
  • Make sure that backup facilities are viable
  • Ensure the adequacy of the procedures and make sure teams are working on their part
  • Ensure the training of team managers
  • Provide all employees with the means to maintain and update the recovery plan
  • Make sure an acceptable amount of time to recover has been established
  • Ensure that every location within the company is prepared
  • Verify the cost to perform the test to ensure that the budget is adhered to
From there, you need the entire staff to go into "pretend" mode and simulate that a disaster is really occurring. For example, the data center failure recovery plan (http://www.fusepoint.com/english/html/data_centre_information.html) may be the first one that you want to test. If you are on a strict budget when conducting your test, you may have to test multiple scenarios at once so that a single test covers them all; you may require two tests if you have a lot of issues that need to be fixed. Once they are fixed, it is very important to test them again to make sure they will work. Once you have determined that you have a solid plan, you can rest assured that you'll be in good shape when a disaster actually occurs.

Leading Canadian provider of managed IT solutions with offices strategically located in Toronto, Vancouver, Montreal and Quebec City.

For more information on Software Testing, visit Software Testing Concepts.

Wednesday, February 11, 2009

Performance Testing of Web Applications

Contributed by RTG Marketing

Need for Testing Web Applications:

A superior web experience is the key to success on the World Wide Web. Web applications have to support many online user interactions daily, so all foresighted corporations should invest in performance engineering of their web applications.

Process Overview:

Any performance testing process should ensure repeatability, consistently high quality of delivery, complete coverage and a strong feedback mechanism to leverage knowledge.

Test Planning:

Based on the requirement analysis, a comprehensive project plan is prepared. Resources such as the number of engineers, testing tools, servers, load generators and bandwidth are identified and planned by the performance analyst. The number of test runs, number of transactions, scenarios to be tested, access to various systems (in case of a live/customer-site test), etc., are also planned.

In performance consulting, the mode of testing is also finalized during the test planning stage in order to determine whether the testing has to be conducted over the Internet, on-site at the customer's data center, or at an offshore lab.

Understanding the Requirements:

For performance benchmarking, web user behavior is analyzed to determine end-to-end requirements. A usage pattern is outlined based on customer feedback and analysis of web logs. Details such as the percentage of usage per transaction, the type and version of browsers used, connection speeds, etc., are estimated. Based on this, a web user strategy is developed.

Templates and checklists help to decide load levels, the method of delivery, etc. The website infrastructure is also studied to understand the arrangement with the ISP(s) and the usage of various components such as firewalls, hardware platforms, etc.

Development of Simulation Scripts:

Test cases are designed and scripted to cover all the transactions identified as part of performance benchmarking. In some cases, the library of test scripts available with an independent testing company can be customized and used in order to reduce testing time.
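
As a hedged illustration of what one scripted transaction might look like (a commercial tool would typically record and replay this instead), here is a minimal Python sketch; the base URL, paths, form fields and credentials are placeholder assumptions.

```
# Minimal simulation script for one user transaction: load the home page,
# log in, then view an account page, timing each step. URLs, form field
# names and credentials are placeholder assumptions.
import time
import urllib.parse
import urllib.request

BASE = "http://localhost:8080"     # hypothetical application under test

def timed_get(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def timed_post(url: str, fields: dict) -> float:
    data = urllib.parse.urlencode(fields).encode()
    start = time.perf_counter()
    with urllib.request.urlopen(url, data=data, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

steps = [
    ("home page", lambda: timed_get(f"{BASE}/")),
    ("login",     lambda: timed_post(f"{BASE}/login", {"user": "demo", "pass": "demo"})),
    ("account",   lambda: timed_get(f"{BASE}/account")),
]

for name, step in steps:
    print(f"{name}: {step():.3f}s")
```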

Collecting the Data:

During the execution of the test, test logs are to be recorded and maintained for performance engineering. The emphasis is to be on identifying application bottlenecks under various load conditions. It is recommended to re-run the test to validate the effect of any fixes made.

Analyzing the Data:

The test results are to be reviewed and analyzed by performance analysts with experience in Hardware Platforms, Operating systems, Databases, and Software design. Such a team can identify bottlenecks, analyze the root cause, and provide recommendations for corrective action.

Providing Results and Recommendations:

A report including observations and recommendations, along with the metrics collected during the test, is to be submitted by the performance analyst on the completion of each test run.

Best Practices:

The objective of a performance testing engagement is to ensure the stability and scalability of the web application. Understanding and following best practices in this area helps isolate and fix bottlenecks rapidly and also makes the performance benchmarking process more effective.

1. Test for common performance bottlenecks

Web server, database server and network problems are the most common reasons why web applications fail to scale. It is best to start performance engineering with the easiest application layer and work toward the most difficult.

2. Test for common transactions

It is critical to test these transactions first as they are the ones that will put the most load on the system.

3. Create reusable test scripts

Maintaining a library of reusable test scripts can minimize rework and significantly improve testing cycle time.

4. Track defects to closure

A performance testing engagement cannot be conducted on the basis of a pre-determined number of test runs. Repeating tests is important to ensure that the recommendations, once implemented, have actually fixed the issues.

ReadyTestGo is a professional Software Testing Company. For more details about Software Performance Testing, please contact marketing@readytestgo.com

For more information on Software Testing, visit Software Testing Concepts.