Chapter 1 – Software Testing Basics

(B) In Which Software Life Cycle Phase Does Testing Occur?
OR
(B) Can You Explain the PDCA Cycle and Where Testing Fits in?

Software testing is an important part of the software development process. In normal software development there are four important steps, also referred to, in short, as the PDCA (Plan, Do, Check, Act) cycle.

Figure 4: PDCA cycle

Let’s review the four steps in detail.

  1. Plan: Define the goal and the plan for achieving that goal.

  2. Do/Execute: Execute the work according to the strategy and plan decided on during the plan stage.

  3. Check: Check/Test to ensure that we are moving according to plan and are getting the desired results.

  4. Act: If the check stage reveals any issues, take appropriate corrective action and revise the plan.

So now to answer our question: where does testing fit in? You guessed it, the check part of the cycle. Developers and other stakeholders of the project do the “planning and building,” while testers do the check part of the cycle.

(B) What is the Difference between White Box, Black Box, and Gray Box Testing?

Black box testing is a testing strategy based solely on requirements and specifications. Black box testing requires no knowledge of internal paths, structures, or implementation of the software being tested.

White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.

There is one more type of testing called gray box testing. In this we look into the “box” being tested just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.

The following figure shows how both types of testers view an accounting application during testing. Black box testers view the basic accounting application from the outside, while white box testers know the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, review the architecture, remove bad code practices, and do component-level testing.

Figure 5: White box and black box testing in action
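The distinction can be sketched with a short example. The tax function, its rules, and the tests below are all hypothetical, meant only to contrast the two viewpoints: black box tests come from the stated specification, while white box tests target branches visible in the code.

```python
# Hypothetical function under test: computes 10% tax, capped at 500.
def calculate_tax(amount):
    if amount < 0:
        raise ValueError("amount cannot be negative")
    tax = amount * 0.10
    return min(tax, 500)

# Black box tests: derived purely from the specification
# ("10% tax, never more than 500"), no knowledge of the code.
def black_box_tests():
    assert calculate_tax(1000) == 100
    assert calculate_tax(10000) == 500      # cap promised by the spec

# White box tests: target internal paths we can see in the code,
# e.g., the negative-amount branch and the exact cap boundary.
def white_box_tests():
    try:
        calculate_tax(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass
    assert calculate_tax(5000) == 500       # boundary where min() kicks in

black_box_tests()
white_box_tests()
```

A gray box tester would glance at the implementation (noticing, say, the `min()` cap) and then use that insight to pick sharper black box inputs such as the boundary amount.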

(B) What is the Difference between a Defect and a Failure?

When a defect reaches the end customer it is called a failure; as long as it is detected internally and resolved before release, it is called a defect.

Figure 6: Defect and failure

(B) What are the Categories of Defects?

There are three main categories of defects:

  • Wrong: The requirements have been implemented incorrectly. This defect is a variance from the given specification.

  • Missing: There was a requirement given by the customer and it was not done. This is a variance from the specifications, an indication that a specification was not implemented, or a requirement of the customer was not noted properly.

  • Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect because it’s a variance from the existing requirements.

Figure 7: Broader classification of defects

(B) What is the Difference between Verification and Validation?

Verification is a review without actually executing the process while validation is checking the product with actual execution. For instance, code review and syntax check is verification while actually running the product and checking the results is validation.
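As a rough illustration, Python's built-in `compile()` and `exec()` can stand in for a review step and an actual run (the `add` function is invented for the example):

```python
source = "def add(a, b):\n    return a + b\n"

# Verification: reviewing/checking without executing the code.
# A syntax check is one example; compile() raises SyntaxError on bad code,
# but it never runs the function.
compile(source, "<review>", "exec")

# Validation: actually executing the product and checking its results.
namespace = {}
exec(source, namespace)
assert namespace["add"](2, 3) == 5
```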

(B) How Does Testing Affect Risk?

A risk is a condition that can result in a loss. Risk can be controlled in different scenarios, but not eliminated completely. A defect normally converts into a risk. For instance, let’s say you are developing an accounting application and you have done the tax calculation incorrectly. There is a huge possibility that this will lead to the risk of the company running at a loss. But if this defect is controlled then we can either remove this risk completely or minimize it. The following diagram shows how a defect gets converted into a risk and how, with proper testing, it can be controlled.

Figure 8: Verification and validation
Figure 9: Defect and risk relationship

(B) Does an Increase in Testing Always Improve the Project?

No, an increase in testing does not always mean improvement of the product, company, or project. In real test scenarios only about 20% of test plans are critical from a business angle. Running those critical test plans first assures that the testing effort is properly spent. The following graph explains the impact of under testing and over testing. If you under test a system the number of defects will increase, but if you over test a system your cost of testing will increase: even though your defect count comes down, your cost of testing goes up.

Figure 10: Testing cost curve

(I) How Do You Define a Testing Policy?

Note

This question will normally be asked to see whether you can independently set up a testing department. Many companies still think testing is secondary; that’s where a good testing manager should show the importance of testing. Bringing a testing attitude into companies which never had a formal testing department is a huge challenge, because it’s not about bringing in a new process but about changing the mentality.

The following are the important steps used to define a testing policy in general. But it can change according to your organization. Let’s discuss in detail the steps of implementing a testing policy in an organization.

Definition: The first step any organization needs to do is define one unique definition for testing within the organization so that everyone is of the same mindset.

How to achieve: How are we going to achieve our objective? Will there be a testing committee, will there be compulsory test plans which need to be executed, etc.?

Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics of defects per phase, per programmer, etc.? Finally, it’s important to let everyone know how testing has added value to the project.

Standards: Finally, what standards do we want to achieve by testing? For instance, we can say that more than 20 defects per KLOC will be considered below standard and code review should be done for it.

Figure 11: Establishing a testing policy

The previous methodology is described from a general point of view; note that when answering you should cover the steps in their broader aspects.

(B) Should Testing Be Done Only After the Build and Execution Phases are Complete?

Note

This question will normally be asked to judge whether you have a traditional or modern testing attitude.

In traditional testing methodology (sad to say many companies still have that attitude) testing is always done after the build and execution phases. But that’s a wrong way of thinking because the earlier we catch a defect, the more cost effective it is. For instance, fixing a defect in maintenance is ten times more costly than fixing it during execution.

Figure 12: Traditional way of testing

Testing after code and build is a traditional approach and many companies have improved on this philosophy. Testing should occur in conjunction with each phase as shown in the following figure.

In the requirement phase we can verify whether the requirements are met according to the customer’s needs. During design we can check whether the design document covers all the requirements. In this stage we can also generate rough functional data. We can also review the design document from the architecture and correctness perspectives. In the build and execution phase we can execute unit test cases and generate structural and functional data. Then comes the testing phase, done in the traditional way, i.e., running the system test cases and seeing whether the system works according to the requirements. During installation we need to see whether the system is compatible with the customer’s software and hardware environment. Finally, during the maintenance phase, when any fixes are made, we can retest the fixes and perform regression testing.

Figure 13: Modern way of testing

(B) Are There More Defects in the Design Phase or in the Coding Phase?

Note

This question is asked to see if you really know practically which phase is the most defect prone.

The design phase is more error prone than the execution phase. One of the most frequent defects which occurs during design is that the product does not cover the complete requirements of the customer. Second, wrong or bad architecture and technical decisions make the next phase, execution, more prone to defects. Because the design phase drives the execution phase it’s the most critical phase to test. The testing of the design phase can be done by good reviews. On average, 60% of defects occur during design and 40% during the execution phase.

Figure 14: Phase-wise defect percentage

(B) What Kind of Input Do We Need from the End User to Begin Proper Testing?

The end user is the most important person in the project, as he has more interest in it than anyone else and is the one who will actually use the product. From the user we need the following data:

  • The first thing we need is the acceptance test plan from the end user. The acceptance test plan defines the tests which the product has to pass so that it can go into production.

  • We also need the requirement document from the customer. In normal scenarios the customer never writes a formal document until he is really sure of his requirements. But at some point the customer should sign off, confirming that this is indeed what he wants.

  • The customer should also define the risky sections of the project. For instance, in a normal accounting project if a voucher entry screen does not work that will stop the accounting functionality completely. But if reports are not derived the accounting department can use it for some time. The customer is the right person to say which section will affect him the most. With this feedback the testers can prepare a proper test plan for those areas and test it thoroughly.

  • The customer should also provide proper data for testing. Feeding proper data during testing is very important. In many scenarios testers key in wrong data and expect results which are of no interest to the customer.

Figure 15: Expectations from the end user for testing

(B) What is the Difference between Latent and Masked Defects?

A latent defect is an existing defect that has not yet caused a failure because the exact set of conditions needed to trigger it has never been met.

A masked defect is an existing defect that hasn’t yet caused a failure just because another defect has prevented that part of the code from being executed.

The following flow chart explains latent defects practically. The application has the ability to print an invoice either on a laser printer or on a dot matrix printer (DMP). In order to achieve this, the application first searches for the laser printer. If it finds a laser printer it prints using the laser printer. If it does not find a laser printer, the application searches for a dot matrix printer. If the application finds a dot matrix printer it prints using the DMP; otherwise an error is given.

Now for whatever reason this application never searched for the dot matrix printer. So the application never got tested for the DMP. That means the exact conditions were never met for the DMP. This is called a latent defect.

Now the same application has two defects: one defect is in the DMP search and the other defect is in the DMP print. But because the search of the DMP fails the print DMP defect is never detected. So the print DMP defect is a masked defect.

Figure 16: Latent and masked defects
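The printer scenario above can be sketched in code. The function names and the "wrong name" defect are invented for illustration; the point is that defect #2 can never surface while defect #1 hides the code path behind it:

```python
def find_laser_printer(printers):
    return "laser" in printers

def find_dmp(printers):
    # Defect #1: the search looks for the wrong name ("dpm" instead
    # of "dmp"), so a dot matrix printer is never found.
    return "dpm" in printers

def print_on_dmp(invoice):
    # Defect #2: this code never runs while defect #1 exists,
    # so this is a masked defect.
    raise RuntimeError("DMP driver crash")

def print_invoice(invoice, printers):
    if find_laser_printer(printers):
        return "printed on laser"
    if find_dmp(printers):
        return print_on_dmp(invoice)
    return "error: no printer found"

# As long as every test environment has a laser printer, the DMP path is
# never exercised -- both DMP defects stay latent.
assert print_invoice("inv-1", ["laser"]) == "printed on laser"

# Only when the laser printer is absent does defect #1 surface
# (the search fails), and defect #2 still stays masked behind it.
assert print_invoice("inv-1", ["dmp"]) == "error: no printer found"
```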

(B) A Defect Which Could Have Been Removed During the Initial Stage is Removed in a Later Stage. How Does this Affect Cost?

If a defect is known at the initial stage then it should be removed during that stage/phase itself rather than at some later stage. It’s a recorded fact that if a defect is delayed for later phases it proves more costly. The following figure shows how a defect is costly as the phases move forward. A defect if identified and removed during the requirement and design phase is the most cost effective, while a defect removed during maintenance is 20 times costlier than during the requirement and design phases. For instance, if a defect is identified during requirement and design we only need to change the documentation, but if identified during the maintenance phase we not only need to fix the defect, but also change our test plans, do regression testing, and change all documentation. This is why a defect should be identified/removed in earlier phases and the testing department should be involved right from the requirement phase and not after the execution phase.

Figure 17: Cost of defect increases with each phase

(I) Can You Explain the Workbench Concept?

In order to understand testing methodology we need to understand the workbench concept. A workbench is a way of documenting how a specific activity has to be performed. A workbench is broken down into phases, steps, and tasks, as shown in the following figure.

Figure 18: Workbench with phases and steps

There are five tasks for every workbench:

  • Input: Every task needs some defined input and entrance criteria. So for every workbench we need defined inputs. Input forms the first step of the workbench.

  • Execute: This is the main task of the workbench which will transform the input into the expected output.

  • Check: Check steps assure that the output after execution meets the desired result.

  • Production output: If the check is right the production output forms the exit criteria of the workbench.

  • Rework: During the check step if the output is not as desired then we need to again start from the execute step.

The following figure shows all the steps required for a workbench.

Figure 19: Phases in a workbench

In real scenarios projects are not made of one workbench but of many connected workbenches. A workbench gives you a way to perform any kind of task with proper testing. You can visualize every software phase as a workbench with execute and check steps. The most important point to note is that when we visualize any task as a workbench, by default the task includes a check part. The following figure shows how every software phase can be visualized as a workbench. Let’s discuss the workbench concept in detail:

Figure 20: Workbench and software lifecycles
  • Requirement phase workbench: The input is the customer’s requirements; we execute the task of writing a requirement document, we check if the requirement document addresses all the customer needs, and the output is the requirement document.

  • Design phase workbench: The input is the requirement document, we execute the task of preparing a technical document; review/check is done to see if the design document is technically correct and addresses all the requirements mentioned in the requirement document, and the output is the technical document.

  • Execution phase workbench: This is the actual execution of the project. The input is the technical document; the execution is nothing but implementation/coding according to the technical document, and the output of this phase is the implementation/source code.

  • Testing phase workbench: This is the testing phase of the project. The input is the source code which needs to be tested; the execution is executing the test case and the output is the test results.

  • Deployment phase workbench: This is the deployment phase. There are two inputs for this phase: the source code which needs to be deployed, and the test results on which the deployment depends. The output of this phase is that the customer gets the product which he can now start using.

  • Maintenance phase workbench: The input to this phase is the deployment results, execution is implementing change requests from the end customer, the check part is nothing but running regression testing after every change request implementation, and the output is a new release after every change request execution.
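As a sketch, the five workbench tasks can be expressed as a generic loop; the "design phase" example below, with its requirement ids and toy execute/check functions, is entirely hypothetical:

```python
def run_workbench(task_input, execute, check, max_rework=3):
    """Generic workbench: input -> execute -> check -> (rework | output)."""
    for attempt in range(max_rework):
        output = execute(task_input)    # execute step
        if check(output):               # check step
            return output               # production output (exit criteria)
        task_input = output             # rework: feed back and re-execute
    raise RuntimeError("rework limit reached")

# Toy 'design phase' workbench: the check demands that every
# requirement id appears in the produced design document.
requirements = {"R1", "R2"}
design = run_workbench(
    task_input=["R1"],
    execute=lambda reqs: sorted(set(reqs) | {"R2"}),  # toy 'design' step
    check=lambda doc: requirements.issubset(doc),
)
assert design == ["R1", "R2"]
```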

    (B) What’s the Difference between Alpha and Beta Testing?

    Alpha and beta testing have different meanings to different people. Alpha testing is acceptance testing done at the development site. Some organizations have a different visualization of alpha testing: they consider alpha testing to be testing conducted on early, unstable versions of software. Beta testing, on the contrary, is acceptance testing conducted at the customer end. In short, the difference between beta testing and alpha testing is the location where the tests are done.

    Figure 21: Alpha and beta testing

    (I) Can You Explain the Concept of Defect Cascading?
    OR
    (B) Can You Explain How One Defect Leads to Other Defects?

    Defect cascading occurs when one defect triggers another defect. For instance, in the accounting application shown here there is a defect which leads to negative taxation. The negative taxation defect affects the ledger, which in turn affects four other modules.

    Figure 22: Defect cascading

    (B) Can You Explain Usability Testing?

    Usability testing is a testing methodology in which the end customer is asked to use the software to see whether the product is easy to use and to gauge the customer’s perception and task time. The best way to capture the customer’s point of view on usability is to use a prototype or mock-up of the software during the initial stages. By giving the customer the prototype before development starts we confirm that we are not missing anything from the user’s point of view.

    Figure 23: Prototype and usability testing

    (B) What are the Different Strategies for Rollout to End Users?

    There are four major ways of rolling out any project:

    • Pilot: The actual production system is installed at a single or limited number of users. Pilot basically means that the product is actually rolled out to limited users for real work.

    • Gradual Implementation: In this implementation we ship the entire product to limited users, or to all users, at the customer end. Here the developers get instant feedback from the recipients, which allows them to make changes before the product is widely available. The downside is that developers and testers must maintain more than one version at a time.

    • Phased Implementation: In this implementation the product is rolled out to all users incrementally. That means each successive rollout has some added functionality. So as new functionality comes in, new installations occur and the customer tests them progressively. The benefit of this kind of rollout is that customers can start using the functionality and provide valuable feedback progressively. The only issue here is that with each rollout and added functionality the integration becomes more complicated.

    • Parallel Implementation: In these types of rollouts the existing application is run side by side with the new application. If there are any issues with the new application we again move back to the old application. One of the biggest problems with parallel implementation is we need extra hardware, software, and resources.

    The following figure shows the different launch strategies for a project rollout.

    Figure 24: Launch strategies

    (I) Can You Explain Requirement Traceability and its Importance?

    In most organizations testing only starts after the execution/coding phase of the project. But if the organization wants to really benefit from testing, then testers should get involved right from the requirement phase.

    If the tester gets involved right from the requirement phase then requirement traceability is one of the important reports that can detail what kind of test coverage the test cases have.

    The following figure shows how we can measure the coverage using the requirement traceability matrix.

    We have extracted the important functionality from the requirement document and aligned it on the left-hand side of the sheet. On the other side, at the top, we have mapped the test cases with the requirement. With this we can ensure that all requirements are covered by our test cases. As shown we can have one or more test cases covering the requirements. This is also called requirement coverage.

    Figure 25: Requirement Traceability
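A minimal sketch of such a matrix, with invented requirement and test case ids; each requirement maps to the test cases that cover it, and an empty list immediately exposes a coverage gap:

```python
# Requirements on one axis, test cases on the other.
traceability = {
    "REQ-1 login":         ["TC-01", "TC-02"],
    "REQ-2 add invoice":   ["TC-03"],
    "REQ-3 print invoice": [],          # gap: no test case covers this yet
}

def requirement_coverage(matrix):
    """Percentage of requirements covered by at least one test case."""
    covered = [req for req, tcs in matrix.items() if tcs]
    return len(covered) / len(matrix) * 100

def uncovered(matrix):
    """Requirements with no test case -- the report a tester acts on."""
    return [req for req, tcs in matrix.items() if not tcs]

assert round(requirement_coverage(traceability)) == 67
assert uncovered(traceability) == ["REQ-3 print invoice"]
```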

    (B) What is the Difference between Pilot and Beta Testing?

    The difference is that in pilot testing a limited number of users actually use the product with real data for real work, while in beta testing we do not input real data; the product is installed at the end customer’s site to validate whether it can be used in production.

    Figure 26: Pilot and beta testing

    (B) How Do You Perform a Risk Analysis During Software Testing?
    OR
    (B) How Do You Conclude Which Section is Most Risky in Your Application?

    Note

    Here the interviewer is expecting a proper approach to rating risk to the application modules so that while testing you pay more attention to those risky modules, thus minimizing risk in projects.

    The following is a step-by-step approach for test planning:

    • The first step is to collect features and concerns from the current documentation and data available from the requirement phase. For instance, here is a list of some features and concerns:

    Table 1: Features and concerns

    Features                    Concerns
    Add a user                  Maintainability
    Check user preferences      Security
    Login user                  Performance
    Add new invoice
    Print invoice

    The table shows features and concerns. Features are functionalities which the end user will use, while concerns are global attributes of the project. For instance, the security concern applies to all the features listed.

    • Once we have listed the features and concerns, we need to rate the probability/likelihood of failures in this feature. In the following section we have rated the features and concerns as low, high, and medium, but you can use numerical values if you want.

    • Once we have rated the failure probability, we need to rate the impact. Impact means if we make changes to this feature, how many other features will be affected? You can see in the following table that we have marked the impact section accordingly.

    • We also need to define the master priority rating table depending on the impact and probability ratings. The following table defines the risk priority.

    • Using the priority rating table we have defined priority for the following listed features. Depending on priority you can start testing those features first.

    • Once the priority is set you can then review it with your team members to validate it.

    Table 2: Probability rating according to features and concerns

    Features                    Probability of failure
    Add a user                  Low
    Check user preferences      Low
    Login user                  Low
    Add new invoice             High
    Print invoice               Medium

    Concerns                    Probability of failure
    Maintainability             Low
    Security                    High
    Performance                 High

    Table 3: Impact and probability rating

    Features                    Probability of failure    Impact
    Add a user                  Low                       Low
    Check user preferences      Low                       Low
    Login user                  Low                       High
    Add new invoice             High                      High
    Print invoice               Medium                    High

    Concerns                    Probability of failure    Impact
    Maintainability             Low                       Low
    Security                    High                      High
    Performance                 High                      Low

    Table 4: Priority rating

    Probability of failure    Impact    Risk priority
    Low                       Low       1
    Low                       High      2
    Medium                    High      3
    High                      High      4

    Features                    Probability of failure    Impact    Priority
    Add a user                  Low                       Low       1
    Check user preferences      Low                       Low       1
    Login user                  Low                       High      2
    Add new invoice             High                      High      4
    Print invoice               Medium                    High      3

    Concerns                    Probability of failure    Impact    Priority
    Maintainability             Low                       Low       1
    Security                    High                      High      4
    Performance                 High                      Low       3

    Figure 27: Priority set according to the risk priority table

    The following figure shows the summary of the above steps. So list your concerns, rate the probabilities of failures, provide an impact rating, calculate risk/priority, and then review, review, and review.

    Figure 28: Testing analysis and design
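The priority calculation can be sketched in a few lines; the master table and the feature ratings below are taken from Tables 2 through 4, and the goal is simply to rank features so the riskiest are tested first:

```python
# Master priority table: (probability of failure, impact) -> risk priority.
PRIORITY = {
    ("Low", "Low"): 1,
    ("Low", "High"): 2,
    ("Medium", "High"): 3,
    ("High", "High"): 4,
}

# Feature ratings: (probability of failure, impact).
features = {
    "Add a user":      ("Low", "Low"),
    "Login user":      ("Low", "High"),
    "Print invoice":   ("Medium", "High"),
    "Add new invoice": ("High", "High"),
}

# Rank features so the riskiest get tested first.
ranked = sorted(features, key=lambda f: PRIORITY[features[f]], reverse=True)
assert ranked[0] == "Add new invoice"   # priority 4: test this one first
```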

    (B) What Does Entry and Exit Criteria Mean in a Project?

    Entry and exit criteria are a must for the success of any project. If you do not know where to start and where to finish then your goals are not clear. By defining entry and exit criteria you define your boundaries. For instance, you can define an entry criterion that the customer should provide the requirement document or acceptance plan. If this entry criterion is not met then you will not start the project. On the other end, you can also define exit criteria for your project. For instance, one of the common exit criteria in projects is that the customer has successfully executed the acceptance test plan.

    Figure 29: Entry and exit criteria

    (B) On What Basis is the Acceptance Plan Prepared?

    In any project the acceptance document is normally prepared using the following inputs. This can vary from company to company and from project to project.

    • Requirement document: This document specifies what exactly is needed in the project from the customer’s perspective.

    • Input from customer: This can be discussions, informal talks, emails, etc.

    • Project plan: The project plan prepared by the project manager also serves as good input to finalize your acceptance test.

    Note

    In projects the acceptance test plan can be prepared by numerous inputs. It is not necessary that the above list be the only criteria. If you think you have something extra to add, go ahead.

    The following diagram shows the most common inputs used to prepare acceptance test plans.

    Figure 30: Acceptance test input criteria

    (B) What’s the Relationship between Environment Reality and Test Phases?

    Environment reality becomes more important as test phases start moving ahead. For instance, during unit testing you need the environment to be partly real, but at the acceptance phase you should have a 100% real environment, or we can say it should be the actual real environment. The following graph shows how with every phase the environment reality should also increase and finally during acceptance it should be 100% real.

    Figure 31: Environmental reality

    (B) What are Different Types of Verifications?
    OR
    (B) What’s the Difference between Inspections and Walkthroughs?

    As said in the previous sections, the difference between validation and verification is that in validation we actually execute the application, while in verification we review it without actually running it. Verifications are basically of two main types: walkthroughs and inspections. A walkthrough is an informal form of verification. For instance, you can call your colleague and do an informal walkthrough just to check whether the documentation and coding are correct. An inspection is a formal, official procedure. For instance, your organization can have an official body which approves design documents for any project. Every project in your organization needs to go through an inspection which reviews your design documents. If there are issues in the design documents, then your project will get an NC (non-conformance) list. You cannot proceed without clearing the NCs given by the inspection team.

    Figure 32: Walkthrough and inspection

    (B) Can You Explain Regression Testing and Confirmation Testing?

    Regression testing is used to find regression defects. Regression defects occur when functionality which was once working normally stops working, usually because of changes made in the program or the environment. Regression testing is conducted to uncover this kind of defect.

    The following figure shows the difference between regression and confirmation testing. If we fix a defect in an existing application we use confirmation testing to test if the defect is removed. It’s very possible because of this defect or changes to the application that other sections of the application are affected. So to ensure that no other section is affected we can use regression testing to confirm this.

    Figure 33: Regression testing in action
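A toy sketch of the two activities, with an invented discount function standing in for the fixed defect (the "old code divided by 10" history is hypothetical):

```python
def apply_discount(price, percent):
    # Fixed defect: the hypothetical old code divided by 10 instead of 100.
    return price - price * percent / 100

def total(prices):
    # Surrounding functionality that previously worked.
    return sum(prices)

# Confirmation testing: re-run the exact test that originally exposed
# the defect, to confirm the fix removed it.
assert apply_discount(200, 10) == 180

# Regression testing: re-run tests for surrounding functionality to make
# sure the fix did not break anything that was working before.
assert total([100, 50]) == 150
assert apply_discount(100, 0) == 100
```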

    (I) What is Coverage and What are the Different Types of Coverage Techniques?

    Coverage is a measurement used in software testing to describe the degree to which the source code is tested. There are three basic types of coverage techniques as shown in the following figure:

    Figure 34: Coverage techniques
    • Statement coverage: This coverage ensures that each line of source code has been executed and tested.

    • Decision coverage: This coverage ensures that every decision (true/false) in the source code has been executed and tested.

    • Path coverage: In this coverage we ensure that every possible route through a given part of code is executed and tested.
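The three techniques can be contrasted on a toy function with two independent decisions (everything here is illustrative):

```python
def classify(a, b):
    result = []
    if a > 0:           # decision 1
        result.append("a+")
    if b > 0:           # decision 2
        result.append("b+")
    return result

# Statement coverage: one test that takes both branches runs every line.
assert classify(1, 1) == ["a+", "b+"]

# Decision coverage: every decision must be observed both True and False,
# so we add a test where both decisions are False.
assert classify(-1, -1) == []

# Path coverage: every combination of decisions -- 2 x 2 = 4 paths here,
# so two more tests are needed beyond decision coverage.
assert classify(1, -1) == ["a+"]
assert classify(-1, 1) == ["b+"]
```

Note how the test count grows with each stronger criterion: one test sufficed for statement coverage, two for decision coverage, four for path coverage.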

    (A) How Does a Coverage Tool Work?

    Note

    We will be covering coverage tools in more detail in later chapters, but for now let’s discuss the fundamentals of how a code coverage tool works.

    While testing the actual product, the code coverage tool is run simultaneously. While the testing is going on, the tool monitors which statements of the source code are executed. When the testing is completed we get a complete report of the statements that were never executed, along with the overall coverage percentage.

    Figure 35: Coverage tool in action
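A minimal sketch of this mechanism, using Python's `sys.settrace` to record which lines of a function under test actually execute; the `grade()` function and the single test that exercises it are invented for illustration:

```python
import sys

executed = set()   # line offsets (within grade) that actually ran

def tracer(frame, event, arg):
    # Record each 'line' event that fires inside the function under test.
    if event == "line" and frame.f_code.co_name == "grade":
        executed.add(frame.f_lineno - frame.f_code.co_firstlineno)
    return tracer

def grade(score):
    if score >= 60:
        return "pass"
    return "fail"

sys.settrace(tracer)      # start monitoring, like a coverage tool would
grade(80)                 # the test run: only the 'pass' branch executes
sys.settrace(None)        # stop monitoring

# grade() has 3 executable lines; the single test touched only 2 of them,
# so the 'fail' branch shows up as a pending (uncovered) statement.
print(f"statement coverage of grade(): {len(executed)}/3 lines")
```

Real tools such as coverage.py work on the same principle but map events back to file names and line numbers to produce the full report.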

    (B) What is Configuration Management?

    Configuration management is the detailed recording and updating of information for hardware and software components. When we say components we do not mean only source code; configuration management can also track changes to software documents such as requirements, design, test cases, etc.

    When changes are made in an ad hoc, uncontrolled manner, chaotic situations can arise and more defects can be injected. So whenever changes are made they should be done in a controlled fashion and with proper versioning. At any moment we should be able to revert to an old version. The main intention of configuration management is to be able to trace our changes if we have issues with the current system. Configuration management is done using baselines.

    (B) Can You Explain the Baseline Concept in Software Development?

    Baselines are logical ends in a software development lifecycle. For instance, let’s say you have software whose releases will be done in phases, i.e., Phase 1, Phase 2, etc. You can baseline your software product after every phase. In this way you will now be able to track the difference between Phase 1 and Phase 2. Changes can be in various sections. For instance, the requirement document (because some requirements changed), technical (due to changes in the architecture), source code (source code changes), test plan changes, and so on.

    For example, consider the following figure, which shows how an accounting application underwent changes and was baselined with each version. When the accounting application was released, it was released as ver 1.0 and baselined. After some time new features were added and version 2.0 was generated. This was again a logical end, so we baselined the application once more. Now, if we want to trace back and see the changes from ver 1.0 to ver 2.0, we can do so easily. Later the accounting application went through defect removal, ver 3.0 was generated, and it was again baselined, and so on.

    The following figure depicts the various scenarios.

    Image from book
    Figure 36: Baseline
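    The bookkeeping behind baselining can be illustrated with a tiny sketch. An in-memory store is assumed here purely for illustration; real teams typically baseline by tagging a version-control repository:

```python
# Hypothetical in-memory baseline store: version -> frozen artifact snapshot.
baselines = {}

def baseline(version, artifacts):
    # Freeze a snapshot of all project artifacts at a logical end point.
    baselines[version] = dict(artifacts)

def diff(old, new):
    # Report which artifacts changed between two baselines.
    a, b = baselines[old], baselines[new]
    return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))

baseline("1.0", {"requirements": "rev A", "source": "rev A", "tests": "rev A"})
baseline("2.0", {"requirements": "rev B", "source": "rev B", "tests": "rev A"})

print(diff("1.0", "2.0"))  # ['requirements', 'source']
```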

    Baselines are very important from a testing perspective. Testing a software product that is constantly changing will not get you anywhere. So when you actually start testing you first need to baseline the application, so that what you test is tied to that baseline. If the developer fixes something, create a new baseline and perform testing on it. In this way any kind of conflict is avoided.

    (B) What are the Different Test Plan Documents in a Project?

    Note

    This answer varies from project to project and company to company. You can tailor it according to your experience. This book tries to answer the question from the author’s viewpoint.

    There are a minimum of four test plan documents needed in any software project, but depending on the project and the team’s agreement, some of them can be omitted.

    Central/Project test plan: The central test plan is one of the most important communication channels for all project participants. This document can have essentials such as resource utilization, testing strategies, estimation, risk, priorities, and more.

    Acceptance test plan: The acceptance test plan is mostly based on user requirements and is used to verify whether the requirements are satisfied according to customer needs. Acceptance test cases are like a green light for the application and help to determine whether or not the application should go into production.

    System test plan: The system test plan is where the main testing happens. In addition to functional testing, it also covers load, performance, and reliability tests.

    Integration test plan: Integration testing ensures that the various components of the system interact properly and that data is passed correctly between them.

    Unit test plan: Unit testing is done more at the developer level. In unit testing we check an individual module in isolation. For instance, the developer can check his sorting function on its own, rather than in an integrated fashion.
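    As a minimal illustration, the sorting-function example above might be unit tested like this in Python (the `sort_numbers` function is hypothetical):

```python
def sort_numbers(values):          # hypothetical module under test
    return sorted(values)

# Unit tests: the function is checked entirely on its own, with no
# other components of the system involved.
assert sort_numbers([3, 1, 2]) == [1, 2, 3]
assert sort_numbers([]) == []
assert sort_numbers([5, 5, 1]) == [1, 5, 5]
print("all unit tests passed")
```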

    The following figure shows the interaction between all the project test plans.

    Image from book
    Figure 37: Different test plans in a project

    (B) How Do Test Documents in a Project Span Across the Software Development Lifecycle?

    The following figure shows pictorially how test documents span across the software development lifecycle. The following discusses the specific testing documents in the lifecycle:

    Central/Project test plan: This is the main test plan, which outlines the complete test strategy of the software project. This document should be prepared before the start of the project and is used until the end of the software development lifecycle.

    Image from book
    Figure 38: Test documents across phases

    Acceptance test plan: This test plan is normally prepared with the end customer. This document commences during the requirement phase and is completed at final delivery.

    System test plan: This test plan starts during the design phase and proceeds until the end of the project.

    Integration and unit test plan: Both of these test plans start during the execution phase and continue until the final delivery.

    Note

    The above answer is a different interpretation of V-model testing. We have explained the V-model in this chapter in more detail in one of the questions. Read it once to understand the concept.

    (A) Can You Explain Inventories?
    OR
    (A) How Do You Do Analysis and Design for Testing Projects?
    OR
    (A) Can You Explain Calibration?

    The following are three important steps for doing analysis and design for testing:

    Test objectives: These are broad categories of things which need to be tested in the application. For instance, in the following figure we have four broad categories of test areas: policies, error checking, features, and speed.

    Inventory: An inventory is a list of things to be tested for an objective. For instance, the following figure shows that we have identified inventory such as add new policy, which is tested under the policies objective. Change/add address and delete customer are tested under the features objective.

    Image from book
    Figure 39: Software testing planning and design

    Tracking matrix: Once we have identified our inventories we need to map the inventory to test cases. Mapping of inventory to the test cases is called calibration.

    Image from book
    Figure 40: Calibration

    The following is a sample inventory tracking matrix. “Features” is the objective and “add new policy,” “change address,” and “delete a customer” are the inventory for that objective. Every inventory item is mapped to a test case; only the “delete a customer” inventory is not mapped to any test case. This way we know whether we have covered all aspects of the application in testing. The inventory tracking matrix gives us a quick global view of what is pending and hence also helps us measure coverage of the application. The following figure shows that the “delete a customer” inventory is not covered by any test case, thus alerting us to what is not covered.

    Image from book
    Figure 41: Inventory tracking matrix
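    The matrix above can be represented as a simple mapping from inventory items to the test cases that cover them; anything mapped to an empty list is flagged as uncovered. A minimal sketch (the item and test-case names are illustrative):

```python
# Hypothetical tracking matrix: inventory item -> covering test cases.
matrix = {
    "add new policy":    ["TC_101", "TC_102"],
    "change address":    ["TC_103"],
    "delete a customer": [],          # no test case mapped yet
}

# Items with no mapped test case are the coverage gaps.
uncovered = [item for item, cases in matrix.items() if not cases]
coverage = 100 * (len(matrix) - len(uncovered)) / len(matrix)
print(f"coverage: {coverage:.0f}%  uncovered: {uncovered}")
```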

    (B) Which Test Cases are Written First: White Boxes or Black Boxes?

    Normally black box test cases are written first and white box test cases later. To write black box test cases we need the requirement document and the design or project plan, all of which are readily available at the start of the project. White box test cases cannot be started in the initial phase of the project because they need more architectural clarity, which is not available at the start. So white box test cases are normally written after black box test cases. Black box test cases do not require system understanding, but white box testing needs more structural understanding, and the structure becomes clearer in the later part of the project, i.e., during design and execution. For black box testing you only need to analyze from the functional perspective, which is readily available from a simple requirement document.

    Image from book
    Figure 42: White box and black box test cases

    (I) Can You Explain Cohabiting Software?

    When we install an application at the end client it is very possible that other applications also exist on the same PC, and that those applications share common DLLs, resources, etc., with your application. In such situations there is a strong chance that your changes can affect the cohabiting software. So the best practice is, after you install your application or make any changes, to ask the other application owners to run a test cycle on their applications.

    Image from book
    Figure 43: Cohabiting software

    (B) What Impact Ratings Have You Used in Your Projects?

    Normally, the impact ratings for defects are classified into three types:

    • Minor: Very low impact; does not affect operations on a large scale.

    • Major: Affects operations on a very large scale.

    • Critical: Brings the system to a halt and stops the show.

    Image from book
    Figure 44: Test Impact rating

    (B) What is a Test Log?

    The IEEE Std. 829-1998 defines a test log as a chronological record of relevant details about the execution of test cases. It is a detailed view of activity and events given in a chronological manner. The following figure shows a test log and is followed by a sample test log.

    Image from book
    Figure 45: Test Log

    Image from book
    Figure 46: Sample test log
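    A minimal sketch of such a chronological log in Python (the field names and test-case identifiers are illustrative, not prescribed by IEEE 829):

```python
import datetime

# Each record captures when a test case ran, its result, and any remarks.
log = []

def record(test_case, result, remarks=""):
    log.append({
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "test_case": test_case,
        "result": result,
        "remarks": remarks,
    })

record("TC_101", "pass")
record("TC_102", "fail", "error message not shown on blank input")

for entry in log:   # entries stay in execution (chronological) order
    print(entry["timestamp"], entry["test_case"],
          entry["result"], entry["remarks"])
```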

    (I) Explain the SDLC (Software Development Lifecycle) in Detail.
    OR
    (I) Can You Explain the Waterfall Model?
    OR
    (I) Can You Explain the Big-Bang Waterfall Model?
    OR
    (I) Can You Explain the Phased Waterfall Model?
    OR
    (I) Explain the Iterative Model, Incremental Model, Spiral Model, Evolutionary Model and The V-Model?
    OR
    (I) Explain Unit Testing, Integration Tests, System Testing and Acceptance Testing?

    Every activity has a lifecycle, and the software development process is no exception. Even if you are not aware of the SDLC you are still following it unknowingly, but a software professional who is aware of the SDLC can execute the project in a controlled fashion. The biggest benefit of this awareness is that developers will not jump straight into execution (coding), which can lead to the project running in an uncontrolled fashion. Second, it helps customers and software professionals avoid confusion by anticipating problems and issues beforehand. In short, the SDLC defines the various stages in a software lifecycle.

    But before we try to understand what the SDLC is all about, we need a broader view of its beginning and end. Any project that does not have a defined start and end is already in trouble. It’s like going out for a drive: you should know where to start and where to end, or else you will drive around endlessly.

    The figure shows a more global view of how the SDLC starts and ends. Any project should have entry criteria and exit criteria. For instance, a proper estimation document can be an entry criterion: if you do not have a proper estimation document in place, the project will not start. The criteria can also be more practical; for example, if half the payment is not received, the project will not start. There can be a list of points which need to be completed before a project starts. Finally, there should be exit criteria which define when the project will end; for instance, the project is finished when all the test scenarios given by the end customer are completed. In the figure we have the entry criterion as an estimation document and the exit criterion as a document signed by the end client saying the software has been delivered.

    Image from book
    Figure 47: Entry, SDLC, and Exit in action

    The following figure shows the typical flow in the SDLC which has six main models. Developers can select a model for their project.

    • Waterfall model

    • Big bang model

    • Phased model

    • Iterative model

    • Spiral model

    • Incremental model

    Waterfall Model

    Let’s have a look at the Waterfall Model which is basically divided into two subtypes: Big Bang waterfall model and the Phased waterfall model.

    As the name suggests, a waterfall flows in only one direction, so in the Waterfall model we expect every phase/stage to be frozen once it is complete.

    Big Bang Waterfall Model

    The figure shows the Waterfall Big Bang model, whose stages are described below:

    • Requirement stage: During this stage the basic business needs of the project, seen from a user perspective, are captured either as simple Word documents with bullet points or as more elaborate use case documents.

    • Design stage: The use case/requirement document is the input for this stage. Here we decide how to design the project technically and produce a technical document containing a class diagram, pseudo-code, etc.

    • Build stage: This stage takes the technical documents as input and produces code as output. This is where the actual execution of the project takes place.

    • Test stage: Here, testing is done on the source code produced by the build stage, and the final software is given the green light.

    • Deliver stage: After the test stage succeeds, the final product/project is installed at the client end for actual production. This stage is also the beginning of the maintenance stage.

    Image from book
    Figure 48: The SDLC in action (Waterfall Big Bang model)

    In the Waterfall Big Bang model, it is assumed that all stages are frozen which means it’s a perfect world. But in actual projects such processes are impractical.

    Phased Waterfall Model

    In this model the project is divided into small chunks which are delivered at intervals by different teams. In short, the chunks are developed in parallel by different teams and integrated into the final project. The disadvantage of this model is that improper planning may lead to project failure during integration, as can any mismatch in coordination between the teams.

    Iterative Model

    The Iterative model was introduced because of problems occurring in the Waterfall model.

    Now let’s take a look at the Iterative model, which has two subtypes:

    Incremental Model

    In this model work is divided into chunks as in the Phased Waterfall model, but the difference is that in the Incremental model one team can work on one or many chunks, unlike in the Phased Waterfall model.

    Spiral Model

    This model uses a series of prototypes which refine our understanding of what we are actually going to deliver. Plans are revised as needed with each refinement of the prototype, and every time the prototype is refined the whole process cycle is repeated.

    Evolutionary Model

    In the Incremental and Spiral models the main problem is that for any change made in the middle of the SDLC we need to iterate through a whole new cycle. For instance, during the final (deliver) stage, if the customer demands a change we have to iterate through the whole cycle again, which means we need to update all the previous stages (requirements, technical documents, source code, and test plans).

    In the Evolutionary model, we divide software into small units which can be delivered earlier to the customer’s end. In later stages we evolve the software with new customer needs.

    Note

    The V-model is one of the favorite questions asked by interviewers.

    V-model

    This type of model was developed by testers to emphasize the importance of early testing. In this model testers are involved from the requirement stage itself. The following diagram (V-model cycle diagram) shows how for every stage some testing activity is done to ensure that the project is moving forward as planned.

    Image from book
    Figure 49: V-model cycle flow

    For instance,

    • In the requirement stage we have acceptance test documents created by the testers. The acceptance test documents state that if these tests pass, the customer will accept the software.

    • In the specification stage testers create the system test document. In the following section, system testing is explained in more detail.

    • In the design stage we have the integration documents created by testers. Integration test documents define testing steps for how the components should work when integrated. For instance, you develop a customer class and product class. You have tested the customer class and the product class individually. But in a practical scenario the customer class will interact with the product class. So you also need to test to ensure the customer class is interacting with the product class properly.

    • In the implement stage we have unit test documents created by the programmers or testers.

    Let’s take a look at each testing phase in more detail.

    Unit Testing

    Starting from the bottom the first test level is “Unit Testing.” It involves checking that each feature specified in the “Component Design” has been implemented in the component.

    In theory, an independent tester should do this, but in practice the developer usually does it, as they are the only people who understand how a component works. The problem with a component is that it performs only a small part of the functionality of a system, and it relies on cooperating with other parts of the system, which may not have been built yet. To overcome this, the developer either builds, or uses, special software to trick the component into believing it is working in a fully functional system.
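    This "special software" is usually called a stub or mock. A minimal Python sketch using `unittest.mock` (the `checkout` function and payment gateway here are hypothetical, purely to show the idea):

```python
from unittest.mock import Mock

# Hypothetical component that depends on a payment gateway
# which may not have been built yet.
def checkout(cart_total, gateway):
    if gateway.charge(cart_total):
        return "order confirmed"
    return "payment failed"

# A mock stands in for the missing collaborator, tricking the
# component into believing it runs in a fully functional system.
fake_gateway = Mock()
fake_gateway.charge.return_value = True
print(checkout(50, fake_gateway))      # order confirmed

fake_gateway.charge.return_value = False
print(checkout(50, fake_gateway))      # payment failed
```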

    Integration Testing

    As the components are constructed and tested, they are linked together to make sure they work with each other. Two components that have each passed all their tests can, when connected to each other, produce a new component full of faults. These tests can be done by specialists or by the developers.

    Integration testing is not focused on what the components are doing but on how they communicate with each other, as specified in the “System Design.” The “System Design” defines relationships between components.

    The tests are organized to check all the interfaces, until all the components have been built and interfaced to each other producing the whole system.
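    A minimal sketch of such an interface-focused test, reusing the customer/product example mentioned earlier (both classes are hypothetical and would already have passed their own unit tests):

```python
# Hypothetical Customer and Product classes, each unit-tested alone;
# the integration test checks the interface between them.
class Product:
    def __init__(self, name, price):
        self.name, self.price = name, price

class Customer:
    def __init__(self):
        self.orders = []

    def buy(self, product):
        self.orders.append(product)

    def total_spent(self):
        return sum(p.price for p in self.orders)

# Integration test: data passes correctly across the component boundary.
customer = Customer()
customer.buy(Product("policy cover", 120))
customer.buy(Product("rider", 30))
assert customer.total_spent() == 150
print("integration test passed")
```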

    System Testing

    Once the entire system has been built then it has to be tested against the “System Specification” to see if it delivers the features required. It is still developer focused, although specialist developers known as systems testers are normally employed to do it.

    In essence, system testing is not about checking the individual parts of the design, but about checking the system as a whole. In fact, it is one giant component.

    System testing can involve a number of special types of tests used to see if all the functional and non-functional requirements have been met. In addition to functional requirements these may include the following types of testing for the non-functional requirements:

    • Performance – Are the performance criteria met?

    • Volume – Can large volumes of information be handled?

    • Stress – Can peak volumes of information be handled?

    • Documentation – Is the documentation usable for the system?

    • Robustness – Does the system remain stable under adverse circumstances?

    There are many others, the need for which is dictated by how the system is supposed to perform.

    (I) What’s the Difference between System Testing and Acceptance Testing?

    Acceptance testing checks the system against the “Requirements.” It is similar to System testing in that the whole system is checked but the important difference is the change in focus:

    System testing checks that the system that was specified has been delivered. Acceptance testing checks that the system will deliver what was requested.

    The customer should always do Acceptance testing and not the developer. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. This testing is more about ensuring that the software is delivered as defined by the customer. It’s like getting a greenlight from the customer that the software meets expectations and is ready to be used.

    (I) Which is the Best Model?

    In the previous section we looked through all the models, but in actual projects hardly any single model can fulfill the entire project requirement. In real projects, tailored models prove to be the best, because they combine features of the Waterfall, Iterative, Evolutionary, and other models and fit real-life projects. Tailored models are the most productive and beneficial for many organizations. If it’s a pure testing project, then the V-model is the best.

    (I) What Group of Teams Can Do Software Testing?

    When it comes to testing, everyone can be involved, right from the developer to the project manager to the customer. But below are the different types of test teams that can be present in a project.

    Isolated test team: This is a special team of testers who do only testing. The team is not tied to any one project; it’s like having a pool of testers in an organization who are picked up on demand by a project and pushed back into the pool after completion. This approach is costly but the most helpful, because an isolated group brings a different angle of thinking from the development team.

    Outsource: In outsourcing, we contract an external supplier, hire testing resources, and have the testing done for our project. Again, this coin has two sides. The good part is that resource handling is done by the external supplier, so you are freed from worrying about resources leaving the company, people management, etc. The bad side is that outsourced vendors do not have domain knowledge of your business, so at the initial stage you need to train them on the domain, which is an added cost.

    Inside test team: In this approach we have a separate team, which belongs to the project. The project allocates a separate budget for testing and this testing team works on this project only. The good side is you have a dedicated team and because they are involved in the project they have strong knowledge of it. The bad part is you need to budget for them which increases the project cost.

    Developers as testers: In this approach the developers of the project perform the testing. The good part of this approach is that developers have a very good idea of the inner details, so they can perform a good level of testing. The bad part is that the developer and tester are the same person, so it is very likely that many defects will be missed.

    QA/QC team: In this approach the quality team is involved in testing. The good part is the QA team is involved and a good quality of testing can be expected. The bad part is that the QA and QC team of any organization is also involved with many other activities which can hamper the testing quality of the project.

    The following diagram shows the different team approaches.

    Image from book
    Figure 50: Types of teams
