Monday, November 1, 2010

Manual Testing and more.


Five years back I got an assignment to write about manual testing. We all know about it, but when I started searching for material on manual testing, it was really hard to find. I was surprised not to find enough resources, because we use the term and do the work all the time. So why is it so hard to find information on it?

I studied the topic and put together my own documents on manual testing. Even today I cannot recall all the reference papers I followed. I have added a few things from Wikipedia, and it could be useful for people who want an overall idea rather than something to memorize.

 

Manual Testing


                a. Paper based test cases
                b. Tools for version management of test cases
                c. Usability Test


*Definition of Manual Testing: Developing and executing tests that rely primarily on direct and continuous human interaction, especially for evaluating correctness and test status (pass, fail, warn, etc.).

Manual testing is when testers:
·         Enter data
·         Observe behaviors
·         Actively run tests

Note: Manual testing may involve the use of tools
·         To create specific test conditions like load or errors
·         To capture performance statistics or internal states

**Stages: There are several stages of Manual testing. They are:

  • Unit Testing: This initial stage of testing is normally carried out by the developer who wrote the code, and sometimes by a peer, using the white-box testing technique.

  • Integration Testing: This stage is carried out in two modes: as a complete package or as an increment to the earlier package. Most of the time the black-box testing technique is used; however, a combination of black-box and white-box testing is sometimes used at this stage as well.

  • System Testing: In this stage the software is tested from all possible dimensions, for all intended purposes and platforms. The black-box testing technique is normally used here.

  • User Acceptance Testing: This stage is carried out in order to get customer sign-off on the finished product. A 'pass' at this stage also means the customer has accepted that the software is ready for their use.

Process of Manual Testing:

  • Preparing test cases, based on inputs such as:
        SRS (Software Requirement Specification)
        User story specification
        A/V (audio/video) files
        ESSS (Electronic Static Screenshots)
        RAD template
  • Reviewing test cases
  • Test execution

*Challenges of Manual Testing:

• How do you staff the manual test team?
• What do test neophytes need to know?
• Can you maximize use of scarce equipment with manual testing?
• Can you sell management on the plan?
• What technical and managerial pitfalls exist for manual testing?

*Sizing the Manual Test Effort:

Given a set of tests to run in a period of time, plan the number of test technicians needed:

  a. Test considerations: for each test case, know…
     ·  Person-hours of effort
     ·  Wall-clock duration
     ·  Dependencies and prerequisites
  b. Use project-planning software to create the schedule, keeping in mind...
  c. Project considerations: for the process and team, know the overhead of…
     ·  Reporting defects
     ·  Documenting test status
     ·  Communication, e-mail and management guidance
     ·  Breaks
     ·  Blocking issues/debugging
  d. Rules of thumb
     ·  6 test hours per 8-10 hour day
     ·  75% downtime (buggy software), 25% (stable software)
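To make these rules of thumb concrete, here is a minimal back-of-the-envelope sketch in Python; the test-case count, hours per case and downtime fraction are purely illustrative assumptions, not figures from the slides.

# Rough sizing sketch for one manual test cycle, using the rules of thumb above.
# All numbers are illustrative assumptions, not measurements.
test_cases = 200              # planned test cases for the cycle
hours_per_case = 0.5          # average person-hours to execute one case
downtime = 0.25               # fraction of time lost to reporting, blocking issues,
                              # breaks, etc.: ~0.25 for stable software, ~0.75 for buggy
productive_hours_per_day = 6  # roughly 6 test hours out of an 8-10 hour day

execution_hours = test_cases * hours_per_case
total_hours = execution_hours / (1 - downtime)        # inflate for downtime/overhead
tester_days = total_hours / productive_hours_per_day

print(f"Execution effort: {execution_hours:.0f} person-hours")
print(f"Effort including downtime: {total_hours:.0f} person-hours")
print(f"Calendar days for 1 tester: {tester_days:.1f}")
print(f"Calendar days for a team of 3: {tester_days / 3:.1f}")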

*Technical Caveats:

a. Tests Well-Suited for Manual Testing
·         Functional
·         Use cases (user scenarios)
·         Error handling and recovery
·         Localization
·         User interface
·         Configurations and compatibility
·         Operations and maintenance
·         Date and time handling
·         Installation, conversion, and setup testing
·         Documentation and help screens

b. Tests Poorly Suited for Manual Testing
·         Monkey (or random)
·         Load, volume, and capacity
·         Reliability and stability (MTBF)
·         Code coverage
·         Performance
·         Standards compliance

Caution: Using manual testing inappropriately can mislead people about the extent of test coverage

Weaknesses of Manual Testing:

  • Incomplete testing
  • Skipped input event sequences
  • Incorrect verification of output
  • Personal attention required
  • Unattended testing ruled out
  • Aborting a test implies manual retesting
  • Increased cost
  • Regression testing implies re-executing test cases
  • Overall testing cost and effort multiplies
  • No unattended testing
  • The same test must be repeated by hand for:
        Different inputs
        Different versions
        Different platforms

**Comparison to Automated Testing:

Test automation may be able to reduce or eliminate the cost of actual testing. A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. However, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested, and the automation tools that are chosen, this may require more labor than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time-consuming task of interpreting the results. From a cost-benefit perspective, test automation becomes more cost effective when the same tests can be reused many times over, such as for regression testing and test-driven development, and when the results can be interpreted quickly. If future reuse of the test software is unlikely, then a manual approach is preferred.

Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice.

Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces. They rely on recording of sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly (e.g. the display includes the current system time). In cases such as these, manual testing may be more effective

*All about Manual Testing:

·         Full test automation is not always possible
·         Manual testing is an effective and economical alternative
·         Manual testing involves facing some unique but tractable challenges
·         Proper planning of the manual effort and training of the technicians are critical
·         Manual testing can solve problems for you and for management



References:

  1. http://www.rbcs-us.com/images/documents/ShoestringManualTesting%28Slides%29.pdf (*)
  2. http://en.wikipedia.org/wiki/Manual_testing (**)
(The asterisks above indicate which reference each part of this post is drawn from.)

Saturday, September 18, 2010

Automation testing: how tools are important?




Automation testing is a buzzword in today's testing discussions. Whenever you meet a buddy from another testing organization, there is a very common chat: "How do you automate your testing tasks? Which tools are you using?" and so on.

Today I am going to write about this discussion and how fruitful this automated, machine-based testing really is.

There are two types of testing tools available: those from vendors and those from open source. Actually, I have left out one more type: tools built with a programming language, and for this type there is no barrier at all. We can build the tools and scripts according to the testing needs.

Testing is a sequence of actions that cannot always be executed accurately by testing tools alone; there needs to be human intervention.

Vendor tools: Scripts are created by recording, and can also be written manually, but they always follow a vendor-defined structure, e.g. C-like, VB-like and so on. There are many vendors, namely Mercury (now HP Mercury), Segue Inc. and IBM. These tools are easy for non-programmers, since scripts are created by a record-and-playback system.

These vendor tools are very expensive and not every organization can afford them, especially the load-test tools.

I have read Bret Pettichord's article "Hey Vendors, Give Us Real Scripting Languages", and my comment was: vendors could easily support scripting in standard languages, which would benefit both the vendor and the tester. The only drawback is that they would no longer be able to earn money from certification :)

Now we can use a few tools for free: Watir, WatiN, Selenium, and these are really good for scripting.

Still, I will say the vendor tools are useful, and if we cannot afford them we have choices :)

Open source tools: Watir, WatiN, Selenium, iMacro, JMeter and so on. These tools are very popular in the automation testing world since they are FREE. We have our choice for functional regression testing as well as performance testing. We are happy to use these tools, and now we are not dependent on vendor tools.

There is a drawback to having knowledge only of open source tools: many big organizations want test engineers with vendor certifications and will go for those vendor testing tools, though in fact this is not always the case. Personally I am happy with these open source tools, but I sometimes have a soft spot for vendor tools, since I started with them in my CSTP courses. Another drawback of open source tools is bugs or limitations, since they are open and there is no dedicated team to support them. We need to grow the community to help the open source tools become more mature.

Programming languages: different programming languages can be used for specific tests; nowadays it is common practice to build custom tools for testing purposes, and at other times to create scripts that work around the limitations of vendor and open source tool harnesses.

How these tools are important: One of the most important points about automation tools is that they are very useful for repetitive tasks. Data-entry type jobs can be managed easily and with minimal effort. For this type of task we can use iMacro; it is very easy to generate the scripts, change them as needed and run them again next time. It also saves time during regression testing, since the automation tools take care of the maximum scenario coverage we prepared earlier.

But if we want to automate some complex functional job, we need an expert in the chosen tool, because logic has to be implemented and script writing varies from one tool to another. So, first of all, we need to select the proper tool for the targeted job. Changes in the code are another vital point for automated tests: how frequently are the code and UI changing? If there are frequent major changes, then the test scripts need to be reworked to keep up with them.

I have used Watir for one of our big projects; it's fun. But problems come up: for example, I used some XPath expressions since there was no ID or NAME attribute to capture the steps. Later the HTML and CSS were changed, and my XPath stopped working. It was simply horrible; my XPath was matching elements it was not supposed to.

One thing I have noticed: if we go for automation testing, then the code should be made test-friendly during development itself (stable IDs and names, for example), so that it becomes easier to write test scripts. Otherwise we have to face those problems, which are easy to minimize if we take care of them early.
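To illustrate what test-suitable code buys us, here is a minimal Selenium WebDriver sketch in Python; the original project used Watir, and the URL, field name and ID below are hypothetical, not taken from that project.

# Brittle vs. robust element location in a UI test (Selenium WebDriver, Python).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://example.com/login")   # placeholder URL

# Brittle: a layout-dependent XPath breaks as soon as the HTML/CSS structure changes.
# user_field = driver.find_element(By.XPATH, "/html/body/div[2]/form/table/tr[1]/td[2]/input")

# Robust: if developers add stable id/name attributes during development,
# the locator survives layout and styling changes.
user_field = driver.find_element(By.ID, "username")
user_field.send_keys("test_user")

driver.quit()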

Things to consider for automation test:

Test plan & early start: If we plan to go for automation testing, the first thing to consider is the test plan. We must include the development team and give an overview of the importance of automated testing and how it will be effective. The plan and the meeting should happen as early as possible, before the project starts, so that we can arrive at the resolutions I mentioned earlier. A late start always creates problems, since it is always easier to change something at an earlier stage of the project. In the test plan itself we can cover tool selection, talent assignment, time estimation and the targeted plan for the automation task.


100% automation test: I have been doing testing work for the last 6 years, and 90% of my work has been manual execution plus planning, test case writing and project/task assignment. Still, I believe 100% automation coverage is not possible, or better to say, would not be a wise decision.

“Automated tests execute a sequence of actions without human intervention. This approach helps eliminate human error, and provides faster results. Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings over time. Typically a company will pass the break-even point for labor costs after just two or three runs of an automated test.”
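The break-even arithmetic behind that claim can be sketched in a few lines; the cost figures below are purely illustrative assumptions, and the real break-even point varies from project to project.

# Back-of-the-envelope break-even point for automating a single test case.
# All cost figures are illustrative assumptions.
cost_to_automate = 4.0        # person-hours to script and stabilise the test once
cost_per_manual_run = 1.5     # person-hours to execute the test manually each time
cost_per_automated_run = 0.1  # person-hours to run and review the automated result

runs = 1
while cost_to_automate + runs * cost_per_automated_run > runs * cost_per_manual_run:
    runs += 1
print(f"Automation pays off after {runs} runs")   # 3 runs with these numbers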

Manual testing, on the other hand, is a process that adapts easily to change and can cope with complexity. Humans are able to detect hundreds of problem patterns in a glance and instantly distinguish them from harmless anomalies. Humans may not even be aware of all the evaluation that they are doing, but in a mere "sequence of actions" every evaluation must be explicitly planned. Testing may seem like just a set of actions, but good testing is an interactive cognitive process. That's why automation is best applied only to a narrow spectrum of testing, not to the majority of the test process.

If you set out to automate all the necessary test execution, you'll probably spend a lot of money and time creating relatively weak tests that ignore many interesting bugs, and find many "problems" that turn out to be merely unanticipated correct behavior.

My opinion on automated testing: it saves many things, e.g. time, cost and effort, and it is faster, but it cannot replace MANUAL testing.


References:


  1. http://www.stickyminds.com/sitewide.asp?ObjectId=2326&Function=edetail&ObjectType=
  2. http://www.stickyminds.com/sitewide.asp?ObjectId=2084&ObjectType=COL&Function=edetail
  3. http://www.io.com/~wazmo/papers/seven_steps.html
  4. Test Automation Snake Oil, v2.1, 6/13/99: http://www.satisfice.com/articles/test_automation_snake_oil.pdf
  5. Software Test Automation: A Real-World Problem, Cem Kaner, Ph.D., J.D.

Sunday, August 8, 2010

Startup testing part 1


I am starting a new series here, "Startup testing part 1", for those who would like to start a career in software testing. It will give a basic idea about testing and a glimpse of the field.

Definition:

  • Testing is the process of evaluating a system by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.
  • Quality means customer satisfaction the first time and every time. It is a factor affecting an organization's long-term performance, and it improves productivity and competitiveness.

Why Testing?

  • Software testing is important because, if it is not done properly, defects may cause mission failure and impact operational performance and reliability.
  • To deliver quality software products that satisfy user requirements, needs and expectations.
  • Uncovering defects before the product is installed in production can save a huge loss.

Participants in Testing:

  • Software Customer
  • Software User
  • Software Developer
  • Tester
  • Information Service Management
  • Senior Organization Management

Some Recent Major Computer System Failures Caused by Software Bugs:

  • According to news reports in April 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.

  • In India, in September 2004, the mobile operator Aircel had a defect in their prepaid subscriber billing system. As a result, for nearly one month all outgoing calls were free. They found the defect early, but correcting it took a long time. I don't have the exact official estimate of the loss, but I personally made nearly 100 hours of ISD calls free of cost.

  • A software bug in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering software was rewritten.

Software Development Life Cycle:


  • Requirement- SRS (Software Requirement Specification)
                             SRAS (Software Requirement & Analysis Specification)
                             FS (Functional Specification)
  • Design- HLD (High Level Design)
                    LLD (Low Level Design)
  • Coding- According to code format
  • Testing
  • Implementation
  • Maintenance

Testing Economics & Cost:

(Accumulated test cost / accumulated errors remaining at each stage of the development cycle)

Development Cycle    Traditional Test    Continuous Test
Requirement          $0 / 20             $10 / 10
Design               $0 / 40             $25 / 15
Code                 $0 / 60             $42 / 18
Testing              $480 / 12           $182 / 4
Production           $1690 / 0           $582 / 0

Testing:
Static (Review)
Dynamic (Execution)

Static:
       Review only; the program is not executed

Dynamic:
       Structural (logic, white-box testing, developer)
       Functional (no logic, black-box testing, tester)

What is Test Plan?

·        Road map for the entire testing activity

What are Test Cases?


·        Set of procedures which we execute in our system to find defects

Primary Role of Software Testing:

·        Determine whether the system meets specification (Producer View)
·        Determine whether the system meets business and user needs (Customer View)

Role of Tester:

    • Finding defects, not correcting them

What is a Defect?
    • A defect is a variance from desired product attributes
    • A variance from customer/user expectations

Classification of Defects:
  • Wrong (ER != AR: the specification was implemented incorrectly)
  • Missing (a specified requirement is missing from the product)
  • Extra (something is present that was not specified)

Regression Test:

Tester -> 1000 test cases -> 100 defects -> developer fixes -> tester re-executes the affected test cases

Functional Testing:

  • Structure of the program is not considered
  • Test cases are decided based on the requirements or specification of the program or module
  • Hence it is called “Black Box” testing

Structural Testing:

  • Concerned with testing the implementation of the program
  • Focus on the internal structure of the program
  • The intention of structural testing is not to exercise all the different input/output conditions, but to exercise the different program structures and data structures of the program

Testing Levels:
  • Unit Testing
  • Integration Testing
  • System Testing &
  • User Acceptance Test (UAT), performed on the Application Under Test (AUT)

Unit Testing:

  • LLD
  • Module Testing
  • Individually Testing
  • White Box Testing
  • Developer job

    1. Test each module individually
    2. Follow White Box Testing (logic of the program)
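As a small illustration of this level, here is a minimal white-box unit test sketch in Python (runnable with pytest); the discount function and its rule are hypothetical examples, not part of the original post.

# The developer tests one module in isolation, using knowledge of its internal
# logic (white box). The function and its 10%-off rule are hypothetical.
def discount(price: float, quantity: int) -> float:
    """Return the payable amount: 10% off for 10 or more items."""
    total = price * quantity
    if quantity >= 10:
        total *= 0.9
    return total

def test_no_discount_below_threshold():
    assert discount(100.0, 2) == 200.0

def test_discount_applied_at_threshold():
    # White box: quantity == 10 exercises the boundary of the if-branch directly.
    assert discount(100.0, 10) == 900.0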

Integration Testing:

      • LLD+ HLD (Developer+ Tester)
      • Communication+ Data Flow
      • WB+ BB= Gray Box
      • Integrate two or more modules, i.e. test the communication between modules
      • Follows white-box testing (testing the code)

System Testing:

  • Confirms that the system as a whole delivers the functionality originally required.
  • Follow Black Box Testing
  • Functionality Testing, Tester job

User Acceptance Testing:

  • Building the confidence of the client and users is the role of the acceptance testing phase
  • It depends on the business scenario
  • Red Box Testing (crucial)

Sunday, August 1, 2010

Test Estimation tools using FP-UCP.

I am sharing with you my "Test Estimation Tool", which I have worked on for several weeks. Though initially it is time-consuming to understand, once implemented it makes it much easier to estimate testing time.

There are always variables in all the estimation tools we have, e.g. project hour estimation using Function Points or Use Case Points. I am sharing this big effort with everyone, because keeping it to myself would be less useful than sharing it through the blog, and later it might become popular for its effectiveness.

If you have any questions about the "Test Estimation Tool", let me know; I will try to answer as simply as possible. Here is the link to download it: http://bd.linkedin.com/in/shaiful; you will find it in my BOX.net files as Test Estimation using FP-UCP.xls.

If you are unable to download it from the profile above, please check here: http://bit.ly/dj2ScB

You can estimate your testing time using both Function Point and Use Case Point.
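As a rough sketch of the Use Case Point side of such an estimate, here is the standard Karner formula in Python; the actor and use-case counts, factor totals and the 20 hours-per-UCP productivity figure are example assumptions, not values taken from the spreadsheet.

# Minimal Use Case Point (UCP) estimation sketch (Karner's method).
# Counts, factor totals and hours-per-UCP are example assumptions.

# Unadjusted Actor Weight: simple = 1, average = 2, complex = 3
uaw = 2 * 1 + 1 * 2 + 1 * 3              # 2 simple, 1 average, 1 complex actor

# Unadjusted Use Case Weight: simple = 5, average = 10, complex = 15
uucw = 4 * 5 + 3 * 10 + 2 * 15           # 4 simple, 3 average, 2 complex use cases

uucp = uaw + uucw                        # Unadjusted Use Case Points

tfactor = 30                             # weighted sum of the 13 technical factors
efactor = 17                             # weighted sum of the 8 environmental factors
tcf = 0.6 + 0.01 * tfactor               # Technical Complexity Factor
ecf = 1.4 - 0.03 * efactor               # Environmental Complexity Factor

ucp = uucp * tcf * ecf
hours_per_ucp = 20                       # commonly cited productivity figure
print(f"UCP = {ucp:.1f}, estimated effort = {ucp * hours_per_ucp:.0f} person-hours")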

I know there will be many questions, e.g. that it is not worth the effort or that it is difficult to understand. Anyway, have a look and let me know what you think; please share it and leave comments on my blog.


Thanks.

A checklist of things to be careful about while testing the system:

1. Every Field should be tested
2. All integers
3. Mix regular text, numeric and special characters
4. All special characters
5. Put spaces
6. Leave it blank
7. Test numeric fields with both negative and positive numbers (-99, +99) and with the characters just outside the digit range ('/' and ':'); see the pytest sketch after this checklist
8. Enter a digit, then press before continuing
9. Fill up the controls typing all character value (a-A, b-B, c-C,---------), numeric value(0,1,2,------), alpha numeric value(a3bc2----), type all special character ( *, &, <,>,?,/,@,$,%,^,(), ’,”,!,~,------)
10. Serve Maximum values in all fields
11. Do Wrong / Negative Input (check that the application handles illegal input and unexpected transactions gracefully)
12. Do Right / Positive Input (check that the application responds as expected)
13. Database value should be tested
14. Every Interface should be tested
16. Sequential Order of Buttons should be checked
17. Check All Buttons of each Interface (Size, Position, Color, Caption)
18. Check the Result Carefully (Inserting Same Data, Different Data)
19. Maintains the Standard Layout through the Whole Project (e.g. Color Management, Size of Controls, Size of Screen, Position of Buttons)
20. Check the output (Reports) Carefully (Expected Result : Actual Result)
21. Check all Lists View / Tree View ( Click the List view while it is empty)
22. Check all Option Buttons
23. Check all Check Boxes
24. Check all Cells of Grid
25. Check Tab Order
26. Check Menu Items (Clicking on the Items)
27. Check the Sequence of Data
28. Check all Standards (Naming Convention, Layout Convention)
29. Check all Messages (Same Messages should be displayed for the same purpose)
30. Check the value after Add/Modify/Delete main form to other related form for integration
31. Check Required / Not Required /Validation
32. Check Date Format
33. Check Clear action initialize Displayed Data (like Date/Month will be Current System Date/Month)
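As an illustration of item 7 above, here is a minimal pytest sketch for a numeric field; the validate_quantity function and its 1-99 range are hypothetical examples, not part of the original checklist.

# Boundary and bad-input checks for a numeric field (checklist item 7).
# validate_quantity and its 1-99 limits are hypothetical.
import pytest

def validate_quantity(value: str) -> bool:
    """Accept whole numbers from 1 to 99; reject everything else."""
    try:
        number = int(value)
    except ValueError:
        return False
    return 1 <= number <= 99

@pytest.mark.parametrize("value, expected", [
    ("1", True), ("99", True),        # boundary values that should pass
    ("0", False), ("100", False),     # just outside the boundaries
    ("-99", False), ("+99", True),    # negative and explicitly signed input
    ("", False), (" ", False),        # blank and whitespace-only input
    ("abc", False), ("1/2", False),   # text and special characters
])
def test_quantity_field(value, expected):
    assert validate_quantity(value) == expected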

Wednesday, July 28, 2010

Common mistakes in software testing and how to overcome them

There are common mistakes we make in testing. I am giving a few examples of them.

1. Regression testing: we never follow it
2. Developer's word: "this is my code; it's 100% accurate"
3. Always testing with only the positive scenarios
4. While reporting defects, never thinking of the developer's situation or mindset
5. Never explore the application
6. Missing code in the final deployment
7. Performance never observed or too late
8. Testing time estimation
9. Start testing too late
10. Clients always like to change the requirements
11. Testing continues while code is still changing in the final deployment; not full-cycle / complete testing
12. Recruiting testers from the ranks of failed programmers
13. Using testing as a transitional job for new programmers and anyone can test
14. Lack of understanding and efficient communication between programmers and test engineers
15. Programmers cannot test their own code and are not happy with unit testing and test scripts
16. Preferring test execution over test design / un-reviewed test designs
17. Not treating testing as group work; keeping it isolated from development
18. UI issue or cosmetic issue never gets priority by developers
19. Attempting to automate all tests/ expecting to rerun all manual tests
20. Test coverage


1. Regression testing: Whenever it is time for regression tests, we never have enough support in terms of time, talent and test environment. If we want to do regression testing properly, we need the help of automated script-based tests. But the initial cost and time of automation are high, and expertise for automated testing is not always available. So we never come up with a better solution for regression testing; it is basically a partial test, which is obviously dangerous for a live project like www.ourleague.com.

To overcome this issue we need to allocate enough time, talent and test environment.

2. Developer's word: "this is my code; it's 100% accurate": Overconfidence is as dangerous as lack of confidence. Sometimes a developer will say "this is my code; it's 100% accurate". Never believe it.

I know what a super-smart programmer you are, and obviously I appreciate that, but I will still do my testing and complete my already-planned tests. If you take this kind of statement at face value, you will simply get burned.

3. Always testing with only the positive scenarios: For one reason or another we do only positive testing, and basically this is not testing. Everyone knows the software will work in the happy path, so why are we testing? We need to try to break the box so that the client never gets a panic.

The programmers already test their own code and make sure the application runs. So, if we test engineers do the same task, just exploring the positive scenarios a bit more, that does not sound good. We need to cover the negative scenarios as well.

Here also, time, cost and test environment are the deciding factors.


4. While reporting defects, never thinking of the developer's situation or mindset: This is a very common mistake among test engineers: while reporting a defect, they never think from the developer's point of view. We report defects and assume they are understandable. This is something like handwriting: someone thinks "I can read my handwriting, so other people can read it because I can".

Defect reporting is a very special skill for a test engineer. Before reporting, please reproduce the defect properly, try to make the replication steps clear and concise, and give the developer an idea of what causes the defect. Only then will they respect us and value us for saving their time through proper defect reporting. Every time, we must remember that it is not only me but others who need to understand it. Write it in text; if needed add images; if more is needed, make a video file.

5. Never exploring the application: Exploring the application is very important so that we rarely miss any defects. Most of us never try to explore the application, only trying a limited set of scenarios. Later the client finds bugs and the test engineer cries loudly: "what made me miss this silly mistake?" It needed exploring.

For example, in MS Word we can print in several ways: 1. the File menu, 2. Ctrl+P, 3. the Print button, 4. the Preview and Print button, and so on.

If there are several input fields, then changing the order in which you fill them may uncover issues.

I cannot give a recipe for how to do this, just one word: "EXPLORE".


6. Missing code in the final deployment: This is the worst case we have ever faced. It can mess up all the good work: one of our builds was ready to go live when suddenly we found a mix of old and new code. Unfortunately it was not the entire version but partial code, which made finding and merging it much more difficult at the final moment of deployment.

I guess we need more awareness of code management tools and their use. Moreover, tagging each version is essential, every developer must be careful about checking in and out, and if there are branches then merging is also important. We should take care of these things before the final deployment, and for this we can use a checklist.


7. Performance never observed, or observed too late: The application goes live without measuring performance, which is really crucial. How many users will hit it at a time? What will the hardware configuration be? These things need to be resolved before deploying to production. In most cases we never care about performance, and if we do think about it, it is too late: just before deployment we do some token performance testing that is not enough for a live system. As a result, when the server goes down, nothing can be done except cutting into the engineers' sleeping time, working under heavy load and returning home late at night in a panic.

We can avoid this panic if early performance test planning and execution are possible.


8. Testing time estimation: This issue always causes friction between the testing and development teams. The programmer thinks testing should not take long. In most cases the deployment gets ready, the test engineer starts test execution at the eleventh hour before shipment, and there is not enough time to test. There are tools to estimate testing time, but no one can say the estimate is accurate; there are so many variables and dependencies.

As a software test manager, it is better to define which tasks can be finished within the timeline. If management asks about a testing time estimate, we can say: by this time we can cover these areas, and if you give us more time then we can cover the full regression, and so on.

9. Starting testing too late: This is another common problem. The test engineer never gets involved from the very beginning of the project, so there is a huge gap between the test engineer and the project; maybe the test engineer is busy testing another project.

But you know this is never a wise decision; it always kills our time. If the test engineer could be engaged with the project earlier, he or she could give better output. Eleventh-hour testing and jumping around won't build better software. Continuous testing is always better than traditional testing; it saves time and cost.

10. Clients always like to change the requirements: This happens with most clients and we cannot ignore it. There are lots of funny stories about client requirement changes. Their initial and final requirements are often far apart, because they are unable to pin down their requirements.

To cope with changing requirements we need very close interaction with the clients from the start to the end of the project. We have to understand the client's knowledge of the software domain and get regular feedback from the customer. It is better to build the software sprint by sprint, so that feedback can be accommodated within each sprint, saving panic and headache.

11. Testing continues while code is still changing in the final deployment: Again a common scenario: we have started our regression testing and are supposed to complete the regression cycle without any more changes. But we CANNOT stop it; because of management or the pressure of immediate needs, there will be code changes and deployments while we are halfway (or some portion of the way) through regression testing.

So, from this half-done situation, we continue regression testing alongside the code changes and deployments, and believe me, this is not a full-cycle REGRESSION test.

12. Recruiting testers from the ranks of failed programmers: This is simply horrible; some people have the idea that TESTING is the easiest job in the world and anyone can do it. I have faced this problem in my own experience; management insisted on hiring those who had failed at programming. I have even seen a software test engineer who did not know that an Excel file needs to be saved in order to retrieve the data later.

There is only one way to stop this type of situation: showing how critical the testing JOB is, and how important it is to making the software good enough to compete in the market and win clients and users.


13. Using testing as a transitional job for new programmers, and "anyone can test": Here is another way of thinking about software testing. Management asks: why leave our new programmers under-utilized? They can do the testing job, and that is enough. Some of them even think anyone can do testing…


14. Lack of understanding and efficient communication between programmers and test engineers: Good communication is a vital point for building better software. It is TEAM work, not a one-man show. One person can do many things, but his concentration will be focused on some areas while other areas stay dark enough to spoil the product. When there is a good communication channel between programmers and testers, it makes a huge difference compared to when there is NOT.

Regular team meetings, project milestones and understanding between team members are essential. This type of gap can mess things up in the end. We can use tools for better communication, and regular team meetings help achieve this goal.


15. Programmers cannot test their own code and are not happy with unit testing and test scripts: I have seen so many programmers who are great at coding and nicely implement critical logic, but when you say they have to write test code, it feels like an extra burden to them. I guess it should not be like this: anyone who is passionate about developing software should not be afraid of test code. I know it is another headache to write it, but later it will bear the best fruit. And maybe it is always better to have test code reviewed by a peer, or something like that, to get better results, just as we review our test scripts and test cases.


16. Preferring test execution over test design / un-reviewed test designs: This is a major weakness of testers, and most of us fail here. Once execution starts, we never update or re-design the test plan and test cases/scripts we developed. Everyone would rather execute and test more than update or re-design the test plan and test cases/scripts. As a result, we are unable to uncover critical issues in the end.


17. Not treating testing as group work, keeping it isolated from development: Some people have the idea that testing is a different kind of job, and so testing talent gets isolated from the TEAM. We need to remember again and again that it is TEAM work. Isolation leads to miscommunication, misunderstanding and every other kind of mis…


18. UI or cosmetic issues never get priority from developers: "Huh! This is a very minor issue and we are not going to fix it": that is the development view. Developers tend to blame testers: "you are unable to find functional bugs, so you find these low-priority issues instead". When we meet someone, we first glance at their face, not their inner qualities, e.g. how soft-hearted the person is; that part comes later. In the same way, UI issues may not be as critical as functional breakage, but the same logic applies: many clients want to see that the UI is correct first, otherwise they will not even touch the product.


19. Attempting to automate all tests / expecting to rerun all manual tests:
Automated testing is superb; it is faster, more efficient, and consumes less time and cost. True, but we cannot automate ALL the features; nothing replaces MANUAL testing. Some people assume that once we automate our testing process, it will take care of manual testing. That is why there are certain criteria to meet before adopting automation tools, and even then automation cannot fill the gap left by manual testing. We should have a better understanding of this.


20. Test coverage: How will we measure test coverage? We know there is NO way to do complete testing, but we have to stop our testing process at some point; it cannot be an infinite process. There are certain points at which to set the test coverage target. In my opinion, we can set the criteria depending on the nature of the project, and it will always differ from one project to another.

We can have a look here for complete testing and test coverage: http://www.kaner.com/pdfs/impossible.pdf
http://asusrl.eas.asu.edu/cse565/content/coverage/coverage.pdf



References:

1. Classic Testing Mistakes, Brian Marick, Testing Foundations

2. M. Cusumano and R. Selby, Microsoft Secrets, Free Press, 1995.


3. Michael Dyer, The Cleanroom Approach to Quality Software Development, Wiley, 1992.

4. M. Friedman and J. Voas, Software Assessment: Reliability, Safety, Testability, Wiley, 1995.

5. C. Kaner, J. Falk, and H.Q. Nguyen, Testing Computer Software (2/e), Van Nostrand Reinhold, 1993


6. Cem Kaner, J.D., Ph.D. (http://www.kaner.com)

7. http://asusrl.eas.asu.edu/srlab/