Can test automation tools replace human testers?

December 25, 2009

In the software testing industry, I think we all ponder this question once in a while.

I have thought about it many times and asked myself questions such as:

1. Can automation tools ever *replace* human testers?
2. Can testing tools catch defects/bugs *automatically*?
3. Can tools ever develop enough (artificial) intelligence to think like a human tester?
4. Talking about scripted regression test tools (QTP, WinRunner, LoadRunner, JMeter, Rational Robot, blah blah blah) that operate at the GUI (not unit) level: is their pricing/license cost fair enough to give you a healthy ROI (Return On Investment)?
5. Is it a question of manual testing vs. automated testing, or something much more complex and intricate?

Before we come to a conclusion and start answering these questions, let me lay out some facts about the contenders here: automation tools and human testers. I am a human being, not a robot, so I prefer to start with the human testers. Here we go.

What is manual testing? I think the term “manual” means “involving or using human effort, skill, power, energy, etc.” Going by this understanding, any testing that requires human intervention can be considered manual testing. Let us see the areas where human testers fare better than automation tools:

Manual Testing Pros:
1. A human can think. She has the ability to reason on her own and make decisions. Though decision-making is possible for computers (automation tools), they have yet to reach the level of (artificial) intelligence needed to beat a human being at decision-making.

2. A human tester can explore new applications using her intuition and self-learning abilities. Even with no experience of testing anything similar before, a skillful tester can quickly learn, explore and test the application. An automation tool might not!

3. A human tester can think outside the pre-decided test boundaries. In my experience, more defects are unearthed by an out-of-the-box exploratory testing approach than by running automation scripts. Automated tests can only find the bugs they were written to find; they can’t find you any NEW bug. If you want to find new bugs, you either need to find them yourself manually (that’s right, with a human tester) or write NEW automation scripts to catch them.

4. A human tester needs far less spoon-feeding than a tool. Computers are dumb, and so are automation tools. Change a variable/object name in a line of code of your AUT (Application Under Test) and the tool will struggle to figure out where it has gone! Moreover, tools need step-by-step instructions to perform a specific testing task, and they are poor at understanding natural human language. Imagine asking a human tester to test a “User Help Manual”. What if we were to automate this test? Can a tool test a human-readable help manual efficiently? In case you happen to know of a way to automate this process, kindly let me know; I am in search of such a tool.

5. Human testers rock when it comes to usability testing, where automation tools fail miserably. Tools are poor at judging what is more usable from a human point of view: screen alignment, the appearance of windows, the smoothness of object alignment, color combinations, ease of use, an entertaining user experience, etc. After all, we develop software to be used by human beings, not computers! So who would be the better candidate to judge the usability of such an application? You already know the answer.

6. Human beings are good examples of adaptability. We adapted to a changing environment and survived, while big creatures like the woolly mammoth and the dinosaurs perished and became extinct for lack of adaptability. Human testers can adapt and learn from past experience, but an automation tool cannot learn on its own. An entire suite of test automation scripts may start failing because something as simple as an object name was renamed in a recent code refactoring; the tool cannot remember past experiences and adjust accordingly. Human testers can.
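
To make this fragility concrete, here is a minimal sketch of a GUI-level check, written against today's Selenium WebDriver API; the URL and the element id `submit_btn` are hypothetical stand-ins:

```python
# A minimal sketch of a brittle GUI-level check (Selenium WebDriver style).
# The URL and the element id "submit_btn" are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://example.com/login")
    # The script knows the button ONLY by its id. If a developer renames the
    # id to "submitButton" during refactoring, this line raises
    # NoSuchElementException and the whole suite starts failing, even though
    # the application still works fine for a human.
    driver.find_element(By.ID, "submit_btn").click()
    assert "Welcome" in driver.page_source  # hypothetical success check
finally:
    driver.quit()
```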

7. Considering the huge license fees of the so-called GUI automation tools, human testers are still cheaper. More often than not you may run into a project where the ROI (Return On Investment) of such a tool turns out negative compared to having the tests done manually by human testers.
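
To see how that ROI can turn negative, here is a back-of-the-envelope comparison; every figure in it is hypothetical and serves only to illustrate the arithmetic:

```python
# Back-of-the-envelope ROI comparison; every figure below is hypothetical.
license_cost     = 8000.0   # per-seat license fee for a commercial GUI tool
script_dev_cost  = 12000.0  # effort to write the automation scripts
maintenance_cost = 6000.0   # yearly effort to keep scripts in sync with the GUI
automation_total = license_cost + script_dev_cost + maintenance_cost

manual_total = 20000.0      # yearly cost of the same regression done by hand

savings = manual_total - automation_total
roi = savings / automation_total
# Here ROI is about -23%: the tool costs more than it saves.
print(f"Automation: {automation_total:.0f}, Manual: {manual_total:.0f}, "
      f"ROI: {roi:.0%}")
```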

I think that is enough of a pro-manual, anti-automation rant. Before you (wrongly) classify me as an ardent opposer of test automation, let me clarify that I am in fact a great supporter of test automation, if done right! If you have read my earlier posts, you might already know that I love testing in agile environments built around practices like TDD, and as you know, no agile development can be imagined without test automation. What I oppose is the idea of automating tests at the GUI level using commercial tools that eat up a large chunk of your testing budget. I am a clear supporter of test automation done at the unit level. And of course there are areas of testing (like stress testing, performance testing, penetration testing, link testing, API testing, etc.) where automating your tests becomes an absolute necessity.
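
As an example of the unit-level automation I do support, here is a minimal sketch using Python's built-in unittest module; the discount function under test is a made-up example:

```python
# A minimal unit-level test, sketched with Python's built-in unittest.
# The function under test is a made-up example.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```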

Test Automation Pros:
1. Computers work much faster than a human tester and are less prone to confusion when multi-tasking (doing several tasks at the same time, switching between them). Unlike a human tester, a computer never spends time taking a phone call or attending a review meeting/presentation. This can mean much higher productivity than a human tester, under certain contexts of course.

2. Computers are great at mathematical calculations. A computer’s memory is much more accurate than a human brain’s, its capacity to remember things is far greater, and it retrieves stored data much faster and more accurately than a human brain can. This computational accuracy can be exploited via test automation, especially in tests that involve a high volume of computation.

3. Computers never skip any hours of testing. People get tired and distracted, but computers are great at the repetitive tasks humans are not very good at. Computers need no rest, show no sign of fatigue, never get bored with repetitive work, and can work weekends, holidays and even night shifts without complaining at all (until, of course, they break down or malfunction). This power of computers can be used to our advantage via test automation.

4. Computers never complain about salary hikes, paid leave or holidays, nor do they ask for a change of project when working on the same one for a long time gets boring. Tools lack emotions, and that can prove to be a good thing from an employee-satisfaction point of view. [Remember, an organization spends money on a tool; hence the tool, too, is like an employee working for the organization.]

5. In certain contexts, test automation can be more cost-effective than manual testing. Paying for the development of a tool and a small group of testers to run it may be cheaper than maintaining a large group of testers testing everything manually.

6. When it comes to load testing, automation is obviously an absolute requirement. Gone are the days when you summoned 100 employees into a big lab, asked each of them to hit the Enter key at the same time and hoped they managed it at exactly the same moment (to the millisecond)! Testing needs accuracy of actions, and for scenarios like load and performance testing, automated tests offer exactly that precision.
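
Here is a rough sketch of what replaces those 100 employees: threads released at the same instant by a barrier, using only the standard library. The target URL and user count are placeholders:

```python
# A rough sketch of simulated concurrent users using only the standard library.
# The URL and user count are placeholders, not a recommendation.
import threading
import time
import urllib.request

URL = "http://example.com/"               # placeholder target
USERS = 100                               # simulated concurrent users
start_barrier = threading.Barrier(USERS)  # releases all threads at once

def virtual_user(results, index):
    start_barrier.wait()                  # every "user" hits Enter together
    t0 = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        results[index] = time.perf_counter() - t0
    except Exception:
        results[index] = None             # record the failure

results = [None] * USERS
threads = [threading.Thread(target=virtual_user, args=(results, i))
           for i in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

ok = [r for r in results if r is not None]
if ok:
    print(f"{len(ok)}/{USERS} succeeded, average response {sum(ok)/len(ok):.3f}s")
else:
    print("all requests failed")
```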

7. Test automation can also be a good choice for regression-testing nightly builds. If you have a set of tests that must run periodically over a long period, and you are confident that little or no major change will be made in the areas they cover, automating those tests can save you some precious time.
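
A minimal sketch of how such a nightly run might be wired up: a script that discovers and runs the automated regression suite and exits non-zero on failure, so a scheduler (cron, a CI server, etc.) can flag the build. The test directory path is a placeholder:

```python
# nightly_regression.py -- a minimal sketch of a scheduled regression run.
# The "tests" directory is a placeholder; schedule it with, e.g., cron:
#   0 2 * * *  /usr/bin/python /path/to/nightly_regression.py
import sys
import unittest

def main():
    suite = unittest.defaultTestLoader.discover("tests")  # placeholder path
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # Exit non-zero on failure so cron/CI can alert the team.
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == "__main__":
    main()
```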

Having said all this, let me summarize my opinion. Coming to the core question, “can automation tools ever *replace* human testers?”, I honestly don’t think it is ever going to happen. So I believe: “Automation testing tools can never replace human testers.”

Categories: General

Agile and Testing: Some Myths Exposed

November 13, 2009

Agile is a methodology that is seeing increasingly widespread adoption, and it is easy to understand why, especially if you consider the developer and user points of view.

Users: Don’t want to spend ages being quizzed in detail about the exact requirements and processes for the whole system, and then have to review a large specification, which they know could come back to haunt them.

Developers: Don’t want to have to follow a tight specification, without any expression of their own imagination and creative talents, especially if they can see a better way.

Yet for the QA professional an Agile approach can cause discomfort: in an ideal world they would have a ‘finished’ product to verify against a finished specification. Being asked to validate a moving target against a changing backdrop is counter-intuitive. It makes the use of technology and automation much more difficult, and it requires a new approach to testing, just as it does for the users and the developers.

QA teams need to know the real impact of an Agile methodology; countless myths circulate in the industry.

Categories: Agile Testing

Codes and Bugs Quotes

October 29, 2009
  • “All code is guilty, until proven innocent.” – Anonymous
  • “First, solve the problem. Then, write the code.” – Anonymous
  • “A code that cannot be tested is flawed.” – Anonymous
  • “Good programmers write code for humans first and computers next.” – Anonymous
  • “Don’t fix it if it ain’t broke.” – Anonymous
  • “A bug in the hand is worth two in the box.” – Anonymous
  • “The only certainties in life are death, taxes and bugs in code.” – Anonymous
  • “Failure is not an option. It comes bundled with the software.” – Anonymous
  • “Blame doesn’t fix bugs.” – Anonymous

Software Testers Quotes

October 29, 2009
  • “Software testers do not make software; they only make them better.” – Anonymous
  • “The principal objective of software testing is to give confidence in the software.” – Anonymous
  • “Software testers always go to heaven; they’ve already had their fair share of hell.” – Anonymous
  • “f u cn rd ths, u cn gt a gd jb n sftwr tstng.” – Anonymous

Software Testing Quotes

October 29, 2009
  • “Software testing proves the existence of bugs, not their absence.” – Anonymous
  • “Alpha is simply that you want somebody to share your pain!”  – Anonymous
  • “Just because you’ve counted all the trees doesn’t mean you have seen the forest.” – Anonymous
  • “More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.” – Boris Beizer
  • “If you don’t like unit testing your product, most likely your customers won’t like to test it either.” – Anonymous

 

Quality Quotes

October 29, 2009

A collection of Software Testing Quotes. Some are inspirational, some are outrageous and some are stark. Be stirred!

  • “Quality is never an accident; it is always the result of intelligent effort.” – John Ruskin
  • “Quality is free, but only to those who are willing to pay heavily for it.” – T. DeMarco and T. Lister
  • “Quality is the ally of schedule and cost, not their adversary. If we have to sacrifice quality to meet schedule, it’s because we are doing the job wrong from the very beginning.” – James A. Ward
  • “The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten.” – Anonymous
  • “Software never was perfect and won’t get perfect. But is that a licence to create garbage? The missing ingredient is our reluctance to quantify quality.” – Boris Beizer
  • “A true professional does not waste the time and money of other people by handing over software that is not reasonably free of obvious bugs; that has not undergone minimal unit testing; that does not meet the specifications and requirements; that is gold-plated with unnecessary features; or that looks like junk.” – Daniel Read
  • “It’s more about good enough than it is about right or wrong.” – James Bach

The Glass

October 29, 2009
  • To an optimist, the glass is half full.
  • To a pessimist, the glass is half empty.
  • To a good tester, the glass is twice as big as it needs to be.
Categories: General

Why You Need To Secure Your Web Applications

October 1, 2009

Website security is possibly today’s most overlooked aspect of securing the enterprise and should be a priority in any organization.

Increasingly, hackers are concentrating their efforts on web-based applications – shopping carts, forms, login pages, dynamic content, etc. Accessible 24/7 from anywhere in the world, insecure web applications provide easy access to backend corporate databases and also allow hackers to perform illegal activities using the attacked sites. A victim’s website can be used to launch criminal activities such as hosting phishing sites or to transfer illicit content, while abusing the website’s bandwidth and making its owner liable for these unlawful acts.

Hackers already have a wide repertoire of attacks that they regularly launch against organizations, including SQL injection, cross-site scripting, directory traversal, parameter manipulation (e.g., URL, cookie, HTTP headers, web forms), authentication attacks, directory enumeration and other exploits. Moreover, the hacker community is very close-knit; newly discovered web application intrusions (so-called zero-day exploits) are posted on a number of forums and websites known only to members of that exclusive group. Postings are updated daily and are used to propagate and facilitate further hacking.
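
To illustrate just one item from that repertoire, here is a minimal SQL injection sketch using Python's built-in sqlite3 module; the users table and the login input are made-up examples:

```python
# A minimal SQL injection illustration using Python's built-in sqlite3.
# The users table and the login input are made-up examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"  # typical injection payload typed into a form

# VULNERABLE: user input is pasted straight into the SQL string, so the
# WHERE clause becomes always-true and "logs the attacker in".
query = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print("vulnerable query returns:", conn.execute(query).fetchall())

# SAFE: a parameterized query treats the input as data, not SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (attacker_input,))
print("parameterized query returns:", safe.fetchall())
```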

Web applications – shopping carts, forms, login pages, dynamic content, and other bespoke applications – are designed to allow your website visitors to retrieve and submit dynamic content including varying levels of personal and sensitive data.

If these web applications are not secure, then your entire database of sensitive information is at serious risk. A Gartner Group study reveals that 75% of cyber attacks are done at the web application level.

Why does this happen?

  • Websites and related web applications must be available 24 hours a day, 7 days a week to provide the required service to customers, employees, suppliers and other stakeholders.
  • Firewalls and SSL provide no protection against web application hacking, simply because access to the website has to be made public.
  • Web applications often have direct access to backend data such as customer databases, and hence control valuable data, which makes them much more difficult to secure.
  • Corporate web applications have large amounts of bandwidth available. Since bandwidth is expensive, hackers who want to transfer huge amounts of illegal content resort to stealing it from others.
  • Most web applications are custom-made and, therefore, involve a lesser degree of testing than off-the-shelf software. Consequently, custom applications are more susceptible to attack.

Various high-profile hacking attacks have proven that web application security remains the most critical concern. If your web applications are compromised, hackers will have complete access to your backend data even though your firewall is configured correctly and your operating system and applications are patched regularly.

Network security defenses provide no protection against web application attacks, since these are launched on port 80 (the default for websites), which has to remain open to allow regular operation of the business.

For the most comprehensive security strategy, it is therefore imperative that you regularly and consistently audit your web applications for exploitable vulnerabilities.

Categories: Security Testing

The Value of Load/Performance Testing

September 28, 2009
Objective

The objective of this article is to give a clear picture of the value of load/performance testing, and of the different ways to close the performance gap in any web application through focused effort and reliable resources.

What is ‘Load Testing’? – Introduction, concept & definition

Introduction

As human beings, we tend to check the actual performance/output of every object we use, whether a pen, an electronic gadget, etc., by giving it a number of manual inputs and observing the output. To verify the required output, we generally put an ‘x’ amount of load on a particular object to check its maximum capability and its response time.

There are three main terms for testing with multiple simultaneously acting users (real or simulated): load, performance and stress testing.

Definition

‘Load testing’ can be defined in different ways, but as an initial understanding, here are some common definitions:

1. Testing an application under heavy loads; for example, testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.

2. Load testing is the process of subjecting a computer, peripheral, server, network or application to a work level approaching the limits of its specification. Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system.

Load testing is a part of a more general process known as performance testing. Examples of load testing include:

  • Downloading a series of large files from the Internet.
  • Running multiple applications on a computer or server simultaneously.
  • Assigning many jobs to a printer queue.
  • Subjecting a server to a large amount of e-mail traffic.
  • Writing data to, and reading data from, a hard disk continuously.
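
For instance, the last example in the list can be sketched in a few lines of Python; the file path, chunk size and duration are placeholder values:

```python
# A minimal sketch of the last load-test example: continuously writing data
# to and reading it back from disk. Path, size and duration are placeholders.
import os
import time

PATH = "load_test.tmp"        # placeholder scratch file
CHUNK = b"x" * (1024 * 1024)  # 1 MiB per write
DURATION = 30                 # seconds of sustained I/O

end = time.monotonic() + DURATION
cycles = 0
while time.monotonic() < end:
    with open(PATH, "wb") as f:
        f.write(CHUNK)
    with open(PATH, "rb") as f:
        f.read()
    cycles += 1
os.remove(PATH)
print(f"completed {cycles} write/read cycles of {len(CHUNK)} bytes")
```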

Performance and Stress Testing

Performance Testing

Performance testing is a generic term that can refer to many different types of performance-related testing, each of which addresses a specific problem area and provides its own benefits, risks and challenges. Let’s look at the concepts of performance and stress testing through their definitions.
1. Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the actual response time at which a system functions. Qualitative attributes such as reliability, scalability, and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

2. Performance testing is a method of investigating the quality-related characteristics of an application that may impact actual users, by subjecting it to reality-based simulations.

3. Performance tests are used to test each part of the web server or the web application to discover how best to optimize them for increased web traffic. Most often this is done by testing various implementations of a single web page/script to see which version of the code is fastest.
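
Definition 3 can be sketched as a simple timing comparison between two versions of the same page; the URLs below are placeholders:

```python
# A minimal sketch of definition 3: timing two versions of the same page to
# see which implementation is faster. The URLs are placeholders.
import time
import urllib.request

def average_response(url, samples=10):
    """Fetch the URL `samples` times and return the mean response time."""
    total = 0.0
    for _ in range(samples):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        total += time.perf_counter() - t0
    return total / samples

for version in ("http://example.com/page_v1", "http://example.com/page_v2"):
    print(version, f"{average_response(version):.3f}s on average")
```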

Stress Testing

Stress testing means putting maximum load on a system until the system crashes. With the help of stress testing, we can figure out the actual capabilities of the system under heavy user, network or peripheral traffic. Let’s look at technical definitions with examples:

1. Stress tests are simulated “brute-force” attacks that apply excessive load to your web server. ‘Real world’ situations like this can be created by a massive spike of users caused by a large referrer (imagine your website being mentioned on national TV). Another example would be an email marketing campaign sent to prospective customers, asking them to come to the website to register for a service or request additional information.

The purpose of stress testing is to estimate the maximum load that your web server can support. Such tests include system functional testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

2. Stress testing verifies the acceptability of the target-of-test’s performance behavior when abnormal conditions are encountered, such as diminished resources or an extremely high number of users.
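
A crude sketch of the ramp-up idea behind these definitions: keep doubling the number of concurrent users until requests start failing. The URL, cap and timeout are placeholders, and a real stress tool would measure far more than an error count:

```python
# A crude stress-test ramp-up sketch: double the concurrent "users" until
# requests start failing. URL, cap and timeout are placeholder values.
import concurrent.futures
import urllib.request

URL = "http://example.com/"  # placeholder target

def hit(url):
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()

users = 1
while users <= 512:          # safety cap for the sketch
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(hit, URL) for _ in range(users)]
        errors = sum(1 for f in futures if f.exception() is not None)
    print(f"{users:4d} concurrent users -> {errors} errors")
    if errors:
        print("breaking point reached around", users, "users")
        break
    users *= 2
```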

The Value of Performance Testing

Performance testing helps determine a system’s actual output by placing varying load on it in the form of multiple concurrent users. We need to follow some standards to perform performance testing, and it can be done with many different automated tools such as WAPT, OpenSTA, WSOP, LoadRunner, ANTS Load, etc. Let’s look at the core activities of performance testing in the list below.

Core Performance Testing Activities

1. Identify the test environment
2. Identify performance test criteria
3. Plan and design tests
4. Configure the test environment
5. Implement the test design
6. Execute tests
7. Analyze, report and retest

To achieve the best results from performance testing, one has to follow the standard core performance testing activities above. Let’s look at the benefits of performance testing.

Benefits of Performance Testing

  • Determines the speed, scalability and stability characteristics of an application, thereby providing input to sound business decisions.
  • Focuses on determining whether the user of the system will be satisfied with the performance characteristics of the application.
  • Identifies mismatches between performance-related expectations and reality.
  • Supports tuning, capacity planning and optimization efforts.

Ideal Solution – How can we optimize the performance of a web-based application?

As new web-based applications and delivery models are revealed to the world, it has become clear that there is an inherent gap between business applications and processes on one side and the underlying network and application infrastructure on the other.

Customers, partners and their distributed workforce rely on the availability, performance and security of mission-critical application services to conduct business effectively. To provide users with the highest level of support, today’s enterprises need to safeguard themselves from:

  • Application-level attacks
  • Application scaling pains
  • Database bottlenecks
  • Server and network downtime and overloads
  • Slow response times

During performance testing of web-based applications we may also face challenges and blind spots that the testing activities themselves do not address:

  • Performance tests may not detect some functional defects that only appear under load.
  • If not carefully designed and validated, the tests may only be indicative of performance characteristics in a small number of production scenarios.
  • Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

To overcome such situations, we should approach every single problem in a way that ensures our infrastructure and application meet the actual requirements with optimal performance. In this context, we can derive some genuine solutions to optimize performance, as given below.

Solution

1) Optimize source code: Check whether the source code is up to the standards of the programming language, as performance issues most often lie in un-optimized source code.

2) Minimize HTTP requests: Around 80% of the end-user’s response time is spent on the front end, most of it tied up in downloading all the components of the page: images, CSS, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages (a small sketch of one such technique follows this list).

3) Database script optimization: Many times the developer/programmer may not have followed the right standards when writing database scripts, stored procedures, SQL queries, etc. The software tester should take a white-box approach and put some effort into cross-checking database queries in a DB review to optimize the database scripts.

4) Optimize images and Flash objects: After a designer is done creating the images for a web page, there are still some things you can try before you FTP those images to your web server.

  • Using any standard image-compression tool, check whether the GIFs take up more space than the colors in the image require.
  • Try converting GIFs to PNG or JPEG and see if there is a saving in space. In most cases PNG and JPEG will occupy less space than GIF.
  • If possible, avoid using large Flash components on the web page; use images in place of Flash, as this will improve the page’s performance.
  • Minimize the use of heavyweight CSS, scripts, redirects and external third-party components in your system.
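
As referenced in solution 2 above, here is a minimal sketch of one way to cut HTTP requests: concatenating several CSS files into one, so the browser fetches a single stylesheet. The file names are placeholders:

```python
# A minimal sketch for solution 2: merge several CSS files into one so the
# page needs a single stylesheet request instead of three. File names are
# placeholders.
css_files = ["reset.css", "layout.css", "theme.css"]  # hypothetical inputs

with open("combined.css", "w", encoding="utf-8") as merged:
    for name in css_files:
        with open(name, "r", encoding="utf-8") as part:
            merged.write(f"/* --- {name} --- */\n")  # mark each source file
            merged.write(part.read())
            merged.write("\n")
# The page then links combined.css once, turning three HTTP requests into one.
```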

Conclusion

From the above description, we can conclude:

Performance Testing is the overall process.

Load Testing checks if the system will support the expected conditions.

Stress Testing tries to break the system.

-Thank you-

Testing Types

September 24, 2009
  • Black box testing – not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
  • White box testing – based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.
  • Unit testing – the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
  • Incremental integration testing – continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
  • Integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • Agile testing – testing from the customer perspective as early as possible, testing early and often as code becomes available and stable from module/unit-level testing.
  • Functional testing – black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing.)
  • System testing – black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
  • End-to-end testing – similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • Sanity testing or Smoke testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
  • Regression testing – re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
  • Acceptance testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
  • Load testing – testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
  • Stress testing – term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
  • Performance testing – term often used interchangeably with ‘stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
  • Usability testing – testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • Install/uninstall testing – testing of full, partial, or upgrade install/uninstall processes.
  • Recovery testing – testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  • Failover testing – typically used interchangeably with ‘recovery testing’.
  • Security testing – testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
  • Compatibility testing – testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
  • Exploratory testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
  • Ad-hoc testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • Context-driven testing – testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
  • User acceptance testing – determining if software is satisfactory to an end-user or customer.
  • Comparison testing – comparing software weaknesses and strengths to competing products.
  • Alpha testing – testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • Beta testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.