Why You Need to Secure Your Web Applications

October 1, 2009

Website security is possibly today’s most overlooked aspect of securing the enterprise and should be a priority in any organization.

Increasingly, hackers are concentrating their efforts on web-based applications – shopping carts, forms, login pages, dynamic content, and so on. Accessible 24/7 from anywhere in the world, insecure web applications provide easy access to backend corporate databases and also allow hackers to perform illegal activities using the attacked sites. A victim’s website can be used to launch criminal activities such as hosting phishing sites or transferring illicit content, all while abusing the website’s bandwidth and making its owner liable for these unlawful acts.

Hackers already have a wide repertoire of attacks that they regularly launch against organizations, including SQL injection, cross-site scripting, directory traversal, parameter manipulation (e.g., of URLs, cookies, HTTP headers, and web forms), authentication attacks, directory enumeration, and other exploits. Moreover, the hacker community is very close-knit; newly discovered web application intrusions – so-called zero-day exploits – are posted on a number of forums and websites known only to members of that exclusive group. Postings are updated daily and are used to propagate and facilitate further hacking.
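
To make the first of these attack classes concrete, here is a minimal sketch in Python using the standard-library sqlite3 module (the users table and the payload are invented purely for illustration) contrasting an injectable query with a parameterized one:

```python
import sqlite3

# Hypothetical single-table database, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic SQL injection payload

# VULNERABLE: concatenating user input into the SQL string lets the
# payload rewrite the WHERE clause so that it matches every row.
unsafe = "SELECT name FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())  # -> [('alice',)] - all rows leak

# SAFER: a parameterized query treats the input strictly as data,
# so the payload matches no rows.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # -> []
```

The same principle – never splice untrusted input into queries, paths, or markup – applies to the other attack classes listed above.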

Web applications – shopping carts, forms, login pages, dynamic content, and other bespoke applications – are designed to allow your website visitors to retrieve and submit dynamic content including varying levels of personal and sensitive data.

If these web applications are not secure, then your entire database of sensitive information is at serious risk. A Gartner Group study reveals that 75% of cyber attacks occur at the web application level.

Why does this happen?

  • Websites and related web applications must be available 24 hours a day, 7 days a week to provide the required service to customers, employees, suppliers and other stakeholders.
  • Firewalls and SSL provide no protection against web application hacking, simply because access to the website has to be made public.
  • Web applications often have direct access to backend data such as customer databases; they therefore control valuable data and are much more difficult to secure.
  • Corporate web applications have large amounts of bandwidth available. Since bandwidth is expensive, hackers who want to transfer huge amounts of illegal content resort to stealing it from others.
  • Most web applications are custom-made and, therefore, undergo less testing than off-the-shelf software. Consequently, custom applications are more susceptible to attack.

Various high-profile hacking attacks have proven that web application security remains the most critical weak point. If your web applications are compromised, hackers will have complete access to your backend data even if your firewall is configured correctly and your operating system and applications are patched regularly.

Network security defenses provide no protection against web application attacks, since these are launched on port 80 (the default for websites), which has to remain open to allow regular operation of the business.

For the most comprehensive security strategy, it is therefore imperative that you regularly and consistently audit your web applications for exploitable vulnerabilities.

Categories: Security Testing

“The Values of Load/Performance Testing”

September 28, 2009

Objective

The objective of this article is to give a clear picture of the value of load/performance testing and of the different ways to close a web application’s performance gaps through focused effort and reliable resources.

What is ‘Load Testing’? – Introduction, concept & definition

Introduction

As human beings, we tend to check the actual performance of each and every object – a pen, an electronic gadget, and so on – by trying it with a number of manual inputs until we get the desired output. To find out what an object can really deliver, we generally put a certain amount of load on it and check its maximum capacity and response time. Software is tested in much the same way.

There are three main terms for testing with multiple users (real or simulated) acting simultaneously: load, performance, and stress testing.

Definition

‘Load testing’ can be defined in different ways, but as an initial understanding, consider the definitions given below:

1. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.

2. Load testing is the process of subjecting a computer, peripheral, server, network or application to a work level approaching the limits of its specification. Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system.

Load testing is a part of a more general process known as performance testing. Examples of load testing include:

  • Downloading a series of large files from the Internet.
  • Running multiple applications on a computer or server simultaneously.
  • Assigning many jobs to a printer queue.
  • Subjecting a server to a large amount of e-mail traffic.
  • Writing data to, and reading data from, a hard disk continuously.
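
To illustrate the web-site case, here is a minimal load-testing sketch in Python (standard library only; the URL, user count, and request count are hypothetical placeholders) that simulates concurrent users and summarizes response times:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # hypothetical system under test
USERS = 20                      # simulated concurrent users
REQUESTS_PER_USER = 10

def one_user(_):
    """Issue a series of requests and record each response time."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = [t for user in pool.map(one_user, range(USERS)) for t in user]

print(f"requests: {len(results)}")
print(f"average response: {sum(results) / len(results):.3f}s")
print(f"slowest response: {max(results):.3f}s")
```

Dedicated load-testing tools do the same thing at far larger scale, but the shape of the measurement – apply a known load, record the response times – is the same.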

Performance and Stress Testing

Performance Testing

Performance testing is a generic term that can refer to many different types of performance-related testing, each of which addresses a specific problem area and provides its own benefits, risks, and challenges. Let’s look at the concepts of performance and stress testing through their definitions.
1. Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the actual response time at which a system functions. Qualitative attributes such as reliability, scalability, and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

2. Performance testing is a method of investigating quality-related characteristics of an application that may impact actual users by subjecting it to reality-based simulations.

3. Performance tests are used to test each part of the web server or the web application to discover how best to optimize them for increased web traffic. Most often this is done by testing various implementations of a single web page or script to check which version of the code is the fastest.

Stress Testing

Stress testing means putting the maximum load on a system until it crashes. With the help of stress testing, we can figure out the actual capacity of the system under heavy user, network, or peripheral traffic. Let’s look at the technical definitions, with examples:

1. Stress tests are simulated “brute-force” attacks that apply excessive load to your web server. ‘Real world’ situations like this can be created by a massive spike of users – caused, for example, by a large referrer (imagine your website being mentioned on national TV). Another example would be an email marketing campaign sent to prospective customers that asks them to come to the website to register for a service or request additional information.

The purpose of stress testing is to estimate the maximum load that your web server can support. Such tests include system functional testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, and large, complex queries to a database system.

2. Stress testing verifies the acceptability of the target-of-test’s performance behavior when abnormal conditions are encountered, such as diminished resources or an extremely high number of users.
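
Building on the load-testing sketch above, a stress test keeps raising the user count until the system starts to fail. The outline below (same hypothetical URL; the 10% error threshold and the ramp steps are arbitrary choices for the sketch) shows the shape of such a ramp-up loop:

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # hypothetical system under test

def hit(_):
    """Return True if a single request succeeds within the timeout."""
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

# Ramp up concurrency until more than 10% of requests fail.
for users in (10, 25, 50, 100, 200, 400):
    with ThreadPoolExecutor(max_workers=users) as pool:
        successes = sum(pool.map(hit, range(users)))
    error_rate = 1 - successes / users
    print(f"{users} users -> {error_rate:.0%} errors")
    if error_rate > 0.10:
        print(f"breaking point reached near {users} concurrent users")
        break
```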

The Values of Performance Testing

Performance testing helps determine a system’s actual output by placing varying amounts of load on it in the form of multiple concurrent users. Certain standards need to be followed to perform performance testing well. It can be carried out with many different automated tools, such as WAPT, OpenSTA, WSOP, LoadRunner, ANTS Load, etc.

Let’s see the core activities of performance testing in the list below.

Core Performance Testing Activities

1. Identify Test Environment

2. Identify Performance Test Criteria

3. Plan & Design Tests

4. Configure Test Environment

5. Implement Test Design

6. Execute Tests

7. Analyze, Report and Retest

To achieve the best results from performance testing, you have to follow the standard sequence of core performance testing activities given above.
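
As a hint of what step 7 (“Analyze, Report and Retest”) looks like in practice, here is a minimal sketch that summarizes a set of recorded response times; the timing values are invented for illustration:

```python
import statistics

# Hypothetical response times (in seconds) collected during a test run.
timings = [0.21, 0.25, 0.19, 0.95, 0.33, 0.41, 1.20, 0.28, 0.36, 0.52]

timings.sort()
p90 = timings[int(0.9 * (len(timings) - 1))]  # simple nearest-rank 90th percentile

print(f"mean   : {statistics.mean(timings):.3f}s")
print(f"median : {statistics.median(timings):.3f}s")
print(f"p90    : {p90:.3f}s")
print(f"max    : {timings[-1]:.3f}s")
```

Percentiles matter because averages hide outliers: the mean here looks acceptable while the slowest response is more than twice as long.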

Benefits of Performance Testing

  • Determines the speed, scalability, and stability characteristics of an application, thereby providing input for sound business decisions.
  • Focuses on determining whether the users of the system will be satisfied with the application’s performance characteristics.
  • Identifies mismatches between performance-related expectations and reality.
  • Supports tuning, capacity planning, and optimization efforts.

Ideal Solution – How can we optimize the performance of a web-based application?

As new web-based applications and delivery models are revealed to the world, it has become clear that there is an inherent gap between business applications and processes on the one hand and the underlying network and application infrastructure on the other.

Customers, partners, and their distributed workforce rely on the availability, performance, and security of mission-critical application services to conduct business effectively. To provide users with the highest level of support, today’s enterprises need to safeguard themselves from:

  • Application-level attacks
  • Application scaling pains
  • Database bottlenecks
  • Server and network downtime and overloads
  • Slow response times

During performance testing of web-based applications, we also face some challenges that may not be addressed by the testing activities themselves:

  • Load tests may not detect some functional defects that only appear under load.
  • If not carefully designed and validated, the tests may be indicative of performance characteristics in only a small number of production scenarios.
  • Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

To overcome such situations, we should approach every problem in a way that makes our infrastructure and application meet the actual requirements with optimal performance. In this context, we can derive some genuine solutions for optimizing performance, as given below.

Solution

1) Optimize source code: Check whether the source code is written to the standards of the programming language; much of the time, performance issues lie in unoptimized source code.

2) Minimize HTTP requests: Roughly 80% of the end-user’s response time is spent on the front end, most of it tied up in downloading the components in the page: images, CSS, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to a faster page.
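
One quick way to estimate that request count is to count the component tags in a page’s HTML. Here is a minimal sketch using Python’s standard html.parser module (the URL is a hypothetical placeholder, and the count is only a rough lower bound, since images referenced from CSS or loaded by scripts are not seen):

```python
import urllib.request
from html.parser import HTMLParser

class ComponentCounter(HTMLParser):
    """Count tags that typically trigger extra HTTP requests."""

    def __init__(self):
        super().__init__()
        self.counts = {"img": 0, "script": 0, "link": 0}

    def handle_starttag(self, tag, attrs):
        if tag in self.counts:
            self.counts[tag] += 1

url = "http://localhost:8080/"  # hypothetical page under review
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

counter = ComponentCounter()
counter.feed(html)
print(counter.counts)  # e.g. {'img': 34, 'script': 12, 'link': 6}
```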

3) Database script optimization: Developers and programmers often have not followed the right standards in writing database scripts, stored procedures, SQL queries, etc. Software testers should therefore take a white-box approach and put some effort into cross-checking database queries – doing a DB review to optimize the database scripts.
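
During such a DB review, the database’s own query planner is the quickest cross-check. A minimal sketch with the standard-library sqlite3 module (the orders table and index are invented for illustration; the planner’s exact wording varies by SQLite version) shows how an index replaces a full-table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")

def plan(sql):
    """Ask SQLite how it intends to execute the query."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()

query = "SELECT * FROM orders WHERE customer_id = 42"
print(plan(query))  # reports a full-table scan of orders

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print(plan(query))  # now reports a search using idx_orders_customer
```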

4) Optimize images and Flash objects: After a designer has finished creating the images for a web page, there are still some things you can try before you FTP those images to your web server:

  • Check the GIFs to see whether they are using more space than their color depth warrants, using any standard image-compression tool.
  • Try converting GIFs to PNG and JPEG and see if there is a saving of space; in most cases PNG and JPEG will occupy less space than GIF (see the sketch after this list).
  • If possible, avoid using large Flash components on the web page; use images in place of Flash, as this will improve the performance of the page.
  • Minimize the use of heavyweight CSS, scripts, redirects, and external third-party components in your system.
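
As a sketch of the format-comparison step mentioned above (this one uses the third-party Pillow imaging library; the file name is a hypothetical placeholder):

```python
import os
from PIL import Image  # third-party Pillow library

src = "banner.gif"  # hypothetical image delivered by the designer
im = Image.open(src)

# Save the same image as PNG and JPEG, then compare file sizes.
im.save("banner.png")
im.convert("RGB").save("banner.jpg", quality=85)  # JPEG has no transparency

for path in (src, "banner.png", "banner.jpg"):
    print(f"{path}: {os.path.getsize(path)} bytes")
```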

Conclusion

From the above description, we can conclude the topic as follows:

Performance Testing is the overall process.

Load Testing checks if the system will support the expected conditions.

Stress Testing tries to break the system.

-Thank you-

Testing Types

September 24, 2009
  • Black box testing – not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
  • White box testing – based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.
  • Unit testing – the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
  • Incremental integration testing – continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
  • Integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • Agile testing – involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable from module/unit level testing.
  • Functional testing – black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing.)
  • System testing – black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
  • End-to-end testing – similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • Sanity testing or Smoke testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
  • Regression testing – re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
  • Acceptance testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
  • Load testing – testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
  • Stress testing – term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
  • Performance testing – term often used interchangeably with ‘stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
  • Usability testing – testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • Install/uninstall testing – testing of full, partial, or upgrade install/uninstall processes.
  • Recovery testing – testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  • Failover testing – typically used interchangeably with ‘recovery testing’.
  • Security testing – testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
  • Compatibility testing – testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
  • Exploratory testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
  • Ad-hoc testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • Context-driven testing – testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
  • User acceptance testing – determining if software is satisfactory to an end-user or customer.
  • Comparison testing – comparing software weaknesses and strengths to competing products.
  • Alpha testing – testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • Beta testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Agile Development Methodologies

September 20, 2009

Agile Development Methodologies:

  • Extreme Programming (XP)
  • Crystal
  • Adaptive Software Development (ASD)
  • Scrum
  • Feature Driven Development (FDD)
  • Dynamic System Development Method (DSDM)
  • XBreed

Some of the well-known Agile software development methods and practices:

-> Agile Modeling

-> Agile Unified Process (AUP)

-> Agile Data

-> Daily kickoff and review of goals

-> Short release cycles

-> Responsive Development

-> Test Driven Development (TDD)

-> Database refactoring

Categories: Agile Testing

What is Agile Testing?

September 20, 2009

“Agile testing” – involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable from module/unit-level testing.

1. A testing practice that follows the agile manifesto, treating development as the customer of testing. In this light, the context-driven manifesto provides a set of principles for agile testing.

Agile software development is a conceptual framework for software engineering that promotes development iterations throughout the life-cycle of the project. 

There are many agile development methods; most minimize risk by developing software in short amounts of time. Software developed during one unit of time is referred to as an iteration, which may last from one to four weeks. Each iteration is an entire software project: including planning, requirements analysis, design, coding, testing, and documentation. An iteration may not add enough functionality to warrant releasing the product to market but the goal is to have an available release (without bugs) at the end of each iteration. At the end of each iteration, the team re-evaluates project priorities.

Agile methods emphasize face-to-face communication over written documents. Most agile teams are located in a single open office sometimes referred to as a bullpen. At a minimum, this includes programmers and their “customers” (customers define the product; they may be product managers, business analysts, or the clients). The office may include testers, interaction designers, technical writers, and managers.

Agile methods also emphasize working software as the primary measure of progress. Combined with the preference for face-to-face communication, agile methods produce very little written documentation relative to other methods. This has resulted in criticism of agile methods as being undisciplined.

Categories: Agile Testing

How can new Software QA processes be introduced in an existing organization?

September 8, 2009
  • A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
  • Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
  • For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
  • The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in ‘agile’-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.
  • Other possibilities include incremental self-managed team approaches such as ‘Kaizen’ methods of continuous process improvement, the Deming-Shewhart Plan-Do-Check-Act cycle, and others.

Performance Engineering

February 18, 2009

“Load/Performance Testing – The Best Practice”

Application performance has become a great challenge for the software industry all over the world. Applications are developed and deployed by in-house developers or by outsourced vendors, but their performance is often not up to the mark and users are not satisfied. For example, a user complains that a web page takes 3 seconds to appear under normal load, but when the number of users increases, the same page takes 7-8 seconds: the application’s performance degrades as the load increases. This issue becomes even more critical in banking applications, where heavy financial transactions are being processed but the server stops responding due to heavy load or huge volumes of data, causing degradation in response time, throughput, and other performance-related factors.

Whether you design, build, test, maintain, or manage applications, you need to consider performance. If your software does not meet its performance objectives, your application is unlikely to be a success. If you do not know your performance objectives, it is unlikely that you will meet them.

Performance affects different roles in different ways:

  • As an architect, you need to balance performance and scalability with other quality-of-service (QoS) attributes such as manageability, interoperability, security, and maintainability.
  • As a developer, you need to know where to start, how to proceed, and when you have optimized your software enough.
  • As a tester, you need to validate whether the application supports expected workloads.
  • As an administrator, you need to know when an application no longer meets its service level agreements, and you need to be able to create effective growth plans.
  • As an organization, you need to know how to manage performance throughout the software life cycle, as well as lower total cost of ownership of the software that your organization creates.
  • In the QA department, we have defined the flow of performance testing according to best practices and discipline. From an organization’s perspective, it is necessary that the QA department takes care of performance engineering activities for every application!