Tuesday, 21 February 2012

The Grinder – An open source Performance testing alternative


Owing to cut-throat competition, IT companies are striving to stay one step ahead of their competitors to woo prospective clients. Cutting costs without compromising on quality has become the favoured strategy. Open source tools not only promise to cut costs drastically, but are also more flexible and provide certain unique features of their own. The huge expense involved in procuring performance testing tools has urged the testing community to look for an open source alternative that goes easy on the budget.

The Grinder is an open source performance testing tool originally developed by Paco Gomez and Peter Zadrozny. It is a Java™ load testing framework that makes it easy to run a distributed test using many load injector machines.

1.1          Why Grinder?

  • The Grinder can be a viable open source option for performance testing. It is freely available under a BSD-style open-source license and can be downloaded from SourceForge.net: http://www.sourceforge.net/projects/grinder
  • Test scripts are written in the simple and flexible Jython language, which makes The Grinder very powerful (see the short script sketched after this list). As Jython is an implementation of the Python programming language written in Java, all of the advantages of Java are inherited as well, and no separate plug-ins or libraries are required
  • The Grinder makes use of a powerful distributed Java load testing framework that allows simulation of multiple user loads across different “agents” which can be managed by a centralized controller or “console”. From this console you can edit the test scripts and distribute them to the worker processes as per the requirement
  • It is a surprisingly lightweight tool, fairly easy to set up and run, so it takes almost no time to get started. Installation simply involves downloading The Grinder 3, which is distributed as two zip files, and configuring the recorder, console and agent batch files. The grinder.properties file can be customized to suit our requirements each time we execute a test
  • From a developer’s point of view, The Grinder is a convenient load testing tool because developers can test their own application, i.e. programmers get to exercise the interior tiers of their own application
  • The Grinder has a strong support base in the form of mailing lists hosted on SourceForge (http://sourceforge.net/)
  • The Grinder has excellent compatibility with Grinder Analyzer, which is also available under an open source license. The analyzer extracts data from the Grinder logs and generates reports and graphs covering response time, transactions per second, and network bandwidth used
  • Besides HTTP and HTTPS, The Grinder also supports other internet protocols such as POP3, SMTP, FTP, and LDAP
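To give a flavour of the scripting model, here is a minimal Jython worker script. This is only a sketch assuming the standard Grinder 3 script API; the URL and test name are illustrative.

```python
# Minimal Grinder 3 worker script (Jython); grinder.properties points to it
# via the grinder.script property.
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Register the request under a numbered Test so the console collects timing
# statistics for it.
home_page_test = Test(1, "Home page")
request = HTTPRequest()
home_page_test.record(request)

class TestRunner:
    """The Grinder creates one TestRunner instance per worker thread."""
    def __call__(self):
        # One iteration of a simulated user: fetch the (illustrative) home page.
        response = request.GET("http://localhost:8080/")
        # response.statusCode can be checked here to flag failed iterations.
```

On each agent, grinder.properties would typically point at this script and set properties such as grinder.processes, grinder.threads and grinder.runs to control how many worker processes and threads replay it.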

1.2          Difficulties that should be considered before opting for the Grinder

  • No user-friendly interface is provided for coding and scripting (parameterization, correlation, inserting custom functions, etc.) or for other enhancements at the test-script preparation stage
  • Because Jython scripting is used, annoying syntax errors creep into the code due to the rigid syntax and indentation rules
  • A lot depends on the tester’s ability to understand the interior tiers of the application, unlike with commercial tools, where a tester can follow the standard procedures and complete the job successfully without any insight into the application’s intricacies
  • It depends on Grinder Analyzer for analysis, report generation, etc.
  • The protocols supported by The Grinder are limited, whereas commercial tools such as LoadRunner and Silk Performer support a much wider range of web-based protocols. This is a major limiting factor, as web-based applications these days use multiple protocols for communication
  • Unlike LoadRunner and other vendor-licensed tools, it does not offer effective monitoring or in-depth diagnostic capabilities, and there is no separate user-friendly interface dedicated to analyzing test results
  • More support is required in the form of forums, blogs and user communities
In short, Tom Braverman sums it up brilliantly in a post to the Grinder users’ mailing list:
“I don’t try and persuade my clients that The Grinder is a replacement for LoadRunner, etc. I tell them that The Grinder is for use by the developers and that they’ll still want the QA team to generate scalability metrics using LoadRunner or some other tool approved for the purpose by management.”

For an open source testing tool, it has to be admitted that The Grinder has the features to hold its own amidst the commercial alternatives.

Thanks for reading this blog. Know more: Performance Testing & Quality Assurance

Wednesday, 1 February 2012

SIX Trends to FIX the QA Needs


The quality assurance landscape is undergoing a major transformation as QA organizations try to align their goals with the business goals of their companies.

QA has a tough balancing act to perform — tackling business risks as well as cost reduction and ROI concerns, while building agility in their organizations to respond to business goals.

Testing teams have long been viewed by IT departments as insurance, assuring themselves and their business partners about what is being delivered. Over the years, IT departments have spent more time and money trying to ascertain the delivery worthiness of code.

More than ever, business teams are asking today how testing teams could deliver better insights and greater value into what is being produced by development teams. The argument is that if testing teams could serve as quality gates throughout the development lifecycle, there would be fewer surprises towards the end, and fewer trade-offs between inadequate functionality and faster time-to-market. This thinking paves the way for the following emerging trends in QA.

Six key quality assurance trends are emerging:


1st Trend: Embracing Early Lifecycle Validation to Drive Down Costs and Improve Time-to-Market


The adoption of early lifecycle validation helps QA organizations to fix defects early in the lifecycle, thus significantly reducing risks and lowering total cost of ownership.

Methodologies gaining traction include:

* requirements/model-based testing
* early involvement and lifecycle testing
* risk-based testing
* risk-based security testing
* predictive performance modeling

2nd Trend: Increased Adoption of Test Lifecycle Management, Testing Metrics and Automation Solutions to Improve Overall Testing Processes


As QA organizations work to build greater quality into applications, they are adopting solutions such as test lifecycle management and automation technologies. “These solutions help to drive greater traceability throughout the testing lifecycle and to automate all stages of the lifecycle, with the aim of overall efficiencies and ROI.”

New frameworks and dashboards are emerging for defining, measuring and monitoring testing metrics. “All of these metrics seek to enable quick decision-making and to drive greater efficiency within existing or emerging testing processes/frameworks/solutions.”

3rd Trend: More Domain-based Testing


“Domain excellence is becoming a key factor in the testing industry, forcing QA organizations to build or buy point/platform-based solutions that combine core business processes and advanced testing frameworks.”

Examples of such testing creations include solutions for regulatory compliance for SOX and HIPAA; and for specialized processes such as POS, e-commerce, and banking.

4th Trend: The Emergence of Non-functional Testing Solutions Aimed at Enhancing the Customer Experience


The widespread use of e-commerce is forcing quality assurance organizations to deploy more solutions for measuring and enhancing end-customer experience.

“This is putting stress on the requirement for non-functional validation services and solutions”.

Key emerging areas include: testing for usability and accessibility, and predictive performance modeling.

5th trend: The Development of Testing Frameworks for Newer Technologies


Newer technologies such as SOA and cloud computing pose a different set of testing challenges to established technologies.

“Traditional models and frameworks of testing don’t work so well with these new technologies, so QA organizations are creating new models and frameworks to address the issues raised.”

6th Trend: Special Focus on ERP Testing


For years, organizations have implemented ERP packages without thinking much about the testing complexities that will emerge as the packages evolve in changing IT environments.

Consequently, these packages today require specialized skills and methodologies to support business goals, implementation testing, and smooth rollouts and upgrades.

“QA testing is one of the key pain-points in ERP implementations and upgrades today.”

To sum up, the question remains: where, when and how can these techniques be used? The benefits will clearly differ from one situation to another, depending among other things on the nature of the application. Needless to say, it would be very interesting to take up some of these techniques and discuss the practical implications of these emerging trends.

Concurrent User Estimation


Concurrent user estimation is an important step before performance validation and capacity planning, as it is directly related to the consumption of system resources. Therefore, before entering the load testing phase, we need to determine the peak user load, or maximum concurrent user load, in order to design a workload model. People often estimate the number of concurrent users by intuition or wild guesses, with little justification, and this often leads to improper performance testing and capacity planning. In this article we would like to share a very reliable method, proposed by Eric Man Wong, for calculating the number of concurrent users from estimated and justified parameters.

The method estimates the peak user load from the average number of concurrent users, which is calculated from the total number of user sessions, the average length of a user session, and the period of concern.

1. Estimating the average number of concurrent users

For calculating the average concurrent user load, we need the following parameters:
  • Period of concern (T): the time duration over which we count the user sessions.
  • Total number of user sessions (n): the number of user sessions that occur within the period of concern.
  • Average length of user sessions (L): the length of a user session is the amount of time a particular user takes to complete his activity (during which he consumes a certain amount of system resources). The average length of user sessions is simply the mean of all the individual session lengths, say L = (L1 + L2 + … + Ls) / s, where s is the total number of user sessions and Li is the length of the i-th session. The average length of a user session can be estimated by observing how a sample of users uses the system.

A user session is a time interval defined by a start time and an end time. Within a single session, we assume the user is in an active state, which means the user is consuming a certain share of the system’s resources; between the start time and the end time, one or more system resources are being held. The number of concurrent users at any particular time is defined as the number of user sessions into which that time instant falls. This is illustrated in the following example.

[Diagram: several horizontal line segments (user sessions) plotted along a time axis, with a vertical line drawn at time t0.]

Each horizontal line segment represents a user session. Since the vertical line at time t0 intercepts three user sessions, the number of concurrent users at time t0 is three. Now focus on the time interval from 0 to an arbitrary time instant T. The following result can be mathematically proven: the average number of concurrent users C over the interval [0, T] equals the total time spent in user sessions during [0, T] divided by T. Alternatively, if the total number of user sessions from time 0 to T equals n, and the average length of a user session equals L, then

C = (n × L) / T

[NOTE: In the above diagram, t0 represents a particular instant of time, whereas in the formula we use T, which is a duration, i.e. the time period between two instants of time, say t1 and t2.]
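As a quick sanity check, here is a small worked example; the figures are assumed purely for illustration and do not come from the original post.

```python
# Assumed figures: 3,000 user sessions observed over a one-hour period of concern,
# with an average session length of 180 seconds.
n = 3000       # total user sessions in the period of concern
L = 180.0      # average session length, in seconds
T = 3600.0     # period of concern, in seconds

C = n * L / T  # average number of concurrent users
print(C)       # 150.0
```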

2.  Estimating the peak number of concurrent users
For determining the peak user load we make use of some basic probability distribution theorems in the following manner.
We determine the probability of X concurrent users occupying the system at a particular time using the Poisson distribution, and then use the normal approximation to determine the peak user load.
We assume that the initiation of user sessions over time follows a Poisson process.

Under this assumption, it can be proven that the number of concurrent users at any time instant also has a Poisson distribution with mean C, where C is the average number of concurrent users found using the formula C = (n × L) / T.

It is well known that a Poisson distribution with mean C can be approximated by a normal distribution with mean C and standard deviation √C. Denoting the number of concurrent users by X, this implies that (X − C)/√C has the standard normal distribution with mean 0 and standard deviation 1. Looking up the statistical table for the normal distribution, we have the following result:

P(X < C + 3√C) ≈ 99.87%

The above means that the probability of the number of concurrent users being smaller than C + 3√C is 99.87%. That probability is large enough for most purposes, so we can approximate the peak number of concurrent users by C + 3√C.
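Continuing the assumed figures from the earlier sketch (C = 150), the peak estimate works out as follows.

```python
import math

C = 150.0                    # average concurrent users, from C = (n * L) / T
peak = C + 3 * math.sqrt(C)  # normal approximation to the Poisson upper tail
print(round(peak))           # roughly 187 concurrent users at peak
```

Under these assumptions, sizing the load test for about 187 virtual users covers roughly 99.87% of the expected peaks.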

The simplicity with which the peak concurrent user load can be derived from just the average concurrent user load makes this approach highly efficient. The Eric Man Wong method remains one of the most reliable ways to build a realistic and sensible workload model for the performance testing activity.

Read More About:  Concurrent User Estimation

Tuesday, 31 January 2012

CALL WAITING….. ??? – Costly Miss Costly Fix


On Jan. 15, 1990, around 60,000 AT&T long-distance customers tried to place long-distance calls as usual — and got nothing. Behind the scenes, the company’s 4ESS long-distance switches, all 114 of them, kept rebooting in sequence. AT&T assumed it was being hacked, and for nine hours the company and law enforcement tried to work out what was happening. In the end, AT&T uncovered the culprit: an obscure fault in the new software it had recently loaded onto its switches.

Here’s how the switches were supposed to work: If one switch gets congested, it sends a “do not disturb” message to the next switch, which picks up its traffic. The second switch resets itself to keep from disturbing the first switch. Switch 2 checks back on Switch 1, and if it detects activity, it does another reset to reflect that Switch 1 is back online. So far, so simple.

The month before the crash, AT&T tweaked the code to speed up the process. The trouble was, things became too fast. The first switch to overload sent two messages, one of which hit the second switch just as it was resetting. The second switch assumed that there was a fault in its CCS7 internal logic and reset itself. It put up its own “do not disturb” sign and passed the problem on to a third switch.

The third switch also got overwhelmed and reset itself, and so the problem cascaded through the whole system. All 114 switches in the system kept resetting themselves, until engineers reduced the message load on the whole system and the wave of resets finally broke.

In the meantime, AT&T lost an estimated $60 million in long-distance charges from calls that didn’t go through. The company took a further financial hit a few weeks later when it knocked a third off its regular long-distance rates on Valentine’s Day to make amends with customers.

HP SiteScope – Monitoring Made Easy


HP SiteScope software monitors the availability and performance of distributed IT infrastructures including servers, operating systems, network and Internet services, applications and application components.

HP SiteScope continually monitors more than 75 types of IT infrastructure through Web‑based architecture that is lightweight and highly customizable. With HP SiteScope, you gain the real‑time information you need to verify infrastructure operations, stay apprised of problems, and solve bottlenecks before they become critical. HP SiteScope is an important component of both the HP Operations Center software and the HP Business Availability Center software, providing agentless availability and performance monitoring and management.

How HP SiteScope works
  • HP SiteScope provides a centralized, scalable architecture.
  • HP SiteScope is implemented as a Java™ server application and runs on a single, central system as a daemon process.
  • HP SiteScope Java server supports three key functions: data collection, alerting, and reporting.
  • HP SiteScope enables system administrators to monitor the IT infrastructure remotely from a central installation, without the need for agents on the monitored systems.
  • HP SiteScope accomplishes remote monitoring by logging into systems as a user from its central server, which can run on Windows®, UNIX®, and Linux® platforms (a rough sketch of this agentless pattern follows this list).
  • HP SiteScope offers optional failover support to give you added redundancy and automatic failover protection in the event that an HP SiteScope server fails.
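To make the agentless idea concrete, here is a rough sketch of the pattern: log into the target remotely, sample a metric, and install nothing on it. This is not SiteScope code; it is only an illustration using the third-party paramiko SSH library, and the host name and credentials are hypothetical.

```python
import paramiko  # third-party SSH library, used here only to illustrate the pattern

def sample_load_average(host, username, password):
    """Log in over SSH and read the 1-minute load average from /proc/loadavg."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    try:
        _, stdout, _ = client.exec_command("cat /proc/loadavg")
        return float(stdout.read().decode().split()[0])
    finally:
        client.close()

# A real monitor would run a check like this on a schedule and raise an alert
# when the reading crosses a threshold (hypothetical host and credentials).
# print(sample_load_average("app-server-01", "monitor", "secret"))
```

An agent-based tool would instead ship a collector to every target; the trade-off the agentless approach makes is lower deployment cost in exchange for needing login credentials and network access to each monitored system.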
Advantages of HP SiteScope
  • Features an agentless, enterprise ready architecture that lowers Total Cost of Ownership
  • Monitors more than 75 different target types for critical health and performance characteristics
  • Generates daily, weekly, and monthly summaries of single and multiple monitor readings with built-in management server‑based reports
  • Serves as an integrated component of HP Operations Center and the monitoring foundation for HP Business Availability Center and HP LoadRunner
  • With HP Operations Manager, can deliver a combined agentless and agent-based monitoring solution with the breadth and depth you require
  • Gathers detailed performance data for IT infrastructure using agentless technology, with no software installed on your managed servers or devices
  • Enables easy installation and monitoring of IT infrastructure in less than one hour
  • Reduces the time and cost of maintenance by consolidating all maintenance to one central server
  • Reduces the time to make administrative and configuration changes by providing templates and global change capabilities
  • Enables quick and efficient operations management with automated actions initiated upon monitor status change alerts
  • Offers solution templates that include specialized monitors, default metrics, proactive tests, and best practices
  • Supports easy customization to provide standard monitoring of previously unmanaged or hard-to-manage systems and devices
 Read About: HP SiteScope