Monday, 27 February 2012

Tips for Handling Recording Problems In LoadRunner


While using LoadRunner for our Business Process Testing engagements, we have encountered a number of recording problems. Below we list some of the most common problems and the steps to follow to troubleshoot them.

Problem 1: NOT PROXIED error in the recording events log.

This error occurs when there is some sort of spyware software installed on the system.

To troubleshoot this issue, follow the steps below:

1. Use Process Explorer to get the list of DLLs hooked into Internet Explorer.

2. Compare the list with the dlls from a machine where you can record.

3. Run Ad-Aware software. To download this, visit http://www.lavasoftusa.com/software/adaware/. This usually eliminates the spyware programs.

4. Make sure to check carefully the processes running on the machine. If you find any suspicious DLL\exe, just google the name and see if it’s a known spyware. If so, uninstall the software related to this DLL\executable.

5. Sort the list of DLLs by ‘Path Name’ in Process Explorer. The DLLs to scrutinize carefully are the ones that are not located in LoadRunner\bin, c:\windows\system32, or c:\winnt\system32.

Problem 2: “Cannot connect to remote server” in the recording log

This error occurs when communication with the server on that particular port is being filtered; the traffic from this server and on that particular port should not be filtered.

To troubleshoot this issue, follow the steps below:

Try port mapping if it is an HTTPS site.
Go to Recording Options > Port Mapping and add the Target Server’s IP address, with Port Number 443, Service ID HTTP, and Connection Type SSL. Make sure “Test SSL” returns success; if it does not, change the SSL Ciphers and SSL Version until “Test SSL” succeeds.

Problem 3: “Connection prematurely aborted by the client” error in the recording log.

This error occurs when communication with the server on that particular port is being filtered; the traffic from this server and on that particular port should not be filtered.

To troubleshoot this issue, follow the steps below:

Try port mapping with Direct Trapping. Here is how to set up direct trapping:
a. Enable the Direct trapping option:
[HKEY_CURRENT_USER\Software\Mercury Interactive\LoadRunner\Protocols\WPlus\Analyzer]
“EnableWSPDebugging”=dword:00000001
“ShowWSPDirectTrapOption”=dword:00000001
“EnableWSPDirectTrapping”=dword:00000001

b. Before Recording Multi-Web, add port mapping entries for HTTP/TCP (and NOT SSL) connections.
Set their ‘Record Type’ from ‘Proxy’ to ‘Direct’.
c. Start recording.

Problem 4: Application sends a “Client Side” certificate to the server and waits for server authentication.

This error occurs because LoadRunner sits between the client and the server: it takes the certificate information from the client and passes it on to the server. To do this, LoadRunner needs the certificate in .pem format.

To troubleshoot this issue, follow the steps below:

Use OpenSSL to convert the client-side certificate to .pem format and add it to the port mapping in the recording options. First, export the certificate from the browser:

In Internet Explorer:

1. Choose Tools > Internet Options. Select the Content tab and click Certificates.
2. Select a certificate from the list and click Export.
3. Click Next several times until you are prompted for a password.
4. Enter a password and click Next.
5. Enter a file name and click Next.
6. Click Finish

In Netscape:

1. Choose Communicator > Tools > Security Info.
2. Click on the Yours link in the Certificates category in the left pane.
3. Select a certificate from the list in the right pane, and click Export
4. Enter a password and click OK.
5. Enter a filename and save the information.

The resulting certificate file is in PKCS12 format. To convert the file to PEM format, use the openssl.exe utility located in the LoadRunner bin directory. To run the utility:
Open a DOS command window.

Set the current directory to the LoadRunner bin directory.
Type openssl pkcs12 -in <input_file> -out <output file.pem>
Enter the password you chose in the export process.
Enter a new password (it can be the same as before).
Enter the password again for verification.
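For example, if the certificate was exported from the browser as client_cert.pfx (a hypothetical file name), the conversion would look something like this:

openssl pkcs12 -in client_cert.pfx -out client_cert.pem

You will be prompted first for the export password and then for a new PEM pass phrase, as described in the steps above.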

In Recording Options > Port Mapping > check the option for “Use specified client side certificate” and point to the saved .pem file.

Problem 5: The server sends a certificate to the client for authorization, and a connection can be established between the client and server only after the client authorizes it. This is mostly true for Java-based applications. The error we get is “No trusted certificate found”.

This error occurs because the recorder sits between the client and the server, so the certificate that the client receives comes from the recorder and must be trusted. The wplusca.crt certificate therefore needs to be present in the trusted certificate repository (the cacerts keystore) of the JRE used by the application.

To troubleshoot this issue, follow the steps below:

Import the wplusca.crt file into that certificate repository:

1. Locate keytool.exe executable (usually under JRE_ROOT\bin directory).
2. Add its directory to the PATH environment variable.
3. Locate cacerts file (with no extension) under JRE_ROOT\lib\security directory.
4. Copy the attached cacert.pem to this directory.
5. Make this the current directory in the command prompt.
6. Execute:
keytool -list -keystore cacerts
7. It will ask you for the password, reply:
“changeit”
8. It will list all the CA certificates installed (usually 25).
Notice the number of certificates.
9. Execute:
keytool -import -file cacert.pem -keystore cacerts -alias mercuryTrickyCA
10. It will ask you for the password, reply:
“changeit”
11. It will ask you if you are sure you want to add the certificate to the store, reply:
“yes”
12. Execute:
keytool -list -keystore cacerts
13. It will ask you for the password, reply:
“changeit”
14. It will list all the CA certificates installed.
The number of certificates should now be greater by one.

Problem 6: The recording log does not indicate any error, but the connection between the client and server ends abruptly. This can happen because of the ‘WPLUSCA’ certificate containing a private key.

This error occurs because the recorder sends the ‘WPLUSCA’ certificate, including its private key, to the server for authentication while recording.

To troubleshoot this issue, follow the steps below:

1. Go to LoadRunner\bin\certs and make a copy of the WPlusCA.crt file.
2. Open it in Notepad.
3. Delete the section between the “BEGIN RSA PRIVATE KEY” and “END RSA PRIVATE KEY” markers, including those marker lines (a small script sketch for this step follows these instructions).
4. Save the file.
5. Now right-click the file, choose “Install certificate”, and follow the on-screen instructions.
The recorder will now send a certificate without the private key, and if the server accepts it, communication will continue.
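If you prefer to script step 3, a minimal Python sketch such as the one below (file names are hypothetical) removes the private-key block and keeps everything else:

in_path = "WPlusCA_copy.crt"
out_path = "WPlusCA_nokey.crt"

skipping = False
kept_lines = []
with open(in_path) as src:
    for line in src:
        if "BEGIN RSA PRIVATE KEY" in line:
            skipping = True        # start of the private-key block
            continue
        if "END RSA PRIVATE KEY" in line:
            skipping = False       # end of the private-key block
            continue
        if not skipping:
            kept_lines.append(line)

with open(out_path, "w") as dst:
    dst.writelines(kept_lines)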

Problem 7: “Failed to resolve hostname” error in the recording log

If the VuGen machine is not able to resolve the server name given in the URL, this error is thrown in the recording log and Internet Explorer then crashes.

To troubleshoot this issue, follow the steps below:

Modify the hosts file on the VuGen machine, adding an entry that maps the server name to the server’s IP address.
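For example (the host name and IP address below are placeholders), an entry in C:\Windows\System32\drivers\etc\hosts would look like this:

10.0.0.25    appserver.example.com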
The problems described above are some of the most common ones encountered when using LoadRunner for performance testing. The troubleshooting steps mentioned will help you fix these issues if they occur during your performance testing assignment.

Thanks For Reading This Blog. Please Know More About: LoadRunner

Thursday, 23 February 2012

Capacity Planning in Performance Testing


What is Capacity Planning

Capacity Planning is the process of determining what type of hardware and software configuration is required to meet application needs. Capacity planning, performance benchmarks and validation testing are essential components of successful enterprise implementations. Capacity planning is an iterative process. A good capacity management plan is based on monitoring and measuring load data over time and implementing flexible solutions to handle variances without impacting performance.

The goal of capacity planning is to identify the right amount of resources required to meet service demands now and in the future. It is a proactive discipline with far-reaching impact, supporting:
• IT and business alignment, helping to show the cost and business need for infrastructure upgrades

• Hexaware's Consolidation and Virtualization Strategies, ensuring that consolidated real and virtual system configurations will meet service levels

Capacity Planning Approach

Capacity planning means planning for the efficient use of resources by applications during the development and deployment of an application, but also once it is operational. It involves understanding how different resources can be accessed simultaneously by different applications, and knowing whether that access happens in an optimal way. Large organizations and operational environments have high expectations of capacity planning.
This approach considers which types of configurations (clustered, unclustered, etc.) should be taken into consideration, which types of applications can run on the same server or cluster, and also what should be avoided when planning for capacity.

Challenges Faced

Capacity planning should be conducted when:

• Designing a new system
• Migrating from one solution to another
• Business processes and models have changed, thus requiring an update to the application architecture
• End user community has significantly changed in number, location, or function

Typical objective of capacity planning is to estimate:

• Number and speed of CPU Cores
• Required Network bandwidth
• Memory size
• Storage type and size

Key items influencing capacity:

• Number of concurrent users
• User workflows
• Architecture
• Tuning and implementation of best practices

Capacity planning is about how many resources an application uses, which implies knowing the system’s profile. For instance, suppose you have two applications, A and B, each known to use certain amounts of Central Processing Unit (CPU), memory, disk and network resources when it is the only application running on the machine, but you only have one machine. If application A uses only a little of one resource and application B uses much of the same one, this is a simple case of capacity planning. One must keep in mind, however, that when the applications are executed in parallel on the machine, the total resource usage is not a simple sum of what each one uses when run alone. There could, for instance, be contention for the same memory, which would make parallel execution impossible until the code is rewritten to allow it.

The process of determining what type of hardware and software configuration is required to adequately meet application needs is called capacity planning.
Because the needs of an application are determined, among other things, by the number of users, in other words the number of parallel accesses, capacity planning can also be defined as: how many users can a system handle before changes need to be made? Thus, when an application is deployed, one should consider not only how large it will be at first, but also how fast the number of users, servers and machines will grow, so that enough margin is left and the application does not have to be completely reworked because of, say, the addition of a single user.
To perform capacity planning, essential data is collected and analyzed to determine usage patterns and to project capacity requirements and performance characteristics. Tools are used to determine optimum hardware/software configurations.

Proposed Solution(s)

Bottlenecks, or areas of marked performance degradation, should be addressed while developing your capacity management plan. The objective of identifying bottlenecks is to meet your performance goals, not eliminate all bottlenecks. Resources within a system are finite. By definition, at least one resource (CPU, memory, or I/O) can be a bottleneck in the system. Planning for anticipated peak usage, for example, may help minimize the impact of bottlenecks on your performance objectives.
There are several ways to address system bottlenecks. Some common solutions include:

• Using Clustered Configurations
• Using Connection Pooling
• Setting the Max Heap Size on JVM
• Increasing Memory or CPU
• Segregation of Network Traffic



Using Clustered Configurations
Clustered configurations distribute workloads among multiple identical cluster member instances. This effectively multiplies the amount of resources available to the distributed process, and provides seamless failover for high availability.

Using Connection Pooling
Reusing existing database connections improves performance; you can control the number of connections, the timing of sessions and other parameters by modifying the connection strings.
Setting the Max Heap Size on JVM (Java Virtual Machines)
This is an application-specific tunable that trades off garbage collection times against the number of JVMs that can be run on the same hardware. Larger heaps are used more efficiently and often result in fewer garbage collections, while more JVM processes offer more failover points.
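For instance, on HotSpot-based JVMs the maximum heap size is set with the -Xmx flag; the value and jar name below are illustrative only:

java -Xmx2048m -jar your-application.jar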

Increasing Memory or CPU
Aggregating more memory and/or CPU on a single hardware resource allows localized communication between the instances sharing the same hardware. More physical memory and processing power on a single machine enables the JVMs to scale and run much larger and more powerful instances, especially
64-bit JVMs. Large JVMs tend to use the memory more efficiently, and Garbage Collections tend to occur less frequently. In some cases, adding more CPU means that the machine can have more instruction and data cache available to the processing units, which means even higher processing efficiency.
Segregation of Network Traffic
Network-intensive applications can introduce significant performance issues for other applications that use the network. Segregating the network traffic of time-critical applications from that of network-intensive applications, so that they get routed to different network interfaces, may reduce performance impacts. It is also possible to assign different routing priorities to the traffic originating from different network interfaces.

Business Benefits

• Increase revenue through maximum availability, decreased downtime, improved response times, greater productivity, greater responsiveness to market dynamics, greater return on existing IT investment

• Decrease costs through higher capacity utilization, more efficient processes, just-in-time upgrades, greater cost control

Future Direction/Long Term Focus

The capacity planning process produces a forecast or plan for the organization’s future. Capacity planning is a process for determining the optimal way to satisfy business requirements, such as forecasted increases in the amount of work to be done, while at the same time meeting service level requirements. Future processing requirements can come from a variety of sources. Inputs from management may include expected growth in the business, requirements for implementing new applications, IT budget limitations and requests for consolidation of IT resources.

Recommendations

The basic steps involved in developing a capacity plan are:

1. To determine service level requirements
a. Define workloads
b. Determine the unit of work
c. Identify service levels for each workload

2. To analyze current system capacity
a. Measure service levels and compare to objectives
b. Measure overall resource usage
c. Measure resource usage by workload
d. Identify components of response time

3. To plan for the future
a. Determine future processing requirements
b. Plan future system configuration

By following these steps, you can help ensure that your organization is prepared for the future and that service level requirements will be met using an optimal configuration.

Tuesday, 21 February 2012

The Grinder – An open source Performance testing alternative


Owing to cut-throat competition, IT companies are striving to go one step ahead of their competitors to woo prospective clients. Cutting down costs without compromising on quality has become an effective strategy. Open source tools not only promise to cut costs drastically, but are also more flexible and provide certain unique features of their own. The huge expense involved in procuring performance testing tools has urged the testing community to look for an open source alternative that would go easy on the budget.

The Grinder is an open source performance testing tool originally developed by Paco Gomez and Peter Zadrozny. It is a Java load testing framework that makes it easy to run a distributed test using many load injector machines.

1.1          Why Grinder?

  • The Grinder can be a viable open source option for performance testing. It is freely available under a BSD-style open-source license and can be downloaded from SourceForge.net (http://www.sourceforge.net/projects/grinder)
  • Test scripts are written in the simple and flexible Jython language, which makes them very powerful. As Jython is an implementation of the Python programming language written in Java, all of the advantages of Java are also available here, and no separate plug-ins or libraries are required (a minimal script sketch follows this list)
  • The Grinder makes use of a powerful distributed Java load testing framework that allows simulation of multiple user loads across different “agents” which can be managed by a centralized controller or “console”. From this console you can edit the test scripts and distribute them to the worker processes as per the requirement
  • It is a surprisingly lightweight tool, fairly easy to set up and run, and it takes almost no time to get started. Installation simply involves downloading and configuring the recorder, console and agent batch files. The Grinder 3 is distributed as two zip files, and the grinder.properties file can be customized to suit our requirements each time we execute a test
  • From a developer’s point of view, The Grinder is a preferred load testing tool because developers can test their own application themselves, i.e. programmers get to test the interior tiers of their own application
  • The Grinder tool has a strong support base in the form of mailing lists i.e. http://sourceforge.net/
  • The Grinder has excellent compatibility with Grinder Analyzer, which is also available under an open source license. The analyzer extracts data from the Grinder logs and generates reports and graphs covering response times, transactions per second, and network bandwidth used
  • Other than HTTP and HTTPS, The Grinder also supports internet protocols such as POP3, SMTP, FTP, and LDAP
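As an illustration of the Jython scripting mentioned above, a minimal Grinder 3 test script might look like the sketch below. The URL is a placeholder, and the details are offered as an assumption about the standard HTTP plugin API rather than a definitive recipe:

from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Wrap an HTTPRequest in a Test so the console can gather statistics for it
homePageTest = Test(1, "Home page")
request = homePageTest.wrap(HTTPRequest())

class TestRunner:
    # Each worker thread creates a TestRunner instance and calls it once per run
    def __call__(self):
        result = request.GET("http://testserver.example.com/")

The number of worker processes, threads and runs that execute such a script is controlled through grinder.properties (for example the grinder.processes, grinder.threads and grinder.runs settings), which is the customization mentioned earlier.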

1.2          Difficulties that should be considered before opting for the Grinder

  • No user-friendly interface is provided for coding, scripting (parameterization, correlation, inserting custom functions, etc.) and other enhancements at the test script preparation level
  • Annoying syntax errors do creep into the code because of the rigid syntax and indentation that Jython scripting requires
  • A lot depends on the tester’s ability to understand the interior tiers of the application, unlike commercial tools where the tester can follow standard procedures without much insight into the application’s internal complexity and still complete the job successfully
  • It is dependent on Grinder Analyzer for analysis, report generation, etc.
  • The protocols supported by The Grinder are limited, whereas commercial tools such as LoadRunner and Silk Performer support a much wider range of web-based protocols. This is a major limiting factor, as web-based applications these days often use multiple protocols for communication
  • Unlike LoadRunner and other vendor-licensed tools, it does not offer effective monitoring solutions or in-depth diagnostic capabilities. There is also no separate user-friendly interface dedicated to analyzing the test results
  • More support is required in the form of forums, blogs and user communities
In short, Tom Braverman sums it up brilliantly in a post to the Grinder mailing list:
“I don’t try and persuade my clients that The Grinder is a replacement for LoadRunner, etc. I tell them that The Grinder is for use by the developers and that they’ll still want the QA team to generate scalability metrics using LoadRunner or some other tool approved for the purpose by management”

For an open source testing tool, it has to be admitted that The Grinder has the features to hold its own amidst the commercial alternatives.

Thanks for reading this blog. Know more: Performance Testing & Quality Assurance

Wednesday, 1 February 2012

SIX Trends to FIX the QA Needs


The quality assurance landscape is undergoing a major transformation as QA organizations try to align their goals with the business goals of their companies.

QA has a tough balancing act to perform — tackling business risks as well as cost reduction and ROI concerns, while building agility in their organizations to respond to business goals.

Testing teams have long been viewed by IT departments as insurance, assuring themselves and their business partners about what is being delivered. Over the years, IT departments have spent more time and money in trying to ascertain the delivery worthiness of code.

More than ever, business teams today are asking how testing teams could deliver better insights into, and greater value from, what development teams produce. The argument is that if testing teams could serve as quality gates throughout the development lifecycle, there would be fewer surprises towards the end, and fewer tradeoffs and compromises between inadequate functionality and faster time-to-market. This thinking paves the way for the following emerging trends in QA.

Six key quality assurance trends are emerging:


1st Trend: Embracing Early Lifecycle Validation to Drive Down Costs and Improve Time-to-Market


The adoption of early lifecycle validation helps QA organizations to fix defects early in the lifecycle, thus significantly reducing risks and lowering total cost of ownership.

Methodologies gaining traction include:

* requirements/model-based testing
* early involvement and lifecycle testing
* risk-based testing
* risk-based security testing
* predictive performance modeling

2nd Trend: Increased Adoption of Test Lifecycle Management, Testing Metrics and Automation Solutions to Improve Overall Testing Processes


As QA organizations work to build greater quality into applications, they are adopting solutions such as test lifecycle management and automation technologies. “These solutions help to drive greater traceability throughout the testing lifecycle and to automate all stages of the lifecycle, with the aim of overall efficiencies and ROI.”

New frameworks and dashboards are emerging for defining, measuring and monitoring testing metrics. “All of these metrics seek to enable quick decision-making and driving greater efficiency within existing or emerging testing processes/frameworks/solutions”.

3rd Trend: More Domain-based Testing


“Domain excellence is becoming a key factor in the testing industry, forcing QA organizations to build or buy point/platform-based solutions that combine core business processes and advanced testing frameworks”.

Examples of such testing creations include solutions for regulatory compliance for SOX and HIPAA; and for specialized processes such as POS, e-commerce, and banking.

4th Trend: The Emergence of Non-functional Testing Solutions Aimed at Enhancing the Customer Experience


The widespread use of e-commerce is forcing quality assurance organizations to deploy more solutions for measuring and enhancing end-customer experience.

“This is putting stress on the requirement for non-functional validation services and solutions”.

Key emerging areas include: testing for usability and accessibility, and predictive performance modeling.

5th trend: The Development of Testing Frameworks for Newer Technologies


Newer technologies such as SOA and cloud computing pose a different set of testing challenges to established technologies.

“Traditional models and frameworks of testing don’t work so well with these new technologies so QA organizations are creating new models and frameworks to address the issues raised”.

6th Trend: Special Focus on ERP Testing


For years, organizations have implemented ERP packages without thinking much about the testing complexities that will emerge as the packages evolve in changing IT environments.

Consequently, today these packages require specialized skills and methodologies to facilitate the business goals, implementation testing, and smooth rollouts and upgrades of the packages.

“QA testing is one of the key pain-points in ERP implementations and upgrades today”.

To sum up, the question still remains: where, when and how can these techniques be used? The benefits will naturally differ from one situation to another, depending among other things on the nature of the application. Needless to say, it would be very interesting to consider some of these techniques and discuss the practical implications of these emerging trends.

Concurrent User Estimation


Concurrent user estimation is an important step before going for Performance Validation and capacity planning as it is directly related to consumption of system resources. Therefore, before entering into the load testing phase we need to determine the peak user load or the maximum concurrent user load for designing a workload model. People often estimate the number of concurrent users by intuition or wild guessing with little justification. This often leads to improper performance testing and capacity planning. In this article we would like to share a very reliable method proposed by Eric Man Wong to calculate the concurrent number of users using estimated and justified parameters.

The method involves estimating the peak user load by first calculating the average number of concurrent users, based on the total number of user sessions and the average length of the user sessions.

1. Estimating the Average number of concurrent users

For calculating the average concurrent user load, we need to find the following parameters,
  • Period of concern (T): It is the time duration for which we are calculating the total number of user sessions.
  • Total number of user sessions (n): The number of user sessions at the specified time duration
  • The average length of user sessions (L): The length of a user session is the amount of time that the particular user takes to complete his activity (during which he consumes a certain amount of the system resource). The average length of user sessions is simply the mean value of the session lengths of all the users, i.e. L = (l1 + l2 + ... + ls) / s, where l1 ... ls are the individual session lengths and s is the total number of user sessions. The average length of a user session can be estimated by observing how a sample of users uses the system.

A user session is a time interval defined by a start time and an end time. Within a single session, let us assume that the user is in an active state, which means that the user is consuming a certain percentage of the total system memory. Between the start time and the end time, one or more system resources are being held. The number of concurrent users at any particular time is defined as the number of user sessions into which that time instant falls. This is illustrated by the following example: picture a timeline on which each user session is drawn as a horizontal line segment.

Each horizontal line segment represents a user session. Since a vertical line drawn at time t0 intercepts three user sessions, the number of concurrent users at time t0 is equal to three. Let us focus on the time interval from 0 to an arbitrary time instant T. The following result can be mathematically proven: the average number of concurrent users over [0, T] equals the sum of the lengths of all the user sessions falling within [0, T], divided by T.
Alternatively, if the total number of user sessions from time 0 to T equals n, and the average length of a user session equals L, then the average number of concurrent users is

C = n × L / T

[NOTE: In the example above, t0 represents a particular instant of time, whereas in the formula T is a duration, i.e. the time period between two instants of time, say t1 and t2.]

2.  Estimating the peak number of concurrent users
For determining the peak user load we make use of some basic probability distribution theorems in the following manner.
We determine the probability of X concurrent users occupying the system at a particular time, using the Poisson distribution, and then use the normal distribution to determine the peak user load.

Assume that user sessions start according to a Poisson process. Under this assumption, it can be proven that the number of concurrent users X at any time instant also has a Poisson distribution with mean C,

P(X = k) = e^(-C) × C^k / k!

where C is the average number of concurrent users found using the formula C = n × L / T.
It is well known that the Poisson distribution with mean C can be approximated by the normal distribution with mean C and standard deviation √C. This implies that (X - C)/√C has the standard normal distribution with mean 0 and standard deviation 1. Looking up the statistical table for the normal distribution, we have the following result:

P(X < C + 3√C) ≈ 99.87%

This means that the probability of the number of concurrent users being smaller than C + 3√C is 99.87%. That probability is large enough for most purposes, so we can approximate the peak number of concurrent users by C + 3√C.
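As a quick illustration, the Python sketch below (with made-up input values) computes the average and peak number of concurrent users from the three parameters described above:

import math

# Example inputs (hypothetical values; replace with measured data)
T = 3600.0   # period of concern, in seconds (1 hour)
n = 1200     # total number of user sessions observed during T
L = 180.0    # average session length, in seconds

C = n * L / T                 # average number of concurrent users
peak = C + 3 * math.sqrt(C)   # approximate peak (99.87th percentile)

print("Average concurrent users:", round(C))     # -> 60
print("Peak concurrent users:", round(peak))     # -> about 83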

The simplicity with which we can determine the peak number of concurrent users, just from the average concurrent user load, makes this approach highly efficient. The Eric Man Wong method remains one of the most reliable ways to build a realistic and sensible workload model for the performance testing activity.

Read More About:  Concurrent User Estimation