Wednesday, 7 March 2012

Common Problems & Solutions For Performance Testing Flex Applications Using LoadRunner


This article lists common problems and solutions that performance engineers come across when testing Flex applications.

Problem #1 : An overlapped transmission error occurs when a Flex script is run for the first time from the Controller, although the same script works fine in VuGen.

Error -27740: Overlapped transmission of request to "www.url.com" for URL "http://www.url.com/ProdApp/" failed: WSA_IO_PENDING.

Solution : The transmission of data to the server failed. It could be a network, router, or server problem. The word Overlapped refers to the way LoadRunner sends data in order to get a Web Page Breakdown. To resolve this problem, add the following statement to the beginning of the script to disable the breakdown of the “First Buffer” into server and network time.

web_set_sockets_option("OVERLAPPED_SEND", "0");
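
For reference, a minimal sketch of where the statement goes, assuming a standard web Vuser with a vuser_init section (the surrounding function body is otherwise hypothetical):

vuser_init()
{
    /* Disable the breakdown of the "First Buffer" into server and network
       time so the Controller does not use overlapped sends. */
    web_set_sockets_option("OVERLAPPED_SEND", "0");

    return 0;
}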


Problem #2 : During script replay, VuGen crashes due to an mmdrv error: "mmdrv has encountered a problem and needs to close." Additional details: "Error: mmdrv.exe caused a Microsoft C++ Exception in module kernel32.dll at 001B:7C81EB33, RaiseException() +0082 byte(s)".

Solution : The cause of this issue is unknown. HP released a patch that can be downloaded from their site.

Problem #3 : AMF error: Failed to find request and response

Solution : The LoadRunner Web protocol has a mechanism that prevents large request bodies from appearing in the action files by placing the body in lrw_custom_body.h. In the AMF and Flex protocols, LoadRunner cannot handle these values and fails to generate the steps. Follow these steps to fix the problem:

1. Go to the “generation log”
2. Search for the highest value for “Content-Length”
3. Go to <LoadRunner installation folder>/config
4. Open vugen.ini
5. Add the following:
[WebRecorder]
BodySize=<size found in step (2)>
6. Regenerate the script
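
For example, if the largest Content-Length found in step 2 were 183264 (a purely hypothetical value), the entry added to vugen.ini in step 5 would read:

[WebRecorder]
BodySize=183264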

Problem #4 : There are duplicate AMF calls in the recording log as well as in the generated code.

Solution : The capture level may be set to both Socket and WinInet. Make sure that under Recording Options –> Network –> Port Mapping the capture level is set to WinInet only.

Problem #5 : On replay, a Flex script that has Flex_AMF and Flex_RTMP calls shows a mismatch between the request and the response in the tree view. The replay log shows that the correct calls are being made, but they are displayed incorrectly in the tree view (only the replay tree view is incorrect). Sometimes the previous or next Flex_AMF call is shown in place of the Flex_RTMP call.

Solution : This issue has been identified as a bug by R&D in LR 9.51 and LR 9.52. R&D issued a new flexreplay.dll which resolved the issue and will be included in the next Service Pack.

Problem #6 : Flex protocol script fails with “Error: Encoding of AMF message failed” or “Error: Decoding of AMF message failed”

Solution : The cause of this error is the presence of special characters (&gt;, &lt;, &amp;, etc.) in the Flex request. Enclose the affected value in a CDATA section. For example, change <firmName>XXXXXXX &amp; CO. INC.</firmName> in the script to <firmName><![CDATA[XXXXXXX &amp; CO. INC.]]></firmName>
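
If editing the request body by hand is error-prone, one alternative is to build the CDATA-wrapped value into a parameter with LoadRunner's lr_param_sprintf function and reference that parameter from the request body; the parameter name and sample value below are hypothetical:

/* Wrap a value containing special characters in a CDATA section and
   store it in the parameter {FirmNameCdata}. */
lr_param_sprintf("FirmNameCdata",
                 "<firmName><![CDATA[%s]]></firmName>",
                 "XXXXXXX & CO. INC.");

The Flex request body can then use {FirmNameCdata} in place of the raw element.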

Problem #7 : When creating a multi-protocol script that contains the FLEX and WEB protocols, VuGen sometimes closes automatically without any warning or error message. This happens when the Web protocol is set to HTML mode; in URL mode the crash does not occur. There is no error code except a generic Windows message stating that VuGen needs to close.

Solution : This issue can be seen on machines that are running Windows XP and using Mfc80.dll. Microsoft released a hotfix for the Windows-specific issue that can cause VuGen to close; refer to the Microsoft KB article linked below for the fix.
http://support.microsoft.com/kb/961894

Problem #8 : When recording a FLEX script, RTMP calls are not being captured correctly so the corresponding FLEX_RTMP_Connect functions are not generated in the script.

Solution : First, set the capture level to 'Socket level and WinINet level data' (Recording Options –> Network –> Port Mapping –> Capture level). If this does not help, record a FLEX + Winsock script. In the Port Mapping section, set the send-receive buffer size threshold to 1500 under the options. Create a new entry and select Service ID as SOCKET, enter the port (such as 2037, or whatever port the FLEX application uses for its connection), set Connection Type to Plain and Record Type to Proxy, and leave Target Server at the default value (Any Server).

Problem #9 : Replaying a Flex script containing a flex_rtmp_send() that has an XML argument string may result in the mmdrv process crashing with a failure in a Microsoft DLL.

Solution : The VuGen script generation functionality does not handle the XML parameter string within the function correctly. This results in the mmdrv process crashing during replay. If you have version 9.51, installing a specific patch (flex9.51rup.zip) or Service Pack 2 resolves the problem.

Problem #10 : During test executions in the Controller, the scripts sometimes throw the error 'Decoding of AMF message failed. Error is: Externalizable parsing failed'.

Solution : This is mostly due to a file-transfer problem. It is advisable to place the JAR files in a shared path common to all load agents.

Other Load Testing Tools That Support Flex


There are other commercial and open-source tools available that support Flex application testing. Some tools (for example, NeoLoad) have considerably better support for RTMP than LoadRunner. The way all these tools test a Flex application is quite similar: each tool has its own AMF/XML conversion engine, which serializes the binary data into a readable XML format.
Open Source
  • Data Services Stress Testing Framework
  • JMeter
Commercial Tools
  • Silk Performer by Borland
  • NeoLoad by Neotys
  • WebLOAD by RadView
Performance Improvement Recommendations


When it comes to improving the performance of an application, the first concern is to enhance its scalability for a specified hardware and software configuration.
  • In the case of Flex, scalability issues derive from the fact that BlazeDS is deployed in a conventional Java servlet container, so the performance and scalability of BlazeDS depend on the number of concurrent connections supported by the server (Tomcat, WebSphere, WebLogic, etc.). BlazeDS runs in a servlet container, which maintains a thread pool.
  • Each thread is assigned to a client request and returns to the reusable pool after the request is processed. When a client request uses a thread for a long duration, the thread remains locked by that client until the request is processed. So the number of concurrent users BlazeDS can handle depends on the number of threads the servlet container can hold.
  • While BlazeDS is preconfigured with just 10 simultaneous connections, this can be increased to several hundred; the actual number depends on the server's threading configuration, the CPU and the size of the JVM heap. This number is also affected by the number of messages processed by the server per unit of time and by the size of those messages.
  • Tomcat or WebSphere can support up to several hundred users, and with any servlet container that supports Servlet 3.0, BlazeDS can be used in more demanding applications that require support for thousands of concurrent users.
Based on our project experience in performance testing Flex applications using LoadRunner, we have pointed out some of the common problems that might arise, along with solutions to troubleshoot the errors if they occur. This should save you a lot of time.

Thanks for reading this blog. Know more about: Flex

Log Options In LoadRunner Runtime Settings


As a part of the business process testing activity, it is a tester's role to debug all the scripts before they reach the execution phase. In LoadRunner we can make use of the logging options provided in the runtime settings for this purpose. They allow us to select the level of detail about script playback that is displayed in the replay log.

  • There are two types of logs available in LoadRunner to specify the level of information to be logged during replay:
1. Standard Log
2. Extended Log

Standard Log
  • A standard log displays details about the functions executed and the messages issued during replay (see the sketch below)
  • This can be used to validate whether the functions are executed as expected
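
For instance, messages written with lr_output_message appear in the replay log at the standard level alongside the executed functions; a minimal sketch (the parameter name below is hypothetical):

Action()
{
    /* This message appears in the replay log at standard logging level,
       together with the entries for the functions executed after it. */
    lr_output_message("Starting iteration for user %s",
                      lr_eval_string("{pUserName}"));

    return 0;
}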
Extended Log
  • This is used for logging not only the function calls but also additional details such as parameter substitution, request headers, request body, response headers and the response body. The extended log has three levels:
1. Parameter Substitution

It adds the details about the parameters substituted and the dynamic values that were captured.

2. Data returned by server

If this option is selected, the response header and response body of each request are included in the replay log, i.e. it captures the data returned from the server.

3. Advanced trace

With this option enabled, the replay log looks almost like a generation log, as it contains details about every request sent to the server and every response received from it.
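
The same log classes can also be switched on around just the section being debugged by calling lr_set_debug_message from the script; a minimal sketch, assuming the standard LoadRunner C API constants (the web_url step shown is hypothetical):

/* Raise logging to extended log + parameter substitution for one step only. */
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS,
                     LR_SWITCH_ON);

web_url("ProdApp", "URL=http://www.url.com/ProdApp/", LAST);

/* Turn the extra logging back off. */
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS,
                     LR_SWITCH_OFF);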

Manual Correlation with extended log
  • The extended logging feature is preferred for debugging a script, as it lets us inspect the values that need to be correlated (see the sketch below)
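
As an illustration of what the manual correlation looks like once the dynamic value has been located in the extended log, the following sketch captures it with web_reg_save_param; the boundaries, parameter name and URL are hypothetical:

/* Register the capture before the request whose response contains the value. */
web_reg_save_param("SessionID", "LB=sessionId=", "RB=\"", LAST);

web_url("Login", "URL=http://www.url.com/ProdApp/login", LAST);

/* Later requests can refer to the captured value as {SessionID}. */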
To conclude, logging the function calls, requests and responses is helpful for debugging a script, but it consumes system resources heavily, so logging should be disabled for long scenarios. When running an execution with a very large number of Vusers, it is recommended to disable logging altogether by unchecking the Enable Logging option.


Monday, 5 March 2012

Tips To Record A Script In OpenSTA Using Mozilla Firefox


It is generally thought that OpenSTA can record a script only with the Internet Explorer and Netscape Navigator browsers. Through this article you will learn how to record a script in OpenSTA using Mozilla Firefox.
In my project we had a scenario that was not compatible with the IE browser, but we were supposed to automate that script. After some R&D we finally found a solution, which I am sharing through this post. The details to overcome this issue are given below.
OpenSTA records only the HTTP requests and responses. Any browser is just an HTTP communication vehicle, so we can use any browser that supports HTTP 1.0/1.1 to record a script.
Configuration details to be followed in OpenSTA
1. Open the Script Modeler in OpenSTA tool.
2. In Option → Browsers → Select any Default Browser.
3. Option → Gateway (Please keep the below mentioned setting in the Gateway)
  • In Capture: Remote
  • Administration Port: 3000
  • Port: 81
  • In Proxy:-
    • Address: proxy IP
    • Port: proxy Port
    • Secure: Machine IP; Port: 81
Configuration details to be followed in Mozilla Browser
  • After changing the above configuration details in OpenSTA, a few more changes must be made in the proxy settings of the Mozilla browser, as detailed below.
    • Open the Mozilla Browser.
    • Tools → Options → Advanced → Network → Settings.
    • Select the “Manual Proxy Configuration” radio button.
    • In HTTP Proxy: Machine IP ; Port: 81
After changing all the above configuration details we can start to record the script in Mozilla Firefox using OpenSTA.
However, after recording the script, it is not possible to view the URL detail screens in the HTML view; you will receive the error message "The webpage is unavailable because you are offline". To overcome this issue, apply the configuration below in the Mozilla browser.
  • Open the Mozilla Browser
  • In the address bar type "about:config"
  • In the page that is displayed, filter by the keyword "network".
  • Right click the “network.http.accept-encoding” and click Modify.
  • Remove the “gzip,deflate” field and click Ok.
By following all the above mentioned steps you can overcome this issue and record a script in Mozilla Firefox using OpenSTA.

Ensuring The Scalability Of Complex Applications Using Rational Performance Test Tool In Performance Test Automation


In the IT market there are any number of tools, both commercial (LoadRunner, Rational Performance Tester, ...) and open source (Grinder, OpenSTA, ...), to measure the performance and scalability of web-based and desktop-based applications. It is up to the performance test engineer or test consultant to choose an appropriate performance testing tool to measure the performance and scalability of the application and to detect and resolve bottlenecks in the application across the web servers, application servers and database servers.

In this paper, I focus on the primary objective of using a commercial tool, Rational Performance Tester (RPT), for end-to-end performance testing of web-based and desktop-based applications.

Introduction:
IBM Rational Performance Tester (RPT) is a performance test creation, execution and analysis tool that helps teams validate the scalability and reliability of their web and Enterprise Resource Planning (ERP) based applications before deployment.
  • RPT is a load and performance testing solution for teams concerned about the scalability of web-based applications.
  • Combining ease of use with deep analysis capabilities, RPT simplifies test creation, load generation, and data collection to help ensure that applications can scale to thousands of concurrent users.
  • It combines a simple-to-use test recorder with advanced scheduling, real-time reporting, automated data variation and a highly scalable execution engine to help ensure that applications are prepared to handle large user loads.
Key Highlights on RPT tool:
  • Creates code free tests quickly without programming knowledge
  • Executes multiuser performance testing for Microsoft Windows, Linux, UNIX and mainframe environments with an available Windows and Linux software-based user interface
  • Supports load testing against a broad range of applications such as HTTP, SAP, Siebel, Entrust, Citrix and SOA/Web Services
  • Supports Windows, Linux and z/OS as distributed controller agents
  • Rendered HTML view of Web pages visited during test recording
  • Java code insertion for flexible test customization
  • Reports in real time to enable immediate recognition of performance problems and renders an HTML browser-like view of Web pages in the test
  • Enables Windows, Linux and mainframe technology – based test execution
  • Provides a rich, tree-based test editor that delivers both high level and detailed views of tests
  • Collection and visualization of server resource data
  • Automates identification and management of dynamic server responses
  • Automates test data variation – Data substitution with data pools
  • High extensibility with Java coding: custom coding should be supported in a well known standard language that is platform independent and widely available. Java makes an ideal choice for any tool’s extensibility language
  • Built-in Verification Points (VPs)
  • Collects and integrates server resource data with real-time application performance data
  • A low memory and processor footprint that enables large, multi-user tests with limited hardware resources
  • Accurately emulates large population user transactions with minimal hardware
  • Runs large scale performance tests to validate application scalability
  • Provides no code testing, point and click wizards, report readability, usability and customization
  • Delivers both high-level and detailed views of tests with a rich, tree-based test editor
  • Enables large, multi-user tests with minimal hardware resources
  • Windows and Linux-based user interface  and test execution agents
  • Graphical test editing and workload modeling
  • Offers flexible modeling and emulation of diverse user populations
  • Real-time monitoring and reporting
  • Report customization and export capability
  • Programming extensibility with Java custom code
  • Real-time reporting for immediate identification of the presence and cause of application performance bottlenecks
  • Diagnoses the root cause of performance bottlenecks by quickly identifying slow performing lines of code and integrates with Tivoli composite application management solutions to identify the source of production performance problems
  • Leverage existing assets for load and performance testing
Advantages of RPT:

High productivity with no programming: Basic skills required to use a performance testing tool should only include knowledge of how to schedule and scale the user scenarios and where to put in the data variation for the application being tested.

Rapid adoption: Rational Performance Tester contains features explicitly designed to enable you to quickly build, execute and analyze the impact of load on your application environment.

Robust analysis and reporting: Individual page or screen response times can be decomposed into response times for individual page elements (for example, JPGs, Java Server Pages, Active Server Pages), which helps testers identify the elements responsible for poor page or screen response time.
  • The ability to insert custom Java™ code that can be executed at any point during test execution supplements the automated data correlation and data generation capabilities. This permits advanced data manipulation and diagnostic techniques.
  • During test execution, system resource information such as CPU and memory utilization statistics can be collected from remote servers and correlated with response time and throughput data.
  • Collected resource data is crucial for diagnosing which remote system—router, Web server, application server, database server, etc.—is responsible for detected delays, as well as for pinpointing the component (for example, CPU, RAM, disk) that is causing the bottleneck.
Lowered cost of performance testing: RPT has a low processor and memory footprint when emulating multiple users. As a result, high levels of scalability can be achieved even if the team does not have access to extensive computing power. In addition, test execution and system information retrieval can occur on Microsoft Windows, UNIX and Linux software-based machines, optimizing a team's usage of existing hardware resources. It provides automation support for essentially all aspects of software development.

Customized reporting with all classes of data: As the number of data points has grown from 100,000 to 100 million individual measurements, a design imperative is to reduce the data in real time, in a distributed fashion, during the test. This principle, coupled with the increased complexity of the tested system architectures, yields a need for complete flexibility in reporting and correlating a variety of data sources.

In this paper, I have discussed the need for using the Rational Performance Tester (RPT) tool in performance testing, along with its key advantages, such as high productivity with no programming, rapid adoption, robust analysis and reporting, a lowered cost of performance testing and customized reporting with all classes of data. Once again, it is up to the performance test engineer or test consultant to choose an appropriate performance testing tool to measure the performance and scalability of the application and to detect and resolve bottlenecks in the application across the web servers, application servers and database servers.

Thanks For Reading This Blog. Visit Rational Performance Test To Know More.

Application Performance Testing In Production Environment


Performance testing in production is not practiced widely owing to many risks involved, which include taking the entire production environment offline thus affecting availability, taking part of the production environment offline thus affecting performance, and the risk of updating production data during the test.

However, for applications running on large infrastructure for which there is no production-equivalent test environment (e.g. Superdome servers, farms of servers, etc.), it is not uncommon to reuse the system resources of production and point them to a test database sitting on a disk subsystem equivalent to production's. This typically happens in a period of low load (weekends, holiday season, etc.) when the actual production application can be temporarily migrated to smaller hardware such as a passive environment or a DR site.

Generally, the production database is almost never used in a test, due to the risk of test data getting mixed with real data. In the rare cases where a production database is leveraged for a test, it is used purely for read-only or view transactions.

One practice that is somewhat prevalent for applications going live for the first time is to use the production environment for performance testing as part of the UAT. A related practice is to run a pilot test with real users. Here, the production release is opened to users in stages; in the first stage a limited number of users are asked to use the system for a defined period of time before the application is opened up to the entire user community. Occasionally such an approach may also be used to observe the impact on system performance of imposing a fraction of the full expected load (say, the users of one full department). This option is chosen in cases where the performance test scenario in the test environment may not have fully captured real user behavior, or where we also wish to benchmark the performance test results.

To summarize, the following precautions and best practices have to be adopted for application performance testing in a production environment:
  • The test has to be scheduled in non-working hours when the live production traffic is expected to be nearly zero.
  • Part of the production environment (say one application server node from a large cluster of application servers) may be isolated for the purpose of using it in a performance test with read-only usage of the production database.
  • Approvals from all relevant stakeholders and directors need to be taken prior to the test.
  • A conference call has to be arranged for the duration of the test, in which all the stakeholders would participate.  All project teams would monitor their systems during the test, and if any system issues are found to occur, the test has to be stopped immediately.
To conclude: the industry is striving towards steady-state performance testing, which demands higher test accuracy, and that accuracy is largely dependent on the environment setup and network traffic simulation. Extrapolating test results and metrics to the production environment is also not a convincing basis for decisions.

In that case, it is not a bad idea to leverage the production environment for performance validation, which ensures highly accurate performance testing and allows actual performance bottlenecks that might be encountered on go-live day to be fixed at an earlier stage.

I would agree that it is not easy to accomplish production performance testing, but it all depends on how cautiously we plan and how wisely we execute the test in production to eliminate the risks associated with it.