Showing posts with label Performance Testing. Show all posts

Tuesday, 20 November 2012

Oracle R12 Applications Using LoadRunner


The Challenge
We recently load tested our first Oracle R12 release (all modules of Oracle ERP R12, deployed nationwide and internationally). The company was upgrading from 11.5.8 to R12 largely for performance reasons.
We knew we’d be “cutting new ground” with LoadRunner on R12. This became evident with our first test record-and-playback, which failed even after finding and fixing all the missing correlations. We raised a ticket with HP (SR #4622615067), and with their initial help we overcame, step by step, all the nuances of coaxing VuGen to record successfully, then creatively worked around its inability to recognize the full set of identifiers for a new Java ITEMTREE object.

Friday, 24 August 2012

Performance Center Best Practices


For performance testing we have started using HP Performance Center because of the many advantages it provides to the testing team. Below are some best practices to follow when using Performance Center.

Architecture – Best Practices

  • Hardware Considerations
    • CPU, Memory, Disk sized to match the role and usage levels
    • Redundancy added for growth accommodation and fault-tolerance
    • Never install multiple critical components on the same hardware
  • Network Considerations
    • Localization of all PC server traffic - Web to Database, Web to File Server, Web to Utility Server, Web to Controllers, Controller to Database, Controller to File Server, Controller to Utility Server.
    • Separation of operational and virtual user traffic – PC operational traffic should not share same network resources as virtual user traffic – for optimal network performance.
  • Backup and Recovery Considerations
    • Take periodic backups of the Oracle database and file system (\\<fileserver>\LRFS)
    • Backups of PC servers and hosts are optional
  • Monitoring Considerations
    • Monitoring services (e.g., SiteScope) should be employed to manage the availability and responsiveness of each PC component

Configuration – Best Practice

  • Set ASP upload buffer to the maximum size of a file that you will permit to be uploaded to the server.
    • Registry: HKLM\SYSTEM\CurrentControlSet\Services\w3svc\Parameters
  • Modify MaxClientRequestBuffer
    • Create it as a DWORD if it does not exist
    • E.g., 2097152 is 2 MB
  • Limit access to the PC File System (LRFS) for security
    • Performance Center User (IUSR_METRO) needs “Full Control”
  • We recommend 2 LoadTest web servers when
    • Running 3 or more concurrent runs
    • 10 or more users are viewing tests
  • Load balancing requires an external, web-session-based load balancer
  • In Internet Explorer, set “Check for newer versions of stored pages” to “Every visit to the page”
    • NOTE: This should be done on the client machines that are accessing the Performance Center web sites
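The MaxClientRequestBuffer change described above can be captured in a .reg file so it is applied consistently across servers. This is a sketch based on the registry path and example value given above (0x200000 hex = 2097152 bytes = 2 MB); adjust the value to the maximum upload size you intend to permit.

```
Windows Registry Editor Version 5.00

; ASP upload buffer for Performance Center web servers (value is 2 MB)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w3svc\Parameters]
"MaxClientRequestBuffer"=dword:00200000
```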

Script Repository – Best Practice

  • Use VuGen integration for direct script upload
  • Ensure dependent files are within zip file
  • Re-configure script with optimal RTS
  • Validate script execution on PC load generators
  • Establish meaningful script naming convention
  • Clean-up script repository regularly

Monitor Profile – Best Practice

  • Avoid information overload
    • Min-Max principle – Minimum metrics for maximum detection
  • Consult performance experts and developers for relevant metrics
    • Standard Process Metrics (CPU, Available Memory, Disk Read/Write Bytes, Network Bandwidth Utilization)
    • Response Times / Durations (Avg. Execution Time)
    • Rates and Frequencies (Gets/sec, Hard Parses/sec)
    • Queue Lengths (Requests Pending)
    • Finite Resource Consumption (JVM Available Heap Size, JDBC Pool’s Active Connections)
    • Error Frequency (Errors During Script Runtime, Errors/sec)

Load Test – Best Practice

  • General
    • Create a new load test for any major change in scheduling logic or script types
    • Use versioning (by naming convention) to track changes
  • Scripts
    • When scripts are updated with new run-logic settings, remove and reinsert the updated script in the load test
  • Scheduling
    • Each ramp-up queries the Licensing (Utility) Server and the LRFS file system; do not ramp at intervals of less than 5 seconds
    • Configure the ramp-up quantity per interval to match the available load generators
    • Do not run (many/any) users on the Controller

Timeslots – Best Practice

  • Scheduling
    • Always schedule time slots in advance of load test
    • Always schedule extra time (10-30 minutes) for large or critical load tests
    • Allow for gaps between scheduled test runs (in case of emergencies)
  • Host Selection
    • Use automatic host selection whenever possible
    • Reserve manual hosts only when specific hosts are needed (because of runtime configuration requirements)
Following these practices will help you use Performance Center smoothly, and will save you considerable time by avoiding the issues that commonly arise when they are neglected.

Thanks for reading this blog.

Wednesday, 7 March 2012

LoadRunner Runtime Settings – Multithreading Options


Performance testers confront a classic dilemma when they decide to execute a script in LoadRunner: should the Vuser run as a thread or as a process?

1.1  Difference between a thread and a process 

A Process

  • Let us consider a process as an independent entity or unit that has an exclusive virtual address space for itself.
  • A process can interact with another process only through IPC (inter process communication). More than one process could run at any given time but no two processes can share the same memory address space.
For example, when we open an application such as Notepad on Windows, a notepad.exe process appears in Task Manager under the Processes tab. If we open a second Notepad, a new notepad.exe process is displayed. Each process has its own virtual address space.

A Thread

  • Threads are contained inside a process. More than one thread can exist within the same process, and they share the process’s memory space.
  • Because threads share memory, when one thread is idle another thread can utilize the resource, which achieves a faster execution rate.
  • Threads can also access common data structures if required.

1.2  Multithreading

While defining the runtime settings in LoadRunner, we have to choose between running the Vuser as a thread or a process. “The Controller uses a driver program (such as mdrv.exe or r3Vuser.exe) to run your Vusers. If you run each Vuser as a process, then the same driver program is launched (and loaded) into the memory again and again for every instance of the Vuser.” – LoadRunner User Guide. The driver program mentioned is nothing but a process that runs when we generate a Vuser load.

Runtime Settings


1.3  Run Vuser as a process – Disable Multithreading

  • If we choose the first option and run ‘n’ Vusers as processes, we will see ‘n’ mmdrv.exe processes running on the load generator machine. Each of these processes consumes its own memory space.
  • When this option is selected, each Vuser process establishes at least one connection with the web/app server.

1.4  Run Vuser as a thread – Enable Multithreading

  • We can instead choose to run Vusers as threads if we want to go easy on resources. This way, more Vusers can be generated with the same amount of available load generator memory.
  • When this option is selected, Vuser threads can share open connections between them (connection pooling). Opening and maintaining a connection for each Vuser process is resource-consuming, and with connection pooling the time a user must wait to establish a connection to the database is also reduced.
  • This is surely an advantage, right? Wrong. The argument is that this is not an accurate replication of the user load: in a real scenario a single connection is created for each user, and to achieve this we have to run each Vuser as a process.
  • There are other factors, such as thread safety, to consider. When we run a large number of Vusers as a single multithreaded process, the Vusers run as threads that share the same memory, so one thread may impact, interfere with, or modify data elements of another thread, posing serious thread-safety concerns.
  • Before selecting either option, we need to determine load generator capacity (available system resources and memory space) as well as the thread safety of the protocols used.

Common Problems & Solutions For Performance Testing Flex Applications Using LoadRunner


This article lists the common problems & solutions that performance engineers come across when testing flex applications.

Problem #1 : Overlapped transmission error occurs when a flex script is run for the first time from controller. But the same script works fine in VuGen.

Error -27740: Overlapped transmission of request to “www.url.com” for URL“http://www.url.com/ProdApp/” failed: WSA_IO_PENDING.

Solution : The transmission of data to the server failed. It could be a network, router, or server problem. The word Overlapped refers to the way LoadRunner sends data in order to get a Web Page Breakdown. To resolve this problem, add the following statement to the beginning of the script to disable the breakdown of the “First Buffer” into server and network time.

web_set_sockets_option("OVERLAPPED_SEND", "0");


Problem #2 : During script replay Vugen crashes due to mmdrv error. mmdrv has encountered a problem and needs to close. Additional details Error: mmdrv.exe caused an Microsoft C++ Exception in module kernel32.dll at 001B:7C81EB33, RaiseException () +0082 byte(s)

Solution : The cause of this issue is unknown. HP released a patch that can be downloaded from their site.

Problem #3 : AMF error: Failed to find request and response

Solution : The LoadRunner web protocol has a mechanism that prevents large request bodies from appearing in the action files by placing the body in lrw_custom_body.h. In the AMF and Flex protocols, LoadRunner cannot handle these values and fails to generate the steps. Follow these steps to fix the problem:

1. Go to the “generation log”
2. Search for the highest value for “Content-Length”
3. Go to <LoadRunner installation folder>/config
4. Open vugen.ini
5. Add the following:
[WebRecorder]
BodySize=<size found in step (2)>
6. Regenerate the script

Problem #4 : There are duplicate AMF calls in the recording log as well as in the generated code.

Solution : The capture level may be set to Socket and WinInet. Under Recording Options –> Network –> Port Mapping, make sure the capture level is set to WinInet (only).

Problem #5 : A Flex script which has Flex_AMF and Flex_RTMP calls, on replay will have mismatch in the tree view between the request and the response. After looking in the replay log we can see that correct calls are being made but they are being displayed incorrectly in the tree view (only the replay in tree view is incorrect). Sometimes it shows the previous or next Flex_AMF call in the tree view in place of the Flex_RTMP call.

Solution : This issue has been identified as a bug by R&D in LR 9.51 and LR 9.52. R&D issued a new flexreplay.dll which resolved the issue and will be included in the next Service Pack.

Problem #6 : Flex protocol script fails with “Error: Encoding of AMF message failed” or “Error: Decoding of AMF message failed”

Solution : The cause of this error is the presence of special characters (&gt;, &lt;, &amp;, etc.) in the Flex request. Send the request enclosed in CDATA. Example: change <firmName>XXXXXXX &amp; CO. INC.</firmName> in the script to <firmName><![CDATA[XXXXXXXXXXXX &amp; CO. INC.]]></firmName>

Problem #7 : When creating a Multi Protocol Script that contains FLEX and WEB protocols sometimes VuGen closes automatically without any warning/error message displayed. This happens when the Web protocol is set to be in HTML Mode. When in URL mode the crash does not occur. There is no error code except a generic Windows message stating the VuGen needs to close.

Solution : This issue can be seen on Machines that are running on Windows XP, and using Mfc80.dll. Refer to Microsoft KB Article in the link below that provides a solution for the same. Microsoft released a hot fix for Windows specific issue that can cause VuGen to close.
http://support.microsoft.com/kb/961894

Problem #8 : When recording a FLEX script, RTMP calls are not being captured correctly so the corresponding FLEX_RTMP_Connect functions are not generated in the script.

Solution : First, set the port mapping (Recording Options –> Network –> Port Mapping –> set Capture Level to ‘Socket level and WinINet level data’). If this doesn’t help, record a FLEX + Winsock script: in the Port Mapping section, set the Send-Receive buffer size threshold to 1500 under the options; then create a new entry, select Service ID as SOCKET, enter the port (such as 2037, or whatever port the FLEX application uses for its connection), set Connection Type to Plain and Record Type to Proxy, and leave Target Server at the default value (Any Server).

Problem #9 : Replaying a Flex script containing a flex_rtmp_send() that has an XML argument string may result in the mmdrv process crashing with a failure in a Microsoft Dynamics.

Solution : The VuGen script generation functionality does not handle the XML parameter string within the function correctly. This results in the mmdrv process crashing during replay. If you have the 9.51 version, installing a specific patch (flex9.51rup.zip) or service pack 2 will resolve the problem

Problem #10 : During the test executions in controller, sometimes the scripts throw an error ‘Decoding of AMF message failed. Error is: Externalizable parsing failed’.

Solution : This is mostly due to a file transfer problem. It is advised to place the jar files in a shared path common to all load agents.

Other Flex Supported Load Testing Tools


There are other commercial and open-source tools available that support Flex application testing. Some tools (for example, NeoLoad) have considerably better support for RTMP than LoadRunner. The way these tools test a Flex application is quite similar: each tool has its own AMF/XML conversion engine, which serializes the binary data into a readable XML format.
Open Source
  • Data Services Stress Testing Framework
  • JMeter
Commercial Tools
  • Silk Performer by Borland
  • NeoLoad by Neotys
  • WebLOAD by RadView
Performance Improvement Recommendations


When it comes to performance improvement of an application, our first concern would be to enhance the scalability for a specified hardware & software configuration.
  • In the case of Flex, scalability issues derive from the fact that BlazeDS is deployed in a conventional Java servlet container, so the performance/scalability of BlazeDS also depends on the number of concurrent connections supported by the server, such as Tomcat, WebSphere, or WebLogic. BlazeDS runs in a servlet container, which maintains a thread pool.
  • Each thread is assigned to a client request and returns to a reusable pool after the request is processed. When a client request holds a thread for a long duration, the thread remains locked by that client until the request completes. So the number of concurrent users BlazeDS can serve depends on the number of threads the servlet container can hold.
  • While BlazeDS is preconfigured with just 10 simultaneous connections, this can be increased to several hundred; the actual number depends on the server’s threading configuration, CPU, and the size of its JVM heap memory. It can also be affected by the number of messages the server processes per unit of time and by the size of those messages.
  • Tomcat or WebSphere can support up to several hundred users, and with any servlet container that supports Servlet 3.0, BlazeDS can be used in more demanding applications that require support for thousands of concurrent users.
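As one illustration of the servlet container threading configuration mentioned above, a Tomcat HTTP connector's worker thread pool can be sized in conf/server.xml. This is a hedged sketch; the attribute values shown are illustrative, and the right numbers depend on your Tomcat version, hardware, and JVM heap size.

```xml
<!-- conf/server.xml: raise the worker thread pool so BlazeDS can serve
     more concurrent client requests (values here are illustrative only) -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="400"
           acceptCount="100"
           connectionTimeout="20000" />
```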
Based on our project experience in performance testing Flex applications using LoadRunner, we have pointed out some of the common problems that might arise. Having the solutions on hand to troubleshoot these errors if they occur should save you a lot of time.


Monday, 5 March 2012

Ensuring The Scalability Of Complex Applications Using Rational Performance Test Tool In Performance Test Automation


In today’s IT market, there are numerous tools, both commercial (LoadRunner, Rational Performance Tester, …) and open source (Grinder, OpenSTA, …), to measure the performance and scalability of web-based and desktop-based applications. It is up to the performance test engineer/consultant to choose an appropriate performance testing tool to detect and resolve bottlenecks in the application, considering the web servers, application servers, and database servers.

 In this paper, I focus on using the commercial tool Rational Performance Tester (RPT) for end-to-end performance testing of web-based and desktop-based applications.

Introduction:
IBM Rational Performance Tester (RPT) software is a performance test creation, execution, and analysis tool that helps teams validate the scalability and reliability of their Web and Enterprise Resource Planning (ERP)-based applications before deployment.
  • RPT is a load and performance testing solution for teams concerned about the scalability of Web-based applications.
  • Combining ease of use with deep analysis capabilities, RPT simplifies test creation, load generation, and data collection to help ensure that applications can scale to thousands of concurrent users.
  • It combines a simple-to-use test recorder with advanced scheduling, real-time reporting, automated data variation and a highly scalable execution engine to help ensure that applications are prepared to handle large user loads.
Key Highlights on RPT tool:
  • Creates code free tests quickly without programming knowledge
  • Executes multiuser performance testing for Microsoft Windows, Linux, UNIX and mainframe environments, with an available Windows and Linux software-based user interface
  • Supports load testing against a broad range of applications such as HTTP, SAP, Siebel, Entrust, Citrix and SOA/Web Services and Supports Windows, Linux and z/OS as distributed controller agents
  • Rendered HTML view of Web pages visited during test recording
  • Java code insertion for flexible test customization
  • Reports in real time to enable immediate recognition of performance problems and renders an HTML browser-like view of Web pages in the test
  • Enables Windows, Linux and mainframe technology – based test execution
  • Provides a rich, tree-based test editor that delivers both high level and detailed views of tests
  • Collection and visualization of server resource data
  • Automates identification and management of dynamic server responses
  • Automates test data variation – Data substitution with data pools
  • High extensibility with Java coding: custom coding should be supported in a well known standard language that is platform independent and widely available. Java makes an ideal choice for any tool’s extensibility language
  • Built-in Verification Points (VPs)
  • Collects and integrates server resource data with real-time application performance data
  • A low memory and processor footprint that enables large, multi-user tests with limited hardware resources
  • Accurately emulates large population user transactions with minimal hardware
  • Runs large scale performance tests to validate application scalability
  • Provides no code testing, point and click wizards, report readability, usability and customization
  • Windows and Linux-based user interface  and test execution agents
  • Graphical test editing and workload modeling
  • Offers flexible modeling and emulation of diverse user populations
  • Real-time monitoring and reporting
  • Report customization and export capability
  • Programming extensibility with Java custom code
  • Real-time reporting for immediate performance problem identification with the presence and cause of application performance bottlenecks
  • Diagnoses the root cause of performance bottlenecks by quickly identifying slow performing lines of code and integrates with Tivoli composite application management solutions to identify the source of production performance problems
  • Leverage existing assets for load and performance testing
Advantage of RPT:

High productivity with no programming: Basic skills required to use a performance testing tool should only include knowledge of how to schedule and scale the user scenarios and where to put in the data variation for the application being tested.

Rapid adoption: Rational Performance Tester contains features explicitly designed to enable you to quickly build, execute and analyze the impact of load on your application environment.

Robust analysis and reporting: Individual page or screen response times can be decomposed into response times for individual page elements (for example, JPGs, Java Server Pages, Active Server Pages), which helps testers identify the elements responsible for poor page or screen response time.
  • And the ability to insert custom Java™ code that can be executed at any point during test execution supplements automated data correlation and data generation capabilities. This capability permits advanced data manipulation and diagnostic techniques.
  • During test execution, system resource information such as CPU and memory utilization statistics can be collected from remote servers and correlated with response time and throughput data.
  • Collected resource data is crucial for diagnosing which remote system—router, Web server, application server, database server, etc.—is responsible for detected delays, as well as for pinpointing the component (for example, CPU, RAM, disk) that is causing the bottleneck.
Lowered cost of performance testing: RPT generates a low processor and memory footprint when emulating multiple users. As a result, high levels of scalability can be achieved even if the team does not have access to excessive computing power. In addition, test execution and system information retrieval can occur on Microsoft Windows, UNIX and Linux software based machines, optimizing a team’s usage of existing hardware resources. It provides automation support for essentially all aspects of software development.

Customized reporting with all classes of data: as the number of data points has grown from 100,000 to 100 million individual measurements, a design imperative is to reduce the data in real time, in a distributed fashion, during the test. Coupled with the increased complexity of tested system architectures, this yields a need for complete flexibility in reporting and correlating a variety of data sources.

In this paper, I have discussed the need for using Rational Performance Tester (RPT) in performance testing, along with its key advantages: high productivity with no programming, rapid adoption, robust analysis and reporting, lowered cost of performance testing, and customized reporting with all classes of data. Once again, it is up to the performance test engineer/consultant to choose an appropriate performance testing tool to detect and resolve bottlenecks in the application, considering the web servers, application servers, and database servers.


Application Performance Testing In Production Environment


Performance testing in production is not practiced widely owing to many risks involved, which include taking the entire production environment offline thus affecting availability, taking part of the production environment offline thus affecting performance, and the risk of updating production data during the test.

However, for applications running on large infrastructure for which there is no production-equivalent test environment (e.g. Superdome servers, a farm of servers, etc.), it is not uncommon to reuse the system resources of production and point them at a test database sitting on a disk subsystem equivalent to production. This typically happens in a period of low load (weekends, holiday season, etc.), when the actual production application can be temporarily migrated to other, smaller hardware such as a passive environment or a DR site.

Generally, the production database is almost never used in a test, due to the risk of test data getting mixed with real data. In the rare cases where a production database is leveraged for a test, it is used purely for read-only or view transactions.

One practice that is somewhat prevalent for an application going live for the first time is to use the production environment for performance testing as part of UAT. A related practice is to run a pilot test with real users. Here, the production release is opened to users in stages; in the first stage a limited number of users are asked to use the system for a defined period of time before the application is opened up to the entire user community. Occasionally such an approach may also be used to observe the impact on system performance of imposing a fraction of the full expected load (say, the users of one full department). Such an option is chosen in cases where the performance test scenario in the test environment may not have been able to fully capture real user behavior, or where we also wish to benchmark the performance test results.

To summarize, the following precautions/best practices have to be adopted for application performance testing in a production environment:
  • The test has to be scheduled in non-working hours when the live production traffic is expected to be nearly zero.
  • Part of the production environment (say one application server node from a large cluster of application servers) may be isolated for the purpose of using it in a performance test with read-only usage of the production database.
  • Approvals from all relevant stakeholders and directors need to be taken prior to the test.
  • A conference call has to be arranged for the duration of the test, in which all the stakeholders would participate.  All project teams would monitor their systems during the test, and if any system issues are found to occur, the test has to be stopped immediately.
To conclude: the industry is striving toward steady-state performance testing, which demands higher test accuracy, and that accuracy depends largely on environment setup and network traffic simulation. Extrapolating test results/metrics to the production environment is also not convincing enough for decision making.

In that case, it is not a bad idea to leverage the production environment for performance validation, which ensures highly accurate performance testing and fixes actual performance bottlenecks at an early stage rather than encountering them on go-live day.

I would agree that production performance testing is not easy to accomplish, but it ultimately depends on how cautiously we plan and how wisely we execute the test in production; that is what eliminates the risks associated with it.

LoadRunner Simulation With Safari


How to Simulate Load from the Safari Browser
There was an interesting requirement where we had to simulate the Safari browser during load execution. This article provides information on how to achieve this.

Client Requirement:-
  • Recently we were involved in a PoC to check the feasibility of SalesForce.com with LoadRunner.
  • There was one interesting requirement that the customer wanted to simulate the load from Safari browser.
  • The reason is that SalesForce.com will be primarily accessed by a sales team that travels frequently, using iPads with the Safari browser.
Challenge:-
  • By default, LoadRunner supports only the IE, Mozilla & Netscape browsers.
  • It does not support Safari for emulating load.
Analysis and Solution:-
  • This can be achieved through custom settings in LoadRunner.
  • All we need for this is a Safari user-agent string that suits our requirement.
The following is an example of Safari – user agent string:

Mozilla/5.0 (iPad; U; CPU OS 4_3 like Mac OS X; nl-nl) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8F190 Safari/6533.18.5

*Please note: for recording you can use any browser; you can then simulate the user load as any required browser by using a user-agent string as above.
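In script terms, one way to send the Safari user-agent string on every request is LoadRunner's web_add_auto_header function. This is a sketch of the approach; the same effect can also be achieved through the Browser Emulation runtime settings, and the exact user-agent string should match the device/browser you need to emulate.

```c
// Emulate Safari on iPad by overriding the User-Agent header for all
// subsequent requests in the Vuser script (LoadRunner web protocol).
web_add_auto_header("User-Agent",
    "Mozilla/5.0 (iPad; U; CPU OS 4_3 like Mac OS X; nl-nl) "
    "AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 "
    "Mobile/8F190 Safari/6533.18.5");
```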

Wednesday, 29 February 2012

Database Tuning


Why Database Tuning?

It is a primary responsibility of a performance engineer to provide tuning recommendations for a database server when it fails to respond as expected.
Performance tuning can be applied when you face the following scenarios:
  • If the real user waiting time is high
  • To keep your database updated on par with your business
  • If you want to use your hardware optimally
General Tuning Recommendations
  • SQL Query Optimization:-
Poorly written SQL queries can push any database to respond badly. It is widely accepted that 75% of performance issues arise from poorly written SQL. Manual tuning of SQL queries is practically difficult, but many SQL profiling tools are available in the market.

The following SQL query practices are suggested:

✔  Avoid table scan queries – especially long table scans.
✔  Utilize indexes properly.
✔  Avoid complex joins wherever possible – especially unions.
  • Memory Tuning
An Oracle database can operate most efficiently by utilizing its memory structures instead of disk I/Os.

The theme behind this is that a read from (or write to) memory is much faster than a read from (or write to) disk.


For efficient Memory
✔  Ensure that the buffer hit ratio, shared pool (library and dictionary hit ratio) meet the recommended levels.
✔  Properly size all other buffers including the redo log buffer, PGA, java pool, etc.
  • Disk IO Tuning
It is crucial that there be no IO contention on the physical disk devices. It is therefore important to spread IO’s across many devices.

Spreading data across disks to avoid I/O contention

✔  You can avoid bottlenecks by spreading data storage across multiple disks and multiple disk controllers.
✔  Put databases with critical performance requirements on separate devices. If possible, also use separate controllers from those used by other databases. Use segments as needed for critical tables and partitions as needed for parallel queries.
✔  Put heavily used tables on separate disks.
✔  Put frequently joined tables on separate disks.
✔  Use segments to place tables and indexes on their own disks.
✔  Ensure that indexes are properly tuned.
  • Sorting Tuning
Ensure that your sort area size is large enough so that most of your sorts are done in memory; not on disk.
  • Operating System Concerns
Ensure that no memory paging or swapping is occurring.
  • Lock Contention
Performance can be devastated if one user is left waiting for another user to free up a resource.
  • Wait Events
Wait events help pinpoint where database issues lie.

Conclusion

In an actual load test, it is essential to simulate the database as it behaves in production, and because of this, many database issues may be encountered. The solutions given above are general tuning mechanisms for any database and address most common database performance issues.

Thanks for reading this blog.

Tuesday, 28 February 2012

Performance Testing in the Cloud


What is Cloud?

A cloud consists of three components: one or multiple datacenters, a network, and a "zillion" devices. That's what it's all about: the components and their interaction. Ultimately the user is interested in the end-to-end experience, regardless of the components.

Clouds are grouped into private and public based on the location of the data center where the services are virtualized. In general, a public cloud is an environment that exists outside the purview of the company firewall, typically a service or technology offered by a third-party vendor, while a private cloud sits behind the firewall for the exclusive benefit of an organization and its customers.


Source: HP

Testing and the Cloud
While many companies are approaching cloud computing with cautious optimism, testing appears to be one area where they are willing to be more adventurous. There are several factors that account for this openness toward testing in the cloud:

Testing is a periodic activity and requires new environments to be set up for each project.

Test labs in companies typically sit idle for long periods, consuming capital, power and space. Approximately 50% to 70% of the technology infrastructure earmarked for testing is underutilized, according to both anecdotal and published reports.

Testing is considered an important but non-business-critical activity. Moving testing to the cloud is seen as a safe bet because it doesn’t include sensitive corporate data and has minimal impact on the organization’s business-as-usual activities.

Applications are increasingly becoming dynamic, complex, distributed and component-based, creating a multiplicity of new challenges for testing teams.
For instance, mobile and web applications must be tested across multiple operating systems and updates, multiple browser platforms and versions, different types of hardware, and large numbers of concurrent users to understand their real-time performance. The conventional approach of manually creating in-house testing environments that fully mirror these complexities and multiplicities consumes huge capital and resources.

Why Opting for Cloud Computing as source of Performance testing
Many companies use their web sites for sales and marketing purposes. In fact, a company can spend millions of pounds creating engaging content and running promotional campaigns to draw users to its site. Unfortunately, if the site crashes or response times crawl, all that time, energy and money could be wasted.
A first case is when demand for a service varies with time. Provisioning a data center for the peak load it must sustain a few days per month leads to underutilization at other times, for example. Instead, Cloud Computing lets an organization pay by the hour for computing resources, potentially leading to cost savings even if the hourly rate to rent a machine from a cloud provider is higher than the rate to own one. A second case is when demand is unknown in advance.

For example, a web startup will need to support a spike in demand when it becomes popular, followed potentially by a reduction once some of the visitors turn away. Finally, organizations that perform batch analytics can use the "cost associativity" of cloud computing to finish computations faster: using 1000 EC2 machines for 1 hour costs the same as using 1 machine for 1000 hours.

For the first case, a web business with varying demand over time and revenue proportional to user hours, the tradeoff is captured in the equation below. The left-hand side multiplies the net revenue per user-hour by the number of user-hours, giving the expected profit from using cloud computing. The right-hand side performs the same calculation for a fixed-capacity datacenter by factoring in the average utilization, including non-peak workloads, of the datacenter. Whichever side is greater represents the opportunity for higher profit.
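The equation itself did not survive formatting. The passage closely follows the UC Berkeley "Above the Clouds" report, where the tradeoff it describes is stated as:

```latex
\mathrm{UserHours}_{cloud} \times (\mathrm{revenue} - \mathrm{Cost}_{cloud})
\;\ge\;
\mathrm{UserHours}_{datacenter} \times \left(\mathrm{revenue} - \frac{\mathrm{Cost}_{datacenter}}{\mathrm{Utilization}}\right)
```

When the left side is greater, pay-per-use cloud capacity is the more profitable choice; when the right side is greater, owned fixed capacity wins.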

Cloud Areas and their Differentiators:
  • Private cloud is on premise, dedicated to one organization, often located in the same datacenter(s) as the legacy applications; address the needs of mission critical applications and applications having access to large amount of confidential data.
  • Hosted private cloud is really a variant of the previous category, located either in the client's or the hosting provider's datacenter, but managed by the hosting provider. It addresses the same needs as the private cloud.
  • Virtual private cloud is a single or multi-tenant environment, located in a clearly defined geography, with documented and auditable security processes & procedures, clear SLA’s, addressing the needs of business critical applications. Formal contracts are established, payment is by consumption, but with regular invoices. Access to the environment can be over the internet, VPN or leased lines. HP calls this an Enterprise Class Cloud.
  • Public cloud is a multi-tenant environment providing IaaS, PaaS and/or SaaS services on a pay-per-use basis without formal contracts. Payment is typically via credit card. Such environments address the needs of web applications, test & development, and large-scale processing (if not too much data needs to be transferred).


Source: HP

The first two categories are asset-intensive and typically do not lend themselves to a pay-per-use model; the latter two are pay-per-use. The first two are typically single-tenant, meaning they are used by one organization; the latter two are most often multi-tenant environments where multiple companies share the same assets.

Performance Testing in Cloud – Benefits & Limitations:

Benefits:
• Fast provisioning using preconfigured images. You can set up the infrastructure you need in minutes.

• Simplified security. All required protections are set up by default, including firewall, certificates, and encryption.

• Improved scalability. Leading load testing solution providers have negotiated with cloud providers to allow users of their software to employ more virtual machines (for the purpose of load testing) than are allowed by default.

• A unified interface for multiple cloud providers. Load testing solutions can hide provisioning and billing details, so you can take maximum advantage of the cloud in a minimum of time.

• Advanced test launching. You can save time and effort by defining and launching load generators in the cloud directly from the load testing interface.

• Advanced results reporting. Distinct results from each geographic region involved in the test are available for analysis.

Limitations:
Although testing from the cloud is, in many cases, more realistic than testing in the lab, simply moving to the cloud is not enough to ensure the most realistic tests. Real users often have access to less bandwidth than a load generator in a cloud data center. With a slower connection, the real user will have to wait longer than the load generator to download all the data needed for a web page or application. This has two major implications:

• Response times measured as-is from the cloud with virtually unlimited bandwidth are better than for real users. This can lead test engineers to draw the wrong conclusions, thinking that users will see an acceptable response time when in reality they will not.

• The total number of connections established with the server will increase, because on average, connections for real users will be open longer than connections for the load generator. This can lead to a situation in which the server unexpectedly refuses additional connections under load.
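A quick back-of-envelope calculation makes the first limitation concrete. The numbers below are invented for illustration, and transfer time only is considered (latency and TCP slow start are ignored):

```python
# Hypothetical: time to download a 2 MiB page payload at data-center
# bandwidth vs. a typical end-user connection (transfer time only).

PAGE_BYTES = 2 * 1024 * 1024          # 2 MiB page payload

def transfer_seconds(page_bytes, mbps):
    """Seconds to move page_bytes over a link of `mbps` megabits/second."""
    return (page_bytes * 8) / (mbps * 1_000_000)

cloud_user = transfer_seconds(PAGE_BYTES, 1000)  # load generator, ~1 Gbps
real_user = transfer_seconds(PAGE_BYTES, 8)      # end user, ~8 Mbps

print("cloud: %.3fs  real user: %.3fs" % (cloud_user, real_user))
```

A hundredfold difference in transfer time is exactly why unthrottled cloud load generators report optimistic response times, and why many load testing tools offer bandwidth emulation per virtual user.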

Conclusion: The benefits of a cloud computing solution cannot be ignored by companies running performance tests that are striving to overcome the constraints of their current IT hardware to simulate a realistic environment, while struggling to justify the cost of investing in major upgrades.

Protocol – In Performance Testing View


In a performance testing context, "protocol" means the communication protocol: how the physical systems, whether load generators, application servers or web servers, talk to one another.

The key elements of a protocol are:

Syntax: includes data formats and signal levels.
Semantics: includes control information and error handling.
Timing: includes speed matching and sequencing.

Communication Protocol is a set of rules and regulations that determine how data is transmitted between the systems or a communications protocol is a system of digital message formats and rules for exchanging those messages in or between computing systems.

Protocols include sending, receiving, authentication and error detection and correction capabilities. Protocols are used for communications between entities in a system. Entities use protocols in order to implement their service definitions.

Multiple protocols may be used in different circumstances, and communication protocols are often used as a suite, or in layers. For example, the Internet protocol suite consists of application, transport, internet and network interface layers.
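As a tiny, hand-rolled illustration of protocol syntax and semantics (hypothetical; real testing tools handle this for you), an HTTP/1.1 request at the application layer is just structured text with agreed-upon delimiters:

```python
# "Syntax": an HTTP/1.1 request is lines of text in an agreed format,
# each terminated by CRLF, with a blank line ending the headers.

def build_request(host, path):
    return ("GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n"
            "\r\n") % (path, host)

def parse_status_line(response):
    # "Semantics": the status line carries the protocol version and result code
    version, code, reason = response.split("\r\n", 1)[0].split(" ", 2)
    return version, int(code), reason

req = build_request("example.com", "/index.html")
canned = "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
print(parse_status_line(canned))
```

A protocol-level load testing tool records, parameterizes and replays exactly this kind of traffic, many layers below the browser UI.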

The communication protocols listed below could be used by the Hexaware Performance Testing CoE for performance testing:
  • Web – HTTP/HTTPS
  • J2EE
  • Citrix
  • .NET (Client & Web)
  • ERP – PeopleSoft, SAP, Siebel, Oracle Apps
  • Web Services
  • SQL
  • Client – Server (COM/DCOM)
  • Mobile
  • Action Message Format (AMF)
  • AJAX (Click and Script)
Classification schemes for protocols usually focus on domain of use and function. The communication protocol can be selected based on the application, or a performance testing tool adviser can be used to finalize the protocol.
From the above synopsis we can understand what a protocol is, how it works, an example of protocol layering, and which protocols are used by the Hexaware Performance Testing CoE. The final paragraph explains how the protocol is identified for a particular domain, application or business function.


Thursday, 23 February 2012

Capacity Planning in Performance Testing


What is Capacity Planning

Capacity Planning is the process of determining what type of hardware and software configuration is required to meet application needs. Capacity planning, performance benchmarks and validation testing are essential components of successful enterprise implementations. Capacity planning is an iterative process. A good capacity management plan is based on monitoring and measuring load data over time and implementing flexible solutions to handle variances without impacting performance.

The goal of capacity planning is to identify the right amount of resources required to meet service demands now and in the future. It is a proactive discipline with far-reaching impact, supporting:
• IT and business alignment, helping to show the cost and business need for infrastructure upgrades

• Hexaware's consolidation and virtualization strategies, ensuring that consolidated real and virtual system configurations will meet service levels

Capacity Planning Approach

Capacity planning means planning efficient resource use by applications during development and deployment, but also once they are operational. It involves considering how different resources can be accessed simultaneously by different applications, and knowing when this is done in an optimal way. Big organizations and operational environments have high expectations of capacity planning.
It should consider which types of configurations (clustered, unclustered, etc.) should be taken into account, which types or categories of applications can run on the same server or cluster, and also what should be avoided when planning for capacity.

Challenges Faced

Capacity planning should be conducted when:

• Designing a new system
• Migrating from one solution to another
• Business processes and models have changed, thus requiring an update to the application architecture
• End user community has significantly changed in number, location, or function

Typical objective of capacity planning is to estimate:

• Number and speed of CPU Cores
• Required Network bandwidth
• Memory size
• Storage type and size

Key items influencing capacity:

• Number of concurrent users
• User workflows
• Architecture
• Tuning and implementation of best practices

Capacity planning is about how many resources an application uses, which implies knowing the system's profile. For instance, suppose you have two applications, A and B, each known to use certain amounts of CPU, memory, disk and network resources when running as the only application on a machine, but you have just one machine. If application A uses only a little of one resource while application B uses much of the same one, that is a simple case of capacity planning. Bear in mind, though, that when the applications execute in parallel on the same machine, the total resource usage is not a simple addition of their standalone usage. There could, for instance, be overlapping memory demands that make parallel execution impossible without re-writing the code.

The process of determining what type of hardware and software configuration is required to adequately meet application needs is called capacity planning.
Because the needs of an application are determined, among other things, by the number of users (in other words, the number of parallel accesses), capacity planning can also be defined as: how many users can a system handle before changes need to be made? Thus, when an application is deployed, one should consider not only how large it will be at first, but also how fast the number of users, servers and machines will grow, so that enough margin is left and the application does not have to be completely redesigned because of, say, the addition of a single user.
To perform capacity planning, essential data is collected and analyzed to determine usage patterns and to project capacity requirements and performance characteristics. Tools are used to determine optimum hardware/software configurations.
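As a rough-cut sketch of how that collected data feeds a sizing decision, the calculation below estimates CPU cores for a forecast peak load. All numbers are invented, and real capacity planning tools model memory, I/O and network alongside CPU:

```python
# Hypothetical rough-cut capacity estimate: cores needed to serve a
# forecast peak load, with headroom.  All numbers are invented.
import math

peak_tps = 400              # forecast peak transactions per second
cpu_seconds_per_txn = 0.02  # measured CPU cost of one transaction
target_utilization = 0.65   # keep cores below 65% busy to absorb spikes

cpu_demand = peak_tps * cpu_seconds_per_txn   # CPU-seconds needed per second
cores_needed = math.ceil(cpu_demand / target_utilization)
print("CPU demand: %.1f cores, provision: %d cores" % (cpu_demand, cores_needed))
```

The utilization target is the key judgment call: running cores at 100% of demand leaves no margin for the workload variances the plan is supposed to absorb.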

Proposed Solution(s)

Bottlenecks, or areas of marked performance degradation, should be addressed while developing your capacity management plan. The objective of identifying bottlenecks is to meet your performance goals, not eliminate all bottlenecks. Resources within a system are finite. By definition, at least one resource (CPU, memory, or I/O) can be a bottleneck in the system. Planning for anticipated peak usage, for example, may help minimize the impact of bottlenecks on your performance objectives.
There are several ways to address system bottlenecks. Some common solutions include:

• Using Clustered Configurations
• Using Connection Pooling
• Setting the Max Heap Size on JVM
• Increasing Memory or CPU
• Segregation of Network Traffic



Using Clustered Configurations
Clustered configurations distribute workloads among multiple identical cluster member instances. This effectively multiplies the amount of resources available to the distributed process, and provides seamless failover for high availability.

Using Connection Pooling
To improve performance by reusing existing database connections, you can limit the number of connections, session timing and other parameters by modifying the connection strings.
Setting the Max Heap Size on JVM (Java Virtual Machine)
This is an application-specific tunable that enables a tradeoff between garbage collection times and the number of JVMs that can be run on the same hardware. Large heaps are used more efficiently and often result in fewer garbage collections; more JVM processes offer more failover points.
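As a sketch of that tradeoff (flag values invented), the heap budget is set with the standard JVM sizing flags:

```
# Illustrative only: two ways to budget 8 GB of RAM on one box.

# One large JVM: fewer, longer GC pauses, a single failover point.
java -Xms8g -Xmx8g -jar app.jar

# Four smaller JVMs: more failover points, more frequent (shorter) GCs.
java -Xms2g -Xmx2g -jar app.jar    # run 4 instances
```

Setting `-Xms` equal to `-Xmx` avoids heap resizing pauses during a load test, which is why it is a common convention in performance environments.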

Increasing Memory or CPU
Aggregating more memory and/or CPU on a single hardware resource allows localized communication between the instances sharing the same hardware. More physical memory and processing power on a single machine enables the JVMs to scale and run much larger and more powerful instances, especially
64-bit JVMs. Large JVMs tend to use the memory more efficiently, and Garbage Collections tend to occur less frequently. In some cases, adding more CPU means that the machine can have more instruction and data cache available to the processing units, which means even higher processing efficiency.
Segregation of Network Traffic
Network-intensive applications can introduce significant performance issues for other applications using network. Segregating the network traffic of time-critical applications from network-intensive applications, so that they get routed to different network interfaces, may reduce performance impacts. It is also possible to assign different routing priorities to the traffic originating from different network interfaces.

Business Benefits

• Increase revenue through maximum availability, decreased downtime, improved response times, greater productivity, greater responsiveness to market dynamics, greater return on existing IT investment

• Decrease costs through higher capacity utilization, more efficient processes, just-in-time upgrades, greater cost control

Future Direction/Long Term Focus

Capacity planning process is a forecast or plan for the organization’s future. Capacity planning is a process for determining the optimal way to satisfy business requirements such as forecasted increases in the amount of work to be done, while at the same time meeting service level requirements. Future processing requirements can come from a variety of sources. Inputs from management may include expected growth in the business, requirements for implementing new applications, IT budget limitations and requests for consolidation of IT resources.

Recommendations

The basic steps involved in developing a capacity plan are:

1. To determine service level requirements
a. Define workloads
b. Determine the unit of work
c. Identify service levels for each workload

2. To analyze current system capacity
a. Measure service levels and compare to objectives
b. Measure overall resource usage
c. Measure resource usage by workload
d. Identify components of response time

3. To plan for the future
a. Determine future processing requirements
b. Plan future system configuration

By following these steps, we can help to ensure that your organization will be prepared for the future, ensuring that service level requirements will be met using an optimal configuration.

Tuesday, 21 February 2012

The Grinder – An open source Performance testing alternative


Owing to the cut throat competition, IT companies are striving to go one step ahead than their competitors to woo their prospective clients. Cutting down the costs without compromising on the quality has been the effective strategy these days. Open source tools not only promise to cut down the costs drastically, but are also more flexible and provide certain unique features of their own. The huge expense involved in procuring performance testing tools has urged the testing community to look for an open source alternative that would go easy on the budget.

The Grinder is an open source performance testing tool originally developed by Paco Gomez and Peter Zadrozny. It is a Java™ load testing framework that makes it easy to run a distributed test using many load injector machines.

1.1          Why Grinder?

  • The Grinder can be a viable open source option for performance testing. It is freely available under a BSD-style open-source license and can be downloaded from SourceForge.net: http://www.sourceforge.net/projects/grinder
  • The test scripts are written in simple and flexible Jython language which makes it very powerful and flexible. As it is an implementation of the Python programming language written in Java, all of the advantages of Java are also inherent here. No separate plug-ins or libraries are required
  • The Grinder makes use of a powerful distributed Java load testing framework that allows simulation of multiple user loads across different “agents” which can be managed by a centralized controller or “console”. From this console you can edit the test scripts and distribute them to the worker processes as per the requirement
  • It is a surprisingly lightweight tool, fairly easy to set up and run, and it takes almost no time to get started. Installation simply involves downloading and configuring the recorder, console and agent batch files. The Grinder 3 is distributed as two zip files, and the grinder.properties file can be customized to suit our requirements each time we execute a test
  • From a developer's point of view, The Grinder is a preferred load testing tool in that developers can opt to test their own application; that is, programmers get to test the interior tiers of their own applications
  • The Grinder tool has a strong support base in the form of mailing lists i.e. http://sourceforge.net/
  • The Grinder tool has excellent compatibility with Grinder Analyzer which is also available as an open source license. The analyzer extracts data from grinder log and generates report and graphs containing response time, transactions per second, and network bandwidth used
  • Other than HTTP and HTTPS, The Grinder does support internet protocols such as POP3, SMTP, FTP, and LDAP
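For a feel of the Jython scripting mentioned above, here is a minimal Grinder 3 test script following the pattern in the tool's documentation. It runs only inside a Grinder agent process, which supplies the `net.grinder` packages, so it is shown as a fragment rather than a standalone program (the URL is a placeholder):

```python
# Minimal Grinder 3 test script (Jython).  Runs under a Grinder agent;
# the net.grinder packages ship with the tool, not with standard Python.
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

# Wrap the HTTP request in a numbered Test so timings are recorded against it
test1 = Test(1, "Home page")
request1 = test1.wrap(HTTPRequest())

class TestRunner:
    # The agent creates one TestRunner instance per worker thread
    # and calls it once per run.
    def __call__(self):
        result = request1.GET("http://localhost:8080/")
        grinder.logger.info("Status: %s" % result.statusCode)
```

The console distributes this script to the agents, and each worker process spins up the configured number of threads, each driving its own `TestRunner`.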

1.2          Difficulties that should be considered before opting for the Grinder

  • No user-friendly interface is provided for coding, scripting (parameterization, correlation, inserting custom functions, etc.) and other enhancements at the test script preparation level
  • Annoying syntax errors easily creep into the code because of the rigid syntax and indentation that Jython scripting requires
  • A lot depends on the tester's ability to understand the interior tiers of the application, unlike with commercial tools, where the tester can follow standard procedures without any insight into the intricate complexity of the application and still complete the job successfully
  • It is dependent on Grinder Analyzer for analysis, report generation etc
  • The protocols supported by The Grinder are limited, whereas commercial tools such as LoadRunner and Silk Performer provide support for all the web-based protocols. This is a major limitation, as web-based applications these days use multiple protocols for communication
  • Unlike LoadRunner and other vendor licensed tools, it does not offer effective monitoring solutions and in-depth diagnostic capabilities. Also there is no separate user friendly interface component dedicated for analyzing the test results
  • More support is required in the form of forums, blogs and user communities
In short, Tom Braverman sums it up brilliantly in a post to the Grinder users list:
"I don't try and persuade my clients that The Grinder is a replacement for LoadRunner, etc. I tell them that The Grinder is for use by the developers and that they'll still want the QA team to generate scalability metrics using LoadRunner or some other tool approved for the purpose by management"

For an open source testing tool, it has to be admitted that the Grinder does have the capabilities feature wise to make a stand amidst other commercial alternatives.


Wednesday, 1 February 2012

Concurrent User Estimation


Concurrent user estimation is an important step before going for Performance Validation and capacity planning as it is directly related to consumption of system resources. Therefore, before entering into the load testing phase we need to determine the peak user load or the maximum concurrent user load for designing a workload model. People often estimate the number of concurrent users by intuition or wild guessing with little justification. This often leads to improper performance testing and capacity planning. In this article we would like to share a very reliable method proposed by Eric Man Wong to calculate the concurrent number of users using estimated and justified parameters.

The method estimates the peak user load from the average number of concurrent users, which is in turn calculated from the total number of user sessions and their average length.

1. Estimating the Average number of concurrent users

For calculating the average concurrent user load, we need to find the following parameters,
  • Period of concern (T): It is the time duration for which we are calculating the total number of user sessions.
  • Total number of user sessions (n): The number of user sessions at the specified time duration
  • The average length of user sessions (L): the length of a user session is the amount of time that the particular user takes to complete his activity (during which he consumes a certain amount of system resources). The average length of user sessions is simply the mean of the session lengths of all users: say L = (s1 + s2 + … + sn) / n, where si is the length of the i-th user session and n is the total number of user sessions. The average length of a user session can be estimated by observing how a sample of users uses the system.

A user session is a time interval defined by a start time and end time. Within a single session, let us assume that the user is in active state which means that the user is consuming a certain percentage of the total system memory. Between the start time and end time, there are one or more system resources being held. The number of concurrent users at any particular time is defined as the number of user sessions into which the time instance falls. This is illustrated in the following example

Each horizontal line segment represents a user session. Since the vertical line at time t0 intercepts three user sessions, the number of concurrent users at time t0 is equal to three. Let us focus on the time interval from 0 to an arbitrary time instant T. The following result can be mathematically proven: if the total number of user sessions from time 0 to T equals n, and the average length of a user session equals L, then the average number of concurrent users is C = n × L / T.

[NOTE: In the above diagram, t0 represents any particular instance of time. Whereas in the formulae we use the value T which gives us a specific duration or a time period between 2 instances of the time say t1 and t2]

2.  Estimating the peak number of concurrent users
To determine the peak user load we make use of some basic probability theory in the following manner.
We determine the probability of X concurrent users occupying the system at a particular time, using the Poisson distribution. We then use the normal approximation to determine the peak user load.
We assume that user sessions arrive according to a Poisson process.

Under this assumption, it can be proven that the number of concurrent users at any time instant also has a Poisson distribution with mean C, where C is the average number of concurrent users found using the formula C = n × L / T.
It is well known that a Poisson distribution with mean C can be approximated by the normal distribution with mean C and standard deviation √C. We denote the number of concurrent users by X.
This implies that (X − C)/√C has the standard normal distribution with mean 0 and standard deviation 1. Looking up the statistical table for the normal distribution, we have the following result: the probability of the number of concurrent users being smaller than C + 3√C is 99.87%. This probability is large enough for most purposes that we can approximate the peak number of concurrent users by C + 3√C.

We see that the simplicity with which the peak can be derived from the average concurrent user load makes the method highly efficient. Eric Man Wong's method remains a very reliable way to build a realistic and sensible workload model for the performance testing activity.
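The whole method fits in a few lines. The sample inputs below are invented for illustration:

```python
# Sketch of Eric Man Wong's concurrent-user estimation method.
import math

def avg_concurrent_users(n_sessions, avg_session_len, period):
    """C = n * L / T : average concurrent users over the period."""
    return n_sessions * avg_session_len / period

def peak_concurrent_users(c):
    """Peak ~ C + 3*sqrt(C), the 99.87% point of the normal approximation."""
    return c + 3 * math.sqrt(c)

# Hypothetical inputs: 3000 sessions, 10 min average length, 8 h window
C = avg_concurrent_users(3000, 10 * 60, 8 * 3600)
print("average: %.1f  peak: %.1f" % (C, peak_concurrent_users(C)))
```

The peak figure, not the average, is what should drive the virtual user count in the workload model.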


Tuesday, 31 January 2012

HP SiteScope – Monitoring Made Easy


HP SiteScope software monitors the availability and performance of distributed IT infrastructures including servers, operating systems, network and Internet services, applications and application components.

HP SiteScope continually monitors more than 75 types of IT infrastructure through Web‑based architecture that is lightweight and highly customizable. With HP SiteScope, you gain the real‑time information you need to verify infrastructure operations, stay apprised of problems, and solve bottlenecks before they become critical. HP SiteScope is an important component of both the HP Operations Center software and the HP Business Availability Center software, providing agentless availability and performance monitoring and management.

How HP SiteScope works
  • HP SiteScope provides a centralized, scalable architecture.
  • HP SiteScope is implemented as a Java™ server application and runs on a single, central system as a daemon process.
  • HP SiteScope Java server supports three key functions: data collection, alerting, and reporting.
  • HP SiteScope enables system administrators to monitor your IT infrastructure remotely from a central installation without the need for agents on the monitored systems.
  • HP SiteScope accomplishes remote monitoring by logging into systems as a user from its central server, which can run on Windows®, UNIX®, and Linux® platforms.
  • HP SiteScope offers optional failover support to give you added redundancy and automatic failover protection in the event that an HP SiteScope server fails.
Advantages of HP SiteScope
  • Features an agentless, enterprise ready architecture that lowers Total Cost of Ownership
  • Monitors more than 75 different target types for critical health and performance characteristics
  • Generates daily, weekly, and monthly summaries of single and multiple monitor readings with built-in management server‑based reports
  • Serves as an integrated component of HP Operations Center and the monitoring foundation for HP Business Availability Center and HP LoadRunner
  • With HP Operations Manager, can deliver a combined agentless and agent-based monitoring solution to deliver the breadth and depth you require
  • Gathers detailed performance data for IT infrastructure using agentless technology installed on your managed server or device
  • Enables easy installation and setup of IT infrastructure monitoring in less than one hour
  • Reduces the time and cost of maintenance by consolidating all maintenance to one central server
  • Reduces the time to make administrative and configuration changes by providing templates and global change capabilities
  • Enables quick and efficient operations management with automated actions initiated upon monitor status change alerts
  • Offers solution templates that include specialized monitors, default metrics, proactive tests, and best practices
  • Supports easy customization to provide standard monitoring of previously unmanaged or hard-to-manage systems and devices

Ensuring Accuracy in Performance Testing


Applications often face performance issues in production even after rigorous performance validation. This is primarily because of an improper performance test environment and model. It is a common issue across the industry that the testing tool may not have behaved correctly during the load test. So it is always a best practice, and a mandate, to validate that the testing tool genuinely simulates the network traffic as expected, and to ensure the test environment is accurate. Here is an idea of how queuing theory can be applied to validate performance test accuracy and to help ensure that the application runs smoothly in production without performance issues.

Little’s Law
The long-term average number of customers in a stable system, L, is equal to the long-term average effective arrival rate, λ, multiplied by the average time a customer spends in the system, W. Expressed algebraically:

L = λW


Applying Little's Law in Performance Testing
The average number of (virtual) users N in the system (server) at any instant equals the average throughput X multiplied by the time each user spends per iteration, that is, the average response time Z plus the average think time R. Expressed algebraically:

N = X * (Z + R), where Z = response time and R = think time
Demonstration of Little's Law in Performance Testing
From the results reported by the performance testing tool, we can use Little's Law to find how many users were actually generated to test the application. A sample load test run against a sample application with 10 users obtained the following results.

Average Transactions/sec=1.7, Average transaction response time=0.5 sec, Average Think time=5sec


By Little's Law, the number of virtual users emulated by the performance testing tool is N = X * (Z + R) = 1.7 * (0.5 + 5),
so N = 9.35 ≈ 10 virtual users were emulated during the load test.

If the number of virtual users configured in the test equals the Little's Law result, then neither the tool nor the server had a problem. If the Little's Law result is less than the configured number of virtual users, it means the remaining users were idle throughout the test.
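The check above is trivial to automate against any tool's summary report. The numbers below are the ones from the sample test:

```python
# Cross-checking a load test's summary metrics with Little's Law:
# N = X * (Z + R), where X = throughput, Z = response time, R = think time.

def users_per_littles_law(throughput, response_time, think_time):
    return throughput * (response_time + think_time)

# Figures from the sample test: 1.7 txn/s, 0.5 s response, 5 s think time
n = users_per_littles_law(1.7, 0.5, 5.0)
print("emulated users: %.2f" % n)  # ~9.35, roughly the 10 configured users
```

A result well below the configured user count is the signal to investigate idle or blocked virtual users before trusting the test.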

* The throughput figure above was extracted from the tool, but it is always preferred, and a best practice, to use the throughput data measured at the server.