Wednesday, 29 February 2012

Database Tuning


Why Database Tuning?

It is a primary responsibility of a performance engineer to provide tuning recommendations for a database server when it fails to respond as expected.
Performance tuning can be applied in the following scenarios:
  • When real user waiting time is high
  • When you need to keep your database on par with your growing business
  • When you want to use your hardware optimally
General Tuning Recommendations
  • SQL Query Optimization
Poorly written SQL queries can push any database to respond badly. It is often claimed that around 75% of performance issues arise from poorly written SQL. Manually tuning SQL queries is difficult in practice, but many SQL profiling tools are available on the market.

The following practices are suggested when writing SQL queries:

✔  Avoid table scans, especially long table scans.
✔  Use indexes properly.
✔  Avoid complex joins wherever possible, especially unions.
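As a minimal sketch of the first two points, the effect of an index on the query plan can be observed directly. SQLite is used here purely for convenience, and the table, column and index names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# Without a secondary index, filtering on `customer` forces a full table scan.
scan_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
).fetchall()[0][-1]
print(scan_plan)  # e.g. 'SCAN orders'

# With an index on the filtered column, the optimizer switches to an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
index_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
).fetchall()[0][-1]
print(index_plan)  # e.g. 'SEARCH orders USING INDEX idx_orders_customer (customer=?)'
```

The same before/after check with `EXPLAIN` (or `EXPLAIN PLAN` in Oracle) is the quickest way to confirm a query actually uses the index you expect.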
  • Memory Tuning
Most Oracle databases can operate efficiently by utilizing their memory structures instead of disk I/O.

The principle behind this is simple: a read from, or write to, memory is much faster than a read from, or write to, disk.


For efficient memory use:
✔  Ensure that the buffer hit ratio, shared pool (library and dictionary hit ratio) meet the recommended levels.
✔  Properly size all other buffers including the redo log buffer, PGA, java pool, etc.
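As a sketch of the first check, the classic buffer cache hit ratio can be computed from three statistics that Oracle exposes in V$SYSSTAT; the values below are made up for illustration:

```python
# Hypothetical values for the three V$SYSSTAT statistics involved.
physical_reads = 12_000
db_block_gets = 150_000
consistent_gets = 850_000

# Classic Oracle buffer cache hit ratio: the fraction of block reads
# served from memory rather than disk.
logical_reads = db_block_gets + consistent_gets
hit_ratio = 1 - physical_reads / logical_reads

print(f"buffer cache hit ratio: {hit_ratio:.1%}")  # 98.8%
```

A ratio persistently far below the recommended level is a hint (not proof) that the buffer cache may be undersized.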
  • Disk IO Tuning
It is crucial that there be no I/O contention on the physical disk devices. It is therefore important to spread I/O across many devices.

Spreading data across disks to avoid I/O contention

✔  You can avoid bottlenecks by spreading data storage across multiple disks and multiple disk controllers.
✔  Put databases with critical performance requirements on separate devices. If possible, also use separate controllers from those used by other databases. Use segments as needed for critical tables and partitions as needed for parallel queries.
✔  Put heavily used tables on separate disks.
✔  Put frequently joined tables on separate disks.
✔  Use segments to place tables and indexes on their own disks.
✔  Ensure that indexes are properly tuned.
  • Sorting Tuning
Ensure that your sort area size is large enough so that most of your sorts are done in memory; not on disk.
  • Operating System Concerns
Ensure that no memory paging or swapping is occurring.
  • Lock Contention
Performance can be devastated if one session has to wait for another to free up a resource.
  • Wait Events
Wait events help locate where database issues may lie.

Conclusion

In an actual load test, it is essential to simulate the database as it is in production. Because of this, many database issues may be encountered. The solutions given above are general tuning mechanisms for any database, and they address most common database performance issues.

Thanks for reading this blog.

Tuesday, 28 February 2012

Performance Testing in the Cloud


What is Cloud?

A cloud consists of three components: one or more datacenters, a network, and a “zillion” devices. That is what it is all about: the components and their interaction. Ultimately, the user is interested in the end-to-end experience, regardless of the components.

A cloud is classified as private or public based on the location of the datacenter where the services are virtualized. In general, a public cloud is an environment that exists outside the company firewall; it could be a service or technology offered by a third-party vendor. A private cloud operates behind the firewall for the exclusive benefit of an organization and its customers.


Source: HP

Testing and the Cloud
While many companies are approaching cloud computing with cautious optimism, testing appears to be one area where they are willing to be more adventurous. There are several factors that account for this openness toward testing in the cloud:

Testing is a periodic activity and requires new environments to be set up for each project.
Test labs in companies typically sit idle for long periods, consuming capital, power and space. Approximately 50% to 70% of the technology infrastructure earmarked for testing is underutilized, according to both anecdotal and published reports.

Testing is considered an important but non-business-critical activity. Moving testing to the cloud is seen as a safe bet because it doesn’t include sensitive corporate data and has minimal impact on the organization’s business-as-usual activities.

Applications are increasingly becoming dynamic, complex, distributed and component-based, creating a multiplicity of new challenges for testing teams.
For instance, mobile and Web applications must be tested for multiple operating systems and updates, multiple browser platforms and versions, different types of hardware and a large number of concurrent users to understand their performance in real time. The conventional approach of manually creating in-house testing environments that fully mirror these complexities and multiplicities consumes huge capital and resources.

Why Opt for Cloud Computing as a Source for Performance Testing
Many companies use their Web sites for sales and marketing purposes. In fact, a company can spend millions of pounds creating engaging content and running promotional campaigns to draw users to its site. Unfortunately, if the site crashes or response times crawl, all that time, energy and money could be wasted.
A first case is when demand for a service varies with time. Provisioning a data center for the peak load it must sustain a few days per month leads to underutilization at other times, for example. Instead, Cloud Computing lets an organization pay by the hour for computing resources, potentially leading to cost savings even if the hourly rate to rent a machine from a cloud provider is higher than the rate to own one. A second case is when demand is unknown in advance.

For example, a web startup will need to support a spike in demand when it becomes popular, followed potentially by a reduction once some of the visitors turn away. Finally, organizations that perform batch analytics can use the “cost associativity” of cloud computing to finish computations faster: using 1000 EC2 machines for 1 hour costs the same as using 1 machine for 1000 hours. For the first case of a web business with varying demand over time and revenue proportional to user hours, the tradeoff can be captured as:

UserHours_cloud × (revenue − Cost_cloud) ≥ UserHours_datacenter × (revenue − Cost_datacenter / Utilization)

The left-hand side multiplies the net revenue per user-hour by the number of user-hours, giving the expected profit from using cloud computing. The right-hand side performs the same calculation for a fixed-capacity datacenter by factoring in the average utilization, including non-peak workloads, of the datacenter. Whichever side is greater represents the opportunity for higher profit.
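The tradeoff described above can be sketched numerically; every figure below is invented purely for illustration:

```python
# Assumed inputs (hypothetical figures).
user_hours = 10_000   # expected user-hours of demand
revenue = 0.50        # net revenue per user-hour
cost_cloud = 0.12     # cloud cost per user-hour (pay per use)
cost_dc = 0.08        # datacenter cost per user-hour at full utilization
utilization = 0.4     # average datacenter utilization, incl. non-peak periods

# Expected profit from using cloud computing.
profit_cloud = user_hours * (revenue - cost_cloud)

# Profit for a fixed-capacity datacenter; dividing the cost by average
# utilization charges the business for the idle capacity as well.
profit_dc = user_hours * (revenue - cost_dc / utilization)

print(round(profit_cloud, 2), round(profit_dc, 2))  # 3800.0 3000.0
```

With these assumptions the cloud wins even though its hourly rate is higher, because the datacenter's 40% utilization inflates its effective cost per user-hour.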

Cloud Areas and their Differentiators:
  • Private cloud is on premise, dedicated to one organization, and often located in the same datacenter(s) as the legacy applications; it addresses the needs of mission-critical applications and applications with access to large amounts of confidential data.
  • Hosted private cloud is really a variant of the previous category, located either in the client's or the hosting provider's datacenter, but managed by a hosting provider. It addresses the same needs as the private cloud.
  • Virtual private cloud is a single- or multi-tenant environment, located in a clearly defined geography, with documented and auditable security processes and procedures and clear SLAs, addressing the needs of business-critical applications. Formal contracts are established and payment is by consumption, but with regular invoices. Access to the environment can be over the internet, VPN or leased lines. HP calls this an Enterprise Class Cloud.
  • Public cloud is a multi-tenant environment providing IaaS, PaaS and/or SaaS services on a pay-per-use basis without formal contracts. Payment is typically via credit card. Such environments address the needs of web applications, test and development, and large-scale processing (provided not too much data needs to be transferred).


Source: HP

The first two categories are asset-intensive and typically do not lend themselves to a pay-per-use model; the latter two are pay-per-use. The first two are typically single-tenant, meaning they are used by one organization; the latter two are most often multi-tenant environments where multiple companies share the same assets.

Performance Testing in Cloud – Benefits & Limitations:
• Fast provisioning using preconfigured images. You can set up the infrastructure you need in minutes.

• Simplified security. All required protections are set up by default, including firewall, certificates, and encryption.

• Improved scalability. Leading load testing solution providers have negotiated with cloud providers to allow users of their software to employ more virtual machines (for the purpose of load testing) than are allowed by default.

• A unified interface for multiple cloud providers. Load testing solutions can hide provisioning and billing details, so you can take maximum advantage of the cloud in a minimum of time.

• Advanced test launching. You can save time and effort by defining and launching load generators in the cloud directly from the load testing interface.

• Advanced results reporting. Distinct results from each geographic region involved in the test are available for analysis.

Limitations:
Although testing from the cloud is, in many cases, more realistic than testing in the lab, simply moving to the cloud is not enough to ensure the most realistic tests. Real users often have access to less bandwidth than a load generator in a cloud data center. With a slower connection, the real user will have to wait longer than the load generator to download all the data needed for a web page or application. This has two major implications:

• Response times measured as-is from the cloud with virtually unlimited bandwidth are better than for real users. This can lead test engineers to draw the wrong conclusions, thinking that users will see an acceptable response time when in reality they will not.

• The total number of connections established with the server will increase, because on average, connections for real users will be open longer than connections for the load generator. This can lead to a situation in which the server unexpectedly refuses additional connections under load.
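A rough back-of-the-envelope sketch of the first implication, with purely illustrative numbers:

```python
# A 2 MB page downloaded at different effective bandwidths.
page_bytes = 2 * 1024 * 1024
server_time_s = 0.3  # time the server spends producing the response

def response_time(bandwidth_bps: float) -> float:
    """Approximate end-to-end time: server time plus raw transfer time."""
    return server_time_s + page_bytes * 8 / bandwidth_bps

cloud_generator = response_time(1_000_000_000)  # ~1 Gbps inside a cloud datacenter
real_user = response_time(10_000_000)           # ~10 Mbps home connection

print(f"{cloud_generator:.2f}s vs {real_user:.2f}s")
```

The load generator sees a sub-second response while the real user waits several times longer for the same page, which is exactly why many load testing tools offer bandwidth emulation for cloud-based generators.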

Conclusion: The benefits of cloud computing cannot be ignored by companies running performance tests that strive to overcome the constraints of their current IT hardware to simulate realistic environments, while struggling to justify the cost of investing in major upgrades.

Protocol – In Performance Testing View


In a performance testing context, “protocol” means the communication protocol used between physical systems, whether load generators, application servers or web servers.

The key elements of a protocol are:

Syntax: includes data formats and signal levels.
Semantics: includes control information and error handling.
Timing: includes speed matching and sequencing.

A communication protocol is a set of rules that determines how data is transmitted between systems; equivalently, it is a system of digital message formats and rules for exchanging those messages in or between computing systems.

Protocols include sending, receiving, authentication and error detection and correction capabilities. Protocols are used for communications between entities in a system. Entities use protocols in order to implement their service definitions.

Multiple protocols can be used in different circumstances, and communication protocols are often used as a suite, in layers. For example, the Internet protocol suite consists of the application, transport, internet, and network interface layers.

The following communication protocols for performance testing could be used by the Hexaware Performance Testing CoE:
  • Web – HTTP/HTTPS
  • J2EE
  • Citrix
  • .NET (Client & Web)
  • ERP – PeopleSoft, SAP, Siebel, Oracle Apps
  • Web Services
  • SQL
  • Client – Server (COM/DCOM)
  • Mobile
  • Action Message Format (AMF)
  • AJAX (Click and Script)
Classification schemes for protocols usually focus on domain of use and function. The communication protocol can be selected based on the application, or a performance testing tool adviser can be used to finalize the protocol.
From the above synopsis we can understand what a protocol is, how it works, and an example of protocol layering and suites, as well as which protocols are used by the Hexaware performance testing CoE. The final paragraph explains how the protocol is identified for a particular domain, application or business function.


Monday, 27 February 2012

Tips for Handling Recording Problems In LoadRunner


While using LoadRunner for our business process testing engagements, we encountered certain recording problems. We have listed below some of the most common problems and the steps to follow to troubleshoot them.

Problem 1: NOT PROXIED error in the recording events log.

This error occurs when there is some sort of spyware software installed on the system.

To troubleshoot this issue follow the below steps:

1. Use process explorer to get the list of dlls getting hooked into Internet Explorer.

2. Compare the list with the dlls from a machine where you can record.

3. Run Ad-Aware software. To download this, visit http://www.lavasoftusa.com/software/adaware/. This usually eliminates the spyware programs.

4. Make sure to check carefully the processes running on the machine. If you find any suspicious DLL\exe, just google the name and see if it’s a known spyware. If so, uninstall the software related to this DLL\executable.

5. Sort the list of DLLs by ‘Path Name’ in Process Explorer. The DLLs to scrutinize carefully are the ones which are not located in LoadRunner\bin and c:\windows\system32 or c:\winnt\system32.

Problem 2: “Cannot connect to remote server” in the recording log

This error occurs when communication to this server on that particular port is being filtered; ensure that it is not.

To troubleshoot this issue follow the below steps:

Try port mapping if it is an HTTPS site.
Go to Recording Options > Port Mapping and add the target server's IP address, with Port Number 443, Service ID HTTP, and Connection Type SSL. Make sure “Test SSL” returns success; if it does not, change the SSL ciphers and SSL version until it does.

Problem 3: “Connection prematurely aborted by the client” error in the recording log.

This error can occur when the recording proxy interferes with the connection between the client and the server on that particular port.

To troubleshoot this issue follow the below steps:

Try port mapping with Direct Trapping. Here’s how we can do direct trapping
a. Enable the Direct trapping option:
[HKEY_CURRENT_USER\Software\Mercury Interactive\LoadRunner\Protocols\WPlus\Analyzer]
“EnableWSPDebugging”=dword:00000001
“ShowWSPDirectTrapOption”=dword:00000001
“EnableWSPDirectTrapping”=dword:00000001

b. Before Recording Multi-Web, add port mapping entries for HTTP/TCP (and NOT SSL) connections.
Set their ‘Record Type’ from ‘Proxy’ to ‘Direct’.
c. Start recording.

Problem 4: Application sends a “Client Side” certificate to the server and waits for server authentication.

This error occurs because LoadRunner sits between the client and server: it takes the certificate information from the client and passes it on to the server. To do this, you need to have the certificate in .pem format.

To troubleshoot this issue follow the below steps:

Use openSSL to convert the client side certificate to .pem format and add it in the port mapping in recording options

In Internet Explorer:

1. Choose Tools > Internet Options. Select the Content tab and click Certificates.
2. Select a certificate from the list and click Export.
3. Click Next several times until you are prompted for a password.
4. Enter a password and click Next.
5. Enter a file name and click Next.
6. Click Finish

In Netscape:

1. Choose Communicator > Tools > Security Info.
2. Click on the Yours link in the Certificates category in the left pane.
3. Select a certificate from the list in the right pane, and click Export
4. Enter a password and click OK.
5. Enter a filename and save the information.

The resulting certificate file is in PKCS12 format. To convert the file to PEM format, use the openssl.exe utility located in the LoadRunner bin directory. To run the utility:
Open a DOS command window.

Set the current directory to the LoadRunner bin directory.
Type openssl pkcs12 -in <input_file> -out <output file.pem>
Enter the password you chose in the export process.
Enter a new password (it can be the same as before).
Enter the password again for verification.

In Recording Options > Port Mapping > check the option for “Use specified client side certificate” and point to the saved .pem file.

Problem 5: The server sends a certificate to the client for authorization, and a connection can be established only after the client authorizes it. This is mostly true for Java-based applications. The error we get is “No trusted certificate found”.

This error occurs because the recorder sits between the client and the server, so the certificate coming from the recorder must be authorized. Therefore, your wplusca.crt should be in the server's certificate repository.

To troubleshoot this issue follow the below steps:

Copy wplusca.crt file into the certificate repository of the server.

1. Locate keytool.exe executable (usually under JRE_ROOT\bin directory).
2. Add its directory to the PATH environment variable.
3. Locate cacerts file (with no extension) under JRE_ROOT\lib\security directory.
4. Copy the attached cacert.pem to this directory.
5. Make this to be the current directory in the command prompt.
6. Execute:
keytool -list -keystore cacerts
7. It will ask you for the password, reply:
“changeit”
8. It will list all the CA certificates installed (usually 25).
Notice the number of certificates.
9. Execute:
keytool -import -file cacert.pem -keystore cacerts -alias mercuryTrickyCA
10. It will ask you for the password, reply:
“changeit”
11. It will ask you if you are sure you want to add the certificate to the store, reply:
“yes”
12. Execute:
keytool -list -keystore cacerts
13. It will ask you for the password, reply:
“changeit”
14. It will list all the CA certificates installed.
The number of certificates should be bigger by 1

Problem 6: The recording log does not indicate any error, but the connection between the client and server ends abruptly. This can happen because of the ‘WPLUSCA’ certificate carrying a private key.

This error occurs because the recorder sends the ‘WPLUSCA’ certificate with a private key to the server for authentication while recording.

To troubleshoot this issue follow the below steps:

1. Go to LoadRunner\bin\certs and make a copy of the WPlusCA.crt file.
2. Open it in a notepad
3. Delete the portion between “Begin RSA Private Key” and “End RSA Private Key”
4. Save the file.
5. Now right click the file and choose “Install certificate” and follow the on screen instructions.
This will now send a certificate without the private key and if the server accepts the communication will continue.

Problem 7: “Failed to resolve hostname” error in the recording log

If the VuGen machine is unable to resolve the server name given in the URL, this error is thrown in the recording log and Internet Explorer may crash.

To troubleshoot this issue follow the below steps:

Modify the hosts file on the VuGen machine to map the server name to its IP address.

The above problems are some of the common ones encountered when using LoadRunner for performance testing. The troubleshooting steps above will help you fix these issues if they occur during your performance testing assignment.


Thursday, 23 February 2012

Capacity Planning in Performance Testing


What is Capacity Planning

Capacity Planning is the process of determining what type of hardware and software configuration is required to meet application needs. Capacity planning, performance benchmarks and validation testing are essential components of successful enterprise implementations. Capacity planning is an iterative process. A good capacity management plan is based on monitoring and measuring load data over time and implementing flexible solutions to handle variances without impacting performance.

The goal of capacity planning is to identify the right amount of resources required to meet service demands now and in the future. It is a proactive discipline with far-reaching impact, supporting:

• IT and business alignment, helping to show the cost and business need for infrastructure upgrades

• Hexaware's consolidation and virtualization strategies, ensuring that consolidated real and virtual system configurations will meet service levels

Capacity Planning Approach

Capacity planning means planning for efficient resource use by applications during development and deployment, but also once they are operational. It involves considering how different resources can be accessed simultaneously by different applications, and knowing when this is done in an optimal way. Large organizations and operational environments have high expectations in terms of capacity planning.
Capacity planning considers which types of configurations (clustered, non-clustered, etc.) should be taken into account, which types, forms, or categories of applications can run on the same server or cluster, and also what should be avoided when planning for capacity.

Challenges Faced

Capacity planning should be conducted when:

• Designing a new system
• Migrating from one solution to another
• Business processes and models have changed, thus requiring an update to the application architecture
• End user community has significantly changed in number, location, or function

Typical objective of capacity planning is to estimate:

• Number and speed of CPU Cores
• Required Network bandwidth
• Memory size
• Storage type and size

Key items influencing capacity:

• Number of concurrent users
• User workflows
• Architecture
• Tuning and implementation of best practices

Capacity planning is about how many resources an application uses; it implies knowing a system's profile. For instance, suppose you have two applications, A and B, each known to use certain amounts of Central Processing Unit (CPU), memory, disk and network resources when running as the only application on your machine, but you have only one machine. If application A uses only a little of one resource while application B uses much of the same resource, this is a simple case of capacity planning. Bear in mind, though, that when the applications are executed in parallel on the machine, the total resource usage is not a simple addition of their standalone usage. There could, for instance, be contention over overlapping memory portions that would make parallel execution impossible, and rewriting the code would be necessary to enable it.
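A first-cut sketch of this reasoning follows; the resource profiles are hypothetical, and a real plan would have to model contention rather than simple addition:

```python
# Hypothetical standalone resource profiles, as fractions of one machine's capacity.
profile_a = {"cpu": 0.20, "memory": 0.70, "disk": 0.10, "network": 0.05}
profile_b = {"cpu": 0.50, "memory": 0.40, "disk": 0.20, "network": 0.10}

def fits_naively(*profiles):
    """First-cut check: sum each resource across applications.

    Parallel usage is not really a simple addition (contention, caching,
    scheduling overhead), so this only flags obvious over-commitment.
    """
    for resource in profiles[0]:
        if sum(p[resource] for p in profiles) > 1.0:
            return False, resource
    return True, resource

ok, bottleneck = fits_naively(profile_a, profile_b)
print(ok, bottleneck)  # memory is over-committed: 0.70 + 0.40 exceeds one machine
```

Even this naive check identifies the binding resource, which is the starting point for deciding whether to add memory, split the applications, or reschedule workloads.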

The process of determining what type of hardware and software configuration is required to adequately meet application needs is called capacity planning.
Because the needs of an application are determined, among other things, by the number of users (in other words, the number of parallel accesses), capacity planning can also be defined as: how many users can a system handle before changes need to be made? Thus, when an application is deployed, one should consider how large it will be at first, but also how fast the number of users, servers and machines will grow, so that enough margin is left and the application need not be wholly redesigned because of, say, the addition of a single user.
To perform capacity planning, essential data is collected and analyzed to determine usage patterns and to project capacity requirements and performance characteristics. Tools are used to determine optimum hardware/software configurations.

Proposed Solution(s)

Bottlenecks, or areas of marked performance degradation, should be addressed while developing your capacity management plan. The objective of identifying bottlenecks is to meet your performance goals, not eliminate all bottlenecks. Resources within a system are finite. By definition, at least one resource (CPU, memory, or I/O) can be a bottleneck in the system. Planning for anticipated peak usage, for example, may help minimize the impact of bottlenecks on your performance objectives.
There are several ways to address system bottlenecks. Some common solutions include:

• Using Clustered Configurations
• Using Connection Pooling
• Setting the Max Heap Size on JVM
• Increasing Memory or CPU
• Segregation of Network Traffic



Using Clustered Configurations
Clustered configurations distribute workloads among multiple identical cluster member instances. This effectively multiplies the amount of resources available to the distributed process, and provides seamless failover for high availability.

Using Connection Pooling
To improve the performance by using existing database connections, you can limit the number of connections, timing of the sessions and other parameters by modifying the connection strings.
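A minimal sketch of the idea behind connection pooling, using a bounded queue to cap the number of live connections; the connection factory here is a stand-in for a real database driver:

```python
import queue

class ConnectionPool:
    """Bounded pool: at most `size` connections are ever created, then reused."""

    def __init__(self, make_connection, size=5, timeout=2.0):
        self._pool = queue.Queue(maxsize=size)
        self._timeout = timeout
        for _ in range(size):
            self._pool.put(make_connection())

    def acquire(self):
        # Blocks until a connection is free, which enforces the cap.
        return self._pool.get(timeout=self._timeout)

    def release(self, conn):
        self._pool.put(conn)

# Stand-in factory: real code would open a database connection here.
counter = iter(range(1000))
pool = ConnectionPool(lambda: f"conn-{next(counter)}", size=2)

c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()  # reuses c1 rather than opening a third connection
print(c1, c2, c3)
```

Reusing `c1` instead of opening a third connection is exactly the saving pooling gives: connection setup cost is paid once, and the cap protects the database from connection storms under load.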
Setting the Max Heap Size on JVM (Java Virtual Machines)
This is an application-specific tunable that enables a tradeoff between garbage collection times and the number of JVMs that can run on the same hardware. Large heaps are used more efficiently and often result in fewer garbage collections. More JVM processes offer more failover points.

Increasing Memory or CPU
Aggregating more memory and/or CPU on a single hardware resource allows localized communication between the instances sharing the same hardware. More physical memory and processing power on a single machine enables the JVMs to scale and run much larger and more powerful instances, especially 64-bit JVMs. Large JVMs tend to use memory more efficiently, and garbage collections tend to occur less frequently. In some cases, adding more CPU means that the machine can have more instruction and data cache available to the processing units, which means even higher processing efficiency.
Segregation of Network Traffic
Network-intensive applications can introduce significant performance issues for other applications using the same network. Segregating the network traffic of time-critical applications from that of network-intensive applications, so that they are routed over different network interfaces, may reduce performance impacts. It is also possible to assign different routing priorities to traffic originating from different network interfaces.

Business benefits-

• Increase revenue through maximum availability, decreased downtime, improved response times, greater productivity, greater responsiveness to market dynamics, greater return on existing IT investment

• Decrease costs through higher capacity utilization, more efficient processes, just-in-time upgrades, greater cost control

Future Direction/Long Term Focus

The capacity planning process produces a forecast, or plan, for the organization's future. Capacity planning is a process for determining the optimal way to satisfy business requirements, such as forecasted increases in the amount of work to be done, while at the same time meeting service level requirements. Future processing requirements can come from a variety of sources. Inputs from management may include expected growth in the business, requirements for implementing new applications, IT budget limitations and requests for consolidation of IT resources.

Recommendations

The basic steps involved in developing a capacity plan are:

1. To determine service level requirements
a. Define workloads
b. Determine the unit of work
c. Identify service levels for each workload

2. To analyze current system capacity
a. Measure service levels and compare to objectives
b. Measure overall resource usage
c. Measure resource usage by workload
d. Identify components of response time

3. To plan for the future
a. Determine future processing requirements
b. Plan future system configuration

By following these steps, you can help ensure that your organization is prepared for the future and that service level requirements will be met using an optimal configuration.

Tuesday, 21 February 2012

The Grinder – An open source Performance testing alternative


Owing to cut-throat competition, IT companies are striving to stay one step ahead of their competitors to woo prospective clients. Cutting costs without compromising on quality has become the effective strategy. Open source tools not only promise to cut costs drastically, but are also more flexible and provide certain unique features of their own. The huge expense involved in procuring performance testing tools has urged the testing community to look for open source alternatives that go easier on the budget.

The Grinder is an open source performance testing tool originally developed by Paco Gomez and Peter Zadrozny. It is a Java load testing framework that makes it easy to run a distributed test using many load injector machines.

1.1          Why Grinder?

  • The Grinder can be a viable open source option for performance testing. It is freely available under a BSD-style open-source license and can be downloaded from SourceForge.net: http://www.sourceforge.net/projects/grinder
  • Test scripts are written in Jython, which makes the tool simple yet powerful. As Jython is an implementation of the Python programming language written in Java, all of the advantages of Java are inherent here, and no separate plug-ins or libraries are required
  • The Grinder makes use of a powerful distributed Java load testing framework that allows simulation of multiple user loads across different “agents”, which can be managed by a centralized controller or “console”. From this console you can edit the test scripts and distribute them to the worker processes as required
  • It is a surprisingly lightweight tool, fairly easy to set up and run, and it takes almost no time to get started. Installation simply involves downloading and configuring the recorder, console and agent batch files. The Grinder 3 is distributed as two zip files. The grinder.properties file can be customized to suit the requirements of each test execution
  • From a developer's point of view, the Grinder is a preferred load testing tool: developers can opt to test their own application, i.e. programmers get to test the interior tiers of their own application
  • The Grinder has a strong support base in the form of mailing lists, i.e. http://sourceforge.net/
  • The Grinder has excellent compatibility with Grinder Analyzer, which is also available under an open source license. The analyzer extracts data from the Grinder logs and generates reports and graphs covering response time, transactions per second, and network bandwidth used
  • Besides HTTP and HTTPS, the Grinder supports internet protocols such as POP3, SMTP, FTP, and LDAP
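To give a flavour of the Jython scripting involved, a canonical Grinder 3 test script looks roughly like the following. It runs inside the Grinder agent's Jython runtime, not as standalone Python, and the URL is a placeholder:

```python
# Grinder 3 test script skeleton (runs under a Grinder agent, not CPython).
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Each Test has a number and a description; statistics are reported per test.
home_page = Test(1, "Home page").wrap(HTTPRequest())

class TestRunner:
    """The Grinder creates one TestRunner instance per simulated user thread."""

    def __call__(self):
        # Called once per run; timings are recorded automatically
        # around the wrapped request.
        result = home_page.GET("http://localhost:8080/")
```

The script is referenced from grinder.properties and distributed to the agents from the console.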

1.2          Difficulties that should be considered before opting for the Grinder

  • No user-friendly interface is provided for coding, scripting (parameterization, correlation, inserting custom functions, etc.) and other enhancements at the test-script preparation level
  • Annoying syntax errors creep into the code easily, because Jython scripting demands rigid syntax and indentation
  • Much depends on the tester’s ability to understand the interior tiers of the application, unlike with commercial tools, where the tester can follow the standard procedures without any insight into the intricate complexity of the application and still complete the job successfully
  • It depends on Grinder Analyzer for analysis, report generation, etc.
  • The protocols supported by The Grinder are limited, whereas commercial tools such as LoadRunner and Silk Performer support all the web-based protocols. This is a major limiting factor, as web-based applications these days use multiple protocols for communication
  • Unlike LoadRunner and other vendor-licensed tools, it does not offer effective monitoring solutions or in-depth diagnostic capabilities. There is also no separate user-friendly interface component dedicated to analyzing the test results
  • More support is needed in the form of forums, blogs and user communities
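To give a feel for the Jython scripting (and the strict indentation it demands), here is a sketch of a minimal Grinder 3 test script. It follows The Grinder's standard TestRunner structure, but it only runs inside a Grinder worker process (the net.grinder classes ship with The Grinder, not standalone Python), and the URL and test name here are hypothetical:

```python
# Minimal Grinder 3 Jython script sketch; runs under a Grinder worker
# process, not under standalone CPython.
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

# Wrap an HTTPRequest in a Test so The Grinder records timings for it
test1 = Test(1, "Home page")
request = test1.wrap(HTTPRequest())

class TestRunner:
    # The Grinder invokes __call__ once per run, in each worker thread
    def __call__(self):
        result = request.GET("http://example.com/")  # hypothetical URL
        grinder.logger.info("Status: %s" % result.statusCode)
```

One indentation slip in a script like this aborts the run with a Jython syntax error, which is the pain point noted above.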
In short, Tom Braverman sums it up brilliantly in a post to the Grinder users mailing list:
“I don’t try and persuade my clients that The Grinder is a replacement for LoadRunner, etc. I tell them that The Grinder is for use by the developers and that they’ll still want the QA team to generate scalability metrics using LoadRunner or some other tool approved for the purpose by management.”

For an open-source testing tool, it has to be admitted that The Grinder has the features to hold its own amidst the commercial alternatives.


Wednesday, 1 February 2012

SIX Trends to FIX the QA Needs


The quality assurance landscape is undergoing a major transformation as QA organizations try to align their goals with the business goals of their companies.

QA has a tough balancing act to perform — tackling business risks as well as cost reduction and ROI concerns, while building agility in their organizations to respond to business goals.

Testing teams have long been viewed by IT departments as insurance, assuring them and their business partners about what is being delivered. Over the years, IT departments have spent more and more time and money trying to ascertain the delivery-worthiness of code.

More than ever, business teams today are asking how testing teams could deliver better insights and greater value into what is being produced by development teams. The argument is that if testing teams could serve as quality gates throughout the development lifecycle, there would be fewer surprises towards the end, and fewer trade-offs between inadequate functionality and faster time-to-market. This paves the way for the following emerging trends in QA.

Six key quality assurance trends are emerging:


1st Trend: Embracing Early Lifecycle Validation to Drive Down Costs and Improve Time-to-Market


The adoption of early lifecycle validation helps QA organizations to fix defects early in the lifecycle, thus significantly reducing risks and lowering total cost of ownership.

Methodologies gaining traction include:

* requirements/model-based testing
* early involvement and lifecycle testing
* risk-based testing
* risk-based security testing
* predictive performance modeling

2nd Trend: Increased Adoption of Test Lifecycle Management, Testing Metrics and Automation Solutions to Improve Overall Testing Processes


As QA organizations work to build greater quality into applications, they are adopting solutions such as test lifecycle management and automation technologies. “These solutions help to drive greater traceability throughout the testing lifecycle and to automate all stages of the lifecycle, with the aim of achieving overall efficiency and ROI.”

New frameworks and dashboards are also emerging for defining, measuring and monitoring testing metrics. “All of these metrics seek to enable quick decision-making and drive greater efficiency within existing or emerging testing processes, frameworks and solutions.”

3rd Trend: More Domain-based Testing


“Domain excellence is becoming a key factor in the testing industry, forcing QA organizations to build or buy point/platform-based solutions that combine core business processes and advanced testing frameworks.”

Examples of such testing solutions include regulatory-compliance solutions for SOX and HIPAA, and solutions for specialized processes such as POS, e-commerce, and banking.

4th Trend: The Emergence of Non-functional Testing Solutions Aimed at Enhancing the Customer Experience


The widespread use of e-commerce is forcing quality assurance organizations to deploy more solutions for measuring and enhancing end-customer experience.

“This is putting stress on the requirement for non-functional validation services and solutions”.

Key emerging areas include: testing for usability and accessibility, and predictive performance modeling.

5th Trend: The Development of Testing Frameworks for Newer Technologies


Newer technologies such as SOA and cloud computing pose a different set of testing challenges to established technologies.

“Traditional models and frameworks of testing don’t work so well with these new technologies, so QA organizations are creating new models and frameworks to address the issues raised.”

6th Trend: Special Focus on ERP Testing


For years, organizations have implemented ERP packages without thinking much about the testing complexities that would emerge as the packages evolve in changing IT environments.

Consequently, these packages today require specialized skills and methodologies to support business goals, implementation testing, and smooth rollouts and upgrades.

“QA testing is one of the key pain points in ERP implementations and upgrades today.”

To sum it up, the question still remains: where, when and how can these techniques be used? The benefits will presumably differ across situations, depending among other things on the nature of the application. Needless to say, it would be very interesting to consider some of these techniques and discuss the practical implications of these emerging trends.

Concurrent User Estimation


Concurrent user estimation is an important step before performance validation and capacity planning, as it is directly related to the consumption of system resources. Therefore, before entering the load-testing phase, we need to determine the peak user load, or the maximum concurrent user load, for designing a workload model. People often estimate the number of concurrent users by intuition or wild guessing, with little justification, which frequently leads to improper performance testing and capacity planning. In this article we share a very reliable method, proposed by Eric Man Wong, for calculating the concurrent number of users from estimated and justified parameters.

The method estimates the peak user load by first calculating the average number of concurrent users, based on the total number of user sessions and the average length of a user session.

1. Estimating the Average number of concurrent users

For calculating the average concurrent user load, we need the following parameters:
  • Period of concern (T): the time duration over which we count the user sessions.
  • Total number of user sessions (n): the number of user sessions within that time duration.
  • Average length of user sessions (L): the length of a user session is the amount of time that a particular user takes to complete his activity (during which he consumes a certain amount of system resources). The average length of user sessions is simply the mean of the individual session lengths:

L = (L1 + L2 + … + Ln) / n

where Li is the length of the i-th of the n user sessions. The average length of a user session can be estimated by observing how a sample of users uses the system.

A user session is a time interval defined by a start time and an end time. Within a single session, we assume that the user is in an active state, meaning that the user is consuming a certain share of the system resources: between the start time and the end time, one or more system resources are being held. The number of concurrent users at any particular time is defined as the number of user sessions into which that time instant falls. This is illustrated in the following example.

Each horizontal line segment represents a user session. Since the vertical line at time t0 intercepts three user sessions, the number of concurrent users at time t0 is equal to three. Now let us focus on the time interval from 0 to an arbitrary time instant T. The following result can be mathematically proven: the average number of concurrent users over [0, T] equals the total session time falling within [0, T], divided by T. Alternatively, if the total number of user sessions from time 0 to T equals n, and the average length of a user session equals L, then

C = (n × L) / T

[NOTE: In the above diagram, t0 represents a particular instant of time, whereas in the formula T denotes a duration, i.e. the period between two time instants t1 and t2.]
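A quick numeric sketch of C = (n × L) / T, using hypothetical figures (3,000 sessions in a one-hour window, 180-second average session length):

```python
# Average concurrent users: C = (n * L) / T
n = 3000      # total user sessions during the period of concern
L = 180.0     # average session length, in seconds
T = 3600.0    # period of concern (1 hour), in seconds

C = (n * L) / T
print(C)  # 150.0 average concurrent users
```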

2.  Estimating the peak number of concurrent users
For determining the peak user load, we make use of some basic probability theory in the following manner. We want the probability of X concurrent users occupying the system at a particular time, for which we use the Poisson distribution; we then use the normal approximation to determine the peak user load.

Assume that user sessions arrive according to a Poisson process. Under this assumption, it can be proven that the number of concurrent users at any time instant also has a Poisson distribution:

P(X = k) = e^(−C) × C^k / k!

where C is the average number of concurrent users found using the formula C = (n × L) / T.

It is well known that a Poisson distribution with mean C can be approximated by a normal distribution with mean C and standard deviation √C. Denoting the number of concurrent users by X, this implies that (X − C)/√C has the standard normal distribution with mean 0 and standard deviation 1. Looking up the statistical table for the normal distribution, we have the following result:

P(X < C + 3√C) ≈ 99.87%

The above equation means that the probability of the number of concurrent users being smaller than C + 3√C is 99.87%. This probability is large enough for most purposes, so we can approximate the peak number of concurrent users by C + 3√C.
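Putting the whole method together, here is a small sketch that computes the peak from the average, again with hypothetical figures (3,000 sessions of 180 seconds average length over a one-hour window):

```python
import math

def peak_concurrent_users(n, L, T):
    """Eric Man Wong's estimate: average C = n*L/T,
    peak ~= C + 3*sqrt(C) (the 99.87% upper bound)."""
    C = (n * L) / T
    return C + 3 * math.sqrt(C)

# Hypothetical figures: 3,000 sessions, 180 s average length, 1-hour window
peak = peak_concurrent_users(3000, 180.0, 3600.0)
print(round(peak, 1))  # 186.7 (C = 150 plus 3 * sqrt(150), about 36.7)
```

Note how modest the safety margin is: going from the average (150) to the 99.87% peak (about 187) adds only 25%, because the standard deviation grows as the square root of C.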

The simplicity of determining the peak concurrent users just from the average concurrent user load makes this method highly efficient. Eric Man Wong’s method remains a very reliable way to build a realistic and sensible workload model for the performance testing activity.
