Thursday, 29 March 2012

Strategies For Testing Data Warehouse Applications


Introduction:

There is an exponentially increasing cost associated with finding software defects later in the development lifecycle. In data warehousing this is compounded by the additional business cost of using incorrect data to make critical business decisions. Given the importance of early defect detection, let's first review the general goals of testing an ETL application.

The sections below describe the common strategies used to test a data warehouse system:
Data completeness: 

Ensures that all expected data is loaded into the target table.

1. Compare record counts between source and target, and check for any rejected records.
2. Check that data is not truncated in any column of the target table.
3. Check that unique values are loaded into the target; no duplicate records should exist.
4. Perform boundary value analysis (e.g., only data with year >= 2008 should be loaded into the target).
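
Checks 1, 3 and 4 lend themselves to simple SQL comparisons. Below is a minimal sketch in Python, using an in-memory SQLite database as a stand-in for the real source and target connections; the table and column names (src_orders, tgt_orders, order_id, order_year) are invented for illustration.

import sqlite3

# In-memory SQLite stands in for the real source/target connections;
# src_orders, tgt_orders, order_id and order_year are invented names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, order_year INTEGER);
    CREATE TABLE tgt_orders (order_id INTEGER, order_year INTEGER);
    INSERT INTO src_orders VALUES (1, 2008), (2, 2009), (3, 2007);
    INSERT INTO tgt_orders VALUES (1, 2008), (2, 2009);
""")

# Check 1: record counts - source rows passing the load filter vs. target rows.
src = conn.execute(
    "SELECT COUNT(*) FROM src_orders WHERE order_year >= 2008").fetchone()[0]
tgt = conn.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
assert src == tgt, f"count mismatch: {src} source rows vs {tgt} target rows"

# Check 3: no duplicate keys should exist in the target.
dups = conn.execute("""
    SELECT order_id, COUNT(*) FROM tgt_orders
    GROUP BY order_id HAVING COUNT(*) > 1""").fetchall()
assert not dups, f"duplicate keys in target: {dups}"

# Check 4: boundary - nothing before 2008 should have been loaded.
early = conn.execute(
    "SELECT COUNT(*) FROM tgt_orders WHERE order_year < 2008").fetchone()[0]
assert early == 0, "records outside the year boundary were loaded"

print("completeness checks passed")

In a real project the same queries run against the actual source and target databases, with the load filter taken from the mapping specification.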

Data Quality:

1. Number check: if numbers carry a prefix in the source (e.g., xx_30) but the target expects only the numeric part (30), validate that the value is loaded without the prefix (xx_).

2. Date check: dates must follow the agreed date format, and the format should be consistent across all records (e.g., the standard yyyy-mm-dd).

3. Precision check: values should be stored with the expected precision in the target table.

Example: the source value 19.123456 might need to appear in the target as 19.123, or as a rounded value, depending on the target column definition.

4. Data check: based on business logic, records that do not meet certain criteria should be filtered out.
Example: only records where date_sid >= 2008 and GLAccount != 'CM001' should be loaded into the target table.

5. Null check: certain columns should be null, based on business requirements.
Example: the Termination Date column should be null unless the employee's Active Status column is "T" or "Deceased".
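
As an illustration, checks 2, 3 and 5 can be scripted against rows extracted from the target table. This is only a sketch under assumed column names (date_sid, amount, active_status, termination_date); the real rules come from the mapping document.

from datetime import datetime

# Stand-ins for rows fetched from the target table; the column names
# (date_sid, amount, active_status, termination_date) are invented.
rows = [
    {"date_sid": "2009-03-15", "amount": 19.123,
     "active_status": "T", "termination_date": "2009-04-01"},
    {"date_sid": "2010-01-02", "amount": 7.5,
     "active_status": "A", "termination_date": None},
]

for row in rows:
    # Check 2 (date): raises ValueError if the yyyy-mm-dd format is violated.
    datetime.strptime(row["date_sid"], "%Y-%m-%d")

    # Check 3 (precision): no more than three decimal places after the load.
    assert round(row["amount"], 3) == row["amount"], "unexpected precision"

    # Check 5 (null): termination_date populated only for 'T' or 'Deceased'.
    if row["active_status"] not in ("T", "Deceased"):
        assert row["termination_date"] is None, "termination_date should be null"

print("data quality checks passed")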

Note: Data cleanness rules are decided during the design phase.

Data cleanness:

Unnecessary columns should be deleted before loading into the staging area.

1.  Example: If a name column contains extra spaces, they must be trimmed before the load into the staging area; this can be done with an Expression transformation.

2. Example: Suppose the telephone number and the STD code arrive in separate columns but the requirement says they should be in one column; an Expression transformation can concatenate the two values into a single column.
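
The same two cleansing rules can be expressed in a few lines of Python, mirroring what the Expression transformation does before the staging load; the field names here are illustrative only.

record = {"name": "  John Smith ", "std_code": "044", "phone": "23456789"}

cleansed = {
    # Example 1: trim surrounding spaces from the name column.
    "name": record["name"].strip(),
    # Example 2: concatenate STD code and telephone number into one column.
    "telephone": record["std_code"] + record["phone"],
}
print(cleansed)  # {'name': 'John Smith', 'telephone': '04423456789'}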

Data Transformation: verify that all the business logic implemented through ETL transformations is correctly reflected in the target data.

Integration testing:

Ensures that the ETL process functions well with other upstream and downstream processes.

Example:
1.  Downstream: Suppose you change the precision of a column in one transformation. If, say, the "EMPNO" column has a data type of size 16, that precision should be the same in every transformation wherever the "EMPNO" column is used.

2.  Upstream: If the source is SAP BW, data is extracted through ABAP code that acts as the interface between SAP BW and the mapping. Whenever an existing mapping is modified, the ABAP code must be regenerated in the ETL tool (Informatica); if we don't, wrong data will be extracted, because the ABAP code is not updated.

User-acceptance testing:

Ensures the solution meets users’ current expectations and anticipates their future expectations.
Example: make sure no values are hard-coded.

Regression testing:

Ensures existing functionality remains intact each time a new release of code is completed.

Conclusion:

Taking these considerations into account during the design and testing portions of building a data warehouse will ensure that a quality product is produced and prevent costly mistakes from being discovered in production.

Thursday, 15 March 2012

Automation Tool Selection Recommendation


  • Overview
  • Information Gathering
  • Tools and Vendors
  • Evaluation Criteria
  • Tools Evaluation
  • Matrix
  • Conclusion
Overview
“Automated Testing” means automating the manual testing process currently in use. This requires that a formalized “manual testing process” currently exists in the company or organization. Minimally, such a process includes:

–        Detailed test cases, including predictable “expected results”, which have been developed from Business Functional Specifications and Design documentation.

–        A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases are able to be repeated each time there are modifications made to the application.

Information Gathering

The following are sample questions to ask testers who have been using some of these testing tools:

How long have you been using this tool and are you basically happy with it?

How many copies/licenses do you have and what hardware and software platforms are you using?

How did you evaluate and decide on this tool and which other tools did you consider before purchasing this tool?

How does the tool perform and are there any bottlenecks?

What is your impression of the vendor (commercial professionalism, on-going level of support, documentation and training)?

Tools and Vendors
  • Robot – Rational Software
  • WinRunner 7 – Mercury
  • QA Run 4.7 – Compuware
  • Visual Test – Rational Software
  • Silk Test – Segue
  • QA Wizard – Seapine Software
Tools Overview

Robot – Rational Software

–        IBM Rational Robot v2003 automates regression, functional and configuration testing for e-commerce, client/server and ERP Applications. It’s used to test applications constructed in a wide variety of IDEs and languages, and ships with IBM Rational TestManager. Rational TestManager provides desktop management of all testing activities for all types of testing.

WinRunner 7 – Mercury

–        Mercury WinRunner is a powerful tool for enterprise wide functional and regression testing.

–        WinRunner captures, verifies, and replays user interactions automatically to identify defects and ensure that business processes work flawlessly upon deployment and remain reliable.

–        WinRunner allows you to reduce testing time by automating repetitive tasks and optimize testing efforts by covering diverse environments with a single testing tool.

QA Run 4.7 – Compuware

–        With QA Run, programmers get the automation capabilities they need to quickly and productively create and execute test scripts, verify tests and analyze test results.

–        Uses an object-oriented approach to automate test script generation, which can significantly increase the accuracy of testing in the time you have available.

Visual Test 6.5 – Rational Software

–        Based on the BASIC language and used to simulate user actions on a User Interface.

–        Is a powerful language providing support for pointers, remote procedure calls, working with advanced data types such as linked lists, open-ended hash tables, callback functions, and much more.

–        Includes a host of utilities for querying an application to determine how to access it with Visual Test, plus screen capture/comparison, a script executor, and a scenario recorder.

Silk Test – Segue

–        Is an automated tool for testing the functionality of enterprise applications in any environment.

–        Designed for ease of use, Silk Test includes a host of productivity-boosting features that let both novice and expert users create functional tests quickly, execute them automatically and analyze results accurately.

–        In addition to validating the full functionality of an application prior to its initial release, users can easily evaluate the impact of new enhancements on existing functionality by simply reusing existing test cases.

QA Wizard – Seapine Software

–        Completely automates the functional regression testing of your applications and Web sites.

–        It’s an intelligent object-based solution that provides data-driven testing support for multiple data sources.

–        Uses a scripting language that includes all of the features of a modern structured language, including flow control, subroutines, constants, conditionals, variables, assignment statements, functions, and more.

Evaluation Criteria

  • Record and Playback
  • Object Mapping
  • Web Testing
  • Object Identity Tool
  • Environment Support
  • Extensible Language
  • Cost
  • Integration
  • Ease of Use
  • Image Testing
  • Database Tests
  • Test/Error Recovery
  • Data Functions
  • Object Tests
  • Support

Rating scale: 3 = Basic, 2 = Good, 1 = Excellent
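
To turn such a matrix into a decision, each tool's ratings can be weighted by how important each criterion is to your organization. The sketch below uses invented weights and ratings purely for illustration (with this scale, lower totals are better).

# Hypothetical weights and ratings, invented for illustration only.
# Scale: 1 = Excellent, 2 = Good, 3 = Basic, so lower weighted totals win.
weights = {"Record and Playback": 3, "Web Testing": 3,
           "Cost": 2, "Extensible Language": 1}

ratings = {
    "Tool A": {"Record and Playback": 1, "Web Testing": 2,
               "Cost": 3, "Extensible Language": 1},
    "Tool B": {"Record and Playback": 2, "Web Testing": 1,
               "Cost": 2, "Extensible Language": 2},
}

for tool, scores in ratings.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(tool, "weighted total:", total)

best = min(ratings, key=lambda t: sum(weights[c] * ratings[t][c] for c in weights))
print("best candidate:", best)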

Tool Selection Recommendation

Tool evaluation and selection is a project in its own right.

It can take between 2 and 6 weeks. It will need team members, a budget, goals and timescales.
There will also be people issues i.e. “politics”.

Start by looking at your current situation
– Identify your problems
– Explore alternative solutions
– Realistic expectations from tool solutions
– Are you ready for tools?

Make a business case for the tool

–What are your current and future manual testing costs?
–What are initial and future automated testing costs?
–What return will you get on investment and when?

Identify candidate tools

– Identify constraints (economic, environmental, commercial, quality, political)
– Classify tool features into mandatory & desirable
– Evaluate features by asking questions to tool vendors
– Investigate tool experience by asking questions to other tool users
– Plan and schedule in-house demonstrations by vendors
– Make the decision

Choose a test tool that best fits the testing requirements of your organization or company.

An “Automated Testing Handbook” is available from the Software Testing Institute (www.ondaweb.com/sti), which covers all of the major considerations involved in choosing the right test tool for your purposes.

Wednesday, 7 March 2012

Performance Counters And Their Values For Performance Analysis


Performance Counters:
Performance counters are used to monitor system components such as processors, memory, network and I/O devices. Performance counters are organized and grouped into performance counter categories. For instance, the processor category contains all counters related to the operation of the processor, such as processor time, idle time, interrupt time and so forth. If performance counters are used in the application, they can publish performance-related data to compare against acceptable criteria.
The number of counter parameters to be considered by load testers/designers varies greatly based on the type and size of the application to be tested. Some of the performance counters and their threshold values used for Hexaware performance analysis are as follows:
Memory Counters:
Memory: Available MBytes - Describes the amount of physical RAM available to processes running on the system.
Threshold to watch for:
A consistent Available MBytes value of less than 20 to 25 percent of installed RAM is an indication of insufficient memory. Values below 100 MB may indicate memory pressure.
Note: This counter displays the last observed value only. It is not an average.
Memory: Pages/sec - Indicates the rate at which pages are read from or written to disk to resolve hard page faults.
Threshold to watch for:
Memory: Pages/sec consistently higher than 5 indicates a possible bottleneck.
Process: Private Bytes: _Total - Indicates the current allocation of memory that cannot be shared with other processes. This counter can be used to identify memory leaks in .NET applications.
Process: Working Set: _Total - This is the amount of physical memory being used by all processes combined. If the value for this counter is significantly below the value for Process: Private Bytes: _Total, it indicates that processes are paging too heavily. A difference of more than 10% is probably significant.
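
On Windows these counters are read with Performance Monitor (or the typeperf command-line tool). As a rough, cross-platform sketch, the third-party psutil library exposes a comparable available-memory figure; the 20-25 percent rule of thumb above is applied here for illustration (Pages/sec has no direct psutil equivalent).

import psutil  # third-party: pip install psutil

mem = psutil.virtual_memory()
available_pct = mem.available / mem.total * 100

# Thresholds from above: < 20-25% of installed RAM, or < 100 MB, suggests
# memory pressure. This is a single sample; watch for consistently low values.
if available_pct < 20 or mem.available < 100 * 1024 * 1024:
    print(f"WARNING: only {available_pct:.1f}% of RAM available")
else:
    print(f"OK: {available_pct:.1f}% of RAM available")
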
Processor Counters:
% Processor Time (_Total instance) - Percentage of elapsed time the CPU is busy executing a non-idle thread (an indicator of processor activity).
Threshold to watch for:
% Processor Time sustained at or above 85% may indicate that processor performance (for that load) is the limiting factor.
% Privileged Time - Percentage of time threads run in privileged mode (file or network I/O, or memory allocation)
Threshold to watch for:
% Privileged Time consistently over 75 percent indicates a bottleneck.
Processor Queue Length - The number of tasks ready to run that are waiting for processor time.
Threshold to watch for:
Processor Queue Length greater than 2 indicates a bottleneck.
Note: High values may not necessarily be bad for % Processor Time. However, if other processor-related counters, such as % Privileged Time or Processor Queue Length, are increasing linearly, high CPU utilization may be worth investigating.
  • Less than 60% consumed = Healthy
  • 60% – 90% consumed = Monitor or Caution
  • 91% – 100% consumed = Critical or Out of Spec
System\Context Switches/sec - A context switch occurs when a higher-priority thread preempts a lower-priority thread that is currently running; a high rate can indicate that too many threads are competing for processor time. If processor utilization is low and very low levels of context switching are seen, it could indicate that threads are blocked.
Threshold to watch for:
As a general rule, context switching rates of less than 5,000 per second per processor are not worth worrying about. If context switching rates exceed 15,000 per second per processor, then there is a constraint.
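
A rough psutil-based sketch of the processor thresholds above; note that it takes a single sample, whereas the thresholds speak of sustained values, so in practice you would sample in a loop over the test duration.

import time
import psutil  # third-party: pip install psutil

# % Processor Time, averaged over a 1-second sample.
cpu_pct = psutil.cpu_percent(interval=1)

# Context switches/sec, derived from the cumulative counter.
before = psutil.cpu_stats().ctx_switches
time.sleep(1)
per_sec = psutil.cpu_stats().ctx_switches - before
per_cpu = per_sec / psutil.cpu_count()

if cpu_pct >= 85:
    print(f"CPU at {cpu_pct:.0f}% - possible processor bottleneck")
if per_cpu > 15000:
    print(f"{per_cpu:.0f} context switches/sec/processor - constraint")
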
Disk Counters:
Physical Disk (instance)\Disk Transfers/sec
To monitor disk activity, we can use this counter. When the measurement goes above 25 disk I/Os per second, disk response time is likely to be poor (which may well translate into a potential bottleneck). To further uncover the root cause, we use the next counter.
Physical Disk (instance)\% Idle Time
This counter measures the percentage of time the hard disk was idle during the measurement interval. If it falls below 20%, read/write requests are likely queuing up for a disk that cannot service them in a timely fashion. In that case it is time to upgrade the hardware to faster disks, or to scale out the application to better handle the load.
Avg. Disk sec/Transfer - The number of seconds it takes to complete one disk I/O.
Avg. Disk sec/Read - The average time, in seconds, of a read of data from the disk.
Avg. Disk sec/Write - The average time, in seconds, of a write of data to the disk.
  • Less than 10 ms: very good
  • Between 10 and 20 ms: okay
  • Between 20 and 50 ms: slow, needs attention
  • Greater than 50 ms: serious I/O bottleneck
Note: These three counters should consistently have values of approximately 0.020 (20 ms) or lower, and should never exceed 0.050 (50 ms).
Source: Microsoft
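
Disk Transfers/sec and Avg. Disk sec/Transfer can be approximated from psutil's cumulative disk counters, as in the rough sketch below (field availability varies by operating system; this is illustrative, not a replacement for the PerfMon counters).

import time
import psutil  # third-party: pip install psutil

def totals():
    io = psutil.disk_io_counters()  # cumulative since boot
    return io.read_count + io.write_count, io.read_time + io.write_time  # ms

count0, busy0 = totals()
time.sleep(5)
count1, busy1 = totals()

transfers = count1 - count0
transfers_per_sec = transfers / 5.0
# Approximates Avg. Disk sec/Transfer: busy milliseconds per transfer.
avg_sec_per_transfer = (busy1 - busy0) / 1000.0 / transfers if transfers else 0.0

if transfers_per_sec > 25:
    print(f"{transfers_per_sec:.0f} disk I/Os per second - check % Idle Time")
if avg_sec_per_transfer > 0.050:
    print(f"{avg_sec_per_transfer * 1000:.0f} ms per transfer - serious I/O bottleneck")
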
Network Counters:
Network Interface: Output Queue Length - This is the number of packets in queue waiting to be sent. A bottleneck needs to be resolved if there is a sustained average of more than two packets in a queue.
Threshold to watch for:
If greater than 3 for 15 minutes or more, NIC (Network Interface Card) is bottleneck.
Network Segment: %Network Utilization - % of network bandwidth in use on this segment.
Threshold to watch for:
For Ethernet networks, if the value is consistently at 50%-70% or above, this segment is becoming a bottleneck.
Conclusion: These values may not be exact threshold limits, but they provide a baseline to work from in performance analysis.

LoadRunner Runtime Settings – Multithreading Options


Performance testers are confronted with a classic dilemma when they execute a script in LoadRunner: should the Vuser run as a thread or as a process?

1.1  Difference between a thread and a process 

A Process

  • Let us consider a process as an independent entity or unit that has an exclusive virtual address space for itself.
  • A process can interact with another process only through IPC (inter process communication). More than one process could run at any given time but no two processes can share the same memory address space.
E.g. when we open an application, say Notepad, on Windows, a notepad.exe process is displayed in Task Manager under the Processes tab. If we open another Notepad, a new notepad.exe process appears. Each such process has its own virtual address space.

A Thread

  • Threads are contained inside a process. More than one thread can exist within the same process and can share the memory space between them.
  • The advantage here is that multiple threads can share the same memory space; i.e., when one thread is idle, another thread can utilize the resource, so a faster execution rate is achieved.
  • A memory space can be accessed by another thread if one thread remains idle for a long time.
  • Threads can also access common data structures if required.
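
A small Python illustration of the difference: the thread mutates the parent's dictionary because it shares the process's memory, while the child process gets its own address space (a copy) and leaves the parent's dictionary untouched.

import threading
import multiprocessing

shared = {"hits": 0}

def work():
    shared["hits"] += 1  # mutate the (possibly shared) dictionary

if __name__ == "__main__":
    # A thread runs inside this process and shares its memory,
    # so it mutates our dictionary in place.
    t = threading.Thread(target=work)
    t.start(); t.join()
    print("after thread:", shared["hits"])     # 1

    # A child process gets its own virtual address space,
    # so the parent's dictionary is left untouched.
    p = multiprocessing.Process(target=work)
    p.start(); p.join()
    print("after process:", shared["hits"])    # still 1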

1.2  Multithreading

While defining the runtime settings in LoadRunner, we have to choose between running the Vuser as a thread or a process. “The Controller uses a driver program (such as mdrv.exe or r3Vuser.exe) to run your Vusers. If you run each Vuser as a process, then the same driver program is launched (and loaded) into the memory again and again for every instance of the Vuser.” – LoadRunner User Guide. The driver program mentioned is nothing but a process that runs when we generate a Vuser load.

Runtime Settings


1.3  Run Vuser as a process – Disable Multithreading

  • If we choose the first option and run 'n' Vusers as processes, we will be able to see 'n' mmdrv.exe processes running on the load generator machine. Each of these processes consumes its own memory space.
  • When this option is selected, each Vuser process establishes at least one connection with the web/app server.

1.4  Run Vuser as a thread – Enable Multithreading

  • But we can choose to run the Vusers as threads if we want to go easy on the resources. This way, more Vusers can be generated with the same amount of available load generator memory.
  • When this option is selected, the Vuser threads can share open connections between them (connection pooling). Opening and maintaining a connection for each Vuser process is resource-consuming; with connection pooling, the time a user must wait to establish a connection to the database is also reduced. This is surely an advantage, right? Wrong. The argument is that it is not an accurate replication of the user load: a single connection should be created for each Vuser, as in a real-world scenario, and to achieve this we have to run each Vuser as a process. There are other factors, such as thread safety, to consider. When we run a large number of Vusers as a single multi-threaded process, the Vusers run as threads that share the same memory, so one thread may impact, interfere with, or modify the data elements of another thread, posing serious thread-safety concerns. Before selecting either option, we need to determine the load generator's capacity (available system resources and memory space) and the thread safety of the protocols used.

Common Problems & Solutions For Performance Testing Flex Applications Using LoadRunner


This article lists the common problems & solutions that performance engineers come across when testing flex applications.

Problem #1 : An overlapped transmission error occurs when a Flex script is run for the first time from the Controller, but the same script works fine in VuGen.

Error -27740: Overlapped transmission of request to “www.url.com” for URL“http://www.url.com/ProdApp/” failed: WSA_IO_PENDING.

Solution : The transmission of data to the server failed. It could be a network, router, or server problem. The word Overlapped refers to the way LoadRunner sends data in order to get a Web Page Breakdown. To resolve this problem, add the following statement to the beginning of the script to disable the breakdown of the “First Buffer” into server and network time.

web_set_sockets_option("OVERLAPPED_SEND", "0");


Problem #2 : During script replay, VuGen crashes due to an mmdrv error: "mmdrv has encountered a problem and needs to close." Additional details: "Error: mmdrv.exe caused an Microsoft C++ Exception in module kernel32.dll at 001B:7C81EB33, RaiseException () +0082 byte(s)"

Solution : The cause of this issue is unknown. HP released a patch that can be downloaded from their site.

Problem #3 : AMF error: Failed to find request and response

Solution : The LoadRunner web protocol has a mechanism to prevent large request bodies from appearing in the action files by keeping the body in lrw_custom_body.h. In the AMF and Flex protocols, LR cannot handle these values and fails to generate the steps. Follow these steps to fix the problem:

1. Go to the “generation log”
2. Search for the highest value for “Content-Length”
3. Go to <LoadRunner installation folder>/config
4. Open vugen.ini
5. Add the following:
[WebRecorder]
BodySize=<size found in step (2)>
6. Regenerate the script

Problem #4 : There are duplicate AMF calls in the recording log as well as in the generated code.

Solution : The capture level may be set to 'Socket and WinInet'. Make sure that under Recording Options –> Network –> Port Mapping, the capture level is set to WinInet (only).

Problem #5 : A Flex script that has Flex_AMF and Flex_RTMP calls will, on replay, show a mismatch in the tree view between the request and the response. Looking at the replay log, we can see that the correct calls are being made, but they are displayed incorrectly in the tree view (only the replay tree view is incorrect). Sometimes the previous or next Flex_AMF call is shown in the tree view in place of the Flex_RTMP call.

Solution : This issue has been identified as a bug by R&D in LR 9.51 and LR 9.52. R&D issued a new flexreplay.dll which resolved the issue and will be included in the next Service Pack.

Problem #6 : Flex protocol script fails with “Error: Encoding of AMF message failed” or “Error: Decoding of AMF message failed”

Solution : The cause of this error is the presence of special characters (&gt;, &lt;, &amp;, etc.) in the Flex request. Send the request enclosed in CDATA. Example: change <firmName>XXXXXXX &amp; CO. INC.</firmName> in the script to <firmName><![CDATA[XXXXXXX &amp; CO. INC.]]></firmName>

Problem #7 : When creating a multi-protocol script that contains the FLEX and WEB protocols, VuGen sometimes closes automatically without any warning/error message. This happens when the Web protocol is set to HTML mode; in URL mode the crash does not occur. There is no error code, only a generic Windows message stating that VuGen needs to close.

Solution : This issue can be seen on Machines that are running on Windows XP, and using Mfc80.dll. Refer to Microsoft KB Article in the link below that provides a solution for the same. Microsoft released a hot fix for Windows specific issue that can cause VuGen to close.
http://support.microsoft.com/kb/961894

Problem #8 : When recording a FLEX script, RTMP calls are not being captured correctly so the corresponding FLEX_RTMP_Connect functions are not generated in the script.

Solution : First set the capture level: choose Recording Options –> Network –> Port Mapping and set Capture Level to 'Socket level and WinINet level data'. If this doesn't help, follow the next step: record a FLEX + Winsock script. In the Port Mapping section, set the Send-Receive buffer size threshold to 1500 under the options. Create a new entry and select Service ID as SOCKET, enter the Port (such as 2037, or whatever port the FLEX application is using for the connection), Connection Type as Plain, and Record Type as Proxy; Target Server can be left at the default value (Any Server).

Problem #9 : Replaying a Flex script containing a flex_rtmp_send() that has an XML argument string may result in the mmdrv process crashing with a failure in a Microsoft Dynamics.

Solution : The VuGen script generation functionality does not handle the XML parameter string within the function correctly, which results in the mmdrv process crashing during replay. If you have version 9.51, installing a specific patch (flex9.51rup.zip) or Service Pack 2 will resolve the problem.

Problem #10 : During the test executions in controller, sometimes the scripts throw an error ‘Decoding of AMF message failed. Error is: Externalizable parsing failed’.

Solution : This is mostly due to a file transfer problem. It is always advisable to place the JAR files in a shared path common to all load agents.

Other Flex Supported Load Testing Tools


There are other commercial and open-source tools available that support Flex application testing. Some tools (for example, NeoLoad) have considerably better RTMP support than LoadRunner. The way these tools test a Flex application is quite similar: each tool has its own AMF/XML conversion engine, which converts the binary data into a readable XML format.
Open Source
  • Data Services Stress Testing Framework
  • JMeter
Commercial Tools
  • Silk Performer by Borland
  • NeoLoad by Neotys
  • WebLOAD by RadView
Performance Improvement Recommendations


When it comes to performance improvement of an application, our first concern would be to enhance the scalability for a specified hardware & software configuration.
  • In the case of Flex, scalability issues derive from the fact that BlazeDS is deployed in a conventional Java servlet container, so the performance/scalability of BlazeDS also depends on the number of concurrent connections supported by the server (Tomcat, WebSphere, WebLogic, ...). BlazeDS runs in a servlet container, which maintains a thread pool.
  • Each thread is assigned to a client request and returns to the reusable pool after the request is processed. When a client request holds a thread for a long duration, that thread stays locked to the client until the request completes. So the number of concurrent users BlazeDS can serve depends on the number of threads the servlet container can hold.
  • While BlazeDS is preconfigured with just 10 simultaneous connections, this can be increased to several hundred; the actual number depends on the server's threading configuration, the CPU, and the size of the JVM heap. It is also affected by the number of messages the server processes per unit of time and by the size of those messages.
  • Tomcat or WebSphere can support up to several hundred users, and with any servlet container that supports Servlet 3.0, BlazeDS can be used in more demanding applications that require support for thousands of concurrent users.
Based on our project experience in performance-testing Flex applications using LoadRunner, we have pointed out some of the common problems that may arise. This should save you a lot of time, as we have also provided solutions to troubleshoot these errors if they occur.


Log Options In Load Runner Runtime Settings


As part of business process testing, it is the tester's role to debug all scripts before they reach the execution phase. In LoadRunner we can use the logging options provided in the run-time settings for this purpose. They allow us to select the level of detail about script playback that is displayed in the replay log.

  • There are two types of logs available in LoadRunner to specify the level of information to be logged during replay:
1. Standard Log
2. Extended Log

Standard Log
  • A standard log displays the details about the set of functions executed and the messages displayed
  • This can be used to validate whether the functions are executed as expected
Extended Log
  • This is used for logging not only the function calls but also additional details such as parameter substitution, request headers, request body, response headers and the response body. The extended log has three log levels:
1. Parameter Substitution

It adds the details about the parameters substituted and the dynamic values that were captured.

2. Data returned by server

The response header and response body details for each request are included in the replay log if this option is selected, i.e. it captures the data returned from the server.

3. Advanced trace

With this option enabled, the replay log almost looks like a generation log as it contains details about every request and response that has been sent to and received from the server.

Manual Correlation with extended log
  • The extended logging feature is preferred for debugging a script, as we can inspect the values to be correlated
To conclude: logging the function calls, requests, and responses is helpful for debugging a script, but it utilizes system resources heavily, so logging should be disabled for long scenarios. Moreover, when executing with a very large number of Vusers, it is recommended to disable the logging feature altogether by unchecking the Enable Logging option.


Monday, 5 March 2012

Tips To Record A Script In OpenSTA Using Mozilla Firefox


It is generally assumed that OpenSTA can record a script only with the Internet Explorer and Netscape Navigator browsers. This article shows how to record a script in OpenSTA using Mozilla Firefox.
In my project we had a scenario that was not compatible with the IE browser, but we were supposed to automate that script. After some R&D we found a solution, which I'm sharing in this post. The details for overcoming this issue are below.
OpenSTA records only the HTTP requests/responses. Any browser is just an HTTP communication vehicle, so we can use any browser that supports HTTP 1.0/1.1 to record a script.
Configuration details to be followed in OpenSTA
1. Open the Script Modeler in OpenSTA tool.
2. In Option → Browsers → Select any Default Browser.
3. Option → Gateway (Please keep the below mentioned setting in the Gateway)
  • In Capture: Remote
  • Administration Port: 3000
  • Port: 81
  • In Proxy:-
    • Address: proxy IP
    • Port: proxy Port
    • Secure: Machine IP; Port: 81
Configuration details to be followed in Mozilla Browser
  • After changing the above configuration details in OpenSTA, there are a few more changes to be made in the proxy details of the Mozilla browser. Please find the details below.
    • Open the Mozilla Browser.
    • Tools → Options → Advanced → Network → Settings.
    • Select the “Manual Proxy Configuration” radio button.
    • In HTTP Proxy: Machine IP ; Port: 81
After changing all the above configuration details we can start to record the script in Mozilla Firefox using OpenSTA.
But after recording the script, there is no possibility of viewing the URL detail screens in the HTML view. You will receive the error message "The webpage is unavailable because you are offline". To overcome this issue, apply the configuration below in the Mozilla browser.
  • Open the Mozilla Browser
  • In the address bar, type "about:config".
  • On the page displayed, filter by the keyword "network".
  • Right click the “network.http.accept-encoding” and click Modify.
  • Remove the “gzip,deflate” field and click Ok.
By following all the above mentioned steps you can overcome this issue and record a script in Mozilla Firefox using OpenSTA.

Ensuring The Scalability Of Complex Applications Using Rational Performance Test Tool In Performance Test Automation


In today's IT market we have any number of tools, both commercial (LoadRunner, Rational Performance Tester, ...) and open source (Grinder, OpenSTA, ...), to measure the performance and scalability of web-based and desktop applications. It is an open choice for the performance test engineer/consultant to pick an appropriate performance testing tool to measure the performance and scalability of the application and to detect and resolve bottlenecks across web servers, application servers and database servers.

In this paper, I focus on the primary objective of using a commercial tool, Rational Performance Tester (RPT), for the end-to-end performance testing of web-based and desktop applications.

Introduction:
IBM Rational Performance Tester (RPT) is a performance test creation, execution and analysis tool that helps teams validate the scalability and reliability of their web and Enterprise Resource Planning (ERP) based applications before deployment.
  • RPT is a load and performance testing solution for teams concerned about the scalability of web-based applications.
  • Combining ease of use with deep analysis capabilities, RPT simplifies test creation, load generation, and data collection to help ensure that applications can scale to thousands of concurrent users.
  • It combines a simple-to-use test recorder with advanced scheduling, real-time reporting, automated data variation and a highly scalable execution engine to help ensure that applications are prepared to handle large user loads.
Key Highlights on RPT tool:
  • Creates code free tests quickly without programming knowledge
  • Executes multiuser performance testing for Microsoft Windows, Linux, UNIX and mainframe environments, with a Windows and Linux software-based user interface
  • Supports load testing against a broad range of applications such as HTTP, SAP, Siebel, Entrust, Citrix and SOA/Web Services and Supports Windows, Linux and z/OS as distributed controller agents
  • Rendered HTML view of Web pages visited during test recording
  • Java code insertion for flexible test customization
  • Reports in real time to enable immediate recognition of performance problems and renders an HTML browser-like view of Web pages in the test
  • Enables Windows, Linux and mainframe technology – based test execution
  • Provides a rich, tree-based test editor that delivers both high level and detailed views of tests
  • Collection and visualization of server resource data
  • Automates identification and management of dynamic server responses
  • Automates test data variation – Data substitution with data pools
  • High extensibility with Java coding: custom coding should be supported in a well known standard language that is platform independent and widely available. Java makes an ideal choice for any tool’s extensibility language
  • Built-in Verification Points (VPs)
  • Collects and integrates server resource data with real-time application performance data
  • A low memory and processor footprint that enables large, multi-user tests with limited hardware resources
  • Accurately emulates large population user transactions with minimal hardware
  • Runs large scale performance tests to validate application scalability
  • Provides no code testing, point and click wizards, report readability, usability and customization
  • Delivers both high-level and detailed views of tests with a rich, tree-based test editor
  • Enables large, multi-user tests with minimal hardware resources
  • Windows and Linux-based user interface  and test execution agents
  • Graphical test editing and workload modeling
  • Offers flexible modeling and emulation of diverse user populations
  • Real-time monitoring and reporting
  • Report customization and export capability
  • Programming extensibility with Java custom code
  • Real-time reporting for immediate performance problem identification with the presence and cause of application performance bottlenecks
  • Diagnoses the root cause of performance bottlenecks by quickly identifying slow performing lines of code and integrates with Tivoli composite application management solutions to identify the source of production performance problems
  • Leverage existing assets for load and performance testing
Advantage of RPT:

High productivity with no programming: Basic skills required to use a performance testing tool should only include knowledge of how to schedule and scale the user scenarios and where to put in the data variation for the application being tested.

Rapid adoption: Rational Performance Tester contains features explicitly designed to enable you to quickly build, execute and analyze the impact of load on your application environment.

Robust analysis and reporting: Individual page or screen response times can be decomposed into response times for individual page elements (for example, JPGs, Java Server Pages, Active Server Pages), which helps testers identify the elements responsible for poor page or screen response time.
  • And the ability to insert custom Java™ code that can be executed at any point during test execution supplements automated data correlation and data generation capabilities. This capability permits advanced data manipulation and diagnostic techniques.
  • During test execution, system resource information such as CPU and memory utilization statistics can be collected from remote servers and correlated with response time and throughput data.
  • Collected resource data is crucial for diagnosing which remote system—router, Web server, application server, database server, etc.—is responsible for detected delays, as well as for pinpointing the component (for example, CPU, RAM, disk) that is causing the bottleneck.
Lowered cost of performance testing: RPT generates a low processor and memory footprint when emulating multiple users. As a result, high levels of scalability can be achieved even if the team does not have access to excessive computing power. In addition, test execution and system information retrieval can occur on Microsoft Windows, UNIX and Linux software based machines, optimizing a team’s usage of existing hardware resources. It provides automation support for essentially all aspects of software development.

Customized reporting with all classes of data: as the number of data points has grown from 100,000 to 100 million individual measurements, a design imperative is to reduce the data in real time, in a distributed fashion, during the test. This principle, coupled with the increased complexity of the tested system architectures, yields a need for complete flexibility in reporting and in correlating a variety of data sources.

This paper has discussed the need for the Rational Performance Tester (RPT) tool in performance testing, along with its key advantages: high productivity with no programming, rapid adoption, robust analysis and reporting, lowered cost of performance testing, and customized reporting with all classes of data. Once again, it is an open choice for the performance test engineer/consultant to pick an appropriate performance testing tool to measure the performance and scalability of the application and to detect and resolve bottlenecks across web servers, application servers and database servers.


Application Performance Testing In Production Environment


Performance testing in production is not widely practiced owing to the many risks involved: taking the entire production environment offline (affecting availability), taking part of the production environment offline (affecting performance), and the risk of updating production data during the test.

However, for applications running on large infrastructure for which there is no production-equivalent test environment (e.g. Superdome servers, farms of servers, etc.), it is not uncommon to reuse the production system's resources and point them at a test database sitting on a disk subsystem equivalent to production's. This typically happens in a period of low load (weekends, holiday season, etc.), when the actual production application can be temporarily migrated to other, smaller hardware such as a passive environment or a DR site.

Generally, the production database is almost never used in a test, due to the risk of test data getting mixed with real data. In rare cases where a production database is leveraged for a test, it is used purely for read-only or view transactions.

One practice that is somewhat prevalent for applications going live for the first time is to use the production environment for performance testing as part of UAT. A related practice is to run a pilot test with real users. Here, the production release is opened to users in stages: in the first stage, a limited number of users are asked to use the system for a defined period before the application is opened up to the entire user community. Occasionally, such an approach may also be used to observe the impact on system performance of a fraction of the full expected load (say, the users of one full department). This option is chosen in cases where the performance test scenario in the test environment may not have fully captured real user behavior, or where we wish to benchmark the performance test results.

To summarize, the following precautions/best practices have to be adopted for application performance testing in a production environment:
  • The test has to be scheduled in non-working hours when the live production traffic is expected to be nearly zero.
  • Part of the production environment (say one application server node from a large cluster of application servers) may be isolated for the purpose of using it in a performance test with read-only usage of the production database.
  • Approvals from all relevant stakeholders and directors need to be taken prior to the test.
  • A conference call has to be arranged for the duration of the test, in which all the stakeholders would participate.  All project teams would monitor their systems during the test, and if any system issues are found to occur, the test has to be stopped immediately.
To conclude: the industry is striving toward steady-state performance testing, which demands higher test accuracy; that accuracy in turn depends largely on the environment setup and network traffic simulation. Extrapolating test results/metrics to the production environment is also not convincing enough for decision making.

In that case, it is not a bad idea to leverage the production environment for performance validation: it ensures highly accurate performance testing and fixes, at an earlier stage, the actual performance bottlenecks that might otherwise be encountered on go-live day.

I would agree that production performance testing is not easy to accomplish, but it all depends on how cautiously we plan and how wisely we execute the test in production, which is what eliminates the risks associated with it.

Loadrunner Simulation With Safari


How to Simulate the Load from the Safari Browser
There was an interesting requirement where we had to simulate the Safari browser during load execution. This article provides information on how to achieve this.

Client Requirement:-
  • Recently we were involved in a PoC to check the feasibility of testing SalesForce.com with LoadRunner.
  • One interesting requirement was that the customer wanted to simulate the load from the Safari browser.
  • The reason is that SalesForce.com would primarily be accessed by the sales team, who mostly travel and use iPads with the Safari browser.
Challenge:-
  • By default, LoadRunner supports only the IE, Mozilla and Netscape browsers.
  • It does not support Safari for emulating the load.
Analysis and Solution:-
  • This can be achieved through custom settings in LoadRunner.
  • All we need for this is a Safari user-agent string that suits our requirement.
The following is an example of a Safari user-agent string:

Mozilla/5.0 (iPad; U; CPU OS 4_3 like Mac OS X; nl-nl) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8F190 Safari/6533.18.5

*Please note: you can record with any browser, and then simulate the user load as any required browser by using a user-agent string as above.
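
Outside LoadRunner, you can sanity-check how the server responds to the Safari user agent with a quick request that sends the same header; "https://example.com/" below is a placeholder for your application's URL.

import urllib.request

SAFARI_UA = ("Mozilla/5.0 (iPad; U; CPU OS 4_3 like Mac OS X; nl-nl) "
             "AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 "
             "Mobile/8F190 Safari/6533.18.5")

# example.com is a placeholder; substitute your application's URL.
req = urllib.request.Request("https://example.com/",
                             headers={"User-Agent": SAFARI_UA})
with urllib.request.urlopen(req) as resp:
    print(resp.getcode(), len(resp.read()), "bytes")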

Friday, 2 March 2012

Cloud Computing – Technical Evolution


There is a growing belief that over the next five years, Cloud Computing will become a major stimulus for change in how corporations view and use information technology.

Cost, efficiency, scalability and availability are the main drivers in the discussion around cloud computing. Security and privacy are the main issues that need to be dealt with when using services in the cloud.

Trestle Group Consulting published a group research paper on the technical evolution underlying cloud computing; the sections below summarize it.

Cloud computing?

Over the last decade, sourcing has become one of the most commonly used methods for a business to acquire services. The expression Cloud Computing is widely used in IT and business circles. Many users, however, are confused as to what the underlying service actually is and how it can be integrated into their IT and business process landscape.

Organizations see Cloud Computing as a model that enables access to configurable computing resources that are easily accessible with no, or only minimal, service provider input. It is a flexible, need-based IT service.

One of the many advantages for the user is that instead of high upfront fixed-cost investments, most, if not all, costs are variable and can be spread over the duration of usage. Cloud Computing can be seen as the next level of sourcing.

In the following section, we summarize the different types of Cloud Computing options depending on services delivered.

Cloud Computing Layers

Cloud computing itself can be separated into three service models, also defined as service layers.

Layer 1: Infrastructure as a Service (IaaS), offering virtual IT infrastructure (e.g. hardware, storage)

Layer 2: Platform as a Service (PaaS), offering virtual application infrastructure services (e.g. database and middleware)

Layer 3: Software as a Service (SaaS), offering virtual application services (e.g. applications and processes)
As in other service offerings, Cloud Services cover those specifics supplied by the service provider for a user or a specific set of users. The diagram below shows how users access the Cloud.

Cloud Services and Participants

Trestle Group Recommendations

Based upon the experience gained during reviews at customer locations, Trestle Group recommends the following points to be considered when evaluating Cloud as an IT- and/or Business-driven solution.

Before engaging in Cloud Computing, initiate a project that reviews existing processes and products. The review process should go through the following steps in order to optimize use of Cloud Services:
  • Evaluate the actual level of process automation within the business being serviced.

  • Identify the opportunities (from an IT and process perspective) within the existing operating model to virtualize services.
  • Analyze the level of process automation and identify the potential of using the Cloud for virtualized processes.
  • Introduce those selected services into the Cloud and define how to integrate the Cloud services into the existing IT/Process Environment.
Within Cloud Computing, a variety of technically innovative solutions are combined and can deliver the potential for an innovative business approach leading to cost reduction, cost structure improvement, variability of cost, flexibility of services and ultimately entire new business models.

Security, Legal and Regulatory Aspects of Cloud Computing

Within the Cloud, users expect to find an identical security framework as generally available in traditional IT environments. Items such as controlled access, data security and data protection need to be ensured and not assumed.
The following points should be considered when it comes to security under Cloud Computing:
  • Between the supplier and the user, SLAs need to be defined to ensure transparency of services supplied, especially in the case of outages.
  • In principle, Cloud Computing, as a new form of sourcing, does not lead to new challenges on a legal and regulatory basis.
  • The importance of reviewing the security and the legal & regulatory aspects increases significantly when Public Clouds are being used.
  • It is up to the providers to set-up the necessary frameworks that guarantee the user a secure processing environment adhering to the legal standards, rules and regulations within the country the user is registered in.
  • Prior to using Cloud Services, organizations should obtain a clear and transparent overview from the supplier which services are performed under which conditions.
  • The definition of clear, agreed upon SLAs and KPIs covering availability, quality of service and adherence to data security and legal/regulatory requirements is essential.
  • Once the overall structure of the services is defined, details should be documented in a contract defining duration of the services, payment cycles and clauses for termination and liability in case of non-delivery.
  • Additionally, organizations should include what needs to be done when services are taken out of the Cloud or obtained from a different supplier.

Conclusion

All major providers offer services in the Cloud, which can bring enormous advantages to large, small and midsized companies when properly implemented and used. A clear understanding of the different types of Cloud Services and their advantages, disadvantages and related risks need to be evaluated prior to making a decision on how to use the Cloud.

There are challenges in implementing Cloud services, which are similar to those inherent in IT sourcing engagements and can be successfully dealt with when addressed in a structured way.

The recommendations outlined in this post should serve as a starting point for effectively addressing these challenges, enabling organizations to evaluate to what extent the available cloud services can be used in an optimal and secure manner.